Niovi Vavoula
This post synthesises key insights from the AFAR symposium on fairness and AI in asylum, migration and border management. While EU policy frames fairness as a core requirement of trustworthy AI, contributors show how discrimination, opacity, privatisation and weak procedural safeguards undermine that promise. Examining risks from biased data to secrecy in high-risk systems, the post highlights structural obstacles to fair decision-making and calls for sustained oversight, participatory governance, and research grounded in the lived experiences of those most affected.
Ben Hayes
OHCHR’s forthcoming guidance on human rights-based digital border governance consolidates legal standards for data-intensive migration and border control. This contribution identifies where such guidance can help, and where a significant shift in current State practice is needed: clear legal basis and safeguards for intrusive practices, data sharing and interoperability, oversight of algorithmic systems, human rights impact assessment, and the use of security and emergency regimes that dilute rights protections. Each area is framed by the need to ensure legality, necessity and proportionality, non-discrimination, and effective remedy.
Natalie Welfens
Digitisation and the growing reliance on private intermediaries are transforming visa systems from paper-based procedures into opaque digital-commercial infrastructures. Drawing on findings from the AFAR project, this contribution shows how automation, outsourcing, and fragmented accountability reshape rights protection in migration governance. Using visa applications as a case study, it argues that existing regulatory frameworks remain insufficient and calls for new methods of scrutiny — including participatory, co-designed approaches that centre migrants’ perspectives and reimagine transparency, oversight, and responsibility in digital border regimes.
Angelika Adensamer, Laura Jung
Across Europe, asylum authorities increasingly deploy AI tools in the name of speed and efficiency—from automated translation to LLM-based text processing and chatbot-assisted country-of-origin research. Yet Austria’s experience, mapped by the AISYL project, shows how these technologies amplify a wider political trend: the erosion of the right to asylum. Far from neutral administrative aids, AI systems introduce errors, bias, and opacity into high-stakes procedures, risking further harm in an already restrictive asylum landscape.
Herwig C. H. Hofmann
As automated decision-making reshapes administrative procedures, long-standing guarantees like access to the file, the right to be heard, and effective judicial remedies risk losing their meaning. When AI systems collect, process and weigh information invisibly, individuals and courts cannot understand or contest how decisions are made. EU procedural law must rethink the very notion of “the file” to ensure traceability, human oversight, and genuine accountability.
Matija Kontak
Biometric data qualifies as particularly sensitive personal data under the GDPR, and its processing must meet strict legal requirements. Frontex’s exploration of novel biometric technologies, including DNA profiling and vein recognition, raises concerns in the absence of demonstrated necessity or proportionality. Such developments require prior Fundamental Rights and Data Protection Impact Assessments. Overall, the legal and technical prerequisites for expanding the use of these technologies, particularly in light of interoperability challenges and fundamental rights protections, are not yet sufficiently established.
Deirdre Curtin, Ludivine Stewart
It is now widely acknowledged that new technologies, such as artificial intelligence (AI), are becoming integral to migration and asylum governance. This contribution argues, using the AI Act as an illustration, that migration and asylum governance suffer from a culture of information deficit, which is exacerbated by the increasing use of modern technology. It therefore advocates for a shift towards a culture of transparency, which is necessary for ensuring both accountability and fairness.
Lenka Dražanová
As the EU prepares to launch the new Entry/Exit System, biometric technologies are set to shape how millions move across Europe’s borders. But what do citizens consider fair when states collect fingerprints, facial images, and automated risk scores? A five-country survey experiment shows that public views are fragmented: people do not consistently prefer automation over human border guards, nor do they judge different traveller groups uniformly. These findings challenge assumptions that “smart borders” automatically enhance objectivity, legitimacy, or trust.
Maya Ellen Hertz, William Hamilton Byrne, Thomas Gammeltoft-Hansen
European asylum systems increasingly rely on AI tools—from identity checks to case summarisation—promising fairness and efficiency but also raising significant human rights and transparency issues. Because Refugee Status Determination depends heavily on credibility assessments amid limited evidence, AI risks replicating bias, introducing new proxy discrimination, and deepening epistemic uncertainty. This contribution asks how current AI models may generate foundational uncertainties in Refugee Status Determination and what, if anything, can be salvaged from AI going forward.
Mirko Dukovic, Cathryn Costello
The deep challenge of equality by design is that it is no mere technical matter. Underlying equality law commitments is a contextual assessment of the impact of distributive systems on disadvantaged groups. This goes beyond the standard approach to “debiasing” in computer science, as the impactful contribution of Sandra Wachter and her team has demonstrated. However, applying these insights to discriminatory borders requires even greater effort.
Cathryn Costello, Mirko Dukovic
How are digital and algorithmic systems reshaping asylum and refugee protection in Europe? Based at the Centre for Fundamental Rights at the Hertie School, the AFAR project brings together scholars across Europe to map the growing use of “newtech” in asylum and border governance—from automated decision-making to digital evidence and biometric tools. This symposium traces the project’s findings.