Angelika Adensamer, Laura Jung
Across Europe, asylum authorities increasingly deploy AI tools in the name of speed and efficiency—from automated translation to LLM-based text processing and chatbot-assisted country-of-origin research. Yet Austria’s experience, mapped by the AISYL project, shows how these technologies amplify a wider political trend: the erosion of the right to asylum. Far from neutral administrative aids, AI systems introduce errors, bias, and opacity into high-stakes procedures, risking further harm in an already restrictive asylum landscape.
Continue reading >>
Herwig C. H. Hofmann
As automated decision-making reshapes administrative procedures, long-standing guarantees like access to the file, the right to be heard, and effective judicial remedies risk losing their meaning. When AI systems collect, process and weigh information invisibly, individuals and courts cannot understand or contest how decisions are made. EU procedural law must rethink the very notion of “the file” to ensure traceability, human oversight, and genuine accountability.
Continue reading >>
Matija Kontak
Biometric data qualifies as particularly sensitive personal data under the GDPR, and its processing must meet strict legal requirements. Frontex’s exploration of novel biometric technologies, including DNA profiling and vein recognition, raises concerns in the absence of demonstrated necessity or proportionality. Such developments require prior Fundamental Rights and Data Protection Impact Assessments. Overall, the legal and technical prerequisites for expanding the use of these technologies, particularly in light of interoperability challenges and fundamental rights protections, are not yet sufficiently established.
Continue reading >>
Deirdre Curtin, Ludivine Stewart
It is now widely acknowledged that new technologies, such as artificial intelligence (AI), are becoming integral to migration and asylum governance. This contribution argues, using the AI Act as an illustration, that migration and asylum governance suffer from a culture of information deficit, which is exacerbated by the increasing use of modern technology. It therefore advocates for a shift towards a culture of transparency, which is necessary for ensuring both accountability and fairness.
Continue reading >>
Lenka Dražanová
As the EU prepares to launch the new Entry/Exit System, biometric technologies are set to shape how millions move across Europe’s borders. But what do citizens consider fair when states collect fingerprints, facial images, and automated risk scores? A five-country survey experiment shows that public views are fragmented: people do not consistently prefer automation over human border guards, nor do they judge different traveller groups uniformly. These findings challenge assumptions that “smart borders” automatically enhance objectivity, legitimacy, or trust.
Continue reading >>
Maya Ellen Hertz, William Hamilton Byrne, Thomas Gammeltoft-Hansen
European asylum systems increasingly rely on AI tools—from identity checks to case summarisation—promising fairness and efficiency but also raising significant human rights and transparency issues. Because Refugee Status Determination depends heavily on credibility assessments amid limited evidence, AI risks replicating bias, introducing new proxy discrimination, and deepening epistemic uncertainty. This contribution asks how current AI models may generate foundational uncertainties in Refugee Status Determination and what, if anything, can be salvaged from AI going forward.
Continue reading >>
Mirko Dukovic, Cathryn Costello
The deep challenge of equality by design is that it is no mere technical matter. Equality law commitments rest on a contextual assessment of how distributive systems affect disadvantaged groups. This goes beyond the standard approach to “debiasing” in computer science, as the influential work of Sandra Wachter and her team has demonstrated. Applying these insights to discriminatory borders, however, requires even greater effort.
Continue reading >>
Cathryn Costello, Mirko Dukovic
How are digital and algorithmic systems reshaping asylum and refugee protection in Europe? Based at the Centre for Fundamental Rights at the Hertie School, the AFAR project brings together scholars across Europe to map the growing use of “newtech” in asylum and border governance—from automated decision-making to digital evidence and biometric tools. This symposium traces the project’s findings.
Continue reading >>