Introduction to the Symposium on Algorithmic Fairness for Asylum Seekers and Refugees
AFAR (Algorithmic Fairness for Asylum Seekers and Refugees) is a collaborative research project based at the Centre for Fundamental Rights at the Hertie School, Berlin. It emerged out of conversations with Dr Derya Ozkul, then a postdoctoral researcher on the RefMig project. In her research on refugee recognition and reception practices, she was encountering the increasing use of digital and algorithmic systems, in particular forms of automated (mostly part-automated) decision-making in the refugee regime – which we referred to as “newtech” for short. Derya Ozkul and Cathryn Costello developed the application for a collaborative project, drawing together a team from across Europe: Professor Thomas Gammeltoft-Hansen, Assistant Professor William Hamilton Byrne and doctoral researcher Asta Sofie Stage Jarlner at MOBILE, University of Copenhagen; Professor Iris Goldner-Lang and doctoral researchers Matija Kontak and Ana Kršinić at the University of Zagreb; and Professor Martin Ruhs and Dr Lenka Dražanová at the Migration Policy Centre, European University Institute. The project launched in 2021, funded by the VW Foundation through the “Challenges for Europe” program.
Origins, Approaches and Concepts
Around the same time as we turned to study newtech, Petra Molnar and Lex Gill published “Bots at the Gate”, a report on the automation of borders in Canada. In 2019, Philip Alston had published his important report on the automated welfare state, under the headline “World stumbling zombie-like into a digital welfare dystopia”, followed by E. Tendayi Achiume’s critical 2020 report on race discrimination in automated borders. By 2020, many human rights scholars were examining algorithmic systems – Lorna McGregor and Daragh Murray amongst others. The UK’s post-Brexit EU Settlement Scheme, heavily automated, had been the focus of important studies and scholarship by Joe Tomlinson. The work of Ana Beduschi and Niovi Vavoula was also emerging at that time, as the Covid pandemic swept in greater digitization and automation of borders.
Meanwhile, data science was exploding as a field, and new tools of big data analytics were enabling empirical legal scholars and political scientists to study entire corpora of asylum decisions, demonstrating high degrees of arbitrariness and discrimination within these systems. Studies such as Chen and Eagel’s 2017 paper on the US system and Mathilde Emeriau’s 2019 award-winning paper on the French system stood out. Preference-matching tools were emerging which promised to enable better allocation of asylum seekers and refugees to their places of refuge, including by enabling their choice in these matters. Newtech seemed to be both part of the problem and potentially a solution to the failings of migration and asylum governance.
The central normative concept in AFAR was “fairness”, including but not limited to the concept of procedural fairness familiar to lawyers. We also aimed to consider allocative and distributive fairness, and wider questions about the human rights and rule of law impacts of AI. At times, even this broad normative lens seemed too constraining: AI’s impacts on human cognition, working life, climate and planetary justice are also screaming for greater attention. The term “AI” itself seems profoundly misleading – not particularly intelligent, and absolutely material – built on energy- and water-guzzling infrastructure and armies of underpaid human workers in the Global South. Many AI experts and global leaders, including Nobel laureate Geoffrey Hinton, have warned of a possible AI apocalypse in their open letters, including that presented to the UN General Assembly in September 2025. The ongoing environmental, labour and cognitive degradations attract less attention. The AFAR project’s relatively modest aims felt at times provincial and legalistic; and yet treating AI and other digital technologies in their material, infrastructural context, and focusing on their actual workings within governance and public administration, seems less hype-inflected and probably wiser overall.
Contributions & Development
Derya Ozkul led the AFAR mapping of the use of newtech in the field, as well as the research on asylum seekers’ and refugees’ fairness perceptions. In the course of the project, she took up an assistant professorship at the University of Warwick, just one of the project’s many placement successes. She has continued to research digital technologies in their wider context, and recently co-edited a special issue aiming to look “behind, beyond and around the black box.”
The first postdoctoral researcher to join the project, Francesca Palmiotto, researched and published on AI and fairness in asylum, the concept of automated decision-making, and (with Derya Ozkul) the hurdles to strategic litigation. She also took the lead in establishing the TechLitigation Database. At the end of her term, she went on to an assistant professorship at IE Law School in Madrid. Mirko Đuković, who has been working on technology regulation and non-discrimination, joined the AFAR team upon completion of his postdoctoral research at the EUI.
Within the framework of this collaborative project, the Zagreb team investigated specific questions of fundamental rights protection at the EU’s external borders, focusing respectively on the design and practice of national independent monitoring mechanisms and on the legality of Frontex’s biometric practices. Drawing on his recent PhD thesis, Matija Kontak demonstrates in his blogpost “Biometric Technologies, Frontex and Fundamental Rights” that many biometric border practices, including some of Frontex’s novel ones, lack a sufficient legal basis and justification under EU law.
The EUI team led the work on fairness perceptions, making important contributions on public perceptions of asylum in general, as well as on the role of AI. In her blogpost here, Dr Lenka Dražanová shows that public perceptions of the fairness of biometric border checks are fragmented and context-dependent, which challenges policy narratives of “objective” smart borders and underscores the need to ground biometric systems in EU fundamental rights standards of necessity, proportionality and effective redress. The Copenhagen team broke new conceptual ground with their infrastructural turn in this field, as well as with work on credibility assessment in asylum, mobile phone data extraction and digital evidence. With many data science projects already underway at MOBILE, we were able to benefit from their expertise.
Challenges
The research faced three main challenges:
Firstly, the use of newtech is often shrouded in secrecy, both governmental and commercial. This made the mapping exercise challenging, as it does any attempt to understand the workings of newtech. As Ludivine Stewart and Deirdre Curtin explore in their blogpost in this symposium, “Beyond AI Secrecy: The Struggle for Transparency in European Migration Governance”, the transparency obligations in the AI Act need urgent clarification to curtail the potentially sweeping exception for migration governance. Over time, the project clarified the diversity of “newtech” tools and use cases. While we still focus on automated (and part-automated) decision-making, our focus broadened to include various forms of automated and digital evidence, as explored by Francesca Palmiotto in her work clarifying “when is a decision automated”, and by Thomas Gammeltoft-Hansen and William Hamilton Byrne in their contribution on Digital Evidence in Refugee Status Determination. Natalie Welfen’s blogpost “Privatised Digital Borders” identifies the privatised digital borders in this field, and the additional regulatory and accountability challenges posed by the role of private commercial actors. In her new research project, she plans to explore participatory design as a way to reconfigure digital border tools.
Secondly, the project coincided with the “generative AI wave” and “AI hype”. We started the project with certain tools in mind (admittedly some based on junk science), which governments bought or developed to meet specific needs in specific contexts – mobile phone data extraction tools are the exemplar here. In November 2022, ChatGPT was launched, and the generative AI wave took off. It now seems that AI’s takeover of administrative practices is inexorable. In joint (ongoing) work, Francesca Palmiotto and Cathryn Costello sought to clarify just which administrative tasks are suitable for automation. This proved no mere preliminary question, but rather necessitated a deep dive into the capacities of AI, bursting the AI hype bubble. Engaging with the work of computer scientists who also excel at public communication – Arvind Narayanan and Sayash Kapoor stand out – has proved a vital antidote to the “AI snake oil” peddled in policy circles. Some scholars’ work stood out for not only examining newtech with the lawyer’s gaze, but also demonstrating a deep understanding of the technology itself. The work of Sandra Wachter and her team continues to inspire, bringing together law, ethics and computer science to great effect. In hindsight, a shortcoming of the project was the lack of a formal role for colleagues from computer science, and our one recommendation for any legal scholars embarking on new work in this field would be to draw in computer scientists and philosophers from the outset. In this symposium, Angelika Adensamer and Laura Jung, in their contribution “Navigating Technologies in Asylum Procedures in Austria”, reveal Austria’s use of a range of AI tools, including commercial LLMs in country of origin research, with profound implications for the reliability of evidential assessment in asylum.
A third challenge relates to the legal framework: over the course of the project, an EU framework of great complexity emerged – data protection law, the AI Act, and new asylum measures – a giant legal mess. In the AFAR project, Francesca Palmiotto contributed important work on the legislative history of the EU AI Act. The application of the AI Act has barely started, and yet, under great pressure from big tech, the EU has already announced the “digital omnibus package”, which will “simplify existing rules on Artificial Intelligence, cybersecurity, and data.” The Center for AI and Digital Policy warned that the package will “let loose unsafe AI systems in the EU that will threaten public safety and fundamental rights, the very interests the EU AI Act was designed to protect”, reminding us that many jurisdictions have begun to follow the EU’s approach to the classification of AI systems.
To make sense of this legislative complexity, the lawyer’s tendency is often to search for principles that lend doctrinal clarity to complex, overlapping legislation. In this vein, Herwig Hofmann’s blogpost here, “Rethinking the Notion of the File – Access, Fair Hearing and Effective Remedies in the Age of Automation”, reasserts legal fundamentals in the face of technological change. In our own contribution, “Digital Visas – Deepening Discriminatory Borders?”, we draw on the body of work on algorithmic systems’ propensity to exacerbate discrimination at vast scale, as well as on the key features of proposed digital visas, to sound a warning call. In this work, we build on the best of the scholarship that deploys the principles underlying EU equality law to challenge algorithmic practices, rather than diluting equality into the thin, data science-driven concept of “debiasing”.
Going deeper, in their contribution “What ‘Real Risk’ Means For AI-Assisted Refugee Status Determination”, Maya Ellen Hertz, William Hamilton Byrne and Thomas Gammeltoft-Hansen argue against any wholesale automation in the asylum field, given its intrinsic lack of ground truth. However, they do see some possible support roles for AI, nudging decision-makers away from common errors or supporting applicants through the process. Whether such tools are likely to be commissioned or developed in the current political environment remains to be seen.
Looking beyond EU law to international human rights law, Ben Hayes presents “Human Rights and Digital Border Governance”. The AFAR team engaged with OHCHR guidance at different stages; like the Council of Europe’s HUDERIA methodology, that guidance places much emphasis on ex ante scrutiny of algorithmic systems. Hayes argues that by integrating the EU AI Act’s high-risk checklist with the contextual HUDERIA methodology under the Council of Europe’s AI Convention, EU migration and asylum authorities can transform formal risk compliance into genuine, enforceable protection of fundamental rights.
Conclusion
Although the project nears its formal end, the AFAR research continues. Fairness remains an important normative consideration for assessing the workings of AI in public decision-making, and our work-in-progress continues to examine its implications for both asylum and visa decision-making. The generative AI wave has brought with it an unprecedented increase in corporate power, and an alignment between big tech and the Trump administration. This symposium emerged out of the AFAR final conference, held at the Hertie School on 18–19 September 2025. The conference opened with a keynote by Dr Matt Mahmoudi, “Algorithms as Borders: Race, Border and Capital Entanglements”, which, drawing on his recent book, situated current deployments of digital systems within broader political economies of migration control and urged closer attention to distributive and procedural consequences for affected communities. Such approaches, attuned to the political economy of big tech and borders, remain pressing lest the march of automation and autocracy fall further into lockstep.