Afterword
Systemic Unfairness for Migrants, Asylum Seekers and Refugees in the Algorithmic Era
In 2019, the independent High-Level Expert Group on AI appointed by the Commission developed ethics guidelines for trustworthy AI, encompassing seven non-binding ethical principles intended to help ensure that AI is trustworthy and ethically sound. Fairness features both as a foundational ethical principle of trustworthy AI and, together with diversity and non-discrimination, as a fundamental requirement throughout the AI life cycle. According to Recital 27 of the AI Act, fairness means that AI systems must be “developed and used in a way that includes diverse actors and promotes equal access, gender equality and cultural diversity, while avoiding discriminatory impacts and unfair biases that are prohibited by Union or national law”. Yet, in the field of asylum, migration and border management, there are serious doubts that fairness can be upheld, stemming not only from the peculiarity of this domain and its widespread understanding as a matter of security, but also from its continuous use as a testing ground for the most controversial technologies.
This blog aims to bring together some of the findings of the contributors in this blog symposium, examine them through the lens of the AFAR project, and share some insights into the challenges in safeguarding fairness in the era of increased reliance on automation and AI.
Fairness as Non-Discrimination
In the field of immigration law, characterised by its default discriminatory nature through the divide it creates between the citizen and the foreigner, AI has the potential to unveil systemic biases and mitigate arbitrary decision-making. However, as highlighted by Hertz, Byrne and Gammeltoft-Hansen as well as Adensamer and Jung, AI risks replicating pre-existing human biases and inserting new ones; these biases may be embedded in the historical data fed into the AI model for training computational tools, or in the algorithms themselves, as the product of engineers. The danger of both direct and indirect (proxy) discrimination, via the selection of variables that appear neutral but in fact correlate with a legally protected attribute such as gender or race, is real. Dukovic and Costello also stress the danger of perpetuating discriminatory treatment of visa applicants through the digitalisation of visas, a process that is likely to be more complex and burdensome for the applicant. While digital procedures may seem more efficient, with the move to digitalised visas where applications are submitted online, applicants become increasingly responsible for ensuring the quality of the data provided, which will then be automatically processed via interoperable components, risking erroneous decisions.
One might counter-argue that AI tools only assist in decision-making, with human officers remaining ultimately responsible for the final decision. Even though the ‘human in the loop’ constitutes a significant safeguard, also recognised by the CJEU, human reviewers may themselves display biases, influenced by what has been termed automation bias, selective adherence bias or anchoring bias. The first refers to situations where there is a propensity to favour suggestions from automated decision-making tools. Rather pessimistically, Leese notes that ‘the human reviewers lose true agency, as they can only enact what algorithmic categorizations indicate’. Selective adherence or confirmation bias refers to situations where the human in the loop holds their own, culturally and historically grown biases against certain groups of individuals, and may therefore be more or less inclined to examine an application manually depending on whether the algorithmic result confirms or is at odds with those prejudices. Anchoring bias refers to situations in which the human decision-maker relies on the first piece of evidence available to them, which tends to be the output of the automated processing. These biases create additional layers of potential indirect or intersectional discrimination, an issue only now attracting research interest in the field of algorithmic decision-making, and one that may be particularly difficult to detect due to the inherent opacity of such decision-making.
Ultimately, the boundaries between supporting and substituting decision-making can be blurry. This has a significant impact particularly on refugee status determination, which aims to establish whether an asylum applicant has a ‘well-founded fear’ of being persecuted, or faces a real risk of certain very serious harms if returned. The risk of replacing the assessment of the subjective risk faced by an applicant with an assessment of a more objective risk, based on historical data and past decision-making in relation to other asylum seekers, thus redrafting the very definition of a refugee, is apparent.
Unfairness as Secrecy, Opacity and Deception
Hayes notes the reliance of border and migration authorities “on broad enabling statutes”, such as in the case of the dialect recognition system deployed in Germany (DIAS). Furthermore, Stewart and Curtin stress the cloak of secrecy surrounding the very existence of the technology and/or the inner workings of already opaque technologies. For example, in the context of EU large-scale IT systems, the relevant EU legislation deliberately refers to ‘algorithms’ either to avoid the loaded term ‘AI’ or to leave this issue open for future determination. It is only through a (non-publicly available) report by eu-LISA that we learn that one of the interoperability components, the Common Repository for Reporting and Statistics (CRRS), can use AI tools to enhance the identification of risks for specific groups of travellers by detecting patterns or a set of common characteristics from the analysis of historical data in the CRRS related to security, irregular migration and high epidemic risks (for an analysis see here). Similarly, as regards the UK visa streaming tool, the Home Office deliberately sealed the ‘black box’, preserving the opacity of the algorithms and the subsequent decision-making by choosing to withdraw the algorithm rather than disclose the inner workings of the streaming tool (see here).
Regrettably, as highlighted by Stewart and Curtin, the AI Act solidifies the exceptionality of immigration law by crafting various exceptions to transparency. According to its Article 49, whereas most registrations will be publicly available to ensure transparency, high-risk AI systems in migration, asylum and border control management will be registered in a ‘secure non-public section of the EU database’, accessible only to the Commission and national authorities. Furthermore, central information, such as the training data used or a summary of the fundamental rights impact assessment, need not be entered at all (Article 49(4)). These exceptions apply equally to AI systems for law enforcement and for migration, which is reflective of a highly securitised approach fusing the two fields. This fusion entails double standards for third-country nationals, legitimising opacity and leaving them with vague – if any – knowledge of how decisions are made. Such knowledge is a pre-requisite for meaningfully contesting a potentially discriminatory or fundamental rights-violating outcome. The Digital Omnibus package further obscures transparency: a revision of Article 6 of the AI Act allows providers to self-assess their systems as non-high-risk under specific conditions, while a targeted revision of Article 6(4) removes the obligation for providers to register such AI systems in the EU database for high-risk AI systems.
In light of the above, how can fairness be promoted, as mandated by Recital 27 of the AI Act, when the law itself instils opacity? As long as the understanding of immigration and law enforcement as two heavily intertwined domains is sustained, secrecy will be treated as mandated, obscuring accountability and the right to know and to access information.
Loopholes in Promoting Procedural Fairness
This brings me to my next point on the limits of procedural fairness. Hoffman’s blog reconceptualises the notion of the administrative file, calling for increased safeguards: a file must list or refer to all information that was actually used or could have a real impact on the decision, and must provide interfaces for human judgment so that algorithms can be questioned, their outcomes explained and, where necessary, their conclusions overturned. The challenges begin when the law itself does not preserve these guarantees. For example, as argued by Karaiskou and myself in a forthcoming article with the Common Market Law Review, in the case of the European Travel Information and Authorisation System (ETIAS) and the Visa Information System (VIS), the form that an applicant receives with the final decision may not provide them with sufficient information to seek judicial review. The AI Act follows a similar approach, delimiting the protective safeguards of the right to obtain an explanation under Article 86(1), as noted by Stewart and Curtin.
Importantly, blurring the boundaries between migration and law enforcement also has a spillover impact on procedural safeguards. In Ligue des droits humains, the CJEU opined that the court responsible for reviewing the legality of the decision adopted by the competent authorities as well as, except in the case of threats to State security, the persons concerned themselves must have had an opportunity to examine ‘both all the grounds and the evidence on the basis of which the decision was taken’. A broad interpretation of state security, as stressed also by Hayes, would mean that an individual may not receive meaningful information enabling them to assess how a decision concerning them was taken, thus limiting their avenues for contesting decisions that are potentially discriminatory or otherwise non-compliant with fundamental rights. However, a definition of the concept currently remains elusive; in various judgments, the CJEU has interpreted ‘public security’ narrowly (e.g. in relation to free movement, see Tsakouridis). As noted by Thym, the standard rule is that the ‘public security’ exception has, to start with, a parallel meaning for Union citizens and third-country nationals, before assessing whether, and if so to what extent, the legislative and constitutional context supports differentiation. Furthermore, in the field of law enforcement, the Luxembourg Court has also distinguished between national and public security. However, the precise contours of state security remain unclarified.
Fairness, Input and Output Legitimacy
Welfens’ contribution brings to the fore another dimension of migration management, namely the increasing recourse to private actors, such as external service providers for the processing of visa applications or tech companies for the design and testing of new technological tools. Privatisation constitutes a major challenge for the rule of law, as it diffuses responsibility and adds to the secrecy mentioned earlier. Her blog also highlights the need for more input legitimacy, in the form of fair, representative and participatory processes in which the voices of affected individuals and civil society actors are given greater consideration. This approach could also start repairing the eroded trust between states and foreigners, increase external oversight, and overall promote transparency and accountability.
In turn, Dražanová’s blog relates more to the concept of output legitimacy, which measures public acceptance of a system based on its effective performance and ability to deliver desirable outcomes. She demonstrates that there is no universal agreement that the turn towards automation increases fairness through more objective and standardised decision-making. Her findings place an asterisk next to one of the key arguments underpinning migration policy-making: that technological tools, perceived as neutral, increase efficiency and thus fairness. The lack of universal perceptions of whether, for example, a biometric check is fair and who is fairly targeted serves as a reminder that output legitimacy is not a given. Measuring output legitimacy in this context is an ongoing process, shaped by continuing glitches, errors and unfair outputs. It also necessitates ongoing supervision by the relevant national and EU authorities and bodies, which, as argued elsewhere, must take place on the ground and as a continuous process following data throughout their life cycle, from initial collection to decision-making and their use as statistical data to develop algorithms.
Conclusion
To reverse systemic unfairness, future research should aim to continuously monitor and study the underlying technologies employed (automation, machine learning, etc.), as these will dictate the applicable legal framework (EU data protection law, AI Act, etc.) and the means of contestation. In this respect, the amendments in the Digital Omnibus Package to the legal regimes underpinning both domains are notable. Also in view of the AI Act’s limitations regarding transparency, this in itself constitutes a challenging task. The FRA has just published a study aiming to shed light on high-risk AI systems, which will help in correctly evaluating new technologies employed, including in the field of migration. Of particular interest is also the role (and potential liability) of the private actors involved in developing these tools. Furthermore, overseeing how oversight takes place is of utmost importance. To unveil discriminatory treatment and other violations of fundamental rights, national and supranational authorities must “get their hands dirty”, and research should assist in this quest. To do so, and in view of the increasing reliance on algorithmic tools, researchers should concentrate on the lived experiences of the persons affected by modern technologies and the practical impact these have on their lives, as well as actively support legal avenues of contestation before supervisory authorities and national and supranational bodies.