Technology Multiplies Secrecy
The Transparency Gap in EU Migration and Asylum Governance
It is now widely acknowledged that new technologies, such as artificial intelligence (AI), are becoming integral to migration and asylum governance. A substantial body of research has emerged that seeks to shed light on how the use of various new technologies affects the individual rights of migrants and refugees. Yet the use of new technologies, including AI, in these areas remains “surrounded by secrecy” (Curtin and Fia, 2025). Academics and civil society actors researching these technologies often face significant barriers in accessing information, as illustrated, for example, by the case of ETIAS. This secrecy not only restricts public access to information but also pervades individual procedures: research indicates that affected persons are not (at least not systematically) informed about the use of technologies in decision-making processes (see e.g. here and here).
Secrecy, in essence, refers to the concealment of information by its holders. Secrecy is intentional and (usually) legal, in the sense that “it is grounded on a plethora of legal frameworks to keep information private” (Curtin and Fia, 2025). Secrecy in the context of new technologies can operate on different levels: it can concern the very existence of the technology and/or the inner workings of already opaque technologies (see on the different forms of algorithmic opacity Burrell). Secrecy can also be selective. Ozkul observes that “[t]he way asylum authorities choose to share their use of new technologies with the public depends on the specific practice.”
Several factors may explain this intensification of secrecy. It can shield administrations from scrutiny and criticism. In the field of migration and asylum, secrecy is often legitimised by discourses of risk and insecurity. Researchers and journalists repeatedly report difficulties in obtaining information. Secrecy thus stands in direct opposition to transparency, which refers to the accessibility and visibility of information. Transparency is not only tied to the democratic right to know and access information but is also a prerequisite for ensuring public accountability.
This contribution argues, using the AI Act as an illustration, that migration and asylum governance suffers from a culture of information deficit, which is exacerbated by the increasing use of modern technology. It therefore advocates a shift towards a culture of transparency, which is necessary to ensure both accountability and fairness.
Migration technologies under the AI Act: A symptom of a wider transparency problem
The importance of transparency is explicitly acknowledged in the AI Act. Recital 60 notes that AI systems used in migration, asylum and border control management “affect persons who are often in a particularly vulnerable position and who are dependent on the outcome of the actions of the competent public authorities”. Recital 27 associates transparency, inter alia, with “traceability and explainability”, as well as with the possibility for affected persons to be informed about their rights. The AI Act is a welcome piece of legislation: it constitutes the first comprehensive regulation of AI explicitly aimed at advancing fundamental rights. While it should be lauded for this achievement, we argue that the Regulation simultaneously maintains and even facilitates secrecy in migration and asylum contexts. This occurs primarily for two reasons: first, the exclusion of high-risk AI systems in certain areas, including migration, from the public section of the new EU database; and second, the legal ambiguity surrounding key procedural protections in the Act.
Migration as a regime of exception
Under the AI Act, public authorities deploying certain high-risk AI systems listed in Annex III must register their use in the new EU database, which will be publicly available (see recital 131). However, Article 49(4) AI Act introduces a specific regime for high-risk AI systems in the areas of law enforcement, migration, asylum and border control management, which are to be registered in a “secure non-public section of the EU database”. This exception has been criticised for reinforcing the “existing lack of transparency” about the use of technologies in these fields. As argued elsewhere, it removes an important tool for civil society to obtain information. Instead of a general derogation, a more transparency-friendly alternative would have been to allow information to be withheld from the public section on a case-by-case basis where disclosure would affect a public interest. The introduction of a specific regulatory framework for AI systems in the domain of migration, asylum and border control management is indicative of the current highly restrictive approach, under which disclosure of information in these fields is by default presumed to pose a risk to the public interest.
Uncertain procedural transparency requirements
The AI Act also introduces important procedural safeguards which could promote transparency. Article 26(11) AI Act requires deployers of high-risk AI systems listed in Annex III that “make decisions or assist in making decisions related to natural persons” to inform those individuals that they are subject to a high-risk AI system. According to recital 93 AI Act, this information should include “the intended purpose and the type of decisions it makes”. In addition, the AI Act introduces a right to obtain an explanation for certain AI systems. In accordance with Article 86(1), the right to an explanation applies where a decision is taken on the basis of the output from a high-risk AI system listed in Annex III (with the exception of critical infrastructure) and that decision “produces legal effects or similarly significantly affects that person in a way that they consider to have an adverse impact on their health, safety or fundamental rights”. Deployers should inform individuals about the right to an explanation (see recital 93 AI Act).
However, the scope of these provisions remains uncertain, potentially undermining their protective value. The right to be informed applies, in accordance with Article 26(11), to high-risk AI systems that “make decisions or assist in making decisions”. How “assist in making decisions” is interpreted will be crucial and, at this stage, remains open. An argument for a broad reading can be derived from the French and German versions of the text, which refer more broadly to systems which facilitate (facilitent) or support (Unterstützung leisten) decisions. The scope of the right to an explanation in the AI Act is even more puzzling (see for a detailed analysis of this provision here). Article 86(1) only applies to decisions which are adopted “on the basis of the output from a high-risk AI system listed in Annex III”. Recital 171 limits this further to cases where the deployer’s decision is “based mainly on the output” of certain high-risk AI systems. This wording implies that when an AI system contributes significantly, but not predominantly, to a decision, the individual is not entitled to an explanation. This arguably excludes many AI systems used to derive evidence, which are unlikely to meet this threshold.
Reconfiguring the balance between secrecy and transparency: A (not so) radical plea for a culture of transparency
As new technologies increasingly assist decisions in the areas of migration and asylum, transparency regarding their use is imperative. As Stewart’s recently finalised PhD research explores, the lack of transparency around the use of AI systems in asylum decisions already operates within a context of informational and structural asymmetries, which further constrain the possibility for individuals to participate in such procedures and thus affect procedural fairness. Transparency is, of course, not absolute and can be subject to certain limitations in the name of the public interest or the interests of third parties, including data protection. Nor is it a binary concept: it can be supported through a variety of mechanisms simultaneously (see for a review of instruments to promote algorithmic transparency here).
Our blog contribution (and the wider research underpinning it) calls for a reversal of the prevailing logic in migration and asylum governance. Currently, secrecy tends to be the default, and transparency the exception. The blanket exemption from the public section of the AI Act’s database illustrates this dynamic. The main claim advanced here is that while procedural transparency at the individual level (e.g. through the right to be informed) is essential for fair procedures, it must be complemented by broader public transparency to ensure accountability. This claim is supported by existing research, which identifies civil society organisations in the areas of migration and asylum as “the primary actors contesting the use of automated systems”. The leading role of civil society is unsurprising given that individuals seeking international protection, and more generally people on the move, find themselves quasi-existentially outside of the bureaucratic and legal systems. They represent the “other”. The use of new technologies adds yet another layer to already existing informational and power asymmetries. Palmiotto and Ozkul’s research also highlights the challenges civil society faces in contesting the use of new technologies, including difficulties in obtaining information and building strategic litigation.

Against these very real obstacles, the AI Act’s specific regime for migration and asylum technologies is particularly unfair, as it reinforces existing patterns of secrecy and limits possibilities for oversight. Given the systemic obstacles individuals and civil society encounter in challenging these systems, there is a real risk that the use of new technologies in these fields, which are highly sensitive for the individuals concerned, will escape effective, or indeed any, scrutiny. As one of us argued elsewhere, in the absence of “comprehensive and accurate information on the actor’s actions, it becomes very problematic for the forum to overcome informational asymmetries and to be able to hold the actor to account for those actions in a meaningful manner”. After all, how can one contest something whose existence is unknown? Transparency is not merely a concern for investigative journalists and researchers; it is, above all, essential for the fair treatment of the individuals most affected by these technologies.