28 April 2026

Invisible by Design

How the EU Asylum AI Systems Evade the Regulatory Gaze

The EU AI Act will fail to adequately protect trans asylum seekers because it regulates system outputs while the harm lies in the binary assumptions that make their exclusion appear technically compliant. When the Act’s high-risk regime becomes fully applicable on 2 August 2026, AI systems for automated decision-making in migration and asylum processes will need to meet stricter compliance requirements. Risk assessments. Conformity checks. Human oversight. Transparency obligations. They will all become mandatory for tools used to profile asylum seekers, assess credibility, or feed into Eurodac workflows. This is an advance in regulatory and constitutional accountability under the EU Charter. It nonetheless leaves untouched the administrative architecture the Act takes for granted.

Imagine a trans woman arriving at a European border. She’s fleeing persecution. Her documents list a sex marker that doesn’t match her lived identity. There’s no legal gender recognition available to her in her country of origin. When her data enters the system, she doesn’t appear as a trans woman seeking protection. She appears as a data anomaly, a verification failure, a possible fraud flag. She may well receive protection, as trans asylum seekers do get approved. But she’ll have cleared verification checkpoints that couldn’t recognize her. This is not a bug in the system. It is the system working as designed: the architecture processing her claim was never built to see her. And in August 2026, the EU will likely certify it as compliant.

The problem that Article 10 cannot see

Article 10 of the EU AI Act obliges providers to ensure that training, validation and testing datasets are “relevant”, “sufficiently representative”, and, to the best extent possible, “free of errors and complete”. But everything turns on what counts as the system’s “intended purpose”. If that purpose assumes a binary, document-driven model of identity – one already embedded in European administrative practice – then excluding trans asylum seekers doesn’t register as a failure. By those internal standards, the dataset works fine.

The binary categorization problem predates AI (Spade, 2015). Administrative logic has long presumed that applicants fit within a rigid sex category and match their identity documents (Currah & Mulqueen, 2011). So, what changes with AI? Scale and automation. The caseworker’s judgement, with its ability to contextualize prima facie anomalies, has been replaced by automated tools that computationally verify thousands of profiles through opaque internal processes. AI systems do not create discrimination; they inherit and automate it. And after August 2026, the EU’s compliance seal will legitimize that harm.

AI systems don’t just reflect the world; they actively shape what counts as a recognizable subject, stabilizing the idea of cisgender embodiment as the default condition. Legally, this matters because Article 10 evaluates datasets against their intended purpose. Conceptually, this is cisproduction. In computational systems, cisproduction operates through three mechanisms: exclusion, non-recognition, and the rendering of the subject as unintelligible.

Exclusion comes first. The data in these systems is drawn from administrative archives that encode sex as binary, biometrically verifiable, and immutable. Trans asylum seekers from countries that oppose legal gender recognition (LGR) often arrive with documents that misrecognize them, and, in some cases, asylum databases cannot register their self-identified name or gender as the official entry. They are absent from training data because data collection systems were never made to record them properly.

Non-recognition follows. Even where data is available, the categories lack representational variety: sex is listed as a binary, gender does not always appear as an option, and identity documents are assumed to settle the question. A trans woman, in that schema, shows up as a mismatched record requiring reconciliation, or as a signal that something’s wrong.

Finally, the system doesn’t just fail to recognize her; it produces unintelligibility. The applicant is translated into anomalies, flags, inconsistencies: pieces of data that no longer resemble a person with a protectable claim. From the system’s perspective, nothing has gone wrong. That is precisely the issue. The Regulation doesn’t ask whether the intended purpose itself encodes discriminatory architecture, only whether the data serves that purpose well. Article 10 compliance will certify as adequate the very exclusions that produce the harm.

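The mechanism can be made concrete with a minimal sketch. Everything here is hypothetical: the field names, the flag labels, and the rule itself are illustrative assumptions, not any real Eurodac or border-system logic. The point is only that a binary schema offers no outcome in which a trans applicant can appear as anything other than an anomaly.

```python
from dataclasses import dataclass

# Hypothetical sketch of a binary, document-driven verification rule.
# Field names and flag labels are illustrative assumptions, not any
# real asylum or Eurodac system.

VALID_SEX = {"M", "F"}  # the schema admits no other value


@dataclass
class ApplicantRecord:
    doc_sex: str         # sex marker on the identity document
    registered_sex: str  # marker recorded at registration


def verify(record: ApplicantRecord) -> str:
    """Return one of the only outcomes the schema can express."""
    if record.doc_sex not in VALID_SEX or record.registered_sex not in VALID_SEX:
        return "INVALID_DATA"    # an identity outside the binary cannot be recorded
    if record.doc_sex != record.registered_sex:
        return "MISMATCH_FLAG"   # routed to anomaly or fraud review
    return "VERIFIED"


# A trans woman whose document still carries "M" is flagged, even though
# every field is "correct" by the schema's own standards.
print(verify(ApplicantRecord(doc_sex="M", registered_sex="F")))  # MISMATCH_FLAG
```

By the system’s internal metric, the function behaves exactly as specified; the exclusion lives in the schema, not in any output error.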
Under EU constitutional doctrine (Article 52(1) of the EU Charter), limitations on fundamental rights must satisfy proportionality: they must be necessary, employ the least restrictive means available, and impose burdens that are not disproportionate to legitimate aims. But Article 10 makes compliance relative to an “intended purpose”, one that already assumes cisgender subjects as its baseline. Binary sex classification never faces scrutiny as a rights limitation because it appears as a purpose requirement. Trans asylum seekers don’t register as individuals bearing disproportionate burdens but as falling outside the purpose’s scope entirely, which places their rights beyond meaningful assessment under the EU proportionality test.

Why Article 27 falls short

Fundamental-rights impact assessments under Article 27 of the EU AI Act run into the same structural limit. They measure performance against an expected baseline. But that baseline already assumes the kind of subject the system was built to recognize, and trans asylum claimants cannot fit that model subject. The architecture that excludes them is the same one that sets the metric.

This is the configuration of the cisnormative apparatus. It is not a single system or one institution but something more distributed: the relational network of legal, technical, and administrative practices, agents, and discourses which, taken together, stabilize cisgender embodiment as the default. The cisnormative apparatus functions as what Foucault called a dispositif, operating across border infrastructures, AI systems, legal categories, databases, and credibility assessments. Together they produce a baseline, what might be called the cisdefault, against which all other identities are measured and found deviant. In European asylum governance, the apparatus does not simply reflect pre-existing bias; it actively constitutes who can appear as a legible subject. Article 27 assessments will return clean results because the harm occurs at a level the assessment cannot register. The apparatus that produces the exclusion is the apparatus that validates the result.

How discrimination becomes output

These systems do not merely operate; their determinations must also be accepted and validated. Cisproduction alone does not explain why these systems persist even when their exclusions are documented. The outputs must be recognized as legitimate, neutral, or authoritative, through something that can be framed as cisvalidation. Once the system produces an output, such as a risk score or a match probability, that output circulates through expert reports or internal reviews and gets taken up as neutral. Algorithmic legal laundering is a specific form of cisvalidation: validation through apparent computational neutrality. Ideological claims about who counts as a recognizable gender/sexual identity, or what constitutes a credible persecution narrative, are catalogued, processed, and converted into the prima facie neutral outputs of a technical system. The structure mirrors financial laundering: contested input enters one end of an opaque process, often described as a black box, then emerges acceptable at the other end. What is laundered is not money but normative judgment.

Caseworkers aren’t reading ideology. They’re reading outputs that present themselves as technical findings: objective, data-driven, and thus hard to challenge. The pattern extends beyond asylum law to welfare adjudication, predictive policing, and public health governance. Algorithmic legal laundering can occur wherever a contested political claim enters a model and exits as a technical fact.

Article 47 of the Charter promises an effective remedy. But that only works if the applicant can know and challenge the reasons for an adverse decision. When reasons have been laundered through an architecture that does not register the applicant as a recognizable subject in the first place, there are no reasons left to contest. The harm is not in the decision but in the conditions that determine whether a positive decision about this person was possible at all. The analytical distinctions matter here. Algorithmic legal laundering tells us how harm is rendered invisible to legal review: by converting normative content into technical output through cisvalidation. Cisproduction tells us where harm is produced: in the material configurations of data and categories. The cisnormative dispositif tells us what coordinates these operations into a self-reinforcing system. It is a sort of catch-22: a regulatory regime that sees only the output will continue to certify as compliant the very architectures that produce exclusion.

Finding relief through a CEAS reform

If the AI Act is bound to certify this discriminatory architecture as compliant, where might redress lie? EU constitutional law offers one path: proportionality review under Article 52(1) of the EU Charter could require that binary sex classification in asylum systems satisfy strict necessity. The issue is that constitutional arguments require sophisticated legal representation, time, and resources that many asylum seekers do not have. A trans woman contesting a decision needs law that is accessible to her legal aid attorney, her clinic advocate and, in the absence thereof, to her.

A more practical approach would be a revision within the Common European Asylum System (CEAS) itself, specifically the Eurodac Regulation (EU) 2024/1358. Article 13(2) requires that Member States respect the dignity and physical integrity of the person during the biometric procedure. A protocol or guidance could be established to ensure that claimants whose physical appearance does not align with the stated sex on their identity document are not entered in the system with an “inconclusive” or similar status for additional review. Article 17 currently mandates the collection of “sex” (Article 17(1)(h)) as well as a copy of an identity or travel document (Article 17(1)(j)). That Article could be amended to allow recording a gender identity distinct from the assigned sex and, where applicable, to treat a sex marker that does not match the identity document as an acceptable recorded status rather than a verification failure. The same amendment would extend to Articles 22 and 23, which cover third-country nationals. Neither the Charter nor the CEAS path offers an easy remedy. But they reveal how the AI Act missed an opportunity to address the underlying issue of cisnormative assumptions embedded in regulatory systems.
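The proposed amendment can be sketched as a change to the record schema. This is a hypothetical data-model illustration only: Eurodac’s actual data structures are not public code, and every field name below (assigned_sex, gender_identity, document_mismatch_accepted) is an assumption introduced for the example.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of the proposed Eurodac amendment as a schema change.
# All field names are illustrative assumptions, not the Regulation's terms.


@dataclass
class AmendedRecord:
    assigned_sex: str                         # marker on the identity document
    gender_identity: Optional[str] = None     # recordable when distinct (proposed)
    document_mismatch_accepted: bool = False  # mismatch as valid status, not failure


def register_current(sex: str, document_sex: str):
    # Sketch of the current logic: a mismatch has no representable
    # outcome other than failure.
    if sex != document_sex:
        return "VERIFICATION_FAILURE"
    return {"sex": sex}


def register_amended(document_sex: str, recorded_sex: str,
                     gender_identity: Optional[str] = None) -> AmendedRecord:
    # Under the amendment, a mismatch is stored as an accepted status
    # and a distinct gender identity can be recorded alongside it.
    return AmendedRecord(
        assigned_sex=document_sex,
        gender_identity=gender_identity,
        document_mismatch_accepted=(document_sex != recorded_sex),
    )
```

The design point is that the change is additive: the existing “sex” field survives, but a mismatch stops being the terminal state of the registration workflow.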

What August 2026 actually tests

August 2026 is often framed as a key application date. It also marks the moment when certain forms of exclusion risk being formally endorsed. If the EU AI Act is to do meaningful work for the populations who most need it, it must treat cisproduction as a regulatory object, not just an abstract concern. And it must recognize that some harms are not aberrations of automated systems working badly. They are predictable consequences of automated systems working as designed. The transgender asylum seeker is the most legible subject of this argument, but not its only subject. Trafficking survivors, racialized claimants, and anyone whose existence cuts across the categories built into the asylum infrastructure stand in a similarly precarious structural position.

What is at stake in August 2026 is not whether the EU regulates AI in asylum. It is whether European law is willing to see the architecture as something it is permitted to reach. The current regulatory framework treats outputs as the object of scrutiny while leaving untouched the infrastructures that determine whose claims can register as claims at all. The Regulation’s documentation requirements will mandate additional transparency about the design choices that were made. They just won’t require interrogating whether those choices encode structural exclusion. And because of that, systems that produce exclusion will continue to pass as compliant.


SUGGESTED CITATION  H. Alexis, William: Invisible by Design: How the EU Asylum AI Systems Evade the Regulatory Gaze, VerfBlog, 2026/4/28, https://verfassungsblog.de/invisible-by-design/.
