Digital Visas Risk Deepening Discriminatory Borders
If the words “discrimination” and “visas” resonate, you may have waited months for a Schengen visa, been left without your passport, and absorbed the costs of fees and long trips to file in person. Many applicants, especially from African countries, face markedly higher refusal rates, as the LAGO collective shows. Some give up after a refusal for fear it will trigger more. For self-employed creatives or consultants, work prospects hinge on officials’ reading of their bank balance and perceived “overstay” risk; for researchers and scientists, visas condition access to in-person scholarly exchange. By contrast, those with a “good passport”, like one of the authors of this blog post, move with ease, rarely needing visas. Global mobility is unequally distributed, privileging the already privileged. Visa lists reflect historical patterns of inclusion and exclusion (den Heijer 2018) that are deeply racialized (Achiume 2022). Amnesty International reports that, for people of African descent, racism in migration is both overt and systemic: it appears in racial profiling and exploitative tied-visa regimes, and it is reinforced by structural barriers such as excessive fees and onerous documentation requirements. Ostensibly neutral distinctions based on nationality or immigration status often serve as pretexts for racially discriminatory practices that disproportionately burden people of African descent without justification, contrary to international human rights law.
Risk assessment sits at the center of short-stay visa decisions. Unlike in asylum procedures, the applicant is presumed to pose a “migratory risk” and must prove both the will and the means to travel and not to overstay. This inversion is perverse (Megret 2024), yet applicants must comply. Ethnographic work documents stereotyping and indeterminacy in street-level practice (Infantino 2019). EU law unusually provides for judicial review of refusals, and Article 47 of the Charter reinforces that right. In practice, however, complex multi-actor procedures obstruct effective protection. Few Schengen refusals are appealed; the sparse Luxembourg case law, including Koushkaki, El Hassani, and Vethanayagam, confirms that review remains exceptional.
Against this background, the EU plan to digitize visas may appear welcome. “Bots at the Gate” seems to promise accessibility, efficiency, and consistency, yet E. Tendayi Achiume’s reports on digital borders warn that new technologies can intensify racial discrimination. The EU’s digital overhaul, centered on the forthcoming Visa Application Portal (VAP), is part of a broader shift to automation and AI in border governance (Ozkul 2023). Enabled by Regulation (EU) 2019/817 on interoperability, it will consolidate a one-stop application system, link data across EU databases, and pilot tools such as eu-LISA’s “VisaChat.” Given what we know about algorithmic bias, and given three features of this overhaul in particular (one-stop consolidation, interoperability, and a drift toward automation), we argue for doctrinal clarifications and institutional reforms to prevent the deepening of discriminatory borders.
Discriminatory Algorithms – Data, Form and Scale
Studies show that AI systems in the public sector often put racialized people at a grave disadvantage. Pioneering work in the field by many scholars, including Joy Buolamwini, Timnit Gebru, Abeba Birhane, and Virginia Eubanks, demonstrates the particular propensity of algorithmic systems to discriminate at scale.
This discriminatory propensity is structural rather than accidental. First, models learn from training data that encode historical and institutional biases, so they reproduce and often amplify past patterns of exclusion at scale. This is documented across domains, from subgroup error disparities in face analysis (Buolamwini and Gebru 2018) to the broader problem of “big data’s disparate impact” (Barocas and Selbst 2016). Second, once deployed in governance, these systems tend to create self-reinforcing feedback loops: when an area or group is tagged as higher risk, additional enforcement generates more recorded incidents, which in turn confirm and intensify the original signal (Lum and Isaac 2016). Third, algorithmic prediction proceeds through probabilistic profiling that relies on correlated proxies. Where a proxy is tightly linked to a protected characteristic and used in decision-making, recent work argues this ought to be understood as direct discrimination on prohibited grounds (Adams-Prassl et al. 2022; Weerts et al. 2024). Fourth, opacity inhibits reason-giving and accountability. Technical and organisational forms of opacity limit the ability of affected persons to identify and prove discrimination (Burrell 2016). This concern is highlighted, for example, in the German Federal Anti-Discrimination Agency’s 2023 legal opinion on ADM systems and the AGG.
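To make the feedback-loop mechanism concrete, the following is a purely illustrative Python sketch of our own construction, not a model of any deployed system: two groups behave identically, but a single round of extra scrutiny of one group produces more recorded “hits”, those hits raise that group’s risk score, and because future scrutiny follows the score, the one-off disparity persists and compounds.

```python
import random

# Hypothetical illustration, not a model of any real system: two groups with
# identical underlying behaviour, where group "B" is checked twice as often
# in a single initial round. Because recorded "hits" feed back into the risk
# score, and scrutiny is allocated according to that score, the one-off
# disparity persists and grows.
random.seed(0)

score = {"A": 10.0, "B": 10.0}   # identical starting risk scores
true_hit_rate = 0.05             # identical underlying behaviour
checks_per_round = 1000

for round_no in range(10):
    total = sum(score.values())
    for group in score:
        share = score[group] / total
        # Round 0 stands in for one biased officer or one biased rule.
        extra = 2.0 if (round_no == 0 and group == "B") else 1.0
        checks = int(checks_per_round * share * extra)
        hits = sum(random.random() < true_hit_rate for _ in range(checks))
        score[group] += hits     # recorded hits reinforce the score
    print(round_no, {g: round(s, 1) for g, s in score.items()})
```

The numbers are arbitrary; the point is structural: once recorded outcomes both drive and result from enforcement attention, the data can no longer distinguish behaviour from scrutiny.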
Automating Borders – Amplifying Discrimination
In addition to the discriminatory propensities identified above, three design features of the current move to digitize visas present particular risks: the first is the one-stop shop, the second “interoperability”, and the third de facto automation of decision-making.
One-Stop Shop
The EU presents the creation of a one-stop shop for Schengen visas as self-evidently a good thing: “Harmonising and unifying visa application procedures within the Schengen area will help to avoid so called ‘visa shopping’ by applicants who may be tempted to lodge an application with a Schengen country that offers faster visa application processing than […]”. Yet in other contexts, regulatory competition is treated as a desirable feature of a diverse EU. Not, it appears, for travellers from outside the EU.
Concretely, at present, if a racist or sexist consular official refuses visas on a discriminatory basis, applicants can conceivably make that known through their own networks and try to obtain Schengen visas from another consulate. This local form of ‘visa shopping’ may be viewed as rights-protective. Under Regulation (EC) No 767/2008, application data are retained in the VIS for five years. With an automated one-stop shop, not only is the applicant’s ability to avoid the racist officer gone; that officer’s past racially biased decisions also become part of the historical data from which training sets and profiles are built, framing, at scale, the risk assessments that will determine future decisions.
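A minimal sketch, again entirely hypothetical, of how this happens: historical refusal decisions become training labels, and even where nationality is removed as a feature, a correlated proxy (here, the consulate where the file was lodged) lets a model reproduce the biased officer’s pattern.

```python
from collections import defaultdict

# Hypothetical sketch (our construction, not any real dataset or system):
# past decisions by a biased officer become the training labels for a
# future model. Nationality is not a feature, but a correlated proxy
# (the consulate where the file was lodged) carries the bias forward.
history = []
for i in range(1000):
    consulate = "X" if i % 2 == 0 else "Y"   # proxy correlated with nationality
    # Everyone is equally eligible, but the officer at consulate X
    # refuses roughly 60% of files regardless.
    refused = (consulate == "X" and i % 5 < 3)
    history.append((consulate, refused))

# A naive "model": predict the majority past outcome for each proxy value.
outcomes = defaultdict(list)
for consulate, refused in history:
    outcomes[consulate].append(refused)

learned = {c: sum(v) / len(v) >= 0.5 for c, v in outcomes.items()}
print({c: round(sum(v) / len(v), 2) for c, v in outcomes.items()})  # past refusal rates
print("learned decision per consulate:", learned)                   # X -> refuse, Y -> grant
```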
Legal and Technical Infrastructure: Interoperability
The technical term “interoperability” has been an EU buzzword for many years. The EU has created several purpose-limited databases in the migration field. The interoperability framework of Regulation (EU) 2019/817 (along with its twin, Regulation (EU) 2019/818, for law enforcement databases) establishes an automated framework allowing EU information systems in the field of borders, visas, asylum and crime to “talk” to each other and share data. Traditionally, data on third-country nationals was siloed: the Visa Information System (VIS) recorded visa histories, Eurodac held asylum fingerprints, SIS (the Schengen Information System) listed alerts, and so on. Interoperability interlinks these silos to allow cross-matches. For visa processing, this means that a consular officer checking an application can, with a single query, retrieve any relevant data on the applicant across all EU systems. The Interoperability Regulation explicitly aims to “improve the implementation of the common visa policy” by closing information gaps; in doing so, it legally enables big-data-driven vetting (Wahl 2019). The Regulation introduces safeguards, providing that visa officers cannot refuse applications solely on the basis of an automated match. However, we share the concern that interoperability creates a mega-database of third-country nationals, heightening the risks of profiling and mission creep.
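To illustrate what a “single query” means in practice, here is a deliberately simplified mock in Python; the record structures and identifier are invented for illustration and bear no relation to the actual technical architecture.

```python
# Illustrative mock only: invented record structures and identifier, not the
# actual interoperability architecture. The point is what a "single query"
# across formerly siloed systems looks like for one applicant.
VIS = {"A123": {"prior_visas": 2, "prior_refusals": 1}}
SIS = {"A123": {"alert": None}}
EURODAC = {"A123": {"fingerprint_match": False}}

def single_query(applicant_id: str) -> dict:
    """Return whatever every linked system holds on one applicant."""
    return {
        "VIS": VIS.get(applicant_id),
        "SIS": SIS.get(applicant_id),
        "Eurodac": EURODAC.get(applicant_id),
    }

print(single_query("A123"))
# Safeguard in the Regulation: an automated match alone cannot ground a
# refusal; any hit would still have to be assessed individually by a human
# officer rather than translated directly into a decision.
```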
De Facto Automation
Whether a decision is automated as a matter of law and practice is a complex question. In the AFAR project, Francesca Palmiotto has provided crucial guidance on that question (Palmiotto 2024). In the digitization of visas, there appears to be a drift towards de facto automation. We have seen this already with ETIAS: the Parliament’s study and subsequent analyses describe a profiling algorithm that matches inputs against predefined risk rules, with significant rule-of-law implications (Derave et al. 2022; Csatlos 2024; Velasco Rico and Laukyte 2024). Evidence from migration practice and behavioral studies shows that when AI generates recommendations, human decision-makers seldom override them (Alon-Barkat and Busuioc 2023). In controlled studies, people frequently over-rely on AI by default, and only cognitive forcing interventions reduce this tendency (Buçinca et al. 2021). Against this backdrop, the risk of de facto automation is clear, and the system interface needs to be designed to avoid it. While fully automated positive decisions may bring many benefits, the patterns of inclusion and exclusion matter. And rejections require human reasons.
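One way to read the design implication is through a “cognitive forcing” pattern of the kind studied by Buçinca et al.; the sketch below is our own illustration, not a description of any planned interface. It withholds the algorithmic recommendation until the officer has committed to an independent assessment, and it refuses to accept a rejection without individually stated reasons.

```python
from dataclasses import dataclass, field

@dataclass
class CaseReview:
    """One possible 'cognitive forcing' pattern: the recommendation stays
    hidden until the officer has committed to an independent assessment."""
    case_id: str
    ai_recommendation: str                    # e.g. "refuse" or "grant"
    officer_assessment: str | None = None
    officer_reasons: list[str] = field(default_factory=list)

    def record_assessment(self, assessment: str, reasons: list[str]) -> None:
        if assessment == "refuse" and not reasons:
            raise ValueError("A refusal requires individually stated reasons.")
        self.officer_assessment = assessment
        self.officer_reasons = reasons

    def reveal_recommendation(self) -> str:
        if self.officer_assessment is None:
            raise RuntimeError("Record your own assessment before viewing the AI output.")
        return self.ai_recommendation

review = CaseReview(case_id="A123", ai_recommendation="refuse")
review.record_assessment("grant", reasons=["credible ties and prior travel history"])
print(review.reveal_recommendation())   # only now is the recommendation shown
```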
Legal & Infrastructural Responses
In our forthcoming work, we argue for a range of legal and infrastructural responses – doctrinal, evidential and design-based.
As a matter of legal doctrine, we draw on insights from the Discriminatory Borders project to clarify when disadvantaging groups of nationals may amount to race discrimination. The process of unmasking racialized migration control practices has begun, both in law and in practice. Although the barriers to litigating are many (Ozkul and Palmiotto 2025), we have also gathered, in the Tech Litigation Database, many examples of successful equality litigation against automated systems.
Ex post litigation is always a poor substitute for equality by design. Here, again, the EU AI Act’s approach to human rights impact assessment (HRIA) and to regulatory sandboxes could, under the right conditions, provide for better systems. We share the widespread concern that these practices can all too easily become empty self-assessment by powerful actors. In our view, however, HRIA grounded in the Council of Europe’s HUDERIA methodology holds real potential (Đuković 2025). Guidelines emerging from the UN High Commissioner for Human Rights also place great emphasis on ex ante HRIA.
The deep challenge of equality by design is that it is no mere technical matter. Underlying equality law commitments is a contextual assessment of the impact of distributive systems on disadvantaged groups. This goes beyond the standard approach to “debiasing” in computer science, as the impactful contribution of Sandra Wachter and her team has demonstrated (Wachter et al. 2021). Applying these insights to discriminatory borders requires even greater effort; ours is ongoing.
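For contrast, the following sketch shows the kind of aggregate disparity measure a standard algorithmic “bias audit” might report, with entirely invented numbers; our point, following Wachter and colleagues, is that such a statistic can only be a starting point for contextual legal assessment, not a substitute for it.

```python
# Illustrative only: invented groups and numbers, showing the kind of
# aggregate statistic a standard "bias audit" might report as a first step.
decisions = (
    [("group_1", True)] * 30 + [("group_1", False)] * 70
    + [("group_2", True)] * 55 + [("group_2", False)] * 45
)

def refusal_rate(group: str) -> float:
    outcomes = [refused for g, refused in decisions if g == group]
    return sum(outcomes) / len(outcomes)

r1, r2 = refusal_rate("group_1"), refusal_rate("group_2")
print(f"refusal rates: {r1:.2f} vs {r2:.2f}; disparity: {r2 - r1:.2f}")
```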
Conclusion: The politics of equality of mobility opportunities
Human mobility undeniably supports human flourishing – enabling interaction across borders, economic activity, and creative, cultural, friendship and family life. At present, this freedom is distributed highly unequally. Envisaging global equality of mobility opportunities entails treating mobility not as a right, but as a valuable pursuit that should be distributed as equally as practicable. For too long, migration control practices have been shielded from equality law’s scrutiny and subjected to sovereigntist logics. The move to automate, without a decisive doctrinal and technical shift, risks exponentially amplifying discriminatory borders.