This article belongs to the debate » Algorithmic Fairness for Asylum Seekers and Refugees
08 December 2025

Privatised Digital Borders

Responding to the Regulatory and Accountability Challenge with Participatory Design

The AFAR project explored the role of digitisation and automation in migration and asylum governance in Europe, identifying many concerning practices and accountability deficits. A further trend exacerbates the accountability challenge: states increasingly outsource migration administration to private companies, as in the case of visa applications. While the privatisation of migration control is not in itself a new phenomenon, privatisation and digitisation in their current constellation pose new challenges. Together, these trends are transforming what used to be a bureaucratic, paper-based process into a digital-commercial infrastructure, with profound consequences for fundamental rights, transparency, and democratic oversight.

In this post, I use visa applications as a concrete example to illustrate these broader dynamics and argue that we need new methods to study them, and new regulatory responses. To preserve accountability and rights protection, we must examine how both the technology and the private actors behind visa systems are governed. I close by suggesting that participatory, co-designed technology, as explored in my project Tetramag, can offer a path toward more democratic migration infrastructures.

From paper dossiers to digital portals

Until recently, visa applications were largely analogue: bundles of documents submitted in person, examined by consular frontline clerks, and decided in a mix of administrative routine and discretionary judgment. Today, the process is being re-engineered. At the national level, several EU states have launched ambitious digitisation programmes. Germany’s Visa Digitalisation Initiative, for instance, aims to replace paper forms entirely, moving to electronic processing, digital uploads, and automated internal workflows. At the EU level, the revised Visa Code (Regulation 810/2009, as amended in 2019) and its implementing Handbook encourage harmonisation and digitisation, including online application submission, centralised databases such as the Visa Information System (VIS), and a broader digital overhaul of the visa application process. Moreover, AI-assisted and automated decision-making, as well as risk assessments, are gaining traction in digital visa systems.

Digitisation and automation promise efficiency, reduced waiting times, and data-driven coordination. Yet, as scholars and civil society critics have highlighted, digitisation also introduces new inequalities: access depends on digital literacy, reliable internet connections, and the ability to navigate complex online forms. For many applicants – for instance, those in regions with unstable connectivity and applicants with limited digital literacy or with disabilities – these systems create additional barriers rather than removing them. As recent research within the AFAR project has shown, from the viewpoint of migrants as the central “data users”, digitised migration-related administrative processes are in some instances perceived as a hurdle rather than an improvement.

The growth of private intermediaries

Parallel to digitisation runs the externalisation of visa processing to private companies. Across Europe, governments contract firms such as TLScontact, VFS Global, or BLS International to manage application intake, processing, and document transmission. Applicants often interact exclusively with these service centres rather than with state officials. Germany, for instance, delegated visa matters in 17 countries to private intermediaries in 2017; that number has since grown to 57, with more countries lined up. This delegation risks blurring accountability. Moreover, sensitive personal data, including fingerprints and financial records, are processed by entities largely outside democratic oversight and control.

Obviously, the trend of outsourcing migration control to private actors predates the rise of digitisation and AI-based solutions. This longstanding delegation already calls for careful attention to the ex ante inclusion of migrants and ex post mechanisms for accountability. What is new, however, is the growing intertwining of privatisation and digital technologies: the push towards automation and AI is not only changing how existing service providers operate but also driving further externalisation, drawing in new private actors such as tech companies. This convergence further complicates questions of transparency, accountability, and oversight, making robust safeguards for migrants more urgent than ever.

The challenge of regulating private actors and digitalisation

The different players involved and their roles are, of course, subject to regulation both on the national and the EU level. Regarding EU law, the EU Visa Code and its Handbook for Processing Visa Applications remain the central legal instruments. Article 43 explicitly allows Member States to cooperate with “external service providers”. The cooperation shall be based on a legal instrument, the requirements for which are set out in Annex X to the EU Visa Code. Article 43 para. 6 restricts the tasks that external service providers may be entrusted with, and para. 4 makes clear that the examination of applications, interviews (where appropriate), the decision on applications and the printing and affixing of visa stickers shall be carried out only by the consulate itself. Member States shall scrutinise the solvency and reliability of the company (para. 7), provide training to the external service provider (para. 10), and closely monitor the implementation of the legal instrument on which the service provider operates, inter alia by carrying out “spot checks” (para. 11). More generally, EU data protection (the GDPR) and AI law apply to both public and private actors. While these regulatory frameworks are vital, how they impact the design and working of infrastructures is an open question, one that my new project will explore.
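To make the division of labour under Article 43 concrete, the following minimal sketch encodes, as a toy Python data structure, which workflow steps para. 4 reserves to consulates and which para. 6 permits external service providers to perform. It is purely illustrative: the task names and the check function are my own assumptions, not drawn from any real system, and nothing here is legal advice.

```python
# Illustrative only: a toy encoding of the task allocation in Article 43
# of the EU Visa Code (Regulation 810/2009, as amended). Task names and
# the check function are hypothetical assumptions for this sketch.

# Tasks that Art. 43(4) reserves to the consulate itself
RESERVED_TO_CONSULATE = {
    "examine_application",
    "conduct_interview",
    "decide_on_application",
    "print_and_affix_visa_sticker",
}

# Tasks of the kind Art. 43(6) permits external service providers to perform
DELEGABLE_TO_PROVIDER = {
    "provide_general_information",
    "inform_about_supporting_documents",
    "collect_application_and_biometrics",
    "collect_visa_fee",
    "manage_appointments",
    "return_travel_documents",
}

def may_delegate(task: str) -> bool:
    """Return True if the task may be entrusted to an external provider."""
    if task in RESERVED_TO_CONSULATE:
        return False
    if task in DELEGABLE_TO_PROVIDER:
        return True
    raise ValueError(f"Task not mapped to Art. 43: {task!r}")

if __name__ == "__main__":
    assert not may_delegate("decide_on_application")          # consulate only
    assert may_delegate("collect_application_and_biometrics")  # delegable
```

Even this crude encoding makes one point visible: the legally decisive steps stay with the state, while the applicant-facing steps migrate to the provider – which is precisely where the accountability gap opens up.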

Ex ante scrutiny: anticipating rights infringements in the age of automation

Next to laws regulating the work of private actors, there are also frameworks that pertain specifically to digitisation and automation in visa systems and migration management at large and subject them to ex ante scrutiny. As research from the AFAR project has shown, existing legal instruments approach this scrutiny through different logics. The EU Artificial Intelligence Act (AI Act) classifies AI systems used in migration, asylum, and border control as “high-risk” and requires a Fundamental Rights Impact Assessment (FRIA) to anticipate potential adverse consequences. Yet the trilogues on the AI Act substantially weakened the FRIAs in scope and potential impact, for example by dropping the proposal to involve representatives of the persons likely to be most affected by high-risk technologies. Overall, the AI Act offers little guidance on the content or scope of assessments, reducing them to a formalistic box-ticking exercise. By contrast, the Council of Europe’s Framework Convention on AI introduces the HUDERIA methodology, which asks about the context in which tools are used, how they operate in practice (rather than on paper), and how roles and responsibilities are distributed across their lifecycle. As AFAR researcher Đuković (2025) argues, where the AI Act’s FRIAs lack precision, HUDERIA provides detailed indicators, requires publication, and embeds user participation through a stakeholder engagement process. Combining the AI Act’s legal certainty with the Convention’s richer variables could therefore yield both clearer obligations for authorities and stronger protection of fundamental rights.

In practice, however, neither framework may be sufficient. Within the AI Act, the classification of “high-risk” technologies remains ambiguous, excluding certain migration-related technologies and allowing deployers to sidestep FRIA obligations by strategically redefining a system’s purpose. HUDERIA’s approach is more comprehensive and context-sensitive, yet it applies only to states that have ratified the Convention.

Still, we know very little about how ex ante scrutiny actually unfolds in practice in the domain of digitised migration and border control, visa processing being a prime example. States typically engage external law firms to ensure compliance with data protection, administrative, and fundamental rights standards. These firms can identify legal challenges, but they depend on their public-sector clients and on private service providers for the technical and factual information needed to conduct their analyses. Even highly specialised lawyers often lack the technical expertise to evaluate complex digital architectures, relying instead on documentation from developers who may omit crucial details or who do not feel responsible for the system as a whole.

As a result, there is a risk that legal assessments are conducted on partial information, reproducing the opacity they are intended to remedy. In this configuration, the state’s due diligence risks becoming largely procedural – a box-ticking exercise that fulfils formal compliance requirements without engaging substantively with technological design or the lived realities of people on the move. There is still remarkably little empirical research on how these advisory chains operate, and how they influence the legal and technical design of visa systems and migration governance more broadly. Understanding these dynamics is essential if FRIAs are to become genuine instruments of rights protection rather than bureaucratic rituals.

Towards more participatory forms of regulating digital borders

The limits of current regulatory approaches point to a broader challenge. Ex post regulation alone cannot ensure that privatised, digitised systems do not violate human rights on a massive scale. Relying on ex ante impact assessments, in turn, depends on how robustly those assessments are designed and carried out in practice – and too often they become tick-box exercises.

My new research takes these observations as a starting point to ask a wider question: who participates in shaping digital migration infrastructures, and whose perspectives are heard or excluded? Within the Tetramag project, we will look not only at how ex ante assessments are implemented but also at how design, implementation, and evaluation processes unfold across public and private actors – including government agencies, law firms, technology contractors, and civil-society organisations. At the centre of this inquiry are migrants themselves, both as data subjects and as users of these systems.

Before proposing participatory alternatives, Tetramag seeks to examine how existing digital infrastructures already shape, and often silence, the perspectives of people on the move, building and expanding on the initial AFAR report on Automating Migration and Asylum by Derya Ozkul. Who has access to design discussions, and whose experiences inform usability testing or human rights impact assessments? Who can question data practices or audit algorithmic outcomes? Only by mapping these absences and partial inclusions can we meaningfully imagine alternatives. From there, focussing on the digitisation of migration pathways and the post-arrival phase, the project explores how more participatory and transparent approaches might emerge, such as:

  • Collaborative design: involving migrants, diaspora groups, or legal aid providers in developing applications and decision tools, not merely as end-users but as co-designers.
  • Shared oversight and coding literacy: giving civil society, data subjects, and researchers access to training data, code, and decision logs as a resource to understand, critique, decode, and recode digital systems.
  • Responsibility and transparency by design: ensuring that applicants can understand which data shape their applications and outcomes, and can contest decisions (see the sketch after this list).
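What “transparency by design” could mean in practice is easiest to see in miniature. The sketch below is entirely hypothetical – none of the field names come from any deployed visa system – but it shows one possible shape for an applicant-facing decision record: it logs which data items and which rule or model version shaped an outcome, names the deciding authority, and points to a contestation route.

```python
# Hypothetical sketch of an applicant-facing decision record.
# Field names and structure are illustrative assumptions, not drawn
# from any real visa system.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    application_id: str
    outcome: str                 # e.g. "approved", "refused", "referred"
    data_items_used: list[str]   # which applicant data shaped the outcome
    rules_applied: list[str]     # rule IDs or model/version identifiers
    deciding_authority: str      # the consulate, never the service provider
    contestation_route: str      # where and how to appeal
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_applicant_json(self) -> str:
        """Serialise the record so the applicant can read and audit it."""
        return json.dumps(asdict(self), indent=2)

# Hypothetical usage
record = DecisionRecord(
    application_id="APP-0001",
    outcome="refused",
    data_items_used=["travel_history", "financial_statement"],
    rules_applied=["risk_rule_v2.3"],
    deciding_authority="Consulate (Art. 43(4) reserved task)",
    contestation_route="appeal@consulate.example",
)
print(record.to_applicant_json())
```

Even such a minimal record would give applicants, civil society, and oversight bodies something concrete to read, audit, and contest – which is exactly what current opaque architectures withhold.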

Participation, in this sense, is not a decorative add-on to regulation. It is a method for revealing and rebalancing power within digital infrastructures.

Concluding remarks: toward participatory sovereignty

Today’s digitised visa systems exemplify how migration control has become a joint enterprise between states and private contractors, implemented through code, contracts, and algorithms. This hybrid infrastructure redefines sovereignty and responsibility: decisions once made by consular officers are now mediated by software and service providers; assessments of data protection and AI compliance are outsourced to law firms; and the lines of meaningful accountability and oversight blur further.

Existing European frameworks – the Visa Code, the AI Act, and the Council of Europe’s AI Convention – address parts of this digital landscape but fall short of offering a coherent response. They regulate functions, not relationships; compliance, not power dynamics. What the Council of Europe’s AI Convention already includes, but the larger system still lacks, is a framework of participatory sovereignty: mechanisms that embed those most affected by digital migration governance – migrants, in this case – into its design, scrutiny, and redress. Opening the black box of visa systems is thus both a democratic and a technical task. It requires collaboration between lawyers, technologists, and the communities whose mobility is at stake.


SUGGESTED CITATION  Welfens, Natalie: Privatised Digital Borders: Responding to the Regulatory and Accountability Challenge with Participatory Design, VerfBlog, 2025/12/08, https://verfassungsblog.de/privatised-digital-borders/, DOI: 10.17176/20251209-172111-0.
