This article belongs to the debate » Algorithmic Fairness for Asylum Seekers and Refugees
01 December 2025

Public Perceptions Of Biometric Checks And Their Legal Significance

The EU Commission has set October 2025 as the start date for the roll-out of the long-awaited new Entry/Exit System (EES). European countries using the EES will introduce the system gradually at their external borders. Established by Regulation (EU) 2017/2226, the EES will capture four fingerprints, a facial image and travel data at Schengen-area borders for short-stay third-country nationals (travellers not holding the nationality of any European Union country or of Iceland, Liechtenstein, Norway or Switzerland). These records are stored for about three years (plus one day), or for five years in the case of overstayers. The system replaces traditional passport stamping, automatically registering each entry and exit and arguably facilitating faster border processing. Visitors with biometric passports who are authorised to enter can then use automated e-gates.
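To make the retention rules just described concrete, the following is a minimal, purely hypothetical sketch of an EES-style record and its deletion deadline. It is not the actual EES data model: the class, its field names, and the approximation of a year as 365 days are all illustrative assumptions.

    from dataclasses import dataclass
    from datetime import date, timedelta

    @dataclass
    class EESRecord:
        """Hypothetical, simplified stand-in for an EES entry/exit record."""
        fingerprints: list[bytes]  # four fingerprints are captured
        facial_image: bytes
        exit_date: date
        overstayed: bool

        def retention_deadline(self) -> date:
            # Roughly three years (plus one day) after exit, or five years
            # for overstayers; a year is approximated as 365 days here.
            if self.overstayed:
                return self.exit_date + timedelta(days=5 * 365)
            return self.exit_date + timedelta(days=3 * 365 + 1)

    record = EESRecord([b""] * 4, b"", date(2025, 10, 12), overstayed=False)
    print(record.retention_deadline())  # 2028-10-12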

The European Asylum Dactyloscopy Database (EURODAC) is the EU’s large-scale biometric database recording the fingerprints of asylum seekers and of certain categories of irregular migrants. Its primary function is to help identify the first Member State where an individual applied for protection under the Dublin Regulation. In practice, this allows authorities to track and, if necessary, return asylum seekers to the country of first entry. The 2024 recast expands the data categories, lowers the age threshold to six years, and widens law-enforcement access for comparisons, all subject to the EU Charter’s purpose-limitation and necessity–proportionality principles. Critics argue that the system functions less as a tool of coordination and more as a mechanism of immobilisation and control within the EU (Tazzioli 2019).

The UK’s Electronic Travel Authorisation (ETA) scheme represents another development in biometric border management. The ETA is a pre-travel permit required for non-visa nationals wishing to visit or transit the UK. The scheme applies to EU citizens and requires a facial image to be submitted remotely with the application; fingerprints are not collected. This makes the ETA a useful real-world case for studying EU citizens’ fairness perceptions, as it involves biometric collection from EU citizens themselves. A parallel system within the EU, ETIAS, is often compared to the ETA, but under its current design it will not involve biometric data collection.

The expanding reliance on biometric technologies at borders raises substantial questions about fundamental rights, proportionality, and safeguards, including the potential for abuse of the technology leading to stigmatisation or discrimination. Researchers from different fields have pointed out that biometrics can significantly impact privacy, bodily autonomy, and equal treatment. Furthermore, these technologies are frequently applied to vulnerable populations, such as asylum seekers and immigrants, which can heighten the categorisation and control these groups experience.

There are also high-stakes security issues, since biometric data is extraordinarily sensitive: a stolen password or credit card can be replaced, but compromised biometric data cannot be changed. The promise of technological efficiency, accuracy and reliability at the border can also be questioned. In many cases the technology fails, the automated gate does not recognise the passport, and efficiency is achieved not by digitisation but by the border police, i.e. humans. False positives and false negatives can have serious consequences for individuals and border security alike. Both the Court of Justice of the EU (CJEU) and the European Court of Human Rights (ECtHR) have developed robust jurisprudence on bulk and systematic data processing. Together, these rulings provide a judicial framework for assessing biometric border management and protecting individuals from discriminatory or intrusive applications of the technology.

Public fairness perceptions of the use of biometrics at the border

Biometric borders, built on fingerprints, facial images, and automated risk scores, thus shape who moves, and how, across Europe’s frontiers. But what does the public consider fair when states collect and process people’s biometrics at the border? And does “fairness” depend on who is being checked and how the decision is made? This post distils findings from a five-country survey experiment on public perceptions of fairness regarding biometric border controls. The results complicate common assumptions. Unlike in many other domains, people do not uniformly prefer automated systems to human border officers. Moreover, cues of travellers’ perceived “deservingness” (whether they are asylum seekers, regularly arriving third-country nationals, or co-nationals) elicit relatively similar reactions across countries, with only minor nuances.

The use of automated decision-making (ADM) has prompted debates among policymakers, academics, civil society and tech developers about the opportunities and threats of ADM systems. A key challenge has been to balance technological advancement and efficiency against the significant ethical, legal, and societal problems these technologies present. Moreover, the implications of AI extend beyond technical issues and affect state accountability in democratic politics and policymaking (Omand 2010). Ultimately, ADM systems affect people, and automated decisions must be publicly accepted and considered fair if they are to be successfully implemented in the long run (Helberger 2020). Citizens’ perceptions of the fairness and acceptability of these technologies can therefore help guide whether and how ADM solutions should be employed in a given context (Kern et al. 2022).

Given how widely biometric technologies are used in general, and in border control management in particular, it is surprising how little is known about citizens’ perspectives on them. In a new study, I analyse citizens’ perceptions of fairness regarding biometric data collection in three concrete systems citizens may have heard about: the EU’s new EES, EURODAC, and the ETA in the United Kingdom. I examine public perceptions of fairness in using biometrics for border management across five countries: Germany, Italy, the United Kingdom, Estonia, and Poland.

The survey experiment was conducted with approximately 1,500 respondents per country between mid-December 2024 and mid-January 2025. Participants were randomly assigned to one of nine possible vignettes, each depicting a scenario in which biometric data collection and processing differ by target group (asylum seekers, third-country nationals1, or co-nationals) and by the degree of automation in the decision-making process (mainly human, semi-automated, or highly automated).

Each person saw one short vignette describing a border scenario that varied in two factors:

  1. Target of biometric collection: asylum seekers (EURODAC), third-country nationals (EES), or co-nationals (the UK’s ETA for respondents outside the United Kingdom, and the EES for UK respondents);
  2. Decision-making mode: mainly human, semi-automated (AI recommendation, human border guard decides), or highly automated (AI leads; human border guards step in only for complex cases).

Respondents then rated how fair they found the practice on a 0–10 scale.
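As a concrete illustration of this 3 × 3 design, the sketch below simulates the random assignment of roughly 1,500 respondents to the nine vignette cells and attaches a 0–10 fairness rating to each. It is illustrative Python only, not the study’s actual materials or analysis code; the variable names and the placeholder ratings are hypothetical.

    import random

    # The two experimental factors from the study design (3 x 3 = 9 vignettes).
    TARGETS = ["asylum_seekers", "third_country_nationals", "co_nationals"]
    MODES = ["mainly_human", "semi_automated", "highly_automated"]

    def assign_vignette(rng: random.Random) -> dict:
        """Randomly assign one respondent to one of the nine vignette cells."""
        return {"target": rng.choice(TARGETS), "mode": rng.choice(MODES)}

    # Simulate assignment for ~1,500 respondents in one country.
    rng = random.Random(42)
    sample = [assign_vignette(rng) for _ in range(1500)]

    # Each respondent rates perceived fairness on a 0-10 scale; here the
    # ratings are random placeholders purely for illustration.
    for respondent in sample:
        respondent["fairness"] = rng.randint(0, 10)

    # Mean fairness by decision-making mode: in the real study, differences
    # in these cell means (within and across countries) are what is compared.
    for mode in MODES:
        ratings = [r["fairness"] for r in sample if r["mode"] == mode]
        print(mode, round(sum(ratings) / len(ratings), 2))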

“Fairness” is not one-size-fits-all

When comparing targets of biometric data collection, differences in fairness perceptions are modest and largely country-specific. Only in Germany do fairness ratings significantly vary by target group, with biometric checks of co-nationals perceived as less fair than checks of asylum seekers. In the other countries, fairness ratings for asylum seekers and third-country nationals do not differ, and checks on co-nationals are not systematically evaluated more negatively. This suggests that while who is subjected to biometric checks can matter, such distinctions are context-specific and follow no uniform pattern across Europe.

Automation does not guarantee perceived fairness

Across countries, fairness perceptions do not systematically increase with higher levels of automation. In fact, contrary to expectations, automated decision-making is not generally viewed as fairer than human decision-making. In the United Kingdom, fully automated systems are even rated less fair than manual decisions, while in other countries, no consistent differences emerge across levels of automation. This indicates that public perceptions of fairness are not driven by a simple preference for algorithmic over human decision-making but are instead context-dependent and sometimes sceptical of removing human judgment entirely.

Implications for law and policy

These findings complicate the policy narrative that technology can “solve” fairness concerns by making border management more objective or efficient. Public opinion does not reveal a simple preference for automated decision-making over human border guards, nor does it show a uniform view about whose biometrics should be collected. Instead, judgments about fairness are fragmented, context-specific, and tied to broader opposition to biometrics. For policymakers, this means that rolling out systems like the EES will not automatically command public legitimacy. How the systems are framed, explained, and monitored will matter at least as much as their technical design.

Under EU law, fairness in biometric border management is not merely a matter of public perception but a legal requirement rooted in the EU Charter of Fundamental Rights (Articles 7, 8, and 21) and specific instruments. In this context, public scepticism toward high levels of automation should be interpreted alongside the legality test: whether a given system configuration is necessary, proportionate, and supported by effective rights to access, rectification, and remedy.

Why this matters now

The EU’s EES will affect millions of travellers every year and sits alongside other large-scale biometric systems such as EURODAC and the UK’s ETA. At the same time, debates about AI governance and data protection have placed fairness and accountability at the centre of EU regulatory agendas. Understanding how the public perceives biometric fairness is thus not just an academic question but one with direct consequences for legal legitimacy and compliance. If citizens view these systems as opaque, discriminatory, or excessively automated, their acceptability may be called into question — and with it, the sustainability of Europe’s “smart borders” agenda. Because the EES and EURODAC process sensitive biometric data, Member States must demonstrate necessity and proportionality and ensure effective judicial remedies, as required by the Charter and interpreted in recent CJEU case law on large-scale data systems.

Looking ahead

As the EES and related systems are rolled out, close attention should be paid not only to their technical functioning but also to their public legitimacy. Future research should track how fairness perceptions evolve once people directly experience these systems at borders, and whether communication strategies or stronger safeguards can mitigate scepticism. For lawyers, policymakers, and technologists alike, the central lesson is clear: fairness in biometric border management is not a self-generating, intrinsic property of technological systems but a societal and institutional achievement that must be actively cultivated through conscious design, regulation, and public trust.

References
1 “Third-country nationals” is a term used in EU legislation for non-EU nationals, i.e. citizens of countries that are not EU Member States, who are not citizens of Iceland, Liechtenstein, Norway or Switzerland, and who are not family members of such citizens.

SUGGESTED CITATION  Dražanová, Lenka: Public Perceptions Of Biometric Checks And Their Legal Significance, VerfBlog, 2025/12/01, https://verfassungsblog.de/public-perceptions-of-biometric-checks-and-their-legal-significance/, DOI: 10.17176/20251201-172056-0.
