This article belongs to the debate » Algorithmic Fairness for Asylum Seekers and Refugees
05 December 2025

Navigating Technologies in Asylum Procedures in Austria

Across the EU, asylum authorities are adopting AI tools for speed and cost savings. For instance, the European Union Agency for Asylum (EUAA) highlights remote registration and automated translation as ways to “shorten … face-to-face time” and make registration more “cost-efficient.” A similar dynamic is evident in the introduction of AI tools in German asylum courts and in the turn to AI-based language analysis systems by the Dutch Immigration and Naturalisation Service (IND). Austria, too, is pursuing this logic: in response to an information request lodged by our project AISYL, the Interior Ministry confirmed that the asylum authority’s country-of-origin unit is testing a chatbot intended to make its research “more effective and faster.”

These changes unfold against a political backdrop of a steadily eroding right to asylum. The EU Pact on Migration and Asylum introduces fast-track border procedures that weaken individualised assessment and legal remedies. Across the Union, Member States are also tightening their approaches to migration, including through the criminalisation of rejected asylum seekers in Greece, the resumption of deportations to unsafe countries by Germany and Austria, and attempts to outsource asylum processing to third countries altogether. In such a climate, new technologies do not operate as neutral administrative devices but amplify broader political and societal trends that erode the fundamental right to asylum.

Austria offers a particularly instructive case for exploring this dynamic. The country has long positioned itself among the hardliners in European asylum and migration policy and has begun implementing controversial new technologies in its asylum identification procedures. However, until recently, little was known about the full extent of AI tools’ use in asylum administration. Our project AISYL (“Artificial Intelligence in the Austrian Asylum System”) set out to close this gap. Over the course of one year, we mapped both existing and anticipated AI-based applications through interviews, focus groups, and document analysis. Our findings show that while AI is often presented as a means of speeding up and streamlining asylum processes, the risks are substantial and consistently outweigh the potential benefits. Certain tools, such as automatic translation and Large Language Models (LLMs) for text processing, address pressing needs and may provide limited support under careful human oversight, but they remain either too unreliable or too intrusive to be compatible with fair asylum procedures. Set against the steady erosion of the right to asylum, tools marketed as efficiency boosters instead risk contributing to the weakening of asylum itself.

Against this backdrop, this post examines the current use of automated translation and text processing tools in the asylum administration in Austria before turning to future applications currently under discussion.

Automated translation

Austrian authorities as well as counselling organisations are increasingly using automated tools for translation. While asylum seekers have the right to human translation in personal interviews (Art. 13(5) of the Asylum Procedures Regulation (APR)) and in some other contexts, the use of automated translation tools is not regulated in other parts of the procedure. Austrian authorities have self-reported using tools like DeepL for country-of-origin information, while during our focus groups, NGOs noted that Frontex as well as Austrian border guards use automated translation tools to communicate with people on the move.

The advantages of automated translation are evident: the technology is fast, cheap, and readily available on handheld devices, including in situations where human translators are not at hand but urgently needed. Yet the risks are inherent, and they are especially acute when the possibility of errors is underestimated. Errors occur with all automated translation tools across all language combinations, but they are more frequent in languages with limited training data. Poor recording quality and background noise further increase their likelihood. Moreover, unlike human translators, automated systems lack essential contextual knowledge, and their mistakes can go entirely undetected if the parties involved share no common language in which to verify the result.

Therefore, automated translation tools should be used only when absolutely necessary and with great caution. Their results should not be relied upon in high-risk situations, and the possibility of errors must be kept in mind at all times.
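
One modest safeguard, where no shared language is available, is a round-trip check: translating the machine output back into the source language and comparing it with the original. The sketch below is a minimal illustration, assuming the official deepl Python client and a purely illustrative similarity threshold; it can flag gross distortions but offers no guarantee that a translation is faithful.

```python
# Round-trip plausibility check for machine translation: translate the
# source text, translate the result back, and compare the back-translation
# with the original. A low similarity score only flags the output for
# human review; a high score does NOT guarantee a faithful translation.
# Minimal sketch assuming the official `deepl` client (pip install deepl).
from difflib import SequenceMatcher

import deepl

translator = deepl.Translator("YOUR_AUTH_KEY")  # placeholder API key

def round_trip_flag(text_de: str, threshold: float = 0.6) -> bool:
    """Return True if a German-to-English translation looks suspect."""
    forward = translator.translate_text(text_de, source_lang="DE",
                                        target_lang="EN-GB")
    back = translator.translate_text(forward.text, source_lang="EN",
                                     target_lang="DE")
    similarity = SequenceMatcher(None, text_de.lower(),
                                 back.text.lower()).ratio()
    return similarity < threshold  # below threshold: flag for human check

if __name__ == "__main__":
    if round_trip_flag("Der Antragsteller fürchtet Verfolgung."):
        print("Low round-trip similarity – have a human verify this output.")
```

Even a passing round trip can conceal consequential errors, for instance a dropped negation reproduced in both directions, which is precisely why such tools remain unsuited to high-risk use.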

Large Language Models

Large Language Models (LLMs) are designed to process, generate, and restructure text. In asylum administration, they can be used to search and summarise country-of-origin reports, draft official documents, or even prepare case files. Austria is already experimenting with these technologies. According to the Interior Ministry’s response to our information request, the asylum authority (BFA) uses external systems such as Perplexity, You.com, Copilot, and PDF GPT in its country-of-origin research, including for drafting translations and editing documents. In addition, a chatbot is being developed to connect directly to the BFA’s Country of Origin Information database. At the European level, the Open Source Information (OSIF) tool is being designed to facilitate cross-border cooperation, offering functions for structuring, real-time search, and automatic summarisation of documents.

These tools can process vast amounts of text, reduce the time needed for manual research, and present complex information in a more accessible form. In principle, this could make bureaucratic work faster and more efficient. But as with other AI tools, the risks are substantial.

The most pressing concern is reliability. LLMs do not verify facts but generate text based on patterns in their training data. This can produce so-called “hallucinations”: outputs that are linguistically plausible but factually incorrect. Even leading commercial models have been shown to deliver false information in up to 80 percent of answers, including when summarising authoritative sources. Beyond obvious falsehoods, LLMs are prone to more subtly misleading oversimplifications, which pose a significant risk because of their confident delivery (Wachter et al. 2024). In the asylum context, this can have far-reaching consequences: country reports or case summaries that contain errors may directly shape the assessment of protection claims.

Bias and opacity present further problems. Training data for LLMs often reflect existing prejudices, raising the risk that outputs reproduce discriminatory framings of certain countries or groups (Navigli et al. 2023). The “black-box” nature of these models compounds the issue: it is frequently impossible to trace how a specific answer was generated or why certain information was emphasised over other material. Yet asylum law requires that decisions be reasoned and reviewable. If LLM-generated text enters case files or decisions without clear attribution, this requirement is undermined.
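
One way to make attribution at least partially checkable is mechanical quote verification: requiring that any passage an LLM summary presents as a verbatim quotation be matched against the underlying source report. The following is a minimal sketch; the quotation convention and the whitespace normalisation are assumptions for illustration, not an established standard.

```python
# Mechanical attribution check: verify that every passage an LLM summary
# presents as a verbatim quotation actually occurs in the source report.
# Sketch assumes summaries mark quotations with double quotes; the
# normalisation rules are an illustrative choice.
import re

def normalise(text: str) -> str:
    """Collapse whitespace and lowercase for tolerant matching."""
    return re.sub(r"\s+", " ", text).strip().lower()

def unverified_quotes(summary: str, source_report: str) -> list[str]:
    """Return quoted passages in `summary` not found in `source_report`."""
    source = normalise(source_report)
    quotes = re.findall(r'"([^"]+)"', summary)
    return [q for q in quotes if normalise(q) not in source]

if __name__ == "__main__":
    report = "Road access to the region was cut off in March 2024."
    summary = 'The report states that "road access was restored in 2024".'
    for quote in unverified_quotes(summary, report):
        print(f"Unverified quotation – check manually: {quote!r}")
```

A check of this kind catches only fabricated quotations; paraphrased distortions and skewed emphasis still require substantive human review.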

For counselling organisations, this means that country-of-origin information or reasoning in asylum decisions must increasingly be scrutinised with an eye to whether it may have been generated or shaped by AI. For authorities, the promise of efficiency must be weighed against the fact that unverified or biased outputs jeopardise the fairness of procedures.

Future developments

The problems described above are not isolated but form part of a larger strategy of AI deployment in asylum procedures and migration management. In numerous studies, reports, and plans, the European Commission and its agencies have signalled their intention to employ such new technologies in border control and migration management. Similarly, the Austrian government has made public its plans to further automate asylum procedures. These include automatic dialect recognition to determine an applicant’s language of origin, as already in use in Germany, and the AI-enhanced analysis of data extracted from asylum seekers’ mobile phones. Potential new tools discussed at the EU level include lie detectors that supposedly detect fraudulent intent through microexpressions, even though scientific evidence does not support such a function.

The registration of asylum seekers could be another avenue for automation: a 2020 EUAA report mentions research into chatbots that guide applicants through registration, verify entries, request missing documents, and answer frequently asked questions, among other tasks. Such tools risk discriminating against people with lower levels of digital literacy.

The CEAS reform has also expanded automated screening and security checks at the borders and in asylum procedures. Under Regulation (EU) 2024/1356, all individuals crossing an EU external border without authorisation are subject to a security check. According to official reports, efforts are now underway to develop machine-learning applications that autonomously identify risk factors. Such self-learning tools adjust their behaviour over time, drawing on a shifting set of historical data. The characteristics that lead to a risk flag can thus no longer be identified in each individual case, creating obstacles for legal remedies.

Conclusion

AI tools in asylum procedures are not neutral fixes but must be understood as part of a larger political landscape in which access to protection is increasingly narrowed. While some applications, such as automated translation or text processing, can fill urgent gaps where resources are lacking, they do so imperfectly and can never replace human judgment. Other tools, like self-learning risk assessments and lie detectors, are often biased, opaque, and error-prone, and they impede access to legal remedies. In a context where the right to asylum is already under strain, their uncritical adoption risks accelerating its erosion.


SUGGESTED CITATION  Adensamer, Angelika; Jung, Laura: Navigating Technologies in Asylum Procedures in Austria, VerfBlog, 2025/12/05, https://verfassungsblog.de/navigating-technologies-in-asylum-procedures-in-austria/.
