This article belongs to the debate » Frontex and the Rule of Law
10 September 2022

Frontex and ‘Algorithmic Discretion’ (Part II)

Three Accountability Issues

Part I of this contribution explains how the regulatory design of ETIAS raises issues in relation to the rule of law principle of legality. Essentially, the ETIAS screening rules algorithm illustrates how automation can lead to what I suggest is a new form of arbitrariness – which I refer to as ‘algorithmic discretion’ – a situation where the exercise of power and discretion and their limitations are not sufficiently specified at the legislative level but are delegated to an algorithm instead.

Building on this, Part II reflects on how these legality issues affect other rule of law principles, including the principle of effective judicial protection. It then raises three accountability issues and calls into question the assumption that the safeguard of manual processing in case of a ‘hit’ is a panacea for all rule of law challenges stemming from this semi-automated decision-making.

Interconnected Rule of Law Principles: Legality and Accountability

Rule of law principles are interconnected, since each safeguards the exercise of the others. Therefore, further down the line – a line which started with legality issues – other rule of law principles can be at stake. This Verfassungsblog debate raises issues related to accountability and judicial oversight (here understood as part of the principle of effective judicial protection, which includes access to justice through judicial review).

The connection between legality and accountability is straightforward: effective judicial protection and the enforcement of rights cannot be ensured when formal and substantive legality are compromised. In the case of ETIAS, there appears to be friction with the principle of legality. The exercise of power and discretion and their limitations are not sufficiently specified at the legislative level but are instead delegated to an algorithm that potentially does not meet the substantive legality requirements of foreseeability, clarity, transparency, accessibility and limits on discretion.

Three Accountability Issues on the Horizon

Analysing the ETIAS Regulation, one can discern three issues on the horizon with the potential to hinder accountability and the enforcement of the fundamental rights at stake, such as non-discrimination, respect for private life, and the protection of personal data (see Alegre et al., 2017; Vavoula, 2021; Zandstra & Brouwer, 2022). These issues are briefly summarised below to illustrate the gravity of the lacunae within the Regulation.

1) As a consequence of the legality issues and ‘algorithmic discretion’ discussed in Part I, an individual affected by this regime will not understand how the algorithm works or how Frontex’s ‘specific risk indicators’ may have adversely affected their rights. The individual is consequently unable to judicially challenge all parts of the fragmented ETIAS decision-making process. In Ligue des droits humains (par. 210), the Court of Justice of the European Union (CJEU) held that the person concerned must be able to understand how the pre-determined assessment criteria and the programs applying those criteria work, “so that it is possible for that person to decide with full knowledge of the relevant facts whether or not to exercise his or her right to the judicial redress […] in order to call in question, as the case may be, the unlawful and, inter alia, discriminatory nature of the said criteria”.

2) This problem of accessibility is reinforced by the potential transparency issue discussed in Part I: the difficulty of accessing information and documents related to the screening rules algorithm and its specific risk indicators.

3) Responsibility for fundamental rights violations is spread out amongst various actors and is not easily traceable when the regime is considered from the perspective of the individual. This is problematic, as Frontex’s specific risk indicators are a decisive element in whether an application is reported as a ‘hit’ and potentially refused in the subsequent manual processing. As for explanations in case of a refusal, it is sufficient under the ETIAS Regulation to simply state the applicable ground for refusal (Article 38(2)(c) ETIAS Regulation): posing a security, ‘illegal immigration’ or high epidemic risk. Even if the statement aims to “enable the applicant to lodge an appeal”, knowing the ground for refusal does not explain why one is considered a risk according to the ETIAS automated and manual assessment. Appeals against refusals are conducted in the Member State whose National Unit manually refused the application (Article 37(3) ETIAS Regulation). However, focusing on the National Unit’s manual decision disregards the reason why an application was singled out for manual processing in the first place: the automated processing. The screening rules algorithm serves as a filter for deciding whose application will be manually investigated. Others have already raised concerns about the risk of discrimination ETIAS entails (Vavoula, 2021; Derave et al., 2022). The National Unit’s decision to refuse a travel authorisation is what individuals see and can access, but the automated algorithmic filtering will already have taken place at that point and will not be accessible to them. An individual seeking accountability must therefore understand and navigate a mix of automated and manual processes and a mix of actors (including an algorithm), in which Frontex’s contribution is perhaps the most opaque.

This issue is relevant in light of the CJEU’s statement that the use of certain machine learning technologies “would be liable to render redundant the individual review of positive matches and monitoring of lawfulness”, since the opacity of the technology might make it “impossible to understand the reason why a given program arrived at a positive match”. According to the Court, this could deprive data subjects of their right to an effective judicial remedy enshrined in Article 47 of the EU Charter of Fundamental Rights, a right which requires a high level of protection “in particular in order to challenge the non-discriminatory nature of the results obtained” (Ligue des droits humains, par. 195).
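To illustrate the kind of filtering logic at issue, the following is a minimal, purely hypothetical sketch of rule-based screening. All field names, indicators and grounds are invented for illustration (the actual ETIAS specific risk indicators are not public), but the sketch shows the structural point: an applicant who only receives the ground for refusal cannot reconstruct which rule produced the ‘hit’.

```python
# Purely hypothetical sketch of rule-based screening. None of these
# field names, indicators or thresholds reflect the actual (non-public)
# ETIAS specific risk indicators.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Application:
    nationality: str
    age: int
    occupation: str

# Each hypothetical "specific risk indicator" pairs a predicate with a
# ground for refusal. Only the ground is ever communicated to the
# applicant (cf. Article 38(2)(c) ETIAS Regulation).
RISK_INDICATORS: list[tuple[Callable[[Application], bool], str]] = [
    (lambda a: a.nationality == "XX" and a.age < 30, "security risk"),
    (lambda a: a.occupation == "undeclared", "illegal immigration risk"),
]

def screen(application: Application) -> Optional[str]:
    """Return a ground for refusal if any indicator matches (a 'hit'),
    otherwise None. A hit routes the file to manual processing by a
    National Unit; the matching rule itself stays internal."""
    for predicate, ground in RISK_INDICATORS:
        if predicate(application):
            return ground
    return None
```

Even in this transparent toy version, an applicant who receives only the returned ground has no way of knowing which predicate fired; in the real system, the indicators themselves are additionally inaccessible.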

Conclusion: Manual Processing is No Panacea

Border control is an area associated with a wide degree of discretion. However, this discretion is delimited by rule of law principles. Our understanding of discretion in border control stems from a time of purely human decision-making, which relies on rule of law safeguards established over time. In this new context of semi-automated decision-making, however, I claim that manual processing at the end of the decision-making process is no panacea for rule of law issues. In sum, these new elements of automation require us to consider whether and how discretionary decision-making – carried out entirely by, or supported by, algorithms – can be exercised in accordance with the rule of law principles of legality and effective judicial protection.

This contribution, presented in two parts, has shed light upon some of the potential rule of law concerns raised by the upcoming implementation of ETIAS and its screening rules algorithm. These concerns are only amplified by the fact that a high-risk system like ETIAS is currently excluded from the scope (and safeguards) of the proposed AI Act (Article 83). ETIAS is expected to become operational in November 2023, and the author concurs with Zandstra & Brouwer (2022) that the EU legislator should include high-risk border control systems like ETIAS in the scope of the proposed AI Act, and should also amend the ETIAS Regulation to provide sufficient safeguards.

This contribution is based on a presentation for the EUI Migration Working Group, and a forthcoming article in which the author discusses rule of law challenges related to the automation of the EU’s external border control. For valuable discussions that inspired this contribution and feedback on earlier drafts, the author would like to thank Markus Naarttijärvi, Lena Landström, Mattias Derlén, Luisa Marin, Alexandra Karaiskou, Evelien Brouwer, Katja de Vries, Elizabeth Perry, Mariana Gkliati, Niels Hoek and the EUI Migration Working Group.


SUGGESTED CITATION  Musco Eklund, Amanda: Frontex and ‘Algorithmic Discretion’ (Part II): Three Accountability Issues, VerfBlog, 2022/9/10, https://verfassungsblog.de/frontex-and-algorithmic-discretion-part-ii/, DOI: 10.17176/20220910-110333-0.
