11 May 2023

Challenging Bias and Discrimination in Automated Border Decisions

Ligue des droits humains and the Right to Effective Judicial Protection

In cases where decisions are based on pre-determined criteria or on the use of artificial intelligence (AI), it can be difficult to understand not only which risk models or data are used, but also how their use shapes the outcome of the decision-making process. Whereas this ‘black box’ of automated decision-making may already have an impact on the right to effective remedies, challenging bias or discriminatory criteria in these decisions can be even more difficult (see Fundamental Rights Agency, Bias in Algorithms. Artificial Intelligence and Discrimination 2022, p. 50). In Algorithmic Discrimination in Europe, Gerards and Xenidis highlight the difficulty of detecting and challenging these forms of algorithmic decision-making, among other reasons because judges cannot obtain access to information on whether the algorithms or risk models are discriminatory. A particular problem of AI-based risk assessments is the use of apparently ‘neutral’ criteria, which may nonetheless result in discriminatory decision-making. Only recently, the Dutch Data Protection Authority questioned the proportionality and possibly discriminatory use of algorithms and profiling in short-term visa decision-making by the Dutch Ministry of Foreign Affairs.

In Ligue des droits humains, the Court of Justice of the European Union (CJEU) explicitly addresses the fact that the use of AI and self-learning risk models may deprive data subjects of their right to effective judicial protection as enshrined in Article 47 of the Charter (para. 195). Referring to AG Pitruzzella’s opinion, the CJEU notes that, given the opacity which characterizes the way in which artificial intelligence technology works, it might be impossible to understand the reason why a given program arrived at a positive match. The CJEU also underlines the problem of challenging algorithmic discrimination, referring to Recital 28 of the Passenger Name Record (PNR) Directive, according to which the Directive seeks to ensure ‘a high level of protection, in particular in order to challenge the non-discriminatory nature of the results obtained’.

The considerations on the right to judicial redress, and their implications for the role of national judiciaries, are particularly interesting because of the references made by the CJEU to its earlier case law dealing with immigration law. The importance of this judgment for non-EU citizens cannot be overstated.

Direct and indirect discrimination

The right to non-discrimination is protected by Article 14 ECHR and Article 21(1) of the Charter of Fundamental Rights of the European Union (hereafter ‘the Charter’). Both provisions contain an open-ended list of prohibited grounds of discrimination, such as sex, ethnicity, religion, or political belief. However, the use of apparently ‘neutral’ factors may also lead to prohibited discrimination within the meaning of Article 14 ECHR (see ECtHR in D.H. v. Czech Republic). Where such ‘neutral’ criteria place individuals belonging to a minority group in a disadvantaged situation, this may amount to indirect discrimination. The Dutch child benefits scandal illustrated how an automated pre-risk assessment by the tax authorities, based on criteria such as whether a person held dual nationality, or supposedly neutral characteristics such as the postal code of someone’s home, could result in both directly and indirectly discriminatory selection. Furthermore, a legal distinction based on nationality (or, as in this case, the length of time a person has held a nationality) could result in prohibited discrimination if such a policy has a racial basis, making it an indirect distinction based on race or ethnicity (Biao v. Denmark, para. 114). The intentions of the acting persons or bodies are not decisive: even without discriminatory intentions, their acts may nonetheless be unlawfully discriminatory. Without objective and reasonable justification, measures with disproportionate and prejudicial effects on a specific group are prohibited (Biao, para. 92).
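To make the mechanism of indirect discrimination concrete, here is a minimal, purely illustrative Python sketch. All data, postal codes and group labels are hypothetical; the point is only that a rule which never mentions ethnicity can still produce starkly unequal selection rates when the ‘neutral’ feature it relies on correlates with a protected characteristic.

```python
# Purely illustrative sketch with hypothetical data: a facially neutral
# rule (flagging on postal code) disadvantages a protected group because
# the 'neutral' feature correlates with group membership.
from collections import Counter

# Hypothetical applicants: (postal_code, group)
applicants = [
    ("1100", "minority"), ("1100", "minority"), ("1100", "majority"),
    ("2200", "majority"), ("2200", "majority"), ("2200", "majority"),
    ("1100", "minority"), ("2200", "majority"),
]

HIGH_RISK_CODES = {"1100"}  # a facially neutral, geography-based criterion

totals, flagged = Counter(), Counter()
for postal_code, group in applicants:
    totals[group] += 1
    if postal_code in HIGH_RISK_CODES:
        flagged[group] += 1

# Comparing selection rates per group makes the disparate impact visible,
# even though group membership is never used as an input.
for group in totals:
    print(f"{group}: flagged {flagged[group]}/{totals[group]} "
          f"({flagged[group] / totals[group]:.0%})")
# minority: flagged 3/3 (100%); majority: flagged 1/5 (20%)
```

Comparing per-group selection rates in this way is, in essence, what a court would need to do to detect indirect discrimination; the difficulty in practice is that neither the criteria nor such statistics are usually disclosed.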

The right to non-discrimination is incorporated in the PNR Directive. Article 6(4) of the PNR Directive provides that any assessment of travellers prior to their arrival against pre-determined criteria must be carried out in a non-discriminatory manner; according to the CJEU, this provision covers both direct and indirect discrimination. To safeguard the non-discriminatory and proportionate use of pre-determined criteria, the CJEU defines four conditions (paras 197-201). First, to avoid direct and indirect discrimination, the criteria must be defined in such a way that, ‘while worded in a neutral fashion, their application does not place persons having the protected characteristics at a particular disadvantage.’ For this purpose, Passenger Information Units (PIUs) should establish, in a ‘clear and precise manner, objective review criteria’. Second, to ensure the targeted, proportionate and specific nature of the pre-determined criteria, they must specifically target individuals who ‘might be reasonably suspected of involvement in terrorist offences or serious crime’ as covered by the PNR Directive. Third, to contribute to the reliability and proportionality of (the use of) those criteria, they must take into consideration both the incriminating and the exonerating circumstances involved. Last, following the strict necessity test, the pre-determined criteria must be reviewed regularly. According to the CJEU, this latter requirement means that the criteria must be updated in accordance with the circumstances justifying their being taken into consideration, but also in the light of acquired experience, so as to reduce the number of ‘false positives’ as much as possible.

The CJEU underlines the role of national PIUs in ensuring the implementation of these safeguards by referring to Article 6(5) and (6) of the Directive: they should individually review any positive match by non-automated means, in order to identify ‘as much as possible’ any ‘false positives’, but also to exclude any discriminatory results (para. 203). While the CJEU thus emphasizes the responsibility of Member States to ensure a non-discriminatory risk assessment, it at the same time leaves them wide discretion in deciding what counts as a ‘clear and precise’ formulation of ‘objective review criteria’. Moreover, the chosen formula of ‘acquired experience’ does not guarantee that personal bias in the decision-making of PIU officers is excluded. This leaves an important but difficult role for courts in detecting and proving discrimination.
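How a PIU might operationalise these conditions can be pictured in a short sketch. The following Python fragment is a hypothetical illustration, not the actual design of any PIU system: the criteria are fixed and documented by humans, every positive match is passed to non-automated individual review, and confirmed false positives are logged so that the criteria can be reviewed regularly in the light of ‘acquired experience’.

```python
# Hypothetical illustration of the CJEU's safeguards, not any real PIU system:
# (1) clear, pre-determined, human-defined criteria; (2) non-automated
# individual review of every positive match; (3) logging of confirmed
# false positives to feed the regular review of the criteria.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Criterion:
    name: str                      # documented in a clear and precise manner
    rule: Callable[[Dict], bool]   # objective, human-defined test

def screen(passenger: Dict, criteria: List[Criterion],
           review: Callable[[Dict, List[str]], bool],
           log: Dict[str, int]) -> bool:
    """Automated stage applies only the pre-determined criteria; any
    positive match goes to a human reviewer before any consequence follows."""
    hits = [c.name for c in criteria if c.rule(passenger)]
    if not hits:
        return False
    log["positives"] = log.get("positives", 0) + 1
    confirmed = review(passenger, hits)  # non-automated individual review
    if not confirmed:
        # 'acquired experience': confirmed false positives are recorded
        log["false_positives"] = log.get("false_positives", 0) + 1
    return confirmed

# Hypothetical usage: one criterion; the human reviewer rejects the match.
criteria = [Criterion("cash_ticket_no_luggage",
                      lambda p: bool(p.get("paid_cash")) and not p.get("luggage"))]
log: Dict[str, int] = {}
print(screen({"paid_cash": True, "luggage": False}, criteria,
             review=lambda p, hits: False, log=log))
print(log)  # {'positives': 1, 'false_positives': 1}: a rising false-positive
            # rate would trigger the regular review of the criteria
```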

Right to effective judicial protection

The right to judicial redress is included in Article 13(1) of the PNR Directive. It provides that, in respect of all processing of personal data pursuant to the Directive, every passenger shall have the same right to protection of their personal data, rights of access, rectification, erasure and restriction, as well as the rights to compensation and judicial redress, as laid down in EU and national law and in implementation of Framework Decision 2008/977/JHA (now replaced by the Law Enforcement Directive 2016/680 or ‘LED’). Importantly, in Ligue des droits humains, the CJEU emphasizes the necessity of transparent and informed decision-making, not only for the right to judicial redress itself, but also to allow the individual to decide whether or not to lodge an appeal.

According to the CJEU, this safeguard is particularly necessary in cases where AI-based decision-making carries the risk of discriminatory outcomes. In general, the CJEU follows the AG in concluding that the PNR Directive precludes the use of artificial intelligence in self-learning (‘machine learning’) systems capable of modifying the assessment process without human intervention or review. This ban applies, as explained further by Gerards in her contribution to this Verfassungsblog series, to the development of the assessment criteria to be used in the screening process, including the weighting of those criteria (para. 194). Importantly, the CJEU considers that the use of such technology makes it impossible for data subjects to understand the reason why a given program arrives at a positive match and to challenge the non-discriminatory nature of the results. This problem, according to the CJEU, is related to the ‘opacity which characterizes the way in which artificial intelligence works’, depriving data subjects of their right to effective judicial protection as protected by Article 47 of the Charter (para. 195).
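The line the CJEU draws between permissible pre-determined criteria and prohibited self-learning can be pictured as a question of who may change the assessment logic. The sketch below is a hypothetical illustration of that distinction only: the system may suggest a change to the criteria or their weighting, but nothing takes effect without explicit human review.

```python
# Hypothetical illustration of the Court's distinction: a system may
# *suggest* changes to assessment criteria or weights, but no change may
# take effect without human intervention and review.
from typing import Dict, Optional

class CriteriaStore:
    def __init__(self, weights: Dict[str, float]):
        self._weights = dict(weights)  # the pre-determined, human-set criteria

    def propose_update(self, suggested: Dict[str, float],
                       approved_by: Optional[str]) -> None:
        if approved_by is None:
            # A self-learning system would silently apply 'suggested' here;
            # modifying the assessment process without human review is
            # exactly what the Court holds the Directive to preclude.
            raise PermissionError("criteria may not change without human review")
        self._weights = dict(suggested)  # change is traceable to a reviewer

store = CriteriaStore({"route_risk": 0.4, "booking_pattern": 0.6})
try:
    # an automatic, unreviewed update is rejected
    store.propose_update({"route_risk": 0.7, "booking_pattern": 0.3},
                         approved_by=None)
except PermissionError as err:
    print(err)

# the same update, gated behind explicit human approval, is allowed
store.propose_update({"route_risk": 0.7, "booking_pattern": 0.3},
                     approved_by="reviewing officer")
```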

On the basis of Article 13(1), competent authorities must ensure, according to the CJEU, that the person concerned is able to understand ‘how those criteria and those programs work’, so as to allow him or her to decide ‘with full knowledge of the relevant facts’ whether or not to challenge the unlawful and possibly discriminatory nature of those criteria (para. 201). For the CJEU, this obligation does not necessarily mean that a person is entitled, ‘during the administrative procedure, to become aware of the pre-determined assessment criteria’ (para. 210). These findings raise questions about the practical implications of this right to information: how can someone who is refused boarding address the possibly discriminatory nature of the prior risk assessment, or assess his or her chances of a successful judicial review, without knowing the assessment criteria?

The CJEU seems to be aware of this problem where it refers in Ligue des droits humains to earlier case law dealing with Article 47 of the Charter, in order to clarify the scope of protection under Article 13(1) of the PNR Directive (paras 210-211). The CJEU mentions two judgments in the context of migration: R.N.N.S. and K.A. v Minister van Buitenlandse Zaken (joined cases C-225/19 and C-226/19) and ZZ v. SSHD (C-300/11). R.N.N.S. and K.A. v Minister van Buitenlandse Zaken concerned the right to effective remedies against the refusal of a short-term visa by a Member State on the basis of an objection from another Member State, which had been consulted in accordance with the rules in Articles 22 and 23 of the Visa Code. Generally, in these cases, visa applicants are informed neither about the precise content of the objection, nor even about which Member State raised it. These practices may result in a kind of ‘black box’ decision-making, comparable to the use of algorithms or automated decision-making. It is therefore to be welcomed that the CJEU implicitly draws this parallel, applying procedural guarantees developed for visa decision-making to the context of the PNR Directive. The CJEU refers to paragraph 43 of the R.N.N.S. judgment, in which it held that, in order to ensure that the judicial review guaranteed by Article 47 of the Charter is effective, ‘the person concerned must be able to ascertain the reasons upon which the decision taken in relation to him or her is based, either by reading the decision itself or by requesting and obtaining notification of those reasons’ (Ligue des droits humains, para. 210). In R.N.N.S., the CJEU further clarified that the court with jurisdiction should have the power ‘to require the authority concerned to provide that information, so as to make it possible for him or her to defend his or her rights in the best possible conditions and to decide, with full knowledge of the relevant facts, whether there is any point in applying to the court with jurisdiction’ (R.N.N.S., para. 43). This information should put the national court ‘in a position in which it may carry out the review of the lawfulness of the national decision in question’ (R.N.N.S., para. 43).

The reference by the CJEU in Ligue des droits humains to the R.N.N.S. ruling is important for two reasons. First, the CJEU obliges Member States to ensure that individuals have access to national courts which are empowered to review the lawfulness of the use of pre-determined criteria and of the programs applying them. Second, Member States must also guarantee access to courts which are able to examine all the grounds and evidence on the basis of which PNR decisions were taken. In R.N.N.S., the CJEU also stressed the relevance of the fundamental right to good administration, included in Article 41 of the Charter, obliging national authorities to give reasons for their decisions (R.N.N.S., para. 33). Whereas the wording of Article 41 only protects an individual in his or her relations with EU institutions, the CJEU held in R.N.N.S., as it had already stated in previous cases (see for example YS. and M. and S.), that the right to good administration reflects a general principle of EU law which is applicable to Member States when they are implementing that law, and that this encompasses the obligation of the administration to give reasons for its decisions (para. 34).

Furthermore, in Ligue des droits humains, the CJEU underlines that not only the court responsible for reviewing the legality of the decision adopted by the competent authorities, but also the individual, must have the opportunity to examine ‘all the grounds and the evidence on the basis of which the decision was taken’ (para. 211). In cases of AI-based decision-making, not having access to all the grounds or evidence is precisely the problem. Interestingly, the CJEU refers at this point to its judgment in the ZZ case (C-300/11). In that case, concerning the expulsion of an EU citizen on national security grounds, the CJEU held that ‘having regard to the adversarial principle that forms part of the rights of the defence, which are referred to in Article 47 of the Charter, the parties to a case must have the right to examine all the documents or observations submitted to the court for the purpose of influencing its decision, and to comment on them’ (para. 55). According to the CJEU, the fundamental right to an effective legal remedy would be infringed if a judicial decision were founded on facts and documents which the parties themselves, or one of them, have not had an opportunity to examine and on which they have therefore been unable to state their views (ZZ, para. 56). Where a Member State invokes reasons of State security, a competent national authority must be entrusted with verifying, by way of an independent examination, whether those reasons ‘stand in the way of precise and full disclosure of the grounds on which the decision in question is based and of the related evidence’ (ZZ, paras. 60-62). It is for the Member State to prove, in accordance with national procedural rules, that State security would in fact be compromised by ‘precise and full disclosure to the person concerned’ (C-300/11, paras. 61-62).

Of course, the right to legal redress vis-à-vis individual decisions should be read as complementary to the right to legal remedies with regard to data protection rights in general. This right, as developed by the CJEU in, amongst others, Opinion 1/15, Quadrature du Net, and Smaranda Bara and Others, remains relevant in the context of data processing on the basis of the PNR Directive, including the use of data for pre-risk assessment.

Conclusion

As explained elsewhere, the involvement of different Member States and actors in the decision-making on who is allowed entry and who is not already makes it difficult for non-EU citizens to challenge border decisions. The flagging of persons identified as security risks during AI-based risk assessments will create an additional barrier, not only to the mobility of persons, but also to the protection of their individual rights. This judgment, in which the CJEU emphasizes the necessity of effective judicial protection, is therefore of particular importance for non-EU citizens, who are increasingly confronted with automated border decisions based on the use of large-scale databases and risk assessments, as for example provided for in the more recent ETIAS Regulation. While the PNR Directive and the ETIAS Regulation, as highlighted in the report Artificial Intelligence at EU borders, prohibit the use, as risk indicators, of criteria which entail a high risk of discrimination (ethnicity, race, religious beliefs), these characteristics can also be correlated with or inferred from other types of data. This may result in (prohibited) indirect discrimination. In practice, it will remain difficult for both individuals and courts to detect and prove the discriminatory nature of these decisions. For this problem, the question whether the bias lies in the automated risk model or in the ‘acquired experience’ of the individual PIU officer does not seem to make much difference.