08 May 2023

Automated predictive threat detection after Ligue des Droits Humains

Implications for ETIAS and CSAM (Part I)

On 21 June 2022, the Court of Justice of the European Union (CJEU) delivered its judgment on the compatibility of the EU Directive on Passenger Name Record Data (PNR Directive) with the rights to privacy and personal data protection. Ligue des droits humains has already been qualified as a landmark decision, in which the Court had the opportunity, among other aspects, to provide comprehensive guidelines on how large-scale predictive policing should take place. In so doing, the Court followed in the footsteps of La Quadrature du Net and Opinion 1/15, which tackle a relentlessly growing toolkit of big-data-based security instruments.

In the past few years, various security law instruments (both adopted and proposed) have provided for automated predictive threat detection, meaning that they automatically sift through massive amounts of data in order to predict potential threats to public security. Given this context, Ligue des droits humains could serve as an inspiration for their legal assessment.

Against this backdrop, this two-part contribution aims first to provide an outline of the CJEU’s findings on automated predictive threat detection in Ligue des droits humains (Part I), and then (in Part II) to analyse their implications for two other legal instruments relying on automated predictive threat detection: the Regulation establishing a European Travel Information and Authorisation System (ETIAS) and the Commission’s proposal for a Regulation on combating online child sexual abuse material (CSAM). We argue that, while there remains room for elaboration, Ligue des droits humains contains a plethora of standards relevant to future security instruments.

Automated predictive threat detection in Ligue des Droits Humains

The CJEU imposed significant limits on the creation and use of algorithms and other forms of modern technology for security purposes. Ligue des droits humains gave the Court the opportunity to do so because the PNR Directive obliges designated national security authorities, so-called Passenger Information Units (PIUs), to automatically process PNR data by comparing them not only against pre-existing databases (Art. 6(3)(a)) but also against “pre-determined criteria”. The latter are algorithms which, in the Commission’s words, contain “search criteria, based on the past and ongoing criminal investigations and intelligence, which allow to filter out passengers which corresponds to certain abstract profiles […]” (see here, page 11 footnote 36). According to the Commission, pre-determined criteria serve to “identify persons involved in criminal or terrorist activities who are, as of yet, not known to the law enforcement authorities.” (see here, page 24).
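To make the notion of “pre-determined criteria” more tangible, the following minimal sketch illustrates what rule-based screening of this kind could look like in principle. The data fields, criteria and thresholds are entirely invented for the purpose of illustration; real criteria are classified, far more complex, and (as discussed below) subject to strict legal constraints.

```python
from dataclasses import dataclass

@dataclass
class PNRRecord:
    # Hypothetical fields, loosely inspired by the kinds of data the Directive
    # covers (booking, payment, baggage, itinerary); not the actual Annex I schema.
    booked_days_before_departure: int
    payment_method: str          # e.g. "cash" or "card"
    checked_bags: int
    itinerary: list              # sequence of airport codes

def matches_predetermined_criteria(r: PNRRecord) -> bool:
    """Invented example criteria flagging an abstract 'risk profile'.
    Real criteria must be targeted, proportionate, specific and
    non-discriminatory (paras 197-198 of the judgment)."""
    last_minute_cash_booking = (
        r.booked_days_before_departure <= 2 and r.payment_method == "cash"
    )
    no_bags_on_long_route = r.checked_bags == 0 and len(r.itinerary) >= 3
    return last_minute_cash_booking or no_bags_on_long_route

passengers = [
    PNRRecord(30, "card", 1, ["BRU", "JFK"]),
    PNRRecord(1, "cash", 0, ["BRU", "IST", "DXB", "MNL"]),
]
# Any positive match would still require individual, non-automated review
# before being passed on to competent authorities (paras 205-208).
flagged = [p for p in passengers if matches_predetermined_criteria(p)]
```

The decisive point, as the judgment makes clear, is not how sophisticated such rules are, but that they remain intelligible, reviewable and non-discriminatory.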

First, the judgment restricted the “use of artificial intelligence technology in self-learning systems (‘machine learning’)” by prohibiting systems that are “capable of modifying without human intervention or review the assessment process and, in particular, the assessment criteria on which the result of the application of that process is based as well as the weighting of those criteria” (para 194). As previously noted, the exact scope of that prohibition is debatable. What is clear, however, is that the Court insists on the necessity of meaningful human intervention in predictive policing systems – a general principle already enshrined in Art. 11 of the Law Enforcement Directive. Central to its assessment was the “opacity which characterises the way in which artificial intelligence technology works” (para 195), which may “deprive the data subjects […] of their right to an effective judicial remedy” (para 195).
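Read architecturally, the prohibition can be understood as a requirement that no update to the assessment criteria or their weighting becomes operative without prior, documented human review. The sketch below is purely illustrative; the names, weightings and review step are invented and do not describe any existing PIU system.

```python
# Purely illustrative human-review gate for updated assessment criteria.
operative_weights = {"last_minute_booking": 0.4, "cash_payment": 0.3, "no_baggage": 0.3}

def propose_updated_weights() -> dict:
    """Stand-in for any routine (statistical or otherwise) that suggests new
    weightings based on past results. In a prohibited self-learning set-up,
    its output would be applied directly, without review."""
    return {"last_minute_booking": 0.5, "cash_payment": 0.2, "no_baggage": 0.3}

def deploy(proposed: dict, approved_by_human_reviewer: bool) -> dict:
    # Updated criteria only replace the operative ones after a documented,
    # substantive human review; otherwise the existing criteria stay in force.
    return proposed if approved_by_human_reviewer else operative_weights

# A proposal that has not been reviewed leaves the operative criteria unchanged.
operative_weights = deploy(propose_updated_weights(), approved_by_human_reviewer=False)
```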

The Court further highlighted the risk of discrimination. While the PNR Directive already acknowledged such risks in its Art. 6(4), the Court now also emphasised that that provision covers both direct and indirect discrimination (para 197). This is crucial because pre-determined criteria may be based on seemingly innocuous personal data which may, however, serve as proxies for prohibited characteristics. For example, a person’s address may be used as a proxy for religion, race or ethnic origin. Algorithms must be “targeted, proportionate and specific” (para 198) and thus non-discriminatory – a finding with wider implications in the context of migration, where non-discrimination must be (but is not) embedded in the discretionary decision-making process. The technologies used will also have to comply with a set of additional quality standards: they will have to incorporate “incriminating” as well as “exonerating” circumstances (para 200), thus bolstering their reliability and reducing false-positive rates. The Court stated that high false-positive rates, which are present in Member States’ statistics, may undermine a system’s suitability and proportionality (see para 123). The CJEU therefore stressed that the strict necessity of the pre-determined criteria must be reviewed regularly (para 201). In addition, the Court underscored that the Data Protection Officer and national supervisory authorities must be equipped with robust rights of access to the content of the pre-determined criteria (para 212).

Whilst acknowledging “the fairly substantial number of ‘false positives’”, the CJEU stressed that “the appropriateness of the system […] essentially depends on the proper functioning of the subsequent verification of the results […] by non-automated means” (para 124). For that purpose, the Court determined that Member States must “lay down clear and precise rules capable of providing guidance” for the review (para 205). That review is meant to prevent both discriminatory results and the transfer of false matches to the competent authorities, which would subject passengers to false suspicions of involvement in terrorist offences or serious crimes. It is also aimed at ensuring reliable documentation and self-monitoring (para 207), as well as guaranteeing uniform administrative practices across PIUs in different Member States that observe the principle of non-discrimination. The results of individual human reviews must take precedence over those of automated processing (para 208).

Finally, the judgment bolstered the right to an effective judicial remedy, as enshrined in Art. 47 of the Charter of Fundamental Rights of the European Union: “The competent authorities must ensure that the person concerned […] is able to understand how those criteria and those programs work”, so that they can “decide with full knowledge of the relevant facts whether or not to exercise [their] right to the judicial redress”, pursuant to Art. 13 of the PNR Directive (para 210). This seems to imply notification requirements in cases of verified positive matches, which currently neither the PNR Directive nor most national transposition laws expressly contain.

Room for elaboration

Whereas the Court established an abundance of procedural safeguards to rein in the potential excesses of automated predictive threat detection, Ligue des droits humains also left many questions open.

First, while the Court rightly flagged false positives as a potential hurdle to a system’s proportionality, it did not clarify at what point a system produces too many of them and thereby becomes disproportionate. Although this point was raised in the oral hearing, the Court also never addressed the base rate fallacy undergirding the PNR system. The PNR Directive seeks to identify a very small number of potential terrorists and serious offenders within the general population of hundreds of millions of annual flight passengers. Like some other predictive policing systems, it therefore compels security authorities to look for the proverbial needle in the haystack. This can result in systemic flaws which make extremely high false-positive rates a mathematical near-certainty. It remains to be seen whether mere procedural safeguards can save the PNR system’s suitability as long as its underlying base rate fallacy remains unaddressed.
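The underlying arithmetic can be made concrete with a rough, purely illustrative calculation. All figures below are assumptions chosen for the sake of the example, not numbers drawn from the judgment or from Member States’ statistics; the point is that even a hypothetical system with a very low error rate would flag vastly more innocent passengers than actual offenders.

```python
# Back-of-the-envelope illustration of the base rate fallacy.
# All numbers are assumptions for illustration only.

passengers = 500_000_000       # assumed annual air passengers covered by the system
true_threats = 500             # assumed actual serious offenders among them
sensitivity = 0.99             # assumed share of real threats the system flags
false_positive_rate = 0.001    # assumed share of innocent passengers wrongly flagged

true_positives = true_threats * sensitivity                           # ~495
false_positives = (passengers - true_threats) * false_positive_rate   # ~500,000

precision = true_positives / (true_positives + false_positives)
print(f"Flagged passengers: {true_positives + false_positives:,.0f}")
print(f"Share of flagged passengers who are actual threats: {precision:.2%}")  # ~0.10%
```

Under these assumed figures, roughly half a million passengers would be flagged each year, of whom only about one in a thousand would be an actual threat.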

Second, it remains unclear what purpose and form human intervention in the PNR system must take and, in particular, how humans are supposed to meaningfully engage with the system’s automated outputs. The Court delegated the formulation of “clear and precise rules” for human review to the Member States (para 205) without providing them with much guidance. That PIU officials will be capable of meaningfully engaging with the PNR system’s automated outputs seems doubtful when, for the foreseeable future, they will be confronted with thousands of false matches.

Third, whereas the Court’s insistence on substantive human review is extremely important to prevent automation bias (an often-observed over-reliance on automatically generated recommendations, see here), questions remain as to how human review is supposed to prevent direct and indirect discrimination, given a phenomenon known as “selective adherence bias”: recent studies suggest that the ‘human in the loop’ may be predisposed to agree with those results of the automated processing that align with their personal pre-existing biases. Such biases may be based on socially induced stereotypes, beliefs and social identities, and result in the selective adoption of algorithmic advice. Humans are known to be susceptible to confirmation bias, meaning that they assign greater weight to information congruent with their prior beliefs and less to content that contradicts them. Whereas automation bias may be comparatively easy to detect, selective adherence bias may be more difficult to detect and prevent, especially where the competent authorities share the same (e.g. regional) biases as the PIU. This holds true in particular when PIUs and competent security authorities receive an excess of potentially false matches. In practice, this could result in high risks of false suspicion for members of negatively stereotyped minority groups.

A decision to be remembered

These gripes notwithstanding, the ruling provides important guidelines for assessing other security-related instruments. Some of the aforementioned standards may be tailored to the PNR context. However, they pertain to features and risks that the PNR system shares with other security instruments designed to preventively detect threats in large data pools through automated processing. Ligue des droits humains is a decision to be remembered.

In Part II of this contribution (dated 12 May 2023) we assess what it entails for the ETIAS Regulation and the EU Commission’s CSAM proposal.


SUGGESTED CITATION  Thönnes, Christian; Vavoula, Niovi: Automated predictive threat detection after Ligue des Droits Humains: Implications for ETIAS and CSAM (Part I), VerfBlog, 2023/5/08, https://verfassungsblog.de/pnr-threat-detection-i/, DOI: 10.17176/20230508-204551-0.
