Automated predictive threat detection after Ligue des Droits Humains
Implications for ETIAS and CSAM (Part II)
In Part I of this contribution, we summarised the main findings of Ligue des droits humains regarding automated predictive threat detection and highlighted some of its shortcomings. Here, we assess what it entails for the European Travel Information and Authorisation System (ETIAS) Regulation and the EU Commission’s proposal for a Regulation on combating online child sexual abuse material (CSAM).
At first sight, ETIAS and the CSAM proposal may appear unrelated: What, after all, does the processing of visa-exempt third-country nationals’ applications for EU travel authorisations have to do with combating child sexual abuse material? The answer is that both legal instruments rely on the same tool – potentially self-learning algorithms. In that regard, both instruments are spiritual successors to the PNR Directive (the subject of Ligue des droits humains). Both undertake to automatically identify previously unknown material or persons in large data pools in order to protect public security – a task for which artificial intelligence, to the EU Commission, seems like a natural fit (see here).
However, having public authorities – or even private entities, as is the case for the CSAM proposal – unleash such technologies for security purposes raises significant rule-of-law concerns, which the CJEU highlighted in Ligue des droits humains. We conclude that the judgment exposes rule-of-law shortcomings in ETIAS and casts doubt on the proportionality of the CSAM proposal.
What Ligue des Droits Humains entails for ETIAS
The ETIAS Regulation requires visa-exempt third-country nationals to undergo pre-vetting through automated processing of the personal data they provide in their online application for a travel authorisation to enter the Schengen area (for a more detailed description of ETIAS see here and here). Applicants’ personal data will be processed, among other things, against so-called “screening rules”. These screening rules, according to Art. 33 (1) of the ETIAS Regulation, are “an algorithm enabling profiling […] through the comparison […] with specific risk indicators established by the ETIAS Central Unit […] pointing to security, illegal immigration or high epidemic risks.” Frontex, where the ETIAS Central Unit is situated, is in charge of deploying these algorithms to detect whether applicants pose any of the aforementioned risks. It is doubtful whether ETIAS is compatible with the standards the Court established for automated predictive threat detection in Ligue des droits humains.
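To make tangible what profiling against risk indicators can look like, consider the following minimal sketch. It is purely hypothetical: the actual ETIAS screening rules are confidential, and the field names, indicator names and matching logic below are our illustrative assumptions, not anything taken from the Regulation or its implementing acts.

```python
# Minimal, purely hypothetical sketch of screening against risk indicators.
# The field names, indicator names and matching logic are our assumptions;
# the actual ETIAS screening rules are not public.
from dataclasses import dataclass

@dataclass
class RiskIndicator:
    name: str             # label for the risk combination
    field: str            # application data field the rule inspects
    flagged_values: set   # values treated as matching the indicator

def screen_application(application: dict, indicators: list[RiskIndicator]) -> list[str]:
    """Return the names of all indicators the application matches (the 'hits')."""
    return [
        indicator.name
        for indicator in indicators
        if application.get(indicator.field) in indicator.flagged_values
    ]

# Hypothetical usage: any hit triggers manual verification by the Central Unit.
indicators = [
    RiskIndicator("security_risk_combination_1", "occupation", {"occupation_x"}),
    RiskIndicator("epidemic_risk_combination_1", "country_of_residence", {"country_y"}),
]
application = {"occupation": "occupation_x", "country_of_residence": "country_z"}
hits = screen_application(application, indicators)
if hits:
    print("Forward to ETIAS Central Unit for manual verification:", hits)
```

Even in this toy form, the sketch shows where the legal questions arise: everything turns on who defines the indicators, how they are derived, and what exactly the human reviewing a hit is required to assess.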
To begin with, it is unclear whether ETIAS complies with the Court’s standards requiring “clear and precise” criteria to ensure meaningful human review. The Regulation itself does not contain such criteria; it merely states that the ETIAS Central Unit shall conduct an initial verification of any hits (Art. 22 (3)). Thereupon, the ETIAS Central Unit forwards verified hits to the responsible ETIAS National Unit, which shall then “assess” the risks in question (see Art. 26 (3) lit. b, (4), (5), (6)). This wording is anything but “clear and precise”. It is also questionable whether the specification of these standards can be delegated to Member States to the same extent as in the case of the PNR system: the latter is regulated by a Directive and therefore geared towards wide margins of discretion for Member States, while ETIAS was established by a Regulation.
Furthermore, having the EU legislature define the purpose and form of human intervention in decisions about ETIAS applications in a proper and transparent process is a matter of democratic legitimacy. While Member States’ executives can certainly regulate the details, defining the purpose of human judgment and preventing its displacement by algorithms is not a technocratic sideshow. Therefore, the EU entities which enjoy the most democratic legitimacy should be meaningfully involved – not just the executive. This was also the issue when the Administrative Court of Wiesbaden decided that the German legislature must reform the German law transposing the PNR Directive (the Fluggastdatengesetz), rather than completely delegating the implementation of Ligue des droits humains to the German PIU.

From an institutional perspective, without clear guidelines, ETIAS National Units may also not be in a position to ensure meaningful human intervention: Given that the screening rules are developed under the responsibility of Frontex (Art. 33 (4)), National Units may lack the necessary expertise on the inner workings of the ETIAS screening rules. While National Units will be supported by the ETIAS Screening Board and a “Practical Handbook”, the former includes no independent supervisory entity (see Art. 9 (2)), and the content of the latter remains underspecified in the Regulation (Art. 93). Leaving National Units completely in charge without clear guidelines and independent oversight may not only lead to false suspicion, but could also perpetuate discrimination through selective adherence bias: National Units may harbour their own, culturally and historically grown biases against certain groups of ETIAS applicants, thus creating an additional layer of direct and indirect discrimination.
One possible solution would be to strengthen the role of the ETIAS Central Unit beyond the formalistic manual verification of hits. This would require equipping the Central Unit, first, with clear and precise substantive criteria for determining potential risks and, second, with guidelines on how to prevent automation bias as well as selective adherence. Stakeholders with human rights expertise at Frontex could be involved in the process and empowered to meaningfully influence the review process for compliance with human rights standards. For that purpose, the role and access rights of the ETIAS Fundamental Rights Guidance Board (see Art. 10) should be strengthened. Such an upgrade of the Central Unit’s role could improve the uniformity of administrative practices and compliance with the principle of non-discrimination. Of course, considering what is known about the agency’s involvement in systematic human rights violations, there are concerns regarding the desirability of entrusting Frontex with additional tasks. Thus, any additional tasks delegated to the Agency must be tied to strong safeguards and robust oversight powers for the European Data Protection Supervisor.
Furthermore, ETIAS suffers from the opacity against which Ligue des droits humains cautions. The ETIAS screening rules have been criticised (here and here) for their lack of transparency, and the relevant delegated and implementing decisions of 23 November 2021 do virtually nothing to increase legal certainty. Applicants’ right to an effective judicial remedy is further undermined by the fact that notifications about negative decisions will not enable failed applicants “to understand how those criteria and those programs work” (Ligue des droits humains, para. 210). Art. 38 (2) (c) of the ETIAS Regulation only provides for the communication of the category of data processed, rather than a human-centred communication of the grounds of refusal. The mandatory form used for refusals of travel authorisations, as prescribed by Commission Implementing Decision 2022/102, does not instruct officers on how to meaningfully substantiate their assessment with relevant facts. Though the form contains a free text field, the implementing decision sets no requirements on how comprehensive the text accompanying the refusal of a travel authorisation should be.
Proposed CSAM Regulation clashes with Ligue des Droits Humains
The findings in Ligue des droits humains may be relevant for the Commission’s proposal for a Regulation laying down rules to prevent and combat child sexual abuse (CSAM proposal). The proposal constitutes a prime example of privatised surveillance, similar to the PNR Directive, whereby private companies or professions are called on to cooperate with state authorities in the fight against crime. It also forms part of an emerging legal framework on online content moderation, comprising the Digital Services Act and Regulation (EU) 2021/784 on addressing the dissemination of terrorist content online. However, the proposed rules go beyond those other legal instruments. In order to combat sexualised violence against children, the proposed Regulation, among other things, authorises competent national courts or independent administrative authorities, upon request by designated national “Coordinating Authorities”, to issue “detection orders” (Art. 7 (1)). Such detection orders, as per Art. 10 (1), oblige providers of hosting services, such as Facebook or YouTube, or interpersonal communication services, such as WhatsApp, to use automated systems to detect and report not only known, but also new child sexual abuse material and grooming. While known and classified CSAM can be recognised through images’ digital quasi-fingerprints (so-called hashes), this is not possible for unknown CSAM. Automatically detecting the latter is only possible by first training self-learning algorithms on patterns in previously classified CSAM (see for that purpose Arts. 44 and 36) and then unleashing them on all users of the service in question. Thus, with regard to unknown CSAM and grooming, the CSAM rules entail generalised reporting obligations and generalised surveillance of all users’ interpersonal communications. As a result, the CSAM proposal has been criticised for violating Articles 7 and 8 of the Charter (see here, here, here and here). In addition to this critique, the CSAM proposal is not in line with the CJEU’s standards for automated predictive threat detection, as established in Ligue des droits humains.
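The technical difference between detecting known and unknown material carries much of the legal weight here, so a simplified sketch may help. It contrasts hash-based matching – a lookup against a blocklist of fingerprints of already-classified material – with classifier-based detection of new material, a statistical judgment by a trained model. Everything below is an illustrative assumption: real systems use perceptual rather than cryptographic hashes, and the `classifier` object stands in for a model of the kind Arts. 36 and 44 envisage.

```python
import hashlib

# Hypothetical blocklist of fingerprints of already-classified material.
# Real systems use perceptual hashes (robust to re-encoding and cropping);
# a cryptographic hash is used here only to keep the sketch short.
KNOWN_HASHES: set[str] = set()

def detect_known(image_bytes: bytes) -> bool:
    """Known material: an exact lookup against the blocklist – a match is
    (near-)certain, so false positives are rare by construction."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_HASHES

def detect_unknown(image_bytes: bytes, classifier, threshold: float = 0.9) -> bool:
    """New material: a trained model outputs a score, and a threshold turns
    it into a flag – every flag is a statistical guess, so false positives
    are built in by design."""
    return classifier.predict_probability(image_bytes) >= threshold
```

The trade-off is structural: lowering the threshold catches more unknown material but flags more innocent content, while raising it does the reverse – and the proposal leaves this calibration largely to providers.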
First, the CSAM proposal does not comply with the standards established in Ligue des droits humains regarding transparency and legal contestability. It is unclear how private service providers can be guaranteed to comply with established rule-of-law standards when screening their users’ communications. The proposal allows, but does not require, them to use the screening software developed by the EU Centre on Child Sexual Abuse (see Art. 10 (2)), thus opening the door for non-transparent, commercial software. It is not clear how supervisory authorities, such as the EDPS, can guarantee that the software in use complies with non-discrimination and other quality standards. The CSAM proposal does not provide access to source code for supervisory authorities or affected persons. In fact, it deliberately curtails access to the databases of indicators in the name of security (Art. 46). Given the additional lack of stringent notification and explanation requirements, CSAM algorithms are likely to produce large amounts of false and stigmatising suspicion, based on opaque and hard-to-challenge rules – precisely what the Ligue des droits humains standards seek to prevent.
Moreover, the CSAM proposal raises concerns regarding the Court’s pronouncements on high false positive rates. The Court links the severity of interferences with fundamental rights to the reliability of the AI technology used – the higher a system’s false positive rate, the more serious its interference with fundamental rights. Distinguishing CSAM from legal content (such as consensual sexual activity among teenagers, or medical or family photos) is highly context-dependent, with regard to both the content and the modes of production and dissemination. In the foreseeable future, no AI system will be capable of such a complex contextual assessment. Additionally, the CSAM proposal would perpetuate the PNR system’s base rate fallacy problem. Rather than deploying algorithmic profiling in a targeted way (for example, within databases of known prior offenders, or in empirically proven CSAM-prone places, such as certain dark net forums), designated service providers will have to sift, on short notice, through all communications on their services to find CSAM “needles” in a haystack consisting of billions of private messages.
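A back-of-the-envelope calculation shows why this base rate matters. All figures below are illustrative assumptions for the sake of argument, not empirical claims about any real system or provider – and they are deliberately generous to the detector:

```python
# Illustrative base-rate calculation. All figures are assumptions for the
# sake of argument, not empirical data about any real system or provider.
messages   = 1_000_000_000   # private messages scanned per day
prevalence = 1 / 100_000     # assumed share of messages actually containing CSAM
tpr        = 0.99            # assumed (optimistic) true-positive rate
fpr        = 0.001           # assumed (optimistic) false-positive rate

actual_cases    = messages * prevalence
true_positives  = tpr * actual_cases
false_positives = fpr * (messages - actual_cases)
precision       = true_positives / (true_positives + false_positives)

print(f"flags per day:          {true_positives + false_positives:,.0f}")
print(f"share of correct flags: {precision:.1%}")
# -> roughly a million flags per day, of which only about 1% concern
#    actual abuse material; ~99% are innocent messages exposed to review.
```

Under these optimistic assumptions, over a million messages would be flagged every day, yet only around one in a hundred flags would actually concern abuse material – the rest would be innocent private communications exposed to human review.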
According to Ligue des droits humains, systems producing a “fairly substantial number of false positives” “depend on the proper functioning of the subsequent verification of the results […] by non-automated means” (para. 124) in order to remain proportionate. This review must be guided by “clear and precise rules” (para. 205). The CSAM proposal, however, just like ETIAS, contains no precise rules for that purpose; it merely stipulates that the EU Centre on Child Sexual Abuse – an agency specifically created for the Regulation’s implementation – shall discard “manifestly unfounded” reports (Art. 48 (2)) and that the training data must be chosen on the basis of a “diligent assessment” (Art. 36 (1)). Moreover, given the extremely intimate nature of the communication contents in question, the review process itself – irrespective of its result – already constitutes a particularly serious interference with Articles 7 and 8 of the Charter and arguably a violation of the essence of these rights. After all, complete strangers will, without users’ consent or knowledge, view their intimate private messages, which may often describe sexualised activity or depict nudity.
Another respect in which the CSAM proposal raises concerns similar to those touched upon by the judgment is that it indiscriminately interferes with the exercise of a fundamental freedom within the EU. While quite lenient when it comes to extra-EU flights, due to the traditional border controls at the EU’s external borders, Ligue des droits humains insists on much stricter selection standards when travel within the EU is used as an occasion for mass data retention and processing. In paragraphs 279 and onwards, the Court emphasised that mass data retention and processing regimes undermine the freedom of movement (Art. 45 of the Charter, Art. 20 (2) (b) TFEU) and the absence of internal border controls (Art. 67 (2) TFEU), unless there is “a genuine and present or foreseeable terrorist threat” (para. 291). This rationale extends to the CSAM proposal a fortiori: Whereas being subjected to scrutiny when travelling is expected to some extent, the same cannot be said for private communications. Confidentiality of communications forms part of the right to privacy and is a reasonable expectation within EU Member States. The CSAM detection orders shatter this expectation, thus curtailing and creating chilling effects for the free flow of communication and for the freedom to conduct a business enshrined in Art. 16 of the Charter. Users and providers of the targeted services could be deterred from engaging in legal (and even desirable) activities because they fear sanctions.
According to Ligue des droits humains, this risk must be limited through specific selection criteria guaranteeing that the freedoms at play are only curtailed as a proportionate response to a specific threat. In the context of air travel, the Court required limitations to “certain routes or travel patterns or to certain airports, stations or seaports” (para. 291). Art. 7 (4) (a) of the CSAM proposal does stipulate that judicial authorities may only permit a detection order where there is “evidence of a significant risk of the service being used for the purpose of online child sexual abuse”. Section 6 provides that, for detection orders regarding new CSAM, previous mitigation measures as well as detection orders regarding known CSAM must have proven unsuccessful. These selection criteria, however, do not rise to the level of specificity required by the Court. A detection order could still pertain indiscriminately to all communication transmitted by a service provider – potentially billions of messages. The movement-based equivalent of such detection orders would be to oblige a market-dominating airline or bus company to transmit data about all their travel connections for automated predictive threat detection. The Court’s criteria, however, are location-based. A reasonable equivalent to “airports, stations or seaports”, irrespective of its technical feasibility, would be detection orders pertaining to specified CSAM-prone communication nodes, such as URLs, dark net forums, servers or chat groups. The detection orders as established in the CSAM proposal would therefore indiscriminately curtail the free flow of communication and services within the EU and would be disproportionate according to the logic of Ligue des droits humains.
Conclusion
This contribution has sought to draw out the PNR decision’s consequences for other security-related instruments engaging in automated predictive threat detection. ETIAS, though an immigration control tool, explicitly lists contributing to a high level of security among its purposes (Art. 4 (1) (a)); thus, as in the case of the PNR Directive, it hovers in between an immigration and a security tool encompassing surveillance of mobility. The CSAM proposal, much like the PNR Directive, exemplifies privatised surveillance, but in this case the predictive threat detection itself is delegated to the private sector without sufficient safeguards. We have argued that, in pursuing similar strategies, both the ETIAS Regulation and the CSAM proposal fall short of Ligue des droits humains in very similar ways. Shedding light on both systems’ shortcomings has demonstrated that the decision will be crucial for the future development of EU security law. Further forward-looking, innovative scholarship is needed to gain a solid grasp of the implications of this complex and challenging decision.