08 May 2023

The Future of the European Security Architecture: A Debate Series

Picture this: you want to travel with your family to a friend’s wedding in New York. Three days before your flight is scheduled to depart, you are informed by the US embassy in your country that your electronic travel authorisation has been cancelled. The following day, you queue at the US embassy to apply for a visa. In the interview, the consular officer tells you that your travel authorisation has been revoked because the ‘algorithm’ had identified a security threat. The consular officer says that she does not know what exactly triggered the algorithm, but she presumes that it might be people you have been in contact with, places you have travelled to, or a pattern in the relation between these and other factors that the ‘algorithm’ discovered. She tells you that, in order to assess your case swiftly and to rule out, through a manual review of the algorithmic recommendation, that you pose a threat to national security, the security officer needs your travel history for the past 15 years, as well as the names and contact details of everyone in your network. Although this example is drawn from the security apparatus in the US, similar scenarios could soon materialise in the European Union, too.

This debate series is dedicated to Ligue des droits humains (Case C-817/19), a case in which the Court of Justice of the European Union (CJEU) decided on the fate of one of the main drivers of this development: the Directive on the use of passenger name record (PNR) data for the prevention, detection, investigation and prosecution of terrorist offences and serious crime (in short: the PNR Directive). The PNR Directive, as one of the first major EU-wide examples of predictive policing, is not just interesting in itself. We believe it merits more attention because it exemplifies the emergence and gradual consolidation of a new security architecture in Europe.

A new type of security: from suburbia to high-rise

Two decades ago, Europe’s security architecture resembled a middle-class suburban neighbourhood, where plots were neatly divided from the neighbouring properties and each family dwelled in a separate house. These houses strongly resembled each other, but each was planned by a different architect. Likewise, security strategies, institutions, infrastructures, and legal frameworks were, with a few exceptions, siloed along national lines. Cooperation between member states’ authorities was, as in suburban neighbourhood committees, occasional and sporadic – the exception to the architectural rule.

The legal framework, institutional cooperation, and the role of and reliance on technological instruments to predict and prevent security risks have radically changed over the past 20 years. Today, the European security architecture more closely resembles a modern high-rise. Although its inhabitants dwell in separate apartments, the building has a common infrastructure, a uniform façade, and common leisure areas, such as terraces and gyms. Its inhabitants use biometric data instead of manual keys, and complex algorithms in lieu of locks, to access the building and to keep out the unwanted. The building’s sophisticated security system, as well as the common leisure areas, are managed centrally by an opaque web of actors.

The security architecture in the EU relies on the extensive use of personal data, including biometric data, collected in large-scale, supranational databases, which are rendered interoperable and searchable through modern and potentially self-learning technologies, in order to automatically predict threats to public security. The focus on predicting and preventing potential threats through automated means consolidates a paradigmatic shift of security: targeted reactions to specific threats to public security by national police authorities have been replaced by all-encompassing surveillance practices, which are based on a general suspicion of everyone, and carried out by a web of actors so complex that even specialised scholars have a hard time disentangling it. This complexity is enhanced by the applicability of different data protection regimes and their specific oversight mechanisms.

This results in a transformation of traditional legal notions and a blurring of classical boundaries. Private power is instrumentalised for surveillance purposes by requiring companies to transmit data, which they collect and process for business purposes. Institutional separations between intelligence gathering and operative policing fall, and functional distinctions between internal security and external migration control fade. EU and national security authorities, once strictly divided, fuse into one single interwoven security apparatus. And modern technologies allow for an increasing delegation of legal decision-making powers from humans to machines, thus opening the floodgates for algorithmic discrimination, opacity in legal decision-making and an erosion of the fundamental legitimacy of state action.

The EU Directive on the use of Passenger Name Records (PNR Directive) is at the heart of the emerging European security architecture. It instrumentalises private air carriers for the indiscriminate collection and state-led automated analysis of PNR datasets relating to hundreds of millions of air passengers, in order to look for potentially suspicious patterns. In the name of combating terrorism and serious crime, it transcends functional separations between prevention and repression. By relying on member states to upgrade and connect their security authorities for cooperative, algorithmic threat detection, it obliges them to adapt their national laws in an area that used to reside firmly and exclusively in the national sphere – public security. The perceived threat of terrorism, it seems, works wonders in transforming formerly distinct national structures into one European high-rise. After all, to combat this threat, the PNR Directive creates a considerable task for European law enforcement: the amount of data collected and analysed is enormous. Schiphol airport in Amsterdam alone, where the conference that gave rise to this debate series took place, handles more than 72 million passengers annually. And it doesn’t stop with air travel: Belgium, for example, has already expanded the PNR Directive’s scope to international trains, buses and ferries. Moreover, the ETIAS Regulation and the EU Commission’s proposal for a Regulation to combat child sexual abuse material (CSAM) are partly spiritual successors to the PNR Directive: they, too, seek to predict threats to public security by means of algorithmic profiling and integrate formerly distinct national authorities into one single European security network for that purpose.

Hence, when in Ligue des droits humains, the CJEU decided on the PNR Directive’s compatibility with EU law, it simultaneously rendered a landmark decision on the emerging European security architecture.

Architecture as metaphor and practice

The metaphor of architecture, as Max Steinbeis noted in his editorial, is nothing new when it comes to describing the EU. EU law textbooks widely used the image of a Greek temple’s pillars to illustrate the pre-Lisbon distribution of competences. The metaphor also lends itself to describing the much messier post-Lisbon system of multi-level governance and the construction of the walls that separate the inside from the outside. Given ever more restrictive asylum policies, observers warn against the construction of a ‘Fortress Europe’.

Architecture, however, is not only a metaphor but also a practice that both constructs and reconstructs. Architecture as a critical practice reconstructs and digitally represents the violence wielded by states and companies over bodies, buildings and the environment, and thus renders public and visible what would otherwise remain invisible. Architecture as critical practice thus aims to bring new aesthetic sensibilities to bear upon the political and legal implications of this violence. In a similar vein, an inquiry into the legal scaffolding that enables and restricts the operation of self-learning technologies for data processing, data transfers, and the delegation of decision-making powers from humans to machines can uncover the material reality of a multi-layered structure of surveillance.

Another aspect of the notion of architecture as practice is the person of the architect. A quick look into the Treaties suggests that the member states, as masters of the Treaties, and the EU legislature design the blueprint of the European security architecture. However, the development of artificial intelligence is driven by private companies, which de facto assume the role of standard setters, given the absence of legal regulation of self-learning technologies at the state and EU level and EU institutions’ use of privately developed algorithms.

These two aspects – architecture as critical practice that makes multi-layered surveillance structures visible, and the increasing role of private actors – point to the importance of the judiciary, which has to mould the use of new technologies and power relations into existing legal norms and doctrines that may be ill-fitting. The CJEU has assumed an important role by obliging member states in a series of landmark judgments and opinions – Schrems I and II, and Opinion 1/15, for instance – to make far-reaching changes to the existing security architecture. In Ligue des droits humains, the CJEU again requires member states to make changes to the security architecture they built.

Our debate series: Taking Ligue des droits humains as a point of departure

Ligue des droits humains formed the background to a two-day conference, which took place at the University of Amsterdam from 23 to 24 February 2023. During the conference, we set out to take Ligue des droits humains as a point of departure for discussing the wide-ranging effects and problems of the emerging European security architecture. The objective of the conference was to analyse, from a multi-disciplinary perspective, how fundamental rights and other rule of law principles, such as the accountability of the involved actors and the contestability of legal decisions, can be upheld in the context of a preventive security paradigm that relies on the massive collection and analysis of personal data by self-learning technologies. More specifically, the aim was to explore how the legal standards set by the CJEU in Ligue des droits humains could contribute to upholding fundamental rights, such as the rights to data protection and non-discrimination, and to ensuring accountability and meaningful legal redress.

Two days of intense debate, of course, only mark a first step towards tackling the products and perils of the emerging European security architecture. With this debate series, we set out to continue and build on our discussions in Amsterdam.

The first set of contributions provides an analysis of the broader background of the Ligue des droits humains judgement and its implications. Christian Thönnes and Niovi Vavoula start the discussion with an analysis of the CJEU’s findings on automated predictive threat detection. While the Court established in Ligue des droits humains “an abundance of procedural safeguards to reign in the potential excesses of automated predictive threat detection”, it left open many questions about false positives and about how effective human review can be when thousands of such hits have to be checked manually.
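The scale of this review problem is easy to underestimate. The following is a rough, purely illustrative sketch in Python – the false-positive rate and the number of genuine threats are assumptions made for the sake of argument, not figures from the judgment or from the contributions – showing why even a screening algorithm that wrongly flags only a tiny fraction of passengers produces a review queue that dwarfs the number of actual threats:

# Purely illustrative base-rate calculation with assumed figures:
# even a very low false-positive rate yields an enormous number of hits
# once it is applied to tens of millions of passengers.

annual_passengers = 72_000_000    # roughly Schiphol's annual passenger volume (see above)
false_positive_rate = 0.001       # assumption: 0.1% of innocent passengers wrongly flagged
genuine_threats = 50              # assumption: a vanishingly small number of real threats

false_positives = (annual_passengers - genuine_threats) * false_positive_rate
print(f"Innocent passengers flagged per year: {false_positives:,.0f}")
# -> roughly 72,000 hits that would require manual human review,
#    orders of magnitude more than the genuine threats being sought.

Whatever the real figures, it is this asymmetry that makes the effectiveness of human review such a pressing question.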

Janneke Gerards focuses on a different point of automated data processing, namely the risk of discrimination by machine learning algorithms. In Ligue des droits humains, the CJEU was very sceptical of the use of machine learning algorithms for risk profiling and imposed strict conditions on their use, including, among others, that a human being must check the pre-determined criteria that resulted in a ‘positive hit’. Gerards argues that this might mean that “the role of a predictive algorithm is effectively taken over by humans”, who conduct risk assessments based on their own experiences and stereotyped thinking.

Any risk profiling based on machine learning requires large amounts of data to train the algorithm so that it can accurately ‘predict’ future risks. Didier Bigo and Stefan Salomon trace the emergence of the idea that large amounts of data obtained through mass surveillance programmes, especially the mass collection of passengers’ data, are an effective tool to prevent future threats. They argue that a preventive security logic emerged in the aftermath of 9/11 and the US war on terror, and eventually transformed PNR collection from a commercial activity into a security tool, which fundamentally changed the work of border guards.

Other authors elaborate on effective judicial remedies, legal contestability under the new architecture, and the complex relation between private and public actors, which is a constitutive feature of the new security architecture. The increasing reliance by governments on bulk collection of data was somewhat counterbalanced by national and European courts, which established legal safeguards that ought to prevent disproportionate government access to personal data. Yet, as Thorsten Wetzling cautions in his contribution, which focuses on the PNR Directive and the German legal framework, the necessity requirement and independent review of data collection are in practice often less robust than they appear in theory.

A precondition for an effective legal remedy is to know which legal framework applies. The new security architecture is built upon complex legal relations between public and private actors. As different legal frameworks and standards apply to data processing by private actors and public actors, Elspeth Guild and Tamas Molnar argue, the exact determination of the applicable legal norms and standards often proves to be a very intricate task.

Chloé Berthélémy takes up a different angle on the collaboration between private and public actors in the development of security technologies: the various forms of participation of private actors in the EU’s security policies. Berthélémy maps forms of collaboration that range from coerced and voluntary to proactive ‘cooperation’ of private actors, and the impact that these have on the principle of legality.

One aspect of the principle of legality is legal certainty, taken up by Amanda Musco Eklund and Magdalena Brewczyńska in their contribution. Eklund and Brewczyńska argue that the complex legal enmeshment of public and private actors means that the individual is no longer confronted only with the power of the state, but with a “network of power created by both the state and non-state actors”. This eventually has detrimental effects on the principle of legal certainty and raises broader rule of law concerns in the European security architecture.

Evelien Brouwer focuses on the particular challenges that profiling based on artificial intelligence raises for the right to an effective remedy. How can someone who is denied boarding on a flight because she has been identified as a risk challenge the possibly discriminatory nature of the risk assessment without knowing the specific assessment criteria? Despite the legal safeguards set forth by the CJEU in its PNR decision, it will, as Brouwer argues, “remain difficult for both individuals and courts to detect and prove the discriminatory nature of these decisions”.

The two final contributions set out to widen our perspective by homing in on the implications of the PNR decision for other legal instruments in EU law and for the EU’s efforts in international regulatory harmonisation. Christian Thönnes and Niovi Vavoula assess the effects of the PNR decision on the European Travel Information and Authorisation System (ETIAS) Regulation and the EU Commission’s proposal for a Regulation on combating online child sexual abuse material (CSAM). Aligning a future EU AI regime with international rules and embedding it in a transatlantic regulatory regime, as Daniel Mügge points out, are important policy goals of the EU. Fundamental rights limitations, as interpreted by the CJEU in the PNR decision and possibly further expanded in future judgements, may thus “define the outer boundaries of regulatory cooperation in the AI field — no matter how much goodwill there might be to find a compromise with, for example, the USA.”

The contributions mirror the range of topics and diversity of perspectives that properly addressing the new security architecture’s challenges requires. We believe, however, that the contributions are united in their open-endedness: there is so much left to discuss, so many problems to solve and so many standards to elaborate. We therefore bring this debate to Verfassungsblog in the hope that it will foster an even more multifaceted conversation.


SUGGESTED CITATION  Thönnes, Christian; Salomon, Stefan; Guild, Elspeth; Brouwer, Evelien: The Future of the European Security Architecture: A Debate Series, VerfBlog, 2023/5/08, https://verfassungsblog.de/pnr-debate/, DOI: 10.17176/20230508-204615-0.
