03 November 2024

European Security and the Threat of ‘Cognitive Warfare’

Beware of the Algorithmic Ministry of Truth

Allowing Western governments to deploy AI-supported systems to counter the perceived threat of ‘cognitive warfare’ from Russia and other authoritarian actors will likely do more harm than good to the fundamental rights of citizens and the liberal-democratic system such measures purportedly seek to preserve.

A new kind of threat

‘Your mind is under constant attack’, a slightly unhinged video proclaims in bold yellow letters set against unsettling images suggesting a string of nefarious subversive activities by Russia against audiences in the states of the NATO alliance. The YouTube clip, entitled ‘Protecting the Alliance Against the Threat of Cognitive Warfare’ and published on an official NATO channel in May 2023, continues with its dire warning: ‘They seek to affect the behaviours and beliefs of individuals and populations in order to fracture Western societies and ultimately the rules-based international order’.

Cognitive warfare is ‘a new kind of threat’, as NATO alleges elsewhere, defining it as a war that is ‘fought not with bombs and missiles, but with lies and manipulation’. Such a strategy is of course not novel as such, bearing a strong family resemblance to the age-old concept of propaganda. What is unprecedented, and the reason for the renewed and urgent attention the concept has attracted in political and security discourse in Europe, is the development and now ubiquity of digital means of communication. Thanks to the ease with which manipulative information can be created and disseminated on the internet, cognitive warfare is now often regarded as Moscow’s mightiest sword in its ‘hybrid’ conflict with ‘the West’—that is, the members of NATO and the European Union (EU). After allegedly tricking a hapless U.S. electorate into voting for Trump in 2016 and helping to push the British out of the European Union, so the common assumption in mainstream media reporting goes, Russian fingerprints can now be found on virtually every upheaval in a Western society that involves false or misleading information online. Not only have such campaigns been linked to the unexpected election of a pro-Kremlin candidate in Slovakia’s presidential election; even the global panic about a bedbug infestation in Paris in 2023 has been attributed to Russian meddling with the online information ecosystem. Although Russia is considered the principal actor in this ‘new battlespace’, other non-democratic states such as China, Iran, and North Korea have begun to try out similar tactics against their adversaries.

In the eyes of the security establishment in the EU and NATO, information has become a weapon directed at the guileless populations in the open and democratic societies from Los Angeles to Tartu. And the assumption is that the threat of cognitive warfare is only going to increase: what the EU’s External Action Service calls ‘foreign information manipulation and interference (FIMI)’ is expected to receive yet another boost from the rapid further development and proliferation of generative artificial intelligence (AI), with its large language models (LLMs) and ‘deepfakes’. Researchers at the World Economic Forum, which already lists the scourge of ‘false information’ at the very top of current global concerns, expect such widely available and free AI tools to ‘automate and expand disinformation campaigns’ with increased speed and precision. In particular, the technology’s purported capability to personalize disinformation campaigns by tailoring a message to the preferences and convictions of the target audience, what Luciano Floridi recently dubbed ‘Hypersuasion’, is supposed to elevate the threat to an entirely new level.

Amid the general hype surrounding generative AI, the clarion calls are growing louder. At a roundtable at the 2024 Munich Security Conference in February, the convenors predicted that this year ‘we are likely to witness the largest use of deceptive AI in human history to date’, eroding ‘trust in democratic institutions . . . as well as international security’. The NATO video, accordingly, ends with an urgent call to action: ‘We must be ready to counter cognitive threats. We must protect our democratic principles and our way of life.’ Such statements are expressions not only of the conviction that cognitive warfare is one of the principal challenges for the future of European security in the 21st century. The framing also implies that influence campaigns by geopolitical adversaries are comparable to kinetic forms of warfare, with a direct link between cause and effect. This blogpost critically interrogates that premise. In line with the other contributions to the Symposium, it asks whether countermeasures conceived so far can be considered appropriate in terms of the rights and values they seek to protect and cautions against an impending overreliance on the promises of AI-based tools.

Countering AI with more AI

If we accept the premise that all these warnings point to a real peril, what would be the most appropriate policy response for leaders in the EU and NATO? Increasingly, the answer on offer is: more AI. Over the past few years, the EU has introduced a panoply of regulatory responses, from the voluntary Code of Practice on Disinformation to the Digital Services Act (DSA), which has established a risk assessment and mitigation regime to address mis- and disinformation on very large online platforms. The aggravating factor of generative AI has assumed centre stage in this context, as exemplified by the recently issued Commission Guidelines on the mitigation of systemic risks for electoral processes. Yet reliance on top-down regulation in combination with voluntary commitments by the leading social media platforms alone is considered insufficient in light of the perceived gravity of the threat of cognitive warfare.

Instead, both NATO and the EU have increasingly turned toward technical solutions, seeking to stem the tide of (AI-generated) disinformation with equally algorithmic countermeasures. In recent years, funding has been approved for a whole range of AI-based tools: verification algorithms to assist fact checkers and journalists, applications able to identify and flag AI-generated disinformation, the de-amplification of false narratives, and even the use of generative AI to automatically create and distribute factually correct responses to circulating disinformation. Focusing less on the content of such campaigns than on the adversaries carrying them out, another much-noticed project has developed the concept of a ‘cognitive warfare monitoring and alert system’. The tool is envisioned to work with machine learning models capable of detecting adversarial activities across social networks and online media outlets and autonomously tracking and monitoring their progress. Further down the line, citizens might then receive notifications alerting them to ongoing disinformation campaigns directly on their smartphones or other connected devices.
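To illustrate, in purely schematic terms, what such a cross-platform monitoring pipeline implies, the following Python sketch shows one conceivable and deliberately naive structure: posts from several platforms are scored by a placeholder classifier, near-duplicates are grouped, and any narrative appearing on more than one platform is flagged as a candidate ‘campaign’. All names, thresholds, and the keyword ‘watchlist’ are assumptions made purely for illustration and do not describe the actual project.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Post:
    platform: str   # e.g. "platform_a", "platform_b" (hypothetical sources)
    author: str
    text: str

def suspicion_score(post: Post) -> float:
    """Placeholder for an ML classifier. A real system would use a trained
    model; here we simply flag posts containing any 'tracked narrative'
    keyword, which already shows how crude such signals can be."""
    tracked_narratives = {"bedbugs", "vaccine"}  # assumed watchlist, for illustration only
    return 1.0 if any(k in post.text.lower() for k in tracked_narratives) else 0.0

def detect_campaigns(posts: list[Post], min_platforms: int = 2) -> list[dict]:
    """Group suspicious posts by a naive text fingerprint and treat any group
    that spans several platforms as a candidate 'campaign'."""
    groups: dict[frozenset, list[Post]] = defaultdict(list)
    for post in posts:
        if suspicion_score(post) > 0.5:
            fingerprint = frozenset(post.text.lower().split())  # naive near-duplicate key
            groups[fingerprint].append(post)
    alerts = []
    for fingerprint, members in groups.items():
        platforms = {p.platform for p in members}
        if len(platforms) >= min_platforms:
            alerts.append({"narrative": " ".join(sorted(fingerprint)),
                           "platforms": sorted(platforms),
                           "posts": len(members)})
    return alerts

if __name__ == "__main__":
    sample = [
        Post("platform_a", "user1", "Bedbugs are everywhere in Paris"),
        Post("platform_b", "user2", "bedbugs are everywhere in paris"),
        Post("platform_a", "user3", "Nice weather today"),
    ]
    for alert in detect_campaigns(sample):
        print("ALERT:", alert)
```

Even this toy version makes visible how many contestable judgment calls, such as what counts as suspicious and what counts as coordinated, are buried in the design of such a system before any question of deployment arises.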

While variations of some of these approaches are already employed by online platforms, most of these concepts envisage government-led, AI-driven counter-strategies against cognitive warfare campaigns across different platforms and websites. To be effective, such applications would need to monitor data traffic across digital networks in general in order to detect and counteract multi-platform campaigns that aim to exploit different channels to influence target audiences in the states of the EU and NATO.

Taking cognitive and social science seriously

What are we to make of this predominant story of the open communication environment as one of the principal vectors of malicious adversarial influence and the concurrent calls for the development of more and more AI tools as an antidote? Despite its prevalence, ‘cognitive warfare’ remains an indistinct concept whose purported menace is based on assumptions of causal mechanics that still do not hold up to serious scientific scrutiny. As hinted at in the NATO video and by the very use of the term ‘warfare’, policymakers in Western foreign and security circles appear to imagine the effects of adversarial information activities as analogous to those of a ballistic missile: a straightforward relationship between the initiation of a campaign and something disruptive occurring in Europe as a direct result.

But that is not how these events unfold. Study after study in the cognitive and social sciences reinforces doubts about real-world impacts and demonstrates the apparent limits of influence operations. As Alicia Wanless recently pointed out, studies that do suggest concrete effects of sustained cognitive warfare activities, for example on electoral outcomes, all too often mistake correlation for causation. Widely circulating reports about the fantastic successes of Russian disinformation campaigns in Western societies, moreover, often seem to be based mainly on claims made by the adversarial actors themselves. But obviously, ‘those who employ disinformation strategies are incentivized to exaggerate the impact of their actions’. And although more research on a more comprehensive empirical basis is certainly necessary, initial explorations of the impact of generative AI on the effectiveness of cognitive warfare suggest that this overall picture is unlikely to change much in the near future.

None of this should be taken as claiming that the ubiquity of false and misleading information online, and its exploitation for nefarious ends by malicious actors, is not a real policy problem that needs to be addressed. Although FIMI activities will only exceptionally violate existing rules of international law, disinformation campaigns can certainly cross the threshold of illegality, for example when spreading war propaganda—as confirmed by the Court of Justice of the EU in RT France v. Council of the European Union—or when qualifying as hate speech or Holocaust denial. Such cases call for a determined response by European policymakers. However, any conceivable countermeasures should be designed and employed with caution, taking into account insights from the social and cognitive sciences. In light of those insights, further reliance on AI tools is probably not the sweeping remedy we might hope it to be.

The drawbacks of techno-solutionism

For one, the implications for privacy are certainly not negligible: any such model, especially one aimed at detecting and monitoring campaigns across platforms and websites, would most likely need to be trained on large amounts of personal data. This concern aside, it remains to be seen whether these AI-based tools could ever be made to work reliably. When it comes to false content, research has shown that machine-learning models, due to the inherent limits of their training data, are mostly unable to ‘sufficiently generalize about the ever-changing facts and concepts of the rapidly evolving media landscape’. And that is quite apart from the limitations inherent in the technology itself, which can only ever produce predictions based on statistical probabilities, even if the training data were otherwise comprehensive and up to date.
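A minimal sketch can make this last point concrete. The snippet below, which uses scikit-learn and a handful of invented toy sentences and labels (all of them assumptions for illustration), trains a simple text classifier of the kind that underlies many ‘disinformation detectors’: it learns word patterns from labelled examples and then returns a probability score for new text. A novel claim about events absent from its training data is scored purely on superficial resemblance to past examples, not on whether it is actually true.

```python
# Minimal illustration of why a 'disinformation classifier' only ever returns
# statistical estimates. The training sentences and labels are invented toy
# data; a real system would use far larger corpora but faces the same limit.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "miracle cure revealed doctors hate this trick",
    "secret plot behind the election exposed",
    "government publishes official inflation figures",
    "court rules on platform regulation case",
]
train_labels = [1, 1, 0, 0]  # 1 = 'disinformation', 0 = 'reliable' (toy labels)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# A new statement about events outside the training data is scored only by
# how its words resemble past examples, regardless of its actual truth value.
new_claim = "secret court publishes election figures"
prob = model.predict_proba([new_claim])[0][1]
print(f"estimated probability of 'disinformation': {prob:.2f}")
```

Scaling up the training corpus changes the numbers, not the nature of the output: an estimate, which someone must still decide how to act upon.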

The deployment of such content-facing models would already have potentially serious ramifications for fundamental communication rights. Yet even if a model could be built that consistently detects false and misleading information online, how could a machine-learning algorithm then clear the additional hurdle of identifying which activity constitutes a campaign of ‘cognitive warfare’, as opposed to the quotidian flows of mis- and disinformation on the internet? And how could it reliably differentiate between citizens disseminating potentially harmful information for their own purposes, who could at least to some extent rely on constitutional guarantees of freedom of expression and information, and agents acting on behalf of an adversarial state, who may either initiate false narratives or simply latch onto existing ones? The latter is not only more common but also much more vexing to address. Presumably, the Parisian bedbug rumour did not originate with Russian operatives but was merely exploited once it started gaining traction. But at what point did it turn into ‘cognitive warfare’?

Of course, online platforms themselves routinely resort to AI-based instruments to govern speech on their services, be it for the purpose of content moderation or to detect what Meta calls ‘coordinated inauthentic behavior’. But such platform practices are generally legitimized through contractual arrangements between the company and the user under private law. The same does not apply to the relationship between the state and its citizens. Moreover, letting governmental agencies deploy an algorithmic system to decide between ‘good’ and ‘bad’ content online not only carries serious implications for individual communication rights, which could all too easily be exploited to crack down on free speech more broadly. It can also quickly backfire. Just imagine the reaction of a citizen, whose political preferences do not align with those of the current government, receiving an automatically triggered, official alert on their phone, informing them that the narrative about the dangers of the latest Covid-19 vaccine currently spreading online is not only false but also part of a Russian cognitive warfare campaign.

The idea of countering the seemingly unprecedented turmoil in Western information ecosystems with AI-based solutions is yet another expression of the securitization and externalization of the problem. This now dominant framing has incentivized the search for solutions in the toolbox of the defence and security sector. But it is too simple to attribute the wider societal developments behind the current situation of rapidly decreasing trust in institutions and increasing polarization exclusively to the devious machinations of some external actor.

Conclusion

Responding to the perceived perils of cognitive warfare by Europe’s geopolitical adversaries with AI-based tools means buying into yet another techno-solutionist framing that will ultimately benefit only the disruptive actors themselves. The ubiquity of false and misleading information online should be addressed, but, as recently argued by Alicia Wanless, we must abandon the ‘threat-focused approach’ and instead start ‘envisioning what sort of information ecosystem is most conducive for fostering democracy’.


SUGGESTED CITATION  Lahmann, Henning: European Security and the Threat of ‘Cognitive Warfare’: Beware of the Algorithmic Ministry of Truth, VerfBlog, 2024/11/03, https://verfassungsblog.de/european-security-and-the-threat-of-cognitive-warfare/, DOI: 10.59704/daab535653cf7e06.
