12 May 2022

The War in Ukraine, Fake News, and the Digital Epistemic Divide

The ongoing war in Ukraine sheds light on crucial challenges of our digital media landscape. Most importantly, the social media-driven “(mis)information wars” surrounding the Russian invasion once more expose a growing epistemic divide running through liberal democracies: a situation in which substantial portions of the population believe in alternative realities on a broad range of factual issues. However, it is not our knowledge of the world that is being weakened, but our commonly shared epistemic norms. The regulatory focus on truth, with measures like fact-checking and sanctioning individual pieces of disinformation, does little to cure the larger problems behind the digital epistemic divide. We should instead use the power of the law to devise new modes of intelligent speech regulation that mimic the functions formerly performed by social norms and the centralized set-up of communication conditions.

The emancipatory potential of social media

To begin on an optimistic note: the war casts some major culprits of recent debates on the regulation of social media in a more favorable light. Telegram, for example, only recently disparaged for its lack of cooperation with the German authorities and for providing encrypted channels for extremist coordination, now serves as a bulwark against Russian information control and as an important channel for coordinating resistance efforts in Ukraine. Moreover, social media data proved a powerful resource in tracing Russian troop movements leading up to the invasion. The war in Ukraine thus exposes the virtues and emancipatory potential of the decentralized production and distribution of, and access to, information enabled by modern digital information infrastructure. Campaigns for more stringent state regulation – such as those encapsulated in the EU Digital Services Act – seem less attractive when viewed in the context of aggressive states that ruthlessly make use of their regulatory and informational powers. In this way, the Ukraine war rejuvenates a more hopeful vision – popular with social network executives – of digital media as a force for good, one spreading democracy and free speech across the globe.

The challenge of decentralized speech conditions

However, the war in Ukraine also exposes the weak spots of our information ecosystem. Most importantly, the war hits liberal societies in a state of deep epistemic fragmentation, partially caused and exacerbated by the rise of social media and other digital means of information production and distribution (for a closer analysis see our article “Beyond True and False: Fake News and the Digital Epistemic Divide”, Mich. Tel. & Tech. L. Rev., Fall 2022). Significant parts of the German population accord credibility to virtually groundless Russian allegations of a “genocide” committed against Russian ethnic groups in Ukraine and to other fake news relating to the Russian “special operation”. Information misrepresenting the Russian aggression is not spread by traditional mass media; rather, it is disseminated as videos on YouTube and TikTok, posted on Twitter and Facebook, and forwarded on Telegram and WhatsApp. The astonishing success of Russian misinformation campaigns would have been all but impossible under pre-digital speech conditions. In the pre-digital era, information was a scarce commodity, flowing in one direction – coming from those who controlled the capacities to produce, print, and broadcast it. Media institutions therefore functioned as effective gatekeepers. This gave them the power to enforce and maintain relatively common standards for verifying facts and widely shared trust in common epistemic authorities. In the digital age, however, it is not speech that is scarce, but people’s attention (see e.g. Balkin, Free Speech is a Triangle, Colum. L. Rev. 118 (2018); going back to Simon, Designing Organizations for an Information-Rich World, 1969). The ability to access, produce, and distribute information has been fundamentally decentralized and democratized, subverting the media’s gatekeeping ability.

A media landscape immunized against regulatory efforts

This fundamental shift in our information ecosystem also affects the state’s ability to set and enforce rules on how we communicate. Mass media outlets are few in number and easy targets to address, regulate, and – if need be – shut down if they, for example, persistently spread large-scale misinformation. This is different in a digital, decentralized media ecosystem. Here, direct state interventions achieve little. Any account closed for spreading misinformation is replaced, at little or no cost and effort, by alternative channels. The recent EU attempt to control war-related misinformation by blocking RT and other Kremlin-controlled media outlets is a case in point. It achieved little against the overall spread of misinformation produced and promoted by these sources. The decentralized avenues of distribution provided by TikTok and the like proved largely immune to regulatory efforts. It even seems as if the current media landscape combines the harmful potential of both centralized and decentralized digital media ecosystems. Centralized actors – mass media – still produce much of the more visible digital content and set the larger tendencies and topics circulating in the media sphere (see e.g. Balkin, Digital Speech and Democratic Culture, N.Y.U. L. Rev. 79 (2004), p. 1 (10)). However, the channels of distribution are no longer in the hands of a few. The current set-up thus allows centralized actors such as Russian state-controlled media to infuse decentralized information flows with well-coordinated misinformation that then takes on a decentralized life of its own. Misinformation campaigns thus come across as following a hive logic while in fact being centrally produced propaganda efforts. They are fake news produced by fake hives.

A growing epistemic divide

These dynamics are not without consequence: substantial parts of the populations of liberal democracies live in epistemically different worlds from mainstream society. We call this the Digital Epistemic Divide: a situation in which substantial portions of the population subscribe to alternative realities on a host of issues, increasingly distinct from the factual beliefs of the rest of society. This divide is evident on many issues besides Russian war narratives: 14 percent of the French population did not trust the integrity of the election process in the context of the current presidential election (see here for an analysis of social media’s role). More dramatically, 20 percent of the US population sincerely believe in Trump’s “big lie” of the supposedly stolen 2020 election. Vaccine skepticism, for its part, is still prevalent in many liberal democracies and correlates highly with mistrust of how the government handled the CoViD-19 crisis.

The crucial feature lies in the interrelatedness of these “alternative” factual beliefs. It is largely the same group of people – those who mistrust CoViD vaccines, believe in the “big lie”, and chant “Lügenpresse” (“lying press”) – who now sincerely subscribe to counterfactual narratives about the reasons for and responsibility behind the ongoing war (see here for a recent analysis of the situation in Germany). A case in point is the Querdenken movement, founded as a dissenting group of CoViD skeptics, which has now discovered the Ukraine war as its focal point, turning CoViD skeptics into Putin defenders. This personal continuity across intrinsically unrelated issues shows that individual falsehoods and disinformation on specific issues do not lie at the heart of the problem. Instead, misinformation is a symptom and accelerator of an erosion of common epistemic norms. Institutions of social knowledge production and distribution – universities, the press, the state – are experiencing increasing distrust in many liberal democracies. There is a crisis of trust, not of truth. It is not our overall knowledge of the world that is weakened, but our epistemic norms, i.e. the ways by which we reach our beliefs. Due to the extreme fragmentation of epistemic norms, individuals tend to hold increasingly different factual beliefs on a host of issues, rather than on just specific ones (see here for a closer analysis of possible mechanisms behind this strange cross-issue convergence of unrelated factual beliefs). Ironically, truth-centered practices such as collaborative fact-checking may even become part of the problem. They sometimes convey the idea that everyone, on her own, is capable of reaching and verifying complex factual conclusions about our world. Yet it is simply an illusion that we – as trained lawyers with no science background – could make a competent factual judgment as to the reliability and functioning of CoViD vaccines. All we can do is trust the judgment of the institutions and practices societies have put in place to create and make use of virological knowledge. Claiming more would be an autodidact’s hubris.

The contribution of digital platforms

The epistemic divide seems to be exacerbated by the operating logic of digital platforms. First, the logic of tailoring undermines the commonality of media experiences. Digital platforms aim to personalize each user’s media experience through algorithmic gatekeeping and reinforcement loops. Mass media, in contrast, created an experience of common information consumption. If a country has three or four major newspapers, much of its population will engage with the same pieces of news – be they as trivial as a celebrity wedding. Our Facebook feeds, however, are tailored to us specifically and are programmed to cater to existing preferences. While mass media kept us largely in the same epistemic community, the logic of tailoring may divide us into ever-smaller epistemic subgroups. Second, epistemic fragmentation is deepened by social media’s general goals and structure. Their services are designed to define, foster, and protect subcommunities (friendships, follower structures, groups, other personal links) – not to create a public sphere of common concern. What is more, social media’s technical makeup enables and promotes novel, highly standardized forms of speech (“liking”, “sharing”) that stress the emotive and identity-building functions of speech. The speaker and her allegiances, not the information conveyed, take center stage. Such types of speech serve to curate and deepen existing allegiances. They leave little room to bridge preexisting divides, since they are poor in context and communicative content. If we no longer actually talk to and convey information to each other, there is not even the possibility of convergence.

The disruptive potential of the epistemic divide

The disruptive potential of this epistemic divide is substantial. If we no longer live in the same world, collective governance is severely threatened. This is particularly true of liberal democracies – as opposed to other political systems. Democracies must value and react to the beliefs of the governed and are limited in their ability to enforce social (epistemic) norms through state power (see, for the background of this claim, Mangold, Das Böckenförde-Diktum). Stark divides in not only normative but factual beliefs are therefore particularly dangerous for democracies. What is more, what we believe to be factually the case is a powerful motivational force with a particular intolerance for divergence. While we (can learn to) “agree to disagree” on questions of value, accepting the same relativity for questions of fact is not part of our social practices (for a closer analysis of the social practices concerning the fact/value distinction see Buchheim, STAAT 2020, 159 (173 ff.)). Consequently, deep divides in factual beliefs have a strong tendency to lead to violence. The violent unrest following Donald Trump’s allegations of voting fraud is a case in point. It shows that epistemic fragmentation can exacerbate political divides to a point where they can no longer be bridged. It is one thing to believe that others hold wrong values; it is another to believe that they are lying or deeply manipulated. Such an opinion turns agonism into antagonism, undermining belief in the legitimacy of democratic processes.

What can be done? The misplaced focus on directly enforcing truth

The regulatory challenges posed by this situation are intricate and unsolved. The most common move turns to efforts at directly safeguarding truth against encroachment by misinformation, such as fact-checking or content moderation. However, enforcing truth(s) as such has, for good reasons, never been part of what we believe to be the legitimate business of state power. What is more, reacting to individual instances of misinformation under decentralized communication conditions amounts to a Sisyphean task; if administered by private platforms, it creates severe democratic risks of its own. The focus on truth also does little to cure the larger problems behind the digital epistemic divide: weakened social gatekeeping, weakened trust in institutions, and a sudden multiplication of channels, modes, and types of speech unchecked by social norms governing their use. Social normativity does not develop overnight; it takes time. It is the law’s specific ability to create new normativity at will that could be key to the solution. We should use this power to devise new modes of systemic speech regulation mimicking the functions formerly performed by social norms and the centralized set-up of communication conditions. These could include subsidies for mass media institutions, mandatory diversification of social media feeds (reducing the “echo chamber” problem), limits on the number of sharing instances (reducing the speed and breadth of distribution), and speaker transparency (curing the social bot problem). All of these regulations would be content-neutral, holding important free speech advantages over the current regulatory focus on truth and individual enforcement as exemplified by the German NetzDG. They could stay agnostic as to what is “the truth” and still react to the harms of disinformation to our democracies.