Fakeness in Political Popularity
It is well known that social media is a cesspool of fakeness. Two recent German examples illustrate the plethora of questions that arise when fakeness affects political communication. In both, politicians were exposed as, or at least suspected of, misrepresenting their popularity on social media through fake messages or followers. Hubert Aiwanger (Freie Wähler) was allegedly caught commenting on his own posts through what is thought to be a fake account. Adding insult to this comical injury, his attempt to explain himself only made matters worse. A few days later, Lasse Rebbin (a member of the Jungsozialisten in der SPD) posted a thread on Bernd Althusmann’s (Christlich Demokratische Union Deutschlands – CDU) Instagram followers, showing a sharp spike in (apparently fake) accounts. Rebbin concluded that the CDU likely buys followers for its less popular politicians.
Politics in a democratic society has long been a glorified popularity contest, one we can only hope the most capable person wins. As long as there has been democracy, people have been pondering precisely how to win: by promising golden futures, by finding a common enemy, by proving to be the best alternative. Social media platforms have taken a crucial role in this process. The potential to reach millions of citizens through algorithmic amplification is unprecedented, resulting in platforms being considered the new democratic fora of our time (Balkin, 2017; Reno v. American Civil Liberties Union, US Supreme Court 1997). Charming an audience into electoral approval creates an incentive to use the infrastructure social media platforms provide to boost one’s popularity, even in ways that infrastructure was not intended to be used.
This contribution delves into the practice of creating fake political popularity on social media platforms. As the above-mentioned examples show, politicians have an incentive to artificially boost their online popularity through fakeness. On a fundamental level, a false sense of popularity may affect election outcomes: strategic voting is common, and choosing a popular candidate over a preferred candidate is not unheard of (Stephenson et al. 2018). In what follows, we analyze fakeness from the perspective of the political speaker and explore the legal limits of fakeness under three existing and upcoming EU law regimes (commercial practices, political advertising, and intermediary liability). In doing so, we contribute to existing debates around political speech and advertising on social media by focusing on the role of platform architecture in facilitating fakeness. The architectural features platforms provide – which can be understood as affordances – can manipulate messages and, as such, function as dark patterns.
A Taxonomy of the Fake Modern Ethos
Fakeness is inherent to politics: election promises rarely materialize fully, after all. A hot topic in discussions of the impact social media platforms have on democratic processes is disinformation, or fake news: the intentional spread of falsehoods for political effect. Similarly, deep fakes (namely ‘videos that have been manipulated to alter their contents’, Tahir et al. 2021) are a topic of discussion gaining ever more traction. For instance, as recently as last year, Dutch parliamentarians were tricked into a video call with what they believed was Navalny’s chief of staff, which later turned out to be a deep fake.
While these two examples reveal a proliferation of fakeness, some differences must be kept in mind. One concerns the fake substance of speech, the other a fake speaker. Our brief contribution focuses on the latter: the fake ethos of a speaker. Aristotle’s On Rhetoric has never been more relevant. Social media presences and popularity are important, and of high economic value. That presence can be seen as a modern ethos; we let ourselves be persuaded by online popularity taking the form of armies of followers. At the very least, we listen to them more than we listen to obscure, unknown speakers.
Social media platforms design public (and private) experiences through their infrastructures, and most importantly for this analysis, through the functionalities they offer to users. In media scholarship, these are referred to as ‘platform affordances’ that play a role as ‘communicational actors’ (Bucher & Helmond, 2018). As Bucher & Helmond put it, ‘a feature is clearly not just a feature. The symbols and the connotations they carry matter.’ Yet platform affordances can be misused, with fakeness being a constant concern for platform policies. In this contribution, we explore the different platform affordances that can shape the fake modern ethos in political communication.
Followers and likes
Followers and likes are the primary metrics of importance and popularity on social media. A sharp spike in followers from accounts that seem unrealistic leads us to believe Althusmann bought followers. Politicians have taken this route before him: as early as 2012, Mitt Romney was accused of having bought followers, due to a spike of 141,000 followers over two days, or of his adversaries having bought them for him in order to accuse him of fakeness. Based on a screening of Romney’s followers, their interaction, the age of their accounts and their negligible number of followers, the Atlantic concluded that the chance that all his new followers were real was 0%. The followers and likes bought from ‘click farms’ can either be automated through a computer script or rely on ‘old-fashioned’ manual labor.
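The kind of screening described above can be expressed as a simple heuristic. The sketch below is purely illustrative: the signals (account age, the follower’s own follower count, interaction count) are the ones named in the text, but the thresholds and function names are our own assumptions, not the Atlantic’s actual method.

```python
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int      # how long the follower's account has existed
    followers: int     # the follower's own follower count
    interactions: int  # posts, likes or replies the account has made

def looks_fake(acc: Account) -> bool:
    """Crude heuristic: new, silent accounts with almost no followers of their own."""
    return acc.age_days < 30 and acc.followers < 5 and acc.interactions < 3

def suspicious_spike(new_followers: list[Account], threshold: float = 0.8) -> bool:
    """Flag a follower spike when most of the new accounts look fake."""
    if not new_followers:
        return False
    fake = sum(looks_fake(a) for a in new_followers)
    return fake / len(new_followers) >= threshold
```

Real click-farm detection is of course far more involved (network structure, posting patterns, coordinated timing), but even this toy version captures why the Romney spike stood out: the flagged accounts failed several independent signals at once.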
Distinct from followers and likes is the engagement aspect that comes with commenting on social media posts, particularly if these comments are praising a political actor. The reason for this distinction is elaborated on further in the legal part of this piece, but it boils down to the following: depending on the affordances a platform provides, a like or follow in itself cannot be considered a substantial addition to the debate. A comment expressing approval of the message of a certain politician, or expressing dismay and contrarian views to their opponents, is easier to frame as a political advertisement.
A third option is a political actor’s own use of fake accounts to promote their message. This is the behaviour of which Aiwanger is accused. The use of such ‘sock puppet’ accounts must be seen as distinct from buying comments from bot enterprises, as it does not rely on an intermediary to provide any service. It raises the question to what extent tweeting about yourself through a sock puppet account constitutes political advertising.
Fakeness and European Law
The modern political ethos is not alone in employing fakeness in social media communications. The same affordances outlined above – followers, likes, comments and alternative accounts – are used daily in commercial communications as well. As social media nurtures emerging business models around content monetization, platform affordances commodify online presence and enable the endorsement of perceived ‘authenticity’ (Bishop, 2021). At the same time, they signal popularity through metrics such as engagement or reach, characterized as ‘active participation and passive content consumption’ by users (Laeeq Khan, 2017).
At the European Union level, commercial communications have long been covered by the Unfair Commercial Practices Directive (UCPD), which formalizes and harmonizes the limits of deceit in commercial advertising and prohibits unfair commercial practices. According to the fairness test embedded in Article 5(2), commercial practices that are contrary to professional diligence and are likely to materially distort the average consumer’s economic behavior are deemed unfair. Following the Modernisation Directive, the UCPD’s coverage of fakeness has been enhanced (Duivenvoorde, 2019). Its amendment of the UCPD Annex lists fake reviews as a prohibited practice, which also includes ‘likes on social media’ (Recital 49 UCPD). In a transactional context where users rely on the signals given by platform affordances to purchase goods and services, the fakeness of followers, likes and comments (including reviews) is considered an unwanted manipulation and therefore prohibited (see for instance Otero, 2021). The scope of the UCPD, albeit wide from a transactional perspective (e.g. it also applies to advertising and not merely to concluding contracts with traders), reflects a narrow coverage of fakeness that occurs in the course of purchasing products and services from professional commercial parties. This raises questions about its applicability (and the desirability thereof) to practices and speech that push the boundaries of these criteria. Profiling and targeting Internet users have raised such high concerns relating to informational self-determination that even the high standards of consumer protection embedded in the UCPD are taken as insufficient in a landscape riddled with digital structural asymmetries (Helberger et al., 2021).
By contrast, the European legal framework applicable to political communication is underdeveloped and heavily fragmented. Recently, the Commission proposed the ‘Regulation on the Transparency and Targeting of Political Advertising’ to combat the fragmentation of ad hoc regulation addressing digital challenges to democratic elections. The Regulation introduces a number of definitions that both widen and limit its scope. Political advertising is defined broadly in Article 2(2) as ‘the promotion, publication or dissemination of a message by, for or on behalf of a political actor’ (with the exception of messages of a purely private or commercial nature), or as ‘a message that is liable to influence the outcome of an election or voting behaviour’. This encapsulates much of what political actors can do, but also captures issue-based ads. The scope of application of the Regulation is subsequently limited by its obligations predominantly applying to advertising service providers (van Drunen et al., 2022). Those are defined in Article 2(5) as ‘a service […] providing political advertising without consideration for the specific message.’ This significantly reduces its applicability in regulating the creation of a fake political ethos.
Turning back to the taxonomy: it is uncertain whether and how the acquisition of likes and follows falls under the proposed Regulation. Likes and follows are not explicitly mentioned as political ads, and therefore engaging with a service to acquire them – a service that could potentially qualify as an advertising service provider – does not fall under the definition provided by the Regulation. This is ambiguous, however: Recital 1 expands ‘political advertising’ to include promotion in rankings, and likes and follows certainly increase the algorithmic ranking of content (e.g. Cobbe & Singh, 2019). Acquiring them thus indirectly leads to a promotion in ranking; clarity from the regulator on this point is required. The second category of the taxonomy is more clearly covered by the Regulation: employing a service to generate comments promoting a certain politician – or even a political issue – requires adherence to the transparency requirements under Articles 6-11. Finally, a post by a politician promoting themselves is a political ad, regardless of the account under which it is posted. However, it appears that this type of speech, the third category of our taxonomy, is not covered by the Regulation.
European rules on fakeness diverge in the context of commercial and political communication. To an extent, the differences between these regimes emanate from clear legal doctrines focused on different values. On the one hand, protecting consumers on the internal market from fraudulent or misrepresenting commercial practices reflects the high importance placed on transactional fairness and trust. In this context, freedom of expression is only secondary to this policy goal. On the other hand, political communication is built around freedom of expression as a fundamental democratic need. The Commission stresses in the Regulation’s impact assessment that it is impossible to address fake advertising in political communication as it is in commercial advertising, due to the specific context of elections and political freedom of expression. Yet on social media platforms, this divergence seems rather dated. Today’s social media space is a living room, a political podium, a shop, and everything else we experience in our lives, everywhere, all at once. By designing and controlling popularity through their affordances, social media platforms are more than just the medium; they become part of the message. As the designers, managers and amplifiers of online speech, it is important to reflect upon their legal responsibilities, particularly in the light of recent regulatory changes in EU law.
Fakeness as Dark Patterns: A Platform Problem
The Digital Services Act, signed into law on 19 October 2022, reflects the new legal regime dealing with the liability of platforms for illegal content. From an advertising perspective, it seems to include both political and commercial speech (Article 24), with ads covering both commercial and non-commercial information for the presentation of which a platform receives payment (Article 2(r)). This new piece of regulation marks a long-awaited reform in platform liability in the EU. It includes substantive rules on a handful of policy issues that were taken on board due to widespread concerns over their systemic impact. Dark patterns are one such example. Recital 67 DSA defines dark patterns as online interfaces that ‘materially distort or impair’, regardless of the underlying intention, the ability of users ‘to make autonomous and informed choices or decisions’. Although the material distortion or impairment component is not specifically defined in the DSA, this formulation echoes the terminology of the UCPD (which the DSA endeavors not to overlap with – see Article 25(2) DSA). Article 2(e) UCPD defines the material distortion of ‘the economic behavior of consumers’ as ‘using a commercial practice to appreciably impair the consumer’s ability to make an informed decision, thereby causing the consumer to take a transactional decision that he would not have taken otherwise’. Reference to the consumer’s economic behavior in relation to a transactional decision reflects the clear commercial scope of the UCPD. In contrast, the DSA does not refer to terms such as ‘transaction’ or ‘economic behavior’, allowing us to read this concept in a broader meaning. This argument is further supported by Article 25 DSA on online interface design and organization, which prohibits online interfaces that ‘deceive, manipulate or [emphasis added] otherwise materially distort or impair the ability of recipients of their service to make free and informed decisions’.
Existing dark pattern taxonomies already deem as manipulative the use of fake accounts to mimic what is called ‘social proof’ in the hope of increasing positive reputation (Mathur, 2019). Labeling the use of platform affordances that may manipulate voting behaviour as dark patterns could thus be a cornerstone approach in pinning down platform responsibilities for protecting users from deception in political communication, and shielding the public from a fake political ethos.
A wider reading that does not limit dark patterns to commercial transactions is not only necessary but very much in line with concerns raised by the empirical findings of an analysis of political campaign communications in the United States. According to a 2020 study by Mathur et al. on a corpus of 435,436 emails from 3,129 senders containing ‘emails from candidates running for state and federal office, political parties, and other political organizations like PACs’, electoral campaigns heavily relied on manipulative techniques to nudge users to open emails and donate to the respective campaigns (see also Jellins, 2022). This study, completed by some of the same authors who conducted the initial investigation of dark patterns on shopping websites (Mathur, 2019), shows many similarities with commercial nudging: voters are the targets of deceit just as much as consumers.
Although a growing field, the study of dark patterns remains somewhat volatile, as it combines insights from design, human-computer interaction, privacy and security, ethics and law, to name a few. More research is necessary to identify, analyze and measure dark patterns consistently, and most importantly for the European regulator, to clarify which dark patterns constitute unlawful manipulation and which ones remain within the legal limits of persuasion (Leiser & Caruana, 2021). This also includes questions relating to new horizons in dark patterns research, such as dark patterns in political communications and, most importantly, dark patterns on social media. The latter in particular needs more coordinated attention to translate the taxonomies already developed around web-based information to the social media and mobile app world. Until then, we can highlight that the DSA might be a surprising alternative avenue for resolving the tensions emerging between the diverging regimes in commercial and political advertising.