30 June 2023

Politicians don’t dance? AI doesn’t either!

A discussion of generative AI and political campaigning

“Why don’t politicians ever dance? – Because they have too many steps to backtrack on!”

ChatGPT answered this when we asked the program to tell a political joke. While this example is somewhat worrying, since its underlying assumption might perpetuate existing stereotypes about politics and politicians, the joke also highlights that AI has become witty and incredibly good at behaving in ways we perceive as human. Thus, we take the recent advancements of generative AI as a motivation to analyze its potential effects on political campaigns and democratic elections.

Political campaigning is about quick and widespread communication. The goal of any campaigner is to get their political message across to voters. AI, and especially LLMs, could change the mechanics of that communication because they allow for the rapid proliferation of communicative acts at very low cost. What took the Russian influence campaign on the 2016 US Presidential election (only) a warehouse full of 'trolls' may now be done in a more automated manner using interfaces to generative AI systems. We argue that generative AI will influence the political sphere and communication environments as an accessible bag of instruments, but that it does not, at least for now, change the music of political campaigning. Until systems are completely autonomous, they instead intensify existing problems concerning content moderation, media trust and false information online.

In this blog article, we discuss the potential consequences of current generative AI applications for democratic elections and political campaigns with respect to four sets of actors: (1) organized campaigns and political parties; (2) other political actors such as partisans, third-party campaign organizations and NGOs; (3) digital online platforms as campaign environments, such as social media platforms and online search engines; and (4) regulators and electoral governance institutions, which in the German case include the Landesmedienanstalten, the Bundeswahlleiter and the Digital Service Coordinators on the national level, and the EU Commission or the OSCE on a supranational level. At the end, we give an outlook on the 2025 German Federal Election and the role generative AI will play in contrast to the past election in 2021. AI systems may also become more autonomous over time, with unknown consequences for online platforms and elections. Here, however, we refrain from speculation about further technological developments and focus on the implications of currently existing AI applications as tools for the selected sets of actors.

Generative AI has improved rapidly, and in recent months more accessible interfaces to image and text generators like ChatGPT have spurred worldwide hype about the capabilities and social implications of these models. OpenAI and other companies have used large-scale annotation of and interaction with humans to further train foundation models and to reduce the alignment problem of machines behaving in statistically rational ways that nonetheless do not make sense to humans. Generative AI for images and videos has also become easily accessible and strikingly realistic, which lowers the cost of image and video production to almost zero.

This will have significant consequences for political campaigns and the media system. AI does not yet dance, meaning it is not autonomous and does not further develop its own algorithms. However, this could change in the near future. In 2023, generative AI can be a powerful tool for political campaigns, as for many other sectors, by generating textual and visual content and providing coding support for data analysis. At the same time, it amplifies existing risks concerning digital media as political communication environments when political actors do not obey norms or bind themselves to ethical codes. Thus, it points to online platform governance, regulation and education as critical areas to protect and strengthen democracies' resilience and ensure electoral fairness.

AI and data-driven campaigning in the fourth phase of political communication

In recent years, there has been much debate about using data-driven methods and AI in political campaigns. In this respect, AI should be understood as an umbrella term for machine learning and natural-language-processing methods. The so-called fourth phase of political communication is distinct from prior phases because digital platforms and data-driven methods have become much more present and advanced compared to campaigns before the emergence of the Internet and social media platforms (Römmele and Gibson, 2020). Our conversations with experts and campaign professionals as part of the ERC project DiCED – Digital Campaigning and Electoral Democracy emphasized that data-driven methods complement, and sometimes compete with, a political logic based on the intuition (the so-called gut feeling) and experience of campaign professionals. However, experts and campaign professionals stressed that there is no single central algorithm or AI model behind campaign decisions such as the distribution of ad spending, the main campaign messages or fundraising. Where advanced predictive models have been used, they performed similarly to or worse than established social science methods like survey research and focus groups in testing key campaign messages and identifying target groups. With the easier use of generative and analytical AI methods via interfaces like ChatGPT, we expect an increase in their use by organized political campaigns and third-party actors, the first two sets of actors we discuss in this article.

Concerning digital online platforms, complex AI models have been used for content moderation, such as the recommendation and filtering of content, for years. They have become very efficient at maximizing user time spent and targeting advertisements, but also at detecting illegal content and content that violates community guidelines. However, we know that social media platforms like Twitter did not successfully delete content promoting conspiracy narratives and vaccine-sceptic content during the Covid-19 pandemic (Darius and Urquhart, 2021). Digital platforms are the set of actors for which AI systems play the most crucial role, because these systems manage the flows of information on the platforms. It will therefore be essential how well platforms detect AI-generated content like images, videos or text, and how they deal with removing it, particularly when it contains misleading or false information or deliberate political disinformation by internal and external actors.
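
As a rough illustration of what such detection could look like, the following minimal Python sketch scores short texts for the likelihood of being machine-generated, using the publicly available roberta-base-openai-detector model via the Hugging Face transformers library. The model choice (it was trained on GPT-2 output, so its scores for newer models are unreliable) and the 0.9 threshold are illustrative assumptions; platform-scale moderation systems are far more complex and multimodal.

```python
# Minimal sketch: flagging possibly machine-generated campaign text.
# Assumes the Hugging Face `transformers` library and the public
# `roberta-base-openai-detector` model, which was trained on GPT-2
# output, so its scores for newer models must be read with caution.
from transformers import pipeline

detector = pipeline("text-classification", model="roberta-base-openai-detector")

posts = [
    "Vote for change! Our candidate fights for your future.",
    "As an AI language model, I believe the economy is improving rapidly.",
]

for post in posts:
    result = detector(post)[0]  # e.g. {"label": "Fake", "score": 0.97}
    # This model labels texts "Real" or "Fake" ("Fake" = likely generated);
    # the 0.9 review threshold is an illustrative assumption.
    if result["label"] == "Fake" and result["score"] > 0.9:
        print(f"Flag for human review: {post!r} (score={result['score']:.2f})")
```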

Regarding the regulators and electoral governance institutions, we would advise building task forces to monitor online debates and the use of generated content during sensitive election times, and we recommend close cooperation with the Digital Service Coordinators, which will be the Bundesnetzagentur in Germany. For these institutions, it is vital to have a task force for the electoral peak time to coordinate with civil society actors and to identify inauthentic behaviour and astroturfing campaigns. Research has shown that there have been various interferences in elections via astroturfing campaigns and ill-controlled targeted advertisements on social media platforms (Kim et al., 2018; Schoch et al., 2022).
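
To make the monitoring idea concrete, here is a minimal, self-contained Python sketch of one common heuristic for spotting coordinated posting: different accounts publishing near-identical messages within a short time window. The data, field names and thresholds are invented for illustration; a real monitoring task force would combine many such signals.

```python
# Minimal sketch of a "copypasta" heuristic for coordinated posting:
# different accounts publishing near-identical text within minutes of
# each other. Data, field names and thresholds are invented examples.
from collections import defaultdict
from datetime import datetime, timedelta

posts = [
    {"user": "a1", "text": "Party X betrays voters! #election", "time": datetime(2025, 2, 1, 12, 0)},
    {"user": "b2", "text": "Party X betrays voters!  #election", "time": datetime(2025, 2, 1, 12, 3)},
    {"user": "c3", "text": "party x betrays voters! #election", "time": datetime(2025, 2, 1, 12, 5)},
    {"user": "d4", "text": "Lovely weather at the rally today.", "time": datetime(2025, 2, 1, 12, 4)},
]

WINDOW = timedelta(minutes=10)  # assumed coordination window
MIN_ACCOUNTS = 3                # assumed minimum cluster size

groups = defaultdict(list)
for post in posts:
    # Normalise whitespace and casing so trivial variations still match.
    key = " ".join(post["text"].lower().split())
    groups[key].append(post)

for text, cluster in groups.items():
    users = {p["user"] for p in cluster}
    times = [p["time"] for p in cluster]
    if len(users) >= MIN_ACCOUNTS and max(times) - min(times) <= WINDOW:
        print(f"Possible coordination ({len(users)} accounts): {text!r}")
```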

Organized political campaigns in a changing party and communication sphere

Digital platforms like social media and online search engines have become essential campaigning environments for organized political campaigns and enable increased data analytics. With respect to the use of data-driven methods and new technologies, however, their application may lead to two distinct types of campaigns that Römmele and Gibson (2020) describe as scientific and subversive. As an ideal type, the scientific campaign uses enhanced analytics to engage and mobilize voters. In contrast, the subversive campaign uses divisive issues and messaging, populist speech and targeted advertisements to increase political conflict and sometimes to actively demobilize voter groups that lean toward the political opponent. Technologies like generative AI may accelerate this bifurcation of political campaigns into scientific and subversive.

In the case of Germany, parties choose marketing agencies as close partners in creating campaign content and strategies and in supporting or managing the digital campaign, especially to identify target groups and roll out targeted ads on social media platforms like Facebook, Instagram and TikTok (and, since Musk's decision to again allow political ads, in the future also Twitter) and online search engines like Google. The central goal of digital and general campaigns is the persuasion of voters, particularly the group of undecided voters. In the 2021 German federal elections, the group of undecided voters was unusually large by German standards because party affiliations are loosening further and the end of the Merkel era left many former Merkel voters undecided between parties. Parties and campaign actors continuously seek to bring voters to subscribe to online newsletters via their main campaign websites and to follow candidates and party accounts on social media. Newsletter subscriptions, in turn, come with extended analytic capabilities to test campaign messages and to group favourable voters.

Concerning the application of generative AI, we know that visual content has increased in importance and quantity, whilst production costs decreased dramatically even before generative AI reached its current quality levels. Thus, we expect a further increase in the use of visual content and of generative AI tools like DALL-E 2, Midjourney and many others. Whilst generative AI lowers the cost of producing visual content, it may increase the cost for party-internal rapid-response teams of counteracting fake posts about their own party or leading candidates. In the 2021 elections, we also observed an increase in negative campaigning, such as video spots, social media posts and ads attacking political opponents instead of promoting a party's own political goals. In Germany, in contrast to other countries and especially the US, this is relatively uncommon, but it could accelerate with the increased use of generative AI, which may allow producing compromising photos of political adversaries. A recent example that made German headlines was a generated image of angry-looking Arab men, posted with a misleading caption by politicians of the far-right party Alternative für Deutschland (AfD). Also, in the run-up to the 2024 US Presidential elections, the campaign of Ron DeSantis has used deepfakes and generated content to compromise his rival for the Republican nomination, Donald Trump. To avoid a spiral of inauthentic campaign content, parties and political actors should commit themselves to publicly communicated guidelines or codes of conduct and adhere to these rules.

Third-party actors

Digital tools and social media also give third-party actors a public stage to express their opinions on the parties and candidates competing in an election. Third-party actors are partisans and sometimes well-funded third-party campaign organizations like NGOs, industry associations or labour unions. Whilst some of these organizations have been representing their clientele's interests for decades, others were founded in recent years and roll out marketing and social media campaigns at election time to influence public opinion on parties, issues and candidates. The Internet and social media enable a sort of citizen-initiated campaigning (Gibson, 2015). Platforms like Facebook, Instagram or Twitter also allow partisans to influence the public perception and the visibility of messages of official political campaign actors (Bossetta, 2018; Darius, 2022).

Moreover, partisans and political actors behave strategically and sometimes seek to claim their political opponents' messages and hashtags in so-called hashjacking strategies (Bode et al., 2015; Darius and Stephany, 2019). In these competitive public campaign environments, generated visual content can provide a tool to increase attention and cause scandals, first through the claims made in the generated content itself, and second when media report on it being a fake. Moreover, third-party actors may employ bots and inauthentic accounts that appear to be “normal” citizens and support their favoured political candidates or verbally attack political opponents.
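
As a rough sketch of how hashjacking can be detected (loosely inspired by the retweet-network approach in Darius and Stephany, 2019, but heavily simplified), the following Python example partitions accounts into communities on a retweet network and checks which community dominates the use of a hashtag. The toy data and the majority criterion are assumptions for illustration only.

```python
# Sketch: measuring "hashjacking" as the takeover of a hashtag by
# accounts from an opposing retweet-network community (heavily
# simplified from Darius and Stephany, 2019). All data is invented.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Edges: (retweeting_user, retweeted_user); two clearly separated camps.
retweets = [("a", "b"), ("b", "c"), ("c", "a"),
            ("x", "y"), ("y", "z"), ("z", "x")]
G = nx.Graph(retweets)

# Partition accounts into communities (greedy modularity as a simple
# stand-in for more careful community detection).
communities = list(greedy_modularity_communities(G))
side_of = {user: i for i, comm in enumerate(communities) for user in comm}

# Accounts using the hashtag #partyA -- here mostly from the rival camp.
hashtag_users = ["x", "y", "z", "a"]
counts: dict[int, int] = {}
for user in hashtag_users:
    counts[side_of[user]] = counts.get(side_of[user], 0) + 1

dominant, n = max(counts.items(), key=lambda kv: kv[1])
print(f"Community {dominant} accounts for {n / len(hashtag_users):.0%} of #partyA uses")
# If the dominant community is not the one the hashtag "belongs" to,
# that is a hashjacking signal (the majority criterion is an assumption).
```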

AI-generated content and activity may help make a campaign “dance” and attract more online visibility, albeit in a misleading and inauthentic fashion. This is especially concerning given the capability of AI-generated arguments to persuade citizens to change their policy positions (Bai et al., 2023). Thus, it is crucial that multiple actors monitor the campaign sphere in order to detect coordinated inauthentic behaviour such as disinformation campaigns early and to debunk misleading generated content before it reaches a broad audience on social media platforms or is even picked up by traditional media. Beyond that, we can only hope the electorate will punish those using subversive methods at the ballot box.

Digital platforms and the media

Online search engines and social media platforms provide crucial campaign environments that enable more fine-grained analytics than traditional media channels, and they will need to invest more resources into detecting AI-generated activity and misleading content in the coming years. The core service of these platforms, the recommendation of content to users, is based on algorithmic content moderation systems (Gorwa et al., 2020). Generative AI and large language models will further improve the accuracy of content moderation, and we expect that the detection of harmful and misleading content will keep up with the increasing volume of generated content and inauthentic accounts. Platform companies should report in detail on the methods and metrics they apply to the detection of harmful generated content and consult with researchers and policymakers on how to make their platforms a safer environment for political campaigns and constructive political debate.
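
To illustrate the coupling of recommendation and moderation described above, here is a deliberately simplified Python sketch in which candidate items are scored for predicted engagement but removed or down-ranked according to a moderation score before recommendation. All scores, thresholds and field names are assumptions standing in for real model outputs, not a description of any actual platform's system.

```python
# Deliberately simplified sketch of recommendation gated by moderation:
# items predicted to be engaging are still suppressed or down-ranked if
# a moderation model flags them as likely harmful. All numbers are
# invented placeholders for real model outputs.
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    engagement_score: float  # output of an (assumed) engagement model
    harm_score: float        # output of an (assumed) moderation model

BLOCK_THRESHOLD = 0.9      # assumed: near-certainly harmful -> remove
DOWNRANK_THRESHOLD = 0.5   # assumed: uncertain -> reduce visibility

def rank_feed(items: list[Item]) -> list[Item]:
    visible = []
    for item in items:
        if item.harm_score >= BLOCK_THRESHOLD:
            continue  # removed entirely from recommendations
        score = item.engagement_score
        if item.harm_score >= DOWNRANK_THRESHOLD:
            score *= 0.2  # down-rank borderline content
        visible.append((score, item))
    return [item for score, item in sorted(visible, key=lambda t: t[0], reverse=True)]

feed = rank_feed([
    Item("ad_1", engagement_score=0.8, harm_score=0.1),
    Item("fake_img", engagement_score=0.95, harm_score=0.92),
    Item("rumor", engagement_score=0.9, harm_score=0.6),
])
print([item.item_id for item in feed])  # ['ad_1', 'rumor']
```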

Regulators and electoral governance institutions

Regulators and electoral governance institutions might not directly apply generative AI models themselves. Still, they should build the capacity to coordinate and communicate on a technical level with campaign teams, platform companies and the creators of generative AI. The German Digital Service Coordinator will be the Bundesnetzagentur, which is also responsible for telecommunications and has technical expertise in-house. Here it could be possible to require very large online platforms (VLOPs) and very large online search engines (VLOSEs) to report specifically on the political and electoral risks of their services, and potentially even to establish continuous monitoring of the election sphere by platforms and civil society actors. Looking ahead, especially if generative AI starts to “dance” in the sense of becoming autonomous and improving and recoding itself, regulators must be able to prohibit such systems, or to request or execute their deletion, when they harmfully interfere with digital publics, especially during election campaign periods.

Summary

We argued that because generative AI does not yet dance, in the sense of acting autonomously, most applications of generative AI provide tools for all sets of actors rather than changing the music of political campaigning. Nevertheless, these tools will have significant consequences for election campaigns, and their use by political actors can pose challenges to the integrity of elections. Thus, campaign codes of conduct adopted by organized campaigns, and close coordination between regulators and digital platform providers, are essential to limit the potential of misleading or disinforming content generated and disseminated via generative AI applications and systems. Establishing these safety mechanisms will be crucial to prevent interference and to protect the integrity of upcoming elections such as the European election and US presidential election in 2024, and the German federal election in 2025.