This article belongs to the debate » The Rule of Law versus the Rule of the Algorithm
28 March 2022

Our Minds, Monitored and Manipulated

How AI Impacts Public Discourse and Democracy

Well-functioning democracies require a well-informed citizenry, an open social and political discourse, and the absence of opaque or deceitful influence. Western democracies have, however, always been prone to power asymmetries, to coercion, and to the curbing of these freedoms through oppression and propaganda. And while the powerful have always used tools and techniques to influence our opinions, the increasing use of the latest digital technologies in the 21st century, such as Artificial Intelligence (“AI”), has put these processes on steroids. The adoption of AI and datafication has raised concerns about whether society is sliding into an Orwellian nightmare, in which all of our actions are scrutinized, controlled and manipulated at a scale that has never been possible before. So, what is it exactly that makes this time so different?

The Internet-of-Minds

First, the combination of data and computing power has enabled the capture of an unprecedented amount of information about us. Ever larger parts of our lives happen online, where we constantly leave data trails. We have seen a surge in sensors being deployed in public spaces, at home, and on our bodies. As it stands today, over 13 billion devices are connected to the internet and recently the concept of the Internet-of-Bodies has emerged. The use of AI has made these troves of data a true playground for the categorization, sifting, sorting, and profiling of our entire lives, behavior, thoughts, ideas and beliefs. These insights in turn have created ample opportunities to target, nudge, manipulate and deceive us and have led to altered, even harmful beliefs, ideas, and behavior. All this happens ever more covertly at the abstract level of code, algorithms, models, and data. So-called ‘dark patterns’ lure us into accepting these practices and subliminal techniques manipulate us beyond our awareness. Beyond the Internet-of-Things and the Internet-of-Bodies, we are also on a trajectory towards the Internet-of-Minds, and we should seriously question whether we like where we are going.

AI has become a popular tool for surveillance, categorization, and manipulation for companies and politicians alike. The most widely discussed example is Cambridge Analytica, which exploited the data of 87 million Facebook users to build profiles that could be used for political gain. But we need to look beyond Cambridge Analytica to understand the full exposure of our democracies and societies to AI.

The distortion of democracies, public discourse, social cohesion, and public trust by AI cannot be pinpointed to a single event, scandal, or even a single phenomenon. AI-driven computational propaganda has distorted elections in Ukraine, Estonia, China, Iran, Mexico, the UK, and the US.1) It has been estimated that during the 2016 US Presidential elections almost one-fifth of the Twitter discussion came from bots.2) In 2017, two of the most widely followed black activist Twitter accounts, @Blacktivist and @WokeBlack, turned out to be fake accounts run by Russian troll farms. Facebook’s algorithm incited violence against protesters in Myanmar by promoting junta misinformation, death threats, and the glorification of military violence. Facebook whistleblower Frances Haugen has said that the company’s algorithm is “fanning ethnic violence” in Ethiopia.3) Very recently, there have been signals that deep fakes of non-existent, AI-generated journalists are being used to deceive the population into believing fake claims about the war in Ukraine.4)

This is the basis of a phenomenon called ‘stochastic terrorism’: the inflammation of hatred and skepticism to a level at which violent acts become statistically more likely. Stochastic terrorism, too, cannot be pinpointed to a single event; it is the result of continuous and deliberate manipulation of public opinion, in which individual acts accumulate into a mass phenomenon. This likely happened during last year’s January 6th insurrection at the US Capitol.

These are just a few contemporary examples of how AI can impact our democracies and social cohesion, but we cannot even begin to fully grasp how AI could shake democratic societies in the future. We do have some idea of the main enabling conditions, though: indiscriminate datafication through the constant monitoring and tracking of our entire lives; the increasing ability to categorize, sift and sort us into groups and profiles; the limited ability to choose our information freely; the complexity of AI systems; the covertness of the manipulation; the subliminality of the techniques; and the shifted power balance from democratically elected officials and scrutinized media outlets to private actors with engagement-based business models, to name but a few.

Policy reactions and gaps

Europe can be commended for taking legislative steps to counter some of these enablers. We fear, however, that we have barely scratched the surface of understanding what might be necessary to effectively protect our democracies from the adverse effects of AI.

The European Commission has reacted to the increasing risks AI brings to European democracies by proposing several regulations: the Digital Services Act (DSA), the Regulation on the Transparency and Targeting of Political Advertising (TTPA) and the Regulation on Artificial Intelligence (AIA).

While the DSA and AIA deal more generally with large online platforms and AI respectively, the TTPA specifically tries to deal with political (micro-)targeting. Political micro-targeting can happen at many levels but can be very effective if done through “very large online platforms” (VLOPs), like Twitter, Facebook, Instagram, TikTok and Google. These platforms have a systemic role in our societies in shaping, amplifying, directing, and targeting information flows online.5) The recommender systems designed by the VLOPs are crafted for their main clientele: advertisers. Providing advertisers with targeted advertising services is the main source of revenue for VLOPs. To maximize profits, their business model is relatively simple: they must grab and hold users’ attention in order to maximize the time they spend on the platform. This, on the one hand, raises the price of ad space and, on the other, yields additional data on those users that can be used for further profiling and targeting.

Political micro-targeting via the VLOPs utilizes these very same recommender systems. Obviously, a system designed for commercial purposes and profit maximization has entirely different implications when it is employed to ‘sell’ political ideas or candidates. When used in a political context, the VLOPs’ “engagement maximization” model translates into recommending and amplifying fake, extreme, hyper-partisan and radicalizing content.6) Algorithmic logic can create ‘filter bubbles’ and amplify fake news on social media, leading to polarization in society and jeopardizing freedom and peace. Consequently, it jeopardizes users’ access to pluralistic and objective information and may undermine the shared understanding, mutual respect and social cohesion required for democracies to thrive.7) If AI-driven micro-targeting is very powerful and effective, it may even undermine the human agency and autonomy required for taking meaningful decisions.8)
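The engagement-maximization logic described above can be reduced to a toy sketch. In the following Python fragment every post, tone label and engagement score is invented for illustration; it is not any platform’s actual ranking code. It simply shows that a recommender ordering content purely by predicted engagement surfaces the most inflammatory items first, with no notion of accuracy or social cost:

```python
# Toy sketch of engagement-based ranking (all data invented for illustration).
posts = [
    {"id": 1, "tone": "neutral news report",    "predicted_engagement": 0.21},
    {"id": 2, "tone": "hyper-partisan outrage", "predicted_engagement": 0.87},
    {"id": 3, "tone": "balanced analysis",      "predicted_engagement": 0.15},
    {"id": 4, "tone": "sensational rumour",     "predicted_engagement": 0.74},
]

# Rank solely by predicted engagement -- truthfulness plays no role.
feed = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

for post in feed:
    print(post["id"], post["tone"])
```

Under this (deliberately simplified) objective, the outrage and rumour items rise to the top of the feed, which is the amplification dynamic the text describes.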

Proxy Data

The TTPA tries to address this by prohibiting the use of ‘sensitive’ personal data, such as political opinions, religious or philosophical beliefs, trade union membership, or biometric data,9) for political micro-targeting or amplification techniques. This prohibition, however, overlooks the concept of ‘proxy data’: data that is not in itself personal or sensitive can, in combination with other data, provide a proxy for the very insights the proposal is trying to protect. Think of a combination of data shared on social media, location data, and data on online purchases and search history that allows for the political targeting of 35-year-old women living in Paris and interested in environmental matters.
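A toy sketch can make the proxy mechanism concrete. In the following Python fragment, the profiles and the proxy rule are entirely invented for illustration. None of the attributes used (age, city, followed topics) appears on the TTPA’s list of sensitive data, yet combined they single out roughly the same audience that a prohibited ‘political opinion’ label would:

```python
# Toy sketch of 'proxy data' (all profiles and the rule are invented).
profiles = [
    {"age": 35, "city": "Paris", "follows": {"climate_news", "cycling"}},
    {"age": 62, "city": "Lyon",  "follows": {"gardening"}},
    {"age": 35, "city": "Paris", "follows": {"climate_news", "recycling_tips"}},
]

def likely_environmentalist(profile):
    # Proxy rule: ordinary demographic and interest signals stand in
    # for a political leaning that was never collected directly.
    return (
        profile["age"] == 35
        and profile["city"] == "Paris"
        and "climate_news" in profile["follows"]
    )

audience = [p for p in profiles if likely_environmentalist(p)]
print(len(audience))
```

The point of the sketch is that the targeting outcome is reached without ever touching a datum the TTPA classifies as sensitive, which is precisely the gap the text identifies.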

Consent