This article belongs to the debate » The Rule of Law versus the Rule of the Algorithm
28 March 2022

Our Minds, Monitored and Manipulated

How AI Impacts Public Discourse and Democracy

Well-functioning democracies require a well-informed citizenry, an open social and political discourse, and the absence of opaque or deceitful influence. Western democracies have, however, always been prone to power asymmetries and to the curbing of these freedoms through coercion, oppression and propaganda. And while the powerful have always used tools and techniques to influence our opinions, the increasing use of the latest digital technologies in the 21st century, such as Artificial Intelligence (“AI”), has put these processes on steroids. The adoption of AI and datafication has raised concerns that society is sliding into an Orwellian nightmare, where all of our actions are scrutinized, controlled and manipulated at a scale never possible before. So, what exactly makes this time so different?

The Internet-of-Minds

First, the combination of data and computing power has enabled the capture of an unprecedented amount of information about us. Ever larger parts of our lives happen online, where we constantly leave data trails. We have seen a surge in sensors being deployed in public spaces, at home, and on our bodies. As it stands today, over 13 billion devices are connected to the internet, and recently the concept of the Internet-of-Bodies has emerged. The use of AI has made these troves of data a true playground for the categorization, sifting, sorting, and profiling of our entire lives, behavior, thoughts, ideas and beliefs. These insights in turn have created ample opportunities to target, nudge, manipulate and deceive us, and have led to altered, even harmful, beliefs, ideas and behavior. All of this happens ever more covertly, at the abstract level of code, algorithms, models, and data. So-called ‘dark patterns’ lure us into accepting these practices, and subliminal techniques manipulate us beyond our awareness. Beyond the Internet-of-Things and the Internet-of-Bodies, we are also on a trajectory towards the Internet-of-Minds, and we should seriously question whether we like where we are going.

AI has become a popular tool for surveillance, categorization, and manipulation for companies and politicians alike. The most widely discussed example is Cambridge Analytica, which exploited the data of 87 million Facebook users to build profiles that could be utilized for political gain. But we need to look beyond Cambridge Analytica to understand the full exposure of our democracies and societies to AI.

The distortion of democracies, public discourse, social cohesion, and public trust by AI cannot be pinpointed to a single event, scandal, or even a single phenomenon. AI-driven computational propaganda has distorted elections in Ukraine, Estonia, China, Iran, Mexico, the UK, and the US.1) It has been estimated that during the 2016 US Presidential elections almost one-fifth of Twitter discussions came from bots.2) In 2017, two of the most widely followed black activist Twitter accounts, @Blacktivist and @WokeBlack, turned out to be fake accounts run by Russian troll farms. Facebook’s algorithm has incited violence against protesters in Myanmar by promoting junta misinformation, death threats, and the glorification of military violence. Facebook whistleblower Frances Haugen has said that the company’s algorithm is “fanning ethnic violence” in Ethiopia.3) Very recently, there have been signals that deepfakes of non-existent, AI-generated journalists are being used to deceive the public with fake claims about the war in Ukraine.4)

This is the basis of a phenomenon called ‘stochastic terrorism’: the inflammation of hatred and skepticism to a level where violent acts become statistically more likely. Again, something like stochastic terrorism cannot be pinpointed to a single event; rather, it is the result of continuous and deliberate manipulation of public opinion, where individual acts aggregate into a massive phenomenon. This likely happened during the January 6th storming of the US Capitol last year.

These are just a few contemporary examples of how AI can impact our democracies and social cohesion, but we cannot even begin to fully grasp how AI could potentially shake democratic societies in the future. We do have some idea of the main enabling conditions, though: indiscriminate datafication through the constant monitoring and tracking of our entire lives, the increasing ability to categorize, sift and sort us into groups and profiles, the limited ability to choose our information freely, the complexity of AI systems, the covertness of the manipulation, the subliminality of the techniques, and the shift in the balance of power from democratically elected officials and scrutinized media outlets to private actors with engagement-based business models, to name but a few.

Policy reactions and gaps

Europe can be commended for taking legislative steps to counter some of these enablers. We fear, however, that we have barely scratched the surface of understanding what might be necessary to effectively protect our democracies from the adverse effects of AI.

The European Commission has reacted to the increasing risks AI poses to European democracies by proposing several regulations: the Digital Services Act (DSA), the Regulation on the Transparency and Targeting of Political Advertising (TTPA) and the Regulation on Artificial Intelligence (AIA).

While the DSA and AIA more generally deal with large online platforms and AI respectively, the TTPA specifically tries to deal with political (micro-)targeting. Political micro-targeting can happen at many levels but can be very effective if done through “very large online platforms” (VLOPs), like Twitter, Facebook, Instagram, TikTok and Google. These platforms have a systemic role in our societies in shaping, amplifying, directing, and targeting information flows online.5) The recommender systems designed by the VLOPs are crafted for their main clientele: advertisers. Providing advertisers with targeted advertising services is the main source of revenue for VLOPs. To maximize their profits, their business model is relatively simple: they must grab and maintain users’ attention in order to maximize the time users spend on the platform. This, on the one hand, raises the price of ad space and, on the other, generates additional data on those users that can be used for further profiling and targeting.
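To make this engagement logic concrete, the following is a minimal sketch of what an engagement-maximizing ranking step could look like. The feature names and weights are our own illustrative assumptions, not any platform’s actual system.

```python
# Hypothetical sketch of engagement-based ranking: items are scored purely
# on predicted attention (clicks, shares, dwell time), not on accuracy,
# pluralism, or social value. All names and weights are illustrative.

def engagement_score(item: dict) -> float:
    """Predict how much attention an item will capture for this user."""
    return (
        2.0 * item["predicted_click_prob"]    # will the user click?
        + 1.5 * item["predicted_share_prob"]  # will the user spread it?
        + 1.0 * item["predicted_dwell_time"]  # how long will they stay?
    )

def recommend(candidates: list[dict], k: int = 10) -> list[dict]:
    # Note what is absent: no term penalizes false, extreme, or
    # hyper-partisan content. If such content keeps users engaged,
    # it is ranked higher by construction.
    return sorted(candidates, key=engagement_score, reverse=True)[:k]

feed = recommend([
    {"id": "sober-report", "predicted_click_prob": 0.05,
     "predicted_share_prob": 0.01, "predicted_dwell_time": 0.3},
    {"id": "outrage-post", "predicted_click_prob": 0.40,
     "predicted_share_prob": 0.25, "predicted_dwell_time": 0.8},
])
print([item["id"] for item in feed])  # outrage-post ranks first
```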

Political micro-targeting via the VLOPs utilizes these very same recommender systems. Obviously, a system designed for commercial purposes and profit maximization has entirely different implications when it is employed to ‘sell’ political ideas or candidates. The VLOPs’ “engagement maximization” model translates, when used in a political context, into recommending and amplifying fake, extreme, hyper-partisan and radicalizing content.6) Algorithmic logic can create ‘filter bubbles’ and amplify fake news on social media, polarizing society and jeopardizing freedom and peace. Consequently, it jeopardizes users’ access to pluralistic and objective information and may undermine the shared understanding, mutual respect and social cohesion required for democracies to thrive.7) If AI-driven micro-targeting is sufficiently powerful and effective, it may even undermine the human agency and autonomy required for making meaningful decisions.8)
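A toy simulation can illustrate the feedback loop behind such ‘filter bubbles’: every click shifts the inferred user profile towards the clicked content, which in turn narrows what gets recommended next. The topics, scores and update rule below are purely hypothetical.

```python
# Toy illustration of a filter-bubble feedback loop: the profile drifts
# towards whatever the user engages with, and recommendations follow.
# Items, positions, and the update rule are illustrative assumptions.

items = {
    "moderate-news": 0.5,   # content position on a 0..1 "extremeness" axis
    "partisan-blog": 0.7,
    "radical-post": 0.95,
}

profile = 0.5          # user's current inferred position
LEARNING_RATE = 0.5    # how strongly each click shifts the profile

for step in range(5):
    # The recommender surfaces the two items closest to the profile;
    # assume the user clicks the more extreme of the two (it is more
    # engaging), which then shifts the profile in that direction.
    nearest = sorted(items, key=lambda name: abs(items[name] - profile))[:2]
    clicked = max(nearest, key=items.get)
    profile += LEARNING_RATE * (items[clicked] - profile)
    print(f"step {step}: clicked {clicked}, profile now {profile:.2f}")
# After a few iterations the profile converges towards the extreme end.
```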

Proxy Data

The TTPA tries to address this by prohibiting the use of ‘sensitive’ personal data (such as political opinions, religious or philosophical beliefs, trade union membership, or biometric data)9) for political micro-targeting or amplification techniques. This prohibition, however, overlooks the concept of ‘proxy data’: data that is not personal or sensitive can, in combination with other data, provide a proxy for the very same sensitive insights the proposal is trying to protect. Think of a combination of data shared on social media, location data, and data on online purchases and search history that allows for the political targeting of 35-year-old women living in Paris and interested in environmental matters.
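A simple sketch shows how this works in practice, assuming a hypothetical targeting rule: none of the attributes below is ‘sensitive’ on its own, yet their combination stands in for a likely political opinion.

```python
# Hypothetical sketch of 'proxy data': none of these attributes is
# 'sensitive' under Article 9 GDPR on its own, yet combined they serve
# as a stand-in for a likely political opinion. The segment rule below
# is an illustrative assumption, not a real targeting criterion.

user = {
    "age": 35,                          # from profile data
    "gender": "female",                 # from profile data
    "home_city": "Paris",               # inferred from location data
    "recent_purchases": ["reusable bottle", "bike lights"],
    "search_topics": ["air quality", "urban cycling", "recycling"],
}

def matches_green_leaning_segment(u: dict) -> bool:
    """Combine individually non-sensitive signals into a political proxy."""
    interested_in_environment = any(
        topic in {"air quality", "urban cycling", "recycling"}
        for topic in u["search_topics"]
    )
    return (
        30 <= u["age"] <= 40
        and u["home_city"] == "Paris"
        and interested_in_environment
    )

if matches_green_leaning_segment(user):
    # The campaign never touched 'political opinion' data directly,
    # but this user can now be micro-targeted as a likely green voter.
    print("serve tailored political ad")
```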

Consent

It also includes a crucial exception to the prohibition: consent. With consent, political micro-targeting based on sensitive personal data is allowed. But obtaining truly informed consent for the use of sensitive personal data, including the proxy effects described above, has proven to be a myth rather than a feasible option. Even if dark patterns were prohibited, as is the position of the European Parliament on the DSA, people simply do not, and cannot, take the time to inform themselves about what they consent to, and one cannot blame them (this is known as “consent fatigue”). In a world where vast amounts of data are collected, combined, and reshuffled in the most creative ways, one cannot reasonably be expected to oversee all potential future uses and consequences of such consent.

Shifting the burden

This is even more true for the transparency solution proposed in the TTPA. Apart from acquiring consent, the party doing the political micro-targeting needs to provide additional information on why someone is being targeted, which data was used, the logic involved in the decision-making process, the parameters of the AI technique, and so on. Even if this were (or could be) done for each and every micro-targeted message, it simply cannot be done in a truly transparent manner, in which people fully understand why and how they have been targeted. Transparency is not a one-way street: information cannot just be ‘thrown over the fence’. Recipients need to be able and inclined to familiarize themselves with the information, and whether they can depends on many things: the volume and complexity of the information, time, interest, knowledge and attention.

Other AI-related rules, such as the Platform Directive proposal and the GDPR, include similar transparency and consent measures. They seem to be the holy grail of protecting our online lives. We disagree. Transparency and consent measures place the burden of deciding whether, and how, someone is willing to be profiled and micro-targeted entirely on the recipient. This has led to a slippery slope of ever more shortcuts to push users towards consent, for instance by making cookie banners ever more confusing, annoying, and manipulative. The transparency obligation is often subverted or executed miserably, for example by providing incomprehensible and lengthy information.
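To illustrate the transparency problem concretely, consider a sketch of the kind of per-message disclosure the TTPA points towards. Every field name and value here is our own illustrative assumption, not a prescribed format.

```python
# A sketch of the per-message disclosure the TTPA points towards: why
# the recipient was targeted, which data was used, and the logic behind
# it. All field names and values are our own illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class TargetingDisclosure:
    sponsor: str                      # who paid for the message
    targeting_reason: str             # why this recipient was selected
    data_categories_used: list[str]   # which data fed the decision
    decision_logic: str               # plain-language model description
    model_parameters: dict = field(default_factory=dict)

disclosure = TargetingDisclosure(
    sponsor="Example Campaign",
    targeting_reason="predicted interest in environmental policy",
    data_categories_used=["location history", "search topics", "purchases"],
    decision_logic="classifier over hundreds of behavioural features",
    model_parameters={"n_features": 312, "threshold": 0.73},
)

# Multiply one such record by every micro-targeted message a user sees
# per day, and 'full transparency' quickly becomes information nobody
# can realistically read, let alone understand.
print(disclosure)
```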

People should be able to do nothing and still be protected. In other words, not being tracked, profiled and micro-targeted should be the default mode. The European Parliament recently took the position that VLOPs must provide at least one recommender system that is not personalized. However, an amendment to make the non-personalized version the default recommender system was voted down.
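In code, such a protective default is a deliberate design choice. The sketch below uses hypothetical function names: the non-personalized (here, chronological) feed is what users get unless they explicitly opt in to profiling.

```python
# 'Protected by default' as a design choice: the non-personalized,
# chronological feed is what users get unless they actively opt in
# to profiling. Function names and fields are hypothetical.

def build_feed(items: list[dict], user: dict) -> list[dict]:
    if user.get("opted_into_personalization", False):  # explicit opt-in
        return personalized_ranking(items, user)
    # Default: no profiling, no engagement optimization, just recency.
    return sorted(items, key=lambda i: i["published_at"], reverse=True)

def personalized_ranking(items: list[dict], user: dict) -> list[dict]:
    # Placeholder for an engagement-based ranker like the one sketched
    # earlier; only reachable after an explicit, informed opt-in.
    return sorted(items, key=lambda i: i.get("predicted_engagement", 0.0),
                  reverse=True)

feed = build_feed(
    [{"id": "a", "published_at": 1}, {"id": "b", "published_at": 2}],
    user={},  # the user did nothing, and is still protected
)
print([i["id"] for i in feed])  # newest first: ['b', 'a']
```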

Banning AI-driven manipulation

We might find a solution in the AIA, which aims to ban AI practices that center on ‘materially distorting a person’s behavior’ in a way that can cause psychological or physical harm. These practices include the use of subliminal techniques and the exploitation of vulnerabilities. According to the Commission, these prohibitions should cover a very narrow set of AI practices and address only the most severe risks and harmful consequences of manipulation. We, however, argue that these prohibitions provide a grand opportunity to address the AI-driven manipulation described in this article. After all, the main objective of the AIA is to ensure the protection of health, safety, and fundamental rights from the ill effects of AI. According to the Charter of Fundamental Rights of the European Union (the “Charter”) and the Ethics Guidelines for Trustworthy AI, every human being possesses an “intrinsic worth”, which should never be diminished, compromised, or repressed by others – nor by new technologies like AI. This means that all people are to be treated with respect, as moral subjects, rather than merely as objects to be surveilled, sifted, sorted, conditioned, or manipulated. A ban on AI aimed at deception, the material distortion of behavior, or the exploitation of a person’s vulnerabilities would fit well within this larger objective of the AIA.

Question Zero

At the beginning of this article, we asked how this time is different. The final difference lies in the way policy makers currently try to protect our online lives, our minds, our democracy, and our societal structures. We think that transparency, consent, technical requirements, documentation and even a ban on AI-driven manipulation are not the full answer to the seismic shift a technology such as AI can bring about in our society. Asymmetries of power, private control over public information, constant tracking and tracing, and the Internet-of-Minds all call for bolder solutions. The paradigm must shift from tackling one symptom at a time to a proactive and anticipatory approach, where we keep asking question zero: what kind of society do we want to live in, and where does AI truly help us achieve that?

References
1 Woolley, S. C., & Howard, P. N. (Eds.). (2018). Computational propaganda: Political parties, politicians, and political manipulation on social media. Oxford University Press.
2 Ibid.
3 ‘Facebook’s role in Myanmar and Ethiopia under new scrutiny’, The Guardian.
4 Ben Collins, NBC News, on Twitter (@oneunderscore_).
5 European Commission (2020). Impact Assessment Accompanying the Proposal for a Regulation of the European Parliament and of the Council on a Single Market for Digital Services (Digital Services Act), SWD(2020) 348 final, Brussels, 12.12.2020.
6 Wu, T. (2016). The attention merchants: The epic scramble to get inside our heads (First edition); Zuboff, S. (2019). The age of surveillance capitalism: The fight for the future at the new frontier of power.
7 Bruns, A. (2019) Filter bubble. Internet Policy Review, 8(4).
8 Taddeo, M., & Floridi, L. (2018b). How AI can be a force for good. Science, 361(6404), 751–752.
9 As referred to in Article 9(1) of Regulation (EU) 2016/679 and Article 10(1) of Regulation (EU) 2018/1725

SUGGESTED CITATION  Muller, Catelijne; Talvitie, Christofer; Schöppl, Noah: Our Minds, Monitored and Manipulated: How AI Impacts Public Discourse and Democracy, VerfBlog, 2022/3/28, https://verfassungsblog.de/roa-our-minds/, DOI: 10.17176/20220329-011119-0.
