23 March 2023

Political Advertising and Disinformation

The EU Draft Regulation on Political Advertising Might De-Amplify Political Everyday-User Tweets - and Become a Blueprint for Stronger Online Platform Regulation

Over a year ago, the European Commission presented its Proposal for a Regulation on the transparency and targeting of political advertising (COM(2021) 731 final). Recently, the Council presented its General Approach, followed by the position of the European Parliament (EP).

While stakeholders are waiting for the trilogue negotiations to shape the final text of the legislation, critical voices are raising concerns. Various NGOs worry that the draft regulation’s scope is too broad, covering not only promotional communication against remuneration, but also independent non-profit political statements by citizens (see here and here). The concern is that, under the future regulation, online platforms might have to de-amplify such independent content (see YouTubers RobRubble and Rezo).

What is it all about?

Online disinformation campaigns pose threats to civic discourse and electoral processes, especially when political messages can be tailored to specific audiences through targeting, or when campaigns mingle with User-Generated Content (UGC), with the chance of amplification by the online platforms’ recommender systems.

Cambridge Analytica was a wake-up call for lawmakers here, and recent reports about “Team Jorge” bolster the case. For a long time, the European Union relied on self-regulatory initiatives (e.g. the Code of Practice on Disinformation) to protect against online disinformation threats. The Digital Services Act (DSA) has introduced a horizontal framework that might have a certain impact against disinformation, yet it is vague and difficult to enforce. The draft regulation on political advertising aims to introduce specific rules tailored to the problem.

To better understand the project, it is helpful to look at politicians’ talking points on the topic, e.g. from MEP Alexandra Geese, who aims to:

  • tackle disinformation agencies (organizers of disinformation campaigns),
  • curtail targeting in the field of political advertisement (diverging messages might be targeted to different audiences, fragmenting the political discourse; targeting might be used for manipulation),
  • de-amplify disinformation in platform recommender systems (borderline content and conspiracies often provoke engagement, allowing such content to go viral more easily).

The role of online platforms in the draft legislation

The draft legislation addresses “political advertising service[s]” and so-called “political advertising publisher[s]”. Online platforms – depending on the circumstances – might fall into these categories when political ads are disseminated through their systems. Under the draft regulation, they would then have to meet transparency obligations and various duties of care.

Beyond these more or less conventional approaches, Art. 12 of the draft legislation imposes limitations on “Targeting or amplification techniques” in the context of “political advertising”. This is tailored to online platforms, where algorithmic curation through recommender systems is often a core function of the services.

The catch-all approach of “political advertising”

As Art. 12 limits targeting and amplification of “political advertising”, the question arises: What content will be covered?

According to Art. 2(2)(a), messages by or on behalf of political actors are in most cases to be considered “political advertising”. Much more interesting is the additional definition in Art. 2(2)(b), whereby the “dissemination, by any means, of a message … which is liable to influence the outcome of an election or referendum, a legislative or regulatory process or voting behaviour” is political advertising – irrespective of whether a communication is private or commercial, independent or for remuneration.

The resulting broad scope would be in line with the lawmakers’ intention to de-amplify disinformation blending with user-generated content, such as Kremlin trolls and the like. However, this catch-all approach would also cover independent political statements by everyday users.

Will the Council position solve the problem?

It seems that the European Parliament as well as the Council of the European Union both regard this outcome as too far-reaching, but they suggest different solutions.

The Council proposes to modify the definition of political advertising in Art. 2(2)(b) by introducing exemptions: “political opinions expressed in any media under editorial responsibility” are explicitly exempted from the definition.

This exemption refers only to “opinions”, a term narrower than the general term “messages” (which defines advertising in the first place). Thus, factual allegations might not qualify for the exemption.

Moreover, to be exempted, “opinions” must be expressed under “editorial responsibility”. For guidance on this, recital 19 of the Council proposal points towards the AVMSD, where “editorial responsibility” is defined as: “the exercise of effective control both over the selection of the programmes and over their organisation either in a chronological schedule, in the case of television broadcasts, or in a catalogue, in the case of on-demand audiovisual media services”.

In my view, this AVMSD-definition does not fit well in the broad context of the draft regulation on political advertising, as the regulation will cover all kinds of platforms: Tweets on Twitter are not organized in a chronological schedule, and creators on TikTok do not present a catalogue.

Therefore, “editorial responsibility” in the context of the draft regulation should be understood as the exercise of effective control over the dissemination of a given piece of content, combined with exposure to potential liability. For effective control, formal requirements (e.g. that a third party reviews content) cannot matter. Neither should substantive standards of diligence: national media laws contain concepts such as journalistic-editorial standards (e.g. § 19 of the German Interstate Media Treaty), but the Council did not refer to such standards in its proposal. In my interpretation, content is posted under “editorial responsibility” if a natural person exercises effective control. This person need not be identical with the author, but she must have effective control over whether the content is disseminated or not, e.g. the owner of a social media account. To bear “responsibility”, this person would also need to act under her real name or at least be sufficiently identifiable, exposing her to regulatory oversight and jurisdiction.

A comparison between the different proposals

What would be the consequences for how online platforms may recommend political content through their algorithms?

According to Art. 12(1) of the Commission proposal, targeting or amplification techniques that involve the processing of personal data referred to in Article 9(1) GDPR in the “context of political advertising are prohibited”. YouTube would not be allowed to recommend videos on political topics based on attributes like “left leaning”, “politically conservative” or “Turkish ancestry” (but it could still recommend based on non-sensitive data, e.g. to users who are classified as “economically middle class”). This limitation on how platforms may target and amplify is somewhat loosened by Art. 12(2) of the proposal, which exempts cases of Art. 9(2)(a) and (d) GDPR: especially where users give explicit consent, platforms would still be allowed to amplify/target based on sensitive data. The practical relevance of this exemption remains to be seen. Users might refuse consent to the use of their data for political advertising, often without being fully aware that they thereby (counter-intuitively) also opt to have (independent) political content de-amplified, since the platform might treat it as advertising in line with the regulation’s definition.

As a result, under the Commission proposal, online platforms will often have to de-amplify political content, at least once they become aware of it, e.g. through a notification. It is less clear whether platforms will also need to take proactive steps (filtering for political content?), a problem that touches on the not-so-clear relationship between the proposal and the DSA. Since Art. 12(1) does not refer to a knowledge threshold, but instead to the “context” of political advertising, this supports an interpretation of Art. 12(1) requiring more general, proactive measures. Art. 6(1) DSA – which exempts providers from liability for illegal content – might not protect against this, because political advertising need not amount to illegal content in that sense, and also because the DSA does not fully protect against specific monitoring obligations in the first place.
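To make the practical effect of Art. 12(1) and (2) more tangible, here is a minimal sketch of how a platform could implement such a rule, assuming the content has already been classified as political advertising: attributes mapping to the special categories of Art. 9(1) GDPR are dropped from the profile used for targeting/amplification unless explicit consent has been given. All attribute names, the sensitive-attribute list and the classification/consent flags are hypothetical illustrations, not taken from the proposal or from any real platform.

```python
# Illustrative sketch only - the attribute names and the sensitivity list are
# invented; they merely stand in for the special categories of Art. 9(1) GDPR.

SENSITIVE_ATTRIBUTES = {
    "political_leaning",      # political opinions
    "religion",               # religious or philosophical beliefs
    "ethnic_origin",          # racial or ethnic origin
    "trade_union_member",     # trade union membership
    "health_interest",        # data concerning health
}

def targeting_profile(user_attributes: dict, is_political_ad: bool,
                      explicit_consent: bool) -> dict:
    """Return the attributes the recommender may use for a given item."""
    if not is_political_ad or explicit_consent:
        # Non-political content, or Art. 9(2)(a) GDPR consent: full profile.
        return user_attributes
    # Political advertising without explicit consent: only non-sensitive
    # attributes (e.g. "economically middle class") may drive amplification.
    return {key: value for key, value in user_attributes.items()
            if key not in SENSITIVE_ATTRIBUTES}

user = {"political_leaning": "left", "age_band": "30-39",
        "income_band": "middle", "language": "de"}
print(targeting_profile(user, is_political_ad=True, explicit_consent=False))
# -> {'age_band': '30-39', 'income_band': 'middle', 'language': 'de'}
```

Note that under the Commission’s broad definition, the political-advertising flag would also be set for ordinary political UGC, which is exactly the de-amplification concern described above.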

The Council follows the logic of the Commission proposal on “Targeting or amplification techniques”, which means no targeting/amplification based on sensitive data and exemptions in cases of explicit consent. However, since the Council proposes a slightly narrower definition of political advertising, Art. 12(1) would have a less severe effect: In the context of political messages, “opinions under editorial responsibility” could still be fully targeted/amplified using sensitive data.

The European Parliament is going its own way. It substantially changes course on the scope of the targeting/amplification limitations, as it proposes to change the heading of Chapter III (“Targeting … of Political Advertising Services”). In my view, this means that only advertising provided against remuneration would fall under the limitations on targeting/amplification in Art. 12. Twitter could still fully amplify political UGC tweets.

Will it work out against disinformation?

As we have seen, the EP is opposing the idea of de-amplifying everyday-user content. By doing so, the EP is also abandoning the attempt to de-amplify disinformation by Kremlin trolls hiding amongst user content.

The European Commission is on the other side of the spectrum. Platform algorithms would not be allowed to target/amplify political content based on sensitive data (exemption: explicit consent). Algorithms could then only use a portion of their horsepower. This might then also downgrade disinformation by bad actors.

The Council’s position is de facto in the corner of the Commission, with some exemptions. According to the Council, in the field of political content, “opinions” which are “under editorial responsibility” can still be freely amplified/targeted (even beyond cases of explicit consent).

As lawmakers now face opposing positions with complicated proposals, one should keep in mind that the effect of Art. 12 against disinformation might be limited in the first place. Disinformation campaigns through fake profiles and troll accounts might only be slightly weakened, since Art. 12(1) in the Commission/Council proposal only limits amplification/targeting based on sensitive personal data. Platforms will still be allowed to use non-sensitive data for their recommendations, and this might be sufficient for algorithms to carry conspiracies towards virality, since the relevant interaction-based collaborative-filtering recommender systems usually do not require sensitive data.
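To illustrate why the restriction on sensitive data may barely slow such systems down, here is a minimal sketch of an interaction-based (item-item) collaborative filter: it recommends purely from an engagement matrix (who interacted with what) and never touches any Art. 9(1) GDPR category. The toy interaction data is invented for illustration.

```python
import numpy as np

# Toy engagement matrix: rows = users, columns = items; 1 = user engaged
# with the item (click, like, share). No sensitive attributes are involved.
interactions = np.array([
    [1, 1, 0, 0],   # user 0
    [1, 1, 1, 0],   # user 1
    [0, 1, 1, 1],   # user 2
    [0, 0, 1, 1],   # user 3
])

# Item-item co-occurrence: items engaged with by the same users score high.
item_similarity = interactions.T @ interactions
np.fill_diagonal(item_similarity, 0)

def recommend(user_id: int, top_k: int = 2) -> list:
    """Score items by their similarity to what the user already engaged with."""
    scores = item_similarity @ interactions[user_id]
    scores[interactions[user_id] > 0] = -1   # do not re-recommend seen items
    return [int(i) for i in np.argsort(scores)[::-1][:top_k]]

print(recommend(0))   # -> [2, 3]: driven solely by engagement patterns
```

The point of the sketch is simply that engagement signals alone can push borderline content towards virality; a prohibition limited to sensitive data would not touch this mechanism.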

However, the effects on other actors might be more far-reaching: social media users who depend on virality (influencers?) might become more hesitant to touch political topics in the first place (as they may fear risking de-amplification).

A door opener for future regulation?

In the end, while Art. 12(1) might not prove overly effective against disinformation, it might serve as a crucial door opener when it comes to future platform regulation.

Through its substantive standards for how recommender systems shall treat content, Art. 12(1) can be seen as a new approach. With the DSA, lawmakers had opted against strong substantive rules for platform algorithms à la “Do not amplify bad content!”, although such rules had been proposed (e.g. the draft opinion for the IMCO committee of 28.5.2021, proposing an Art. 24(a)(new)). The DSA lawmakers instead compromised on non-substantive rules for recommender systems: transparency and data access (Art. 27(1), 40), strengthening users’ options (Art. 27(3), 38) and vague risk mitigation obligations (Art. 35, 36).

Interestingly, lawmakers even made it harder for courts to establish liability for recommender decisions through traditional liability laws, because the neutral/active threshold for losing the liability exemptions (Art. 4-6, 8 DSA) has (theoretically) been slightly raised compared to the E-Commerce Directive: recital 18 DSA reframes the threshold as “providing the services neutrally”, while recital 42 of the E-Commerce Directive had – more narrowly – asked for the “passive nature” of a provider. With Art. 12 of the draft regulation on political advertising, lawmakers could overcome this caution, opting for substantive regulation (“Do not amplify XYZ!”) instead of merely complementary procedures (“Be transparent!”).

If lawmakers settle on Art. 12(1) in the version of the Council, this would open the door to yet another uncharted territory of online platform regulation. So far, proposals for identification schemes (obligations to use real names or at least to be identifiable) have faced strong opposition. With the DSA, similar approaches had been proposed (see amendment 291 of the final EP position of 20.01.2022: a verification obligation for users who disseminate content on porn platforms), yet were rejected. As I have shown above, the Council position on the draft regulation on political advertising incentivizes users to verify themselves or to act under their real names, because otherwise their political messages might be de-amplified.

Mix both new approaches together and Art. 12 could become a blueprint for much stronger regulation: e.g. where content by anonymous users cannot be amplified at all (it will be hosted, but hardly be found) or where platforms lose their liability exemptions for anonymous content (it can be recommended, but platforms must bear the risks).

In parts, these approaches could also inspire how the DSA is to be enforced. For instance, regulators might argue that de-amplification of anonymous content is a necessary risk mitigation measure based on Art. 34, 35 DSA.

Conclusion

Art. 12(1) of the draft regulation on political advertising aims at introducing substantive standards for how recommender systems shall work: in certain cases, do not (fully) amplify political content! In doing so, it might disincentivize anonymous posting online – if you still want to go fully viral, identify yourself!

The lawmakers’ course is legitimate, though minor adjustments seem necessary. In my view, the Council position should privilege not only “opinions”, but all “messages” under editorial responsibility. Moreover, it should be clarified that any natural person who exposes herself to verification or acts under her real name can exercise editorial responsibility.

From a helicopter perspective, Art. 12 deserves our attention because it could act as a blueprint for new – controversial – approaches for online platform regulation by future legislators.


SUGGESTED CITATION  Holznagel, Daniel: Political Advertising and Disinformation: The EU Draft Regulation on Political Advertising Might De-Amplify Political Everyday-User Tweets - and Become a Blueprint for Stronger Online Platform Regulation, VerfBlog, 2023/3/23, https://verfassungsblog.de/political-advertising-and-disinformation/, DOI: 10.17176/20230323-185217-0.

4 Comments

  1. Jukka Ruohonen Fri 24 Mar 2023 at 15:03

    Thanks for the nice analysis!

    I agree with the critics that non-commercial political speech should not be covered. Though I am a little unsure about your interpretation of Art. 12 in the proposal. In my view, this article is already pretty much covered through the General Data Protection Regulation (GDPR). It follows that if the GDPR were properly enforced, there would not even be a need for Art. 12. It is difficult to imagine how platforms and social media companies could even rely on any basis other than consent (i.e., Art. 9(2)(a) GDPR). To my reading, this point has also been made by Maja Brkan [1].

    [1] The regulation of data-driven political campaigns in the EU: from data protection to specialized regulation, Yearbook of European Law, 2023.

    • Daniel Holznagel Tue 28 Mar 2023 at 15:00

      That’s a super interesting point.

      “Pretty much” covered is the question, I think. Couldn’t platforms nowadays rely on Art. 9(2)(e), e.g. when Twitter relies on public leftist posts to tag someone as “leftist”? That would be gone under the proposal (or would come with the consequences described).

      Also, one could question whether, by its construction, consent within the meaning of Art. 12(2) would have to be given explicitly for the purposes of Art. 12(1) (under the future regulation one would have to click: “I consent to my sensitive data being used for amplifying political advertising”), while nowadays platforms can just rely on specific consent for the purpose of amplifying UGC, as political advertising (in the form of normal UGC) is not by law a specific purpose (so today it is sufficient that users click: “I consent to my sensitive data being used for recommending content”).

      But yes, I also think the effect is limited (as I wrote), and the more interesting aspect, I think, is that Art. 12(1) opens a door to new, substantive approaches to recommender regulation.

      • Jukka Ruohonen Wed 29 Mar 2023 at 15:36

        Regarding the GDPR, there are some interesting precedents and developments in this regard. For instance, even under Art. 9(2)(e), i.e. in case someone has manifestly made his or her political opinions public, fines have already been imposed by national data protection authorities for not taking other data protection requirements into account (see DOS-2018-04433 in Belgium). Now that NOYB recently filed complaints against all major political parties in Germany, we will hopefully get some clarity over this matter.

  2. N.W. Wed 29 Mar 2023 at 15:34

    Interesting analysis! One question though: how would you define ”disinformation”? Personally, I find that the devil is always in the details and we have, unfortunately, witnessed several occasions in recent years where ”disinformation” turned out to be true. This is the principal reason why I would steer away from the notion altogether as it is clear that it will be (and already has been!) used to target political opposition by ruling parties or administrations. Imo, we should focus exclusively on curbing the dissemination of content that has been proved to be wrong. Opinions, no matter how controversial, should never be banned or de-amplified because that runs against the free exchange of ideas which is the cornerstone of democratic societies. If we lose our principles, we will lose our democracies in the process. I guess that the road to hell is indeed paved with good intentions.
