18 December 2025

The Limits of Symbolic Regulation

Why the EU Should Not Enforce the DSA (right now)

Things are happening in EU platform regulation. On 5 December, the Commission made its first (landmark) decision under the Digital Services Act (DSA), which has since sent Elon Musk into a cursing spree, culminating in demands to abolish the EU. The decision against X has even led to interventions by high-level US politicians like JD Vance and Marco Rubio, who view the DSA (and the Commission’s decision) as nothing less than a declaration of war on the US Constitution’s First Amendment. The EUR 120 million fine has also prompted some US legislators to consider a “civil damages turbolaser”, threatening companies with excessive damages claims if they do not abide by First Amendment standards when they use US servers to provide their services. Adding insult to injury, the decision came in the wake of a far-reaching CJEU ruling in re Russmedia, which has been widely criticised for undermining the DSA’s safe harbours (see this excellent analysis by Erik Tuchtfeld).

In its decision, the Commission found that X has facilitated inauthentic behaviour on its platform through the deceptive design of its “blue checkmark”. Is this the DSA’s “big enforcement moment”? I argue that it is not. Modifying this design will not enhance trust and safety. X is a controversial service with a multitude of problems – yet the EU chooses to complain about blue checkmarks. The motivation behind the regulator’s actions was not to make very large online platforms more beneficial for society; rather, it was an attempt to assert authority. The decision also highlights that the DSA is caught in a state of arrested development: Without an understanding of the DSA’s most fundamental principles, all enforcement efforts are doomed to fail.

What does it all mean? A very short introduction to verified badges

According to the Commission, X’s verification system violates Art. 25 DSA, which prohibits deceptive design practices. The design is considered deceptive because any user can subscribe to X’s verification service in order to obtain the – once sought-after – blue checkmark. The Commission understands the icon as marking users whose identity has been verified by the platform (e.g., through an ID check). Under X’s subscription scheme, however, a user’s identity is no longer verified – all you need to do is pay. This leads to the argument that users can no longer trust the checkmark, as it does not indicate whether someone is who they claim to be. The violation is specific to X, as other providers of online platforms still seem to verify a user’s identity upon payment of the subscription fee.

Unfortunately, the Commission has not released its reasoning for the decision. However, it does not seem to consider that users who seek to acquire the badge must have “non-deceptive” profiles, meaning they have to provide a phone number and may not have violated the terms and conditions on inauthentic behaviour, among other requirements. Additionally, X has not abandoned the notion of “verified users” completely: For high-profile users, like politicians and companies, a separate verification scheme was set up.

According to the Commission, X’s verification system makes users more susceptible to scams and “malicious actors”. The press release does not corroborate this claim: Do users really trust information from verified accounts more than from non-verified ones? Users often do not know the accounts they are confronted with on their “For You” pages, whether verified or not, and should therefore remain sceptical of the claims these accounts make. Practically speaking: Why would I give Jane Doe, whom I do not know, my credit card information? Why would I believe her when she claims that migrants commit the majority of violent crimes? Because she has a blue checkmark?

The real issues seem to lie with accounts that purport to have some sort of power – be it social capital, epistemic authority, or something else. Would I be more likely to click on a link posted by someone purporting to be my favourite YouTuber because they have a verification badge? Would I believe someone claiming to work for the US government when they allege that China is responsible for sabotaging the Nord Stream 2 pipeline – because they have a blue checkmark?

If the Commission instead wants to prevent targeted scams, there are more effective ways of achieving this goal than dictating a specific interpretation of the blue checkmark. If a criminal impersonates my friend, a blue checkmark would often make the fake account even more suspicious, as most X users are not verified (only about 2 million users appear to be verified, which equates to less than one percent of all users). This means that my friend’s real account is likely not verified. If my friend does not have an account on the platform at all, it would also be suspicious for an impersonator to purchase a subscription right away. Either way, what would make me susceptible to a scam is how well the perpetrators imitate my friend – not the verification badge.
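For a rough sense of scale, the “less than one percent” figure can be checked with a back-of-the-envelope calculation; note that the total user count below is an assumption based on public estimates of X’s monthly active users, not a figure from the decision:

```python
# Back-of-the-envelope check of the "less than one percent" claim.
# Both figures are rough public estimates, not numbers from the Commission decision.
verified_users = 2_000_000      # reported number of verified (subscribing) accounts
total_users = 600_000_000       # assumed monthly active users of X (public estimate)

share = verified_users / total_users
print(f"Verified share of all users: {share:.2%}")  # prints roughly 0.33%
```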

While a blue checkmark may serve as a first line of defence, it is a rather ineffective instrument for staying safe from scams and phishing attacks online. In all of the above cases, users are not deceived by the verification badge but by the behaviour of the accounts. Users should be sceptical of what they see online – and online platforms should empower them to question things. In short, the real problem is that I cannot tell inauthentic behaviour apart from genuine user behaviour.

More fundamentally, the decision highlights a conceptual problem with protecting the blue checkmark: We assign meanings to symbols, and these meanings can change over time. The blue checkmark has never meant that information can be trusted. While it once meant that a user’s identity had been verified and that they were important in some way, this is not necessarily true today. Nor is it inherently bad that the meaning of this sign has changed. Rather, this raises questions about how and why this exact meaning should be established and, not least, how users actually understand the blue checkmark. A specific objection concerns Wikipedia, which does not use a blue checkmark to signal that a user’s identity has been verified. Would this be considered deceptive?

Verifying a user’s identity can certainly have benefits – users tend, for instance, to think more carefully about what they post when their identity is attached to their account. But it seems untenable to equate something as simple as a verification badge with more trust on an online platform. Empirical research even suggests that verification badges may not serve the Commission’s interests, as users are more likely to share fake news after receiving one. Moreover, Twitter’s verification system was infamous for problems of its own long before Musk bought the company. Instead of leaning into the problematic concept of the European average consumer, the DSA should empower the users of platforms by putting more trust in them. This would require educating people on how to consume content in online environments. Such an approach would address the root of the disease; the current approach merely treats its symptoms. Not everything can and should be resolved by laws.

On data access

The Commission decision also targets X’s data access regimes. This concerns the service’s ad repository (Art. 39 DSA) as well as the “scraping law” found in Art. 40(12) DSA. These findings are much easier to justify than the one on the deceptive design of the verification badge. For the DSA community, this is where the magic happens, as data access greatly supports researchers in their endeavours. In the past, X has prohibited eligible researchers from accessing publicly available data, including through scraping. “Scraping” describes a practice whereby data is fetched and extracted from a website, usually in an automated fashion. The Commission decision therefore comes as no surprise; a similar decision was already taken by the Kammergericht Berlin earlier this year.
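To make the term concrete, here is a minimal sketch of what automated scraping typically involves; the URL and the CSS selector are hypothetical placeholders rather than X’s actual markup, and real-world research access would additionally have to respect the conditions laid down in Art. 40(12) DSA:

```python
# Minimal illustration of scraping: fetch a publicly available page and
# extract pieces of data from its HTML. Requires the "requests" and
# "beautifulsoup4" packages.
import requests
from bs4 import BeautifulSoup

URL = "https://example.org/public-profile"  # hypothetical public page

response = requests.get(URL, timeout=10)
response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")
# Collect the text of every element carrying a (hypothetical) "post-text" class.
posts = [element.get_text(strip=True) for element in soup.select(".post-text")]

for post in posts:
    print(post)
```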

For empirical research projects in need of data from X, this is a big improvement. For the interpretation of the DSA, on the other hand, this part of the decision is rather irrelevant: Data access alone cannot supply a normative concept for regulation (naturalistic fallacy). Its effects will only become relevant for regulation once the studies are completed and the rules are adapted accordingly.

A wallpaper to cover the cracks

This raises the question: Were the violations worth troubling EU-US relations even further? How is the EU doing in the “cold war”? I think the decision has not helped the EU’s position but has unnecessarily damaged transatlantic relations. After all, the Commission might not even achieve its goal: X could simply decide to drop the verification system in the EU, as the DSA does not mandate user verification. Arguably, it might also be enough to simply change the blue checkmark to some other symbol. None of these responses would meaningfully improve trust for users on X.

The decision highlights a glaring lack of ideas on what makes online environments safe and trusted (Art. 1(1) DSA). It does nothing to clarify these abstract concepts. Rather, it treats the users of X as if they relied on nothing but the verification badge. At the same time, the decision offers no meaningful help with issues like disinformation and online scams: The key factor for whether inauthentic behaviour or disinformation becomes prominent on an online platform remains the design of the recommender system.

The Commission decision was made not long after the Berlin Digital Sovereignty summit. The summit and the decision both show how important it is to think about the technology stack as a whole when drafting and enforcing rules for digital platforms. The most honourable goals in EU legislation are worth nothing if they can be demolished by a “civil damages turbolaser”. While the European digital rules are crucial for digital sovereignty, they are not enough by themselves and will have to be accompanied by economic, political, and societal initiatives.

There will always be bad actors. But it makes a difference whether they can systematically take advantage of platform structures or whether their actions remain isolated. Questions of platform design therefore remain crucial for creating a safe online environment. While the Commission decision concerns the design of a minuscule part of Musk’s service, it seems to ignore the service’s real problems. It is like telling the resident of an apartment to put up different wallpaper to cover the gaping cracks in their wall. And until the Commission has thought about what constitutes a “safe” online environment, it should stop pondering which wallpaper to put up.


SUGGESTED CITATION  Bovermann, Marc: The Limits of Symbolic Regulation: Why the EU Should Not Enforce the DSA (right now), VerfBlog, 2025/12/18, https://verfassungsblog.de/the-limits-of-symbolic-regulation/.
