07 February 2025

Elon Musk, the Systemic Risk

Social media platforms already occupy a state-like role within their “de facto public spaces” (here Pt. 1, Rec. 22). Meanwhile, the actual public authorities and the broader EU society, confronted with the complex and dynamic interaction between algorithms and humans, appear to be left relying on the goodwill of these platforms—at least if they want to preserve the benefits they provide. Elon Musk, the owner of X and xAI, and perhaps the most extreme example of “Big Tech” merging with political power to date, seems to many in Europe to symbolize the dawn of a digital dystopia.

I argue, however, that this view may be incorrect in several respects. With the Digital Services Act (DSA) and its new “systemic tools,” the EU has an opportunity to address the technological roots of Musk’s powerful position in the digital sphere. In this context, Musk’s (potential) use of his platform (or his AI) to deliberately influence the access to, distribution, and presentation of information is “merely” a manifestation of risks that are already inherent in the systemic position of certain digital services.

Systemic risk mitigation in the DSA

Most of the debate surrounding the DSA focuses on how services disseminate and moderate (illegal) content, and thus on the lower tiers of due process obligations (Art. 7-32 DSA). However, the DSA also contains tools that do not merely produce an indirect systemic effect by generating a high volume of individual cases, but directly address structural (algorithmic) decisions and mechanisms affecting large numbers of users at once. These tools form part of the additional duties (Art. 34-43 DSA) for very large online platforms and very large online search engines with at least 45 million users (VLOPs and VLOSEs, Art. 33(1), (4) DSA). The genuine systemic obligations in Art. 34 and 35 DSA require VLOPs/VLOSEs, in three steps, to identify and analyze systemic risks, including the four risk categories in Art. 34(1) DSA, to assess these risks with particular attention to the factors outlined in Art. 34(2) DSA, and, following Art. 35(1) DSA, to “put in place reasonable, proportionate and effective mitigation measures” – at their discretion (“may include”) – tailored to the risks they have identified and the specifics of their service.

Firstly, this choice and design of obligations means that it is no longer primarily a network of platforms, users, and other societal actors – as has been and still is the case at the lower tiers of obligations – that determines which content circulates, but rather the VLOPs/VLOSEs and the exclusively competent Commission (Art. 56(2) DSA), operating within a system of regulated self-regulation (e.g., here Vor Art. 33 ff, Rec. 10; here § 60 Rec. 7; and generally Voßkuhle pp. 197 f.). Secondly, these large companies now face an enforceable duty to act preventively (see particularly Art. 34(1), subpara. 2 DSA: “in any event prior to deploying functionalities”) below the threshold of a sufficiently foreseeable and causally determined danger (cf. Calliess p. 28) – a manifestation (cf. Buchheim p. 261) of the general European precautionary principle (see particularly Art. 191(2), sentence 2 TFEU; cf. here Rec. 28 ff.).

According to the DSA, (systemic) risk is a combination of severity and probability arising from the (algorithmic) design, functioning, and use of services, for example through content moderation or recommender systems. This risk can manifest in negative effects on fundamental rights or civic discourse, or in the (mass) proliferation of illegal content (Art. 34(1), (2), and Rec. 79(s. 4) DSA). However, through the self-regulation embedded in the (systemic) risk-based approach, the specific nature and manifestation of these systemic risks are, in a first step, left to the big tech companies themselves to define. This reflects the (perhaps counterintuitive) core regulatory decision to treat very large services as quasi-public infrastructure (cf. e.g., here Art. 34 Rec. 3; and Eifert/Metzger/Schweitzer/Wagner, p. 1025) and, as such, to recognize their systemic relevance – rather than opting for direct intervention of any kind.

Leaving aside the many (potential) positive and negative implications, this self-regulation is counterbalanced by various channels of influence. These include guidelines, delegated acts, and implementing acts, as well as the joint development of (initially) voluntary codes of conduct and crisis protocols to dynamically concretize, adapt (cf. Repasi), or enforce the regulation. There has already been discussion about the (potential) “hardening” of such soft law, which could serve as a basis for sanctions (see also Art. 35(1)(h) DSA and Rec. 104; or here p. 6). In addition, through numerous reciprocal references across the different levels of co-regulation, platforms and the public sector are entangled not only de facto but also de jure (see, e.g., Art. 35(3) DSA). As a result, the hands of Musk, Zuckerberg, and others are already more tied than one might think.

However, the backbone of the EU’s influence lies in numerous transparency and reporting obligations (not only in Art. 33 ff. DSA), trusted flaggers (Art. 22 DSA), independent audits (Art. 37 DSA), and a broad right to data access (Art. 40 DSA and Art. 67 DSA for specific information). By opening the gates even for so-called “vetted researchers” (Art. 40(4), (8) DSA) – perhaps the central lever – these channels reach deep into the heart of Big Tech’s data-driven business model. While precaution may have been privatized in the first place, it is now being pulled back in through a network of legal influence and extensive channels for gathering information.

Now: Time for Implementation

Certainly, this shift in regulation, which acknowledges the structural and societal concerns and translates them into legal provisions, is a highly significant development with the potential to close many protection gaps. However, when considering Musk, X, xAI, and the broader dynamics of digital transformation, the real question is: What happens now? (see also the announcement by Von der Leyen, pp. 7, 10)

What exactly are the systemic risks?

As we have seen, the DSA addresses these dynamics and complexities by leaving significant room for concretization to the obligated companies, particularly under the Commission’s supervision. This shifts the substantive questions of how systemic risks are defined and mitigated to the implementation level – where the Commission holds a (more or less) tight grip on how things unfold. While much remains open, it is the operation of and interaction with the services themselves that creates the potential for negative impacts on large segments of the digital single market (as discussed above). At the heart of this lies the systemic position of these companies, combined with the reciprocal relationship between humans and digital technology in a functional process (cf. Castells, pp. xviii, 5; Nemitz/Pfeffer, p. 29), and their potential for change. Thus, changing how the vast amounts of information are organized through these services can also change the rules of public discourse – whether through the asymmetric amplification (cf. Rec. 80 and 84 DSA) of radical content or of content of a singular kind (as might be the case with right-wing content on X) – or even contribute to problems such as rising society-wide addiction or suicide rates among minors.

Even if a so-called general-purpose AI is integrated into the platform – such as an individualized recommendation system or a chatbot like X/xAI’s “Grok” – its systemic risks, such as racist biases or hallucination tendencies, must be assessed under the DSA (cf. Rec. 118, 136 Artificial Intelligence Act; critical Peukert).

The Normative Side: What Are Our Systemic Risks?

Although many aspects of digitalization, especially AI, may seem like technical matters, they involve highly normative decisions: whether highlighting certain information or deploying “micro-targeted” political advertisements is helpful or an unacceptable interference in the opinion-forming process ahead of an election, or where to draw the line between systemic tendencies to favour hate speech and disinformation and the room that must remain for freedom of expression (cf. Art. 34(1)(b), (d), (2)(a), (b), (d) DSA, and later reports). Here, therefore, the substantive shift to the implementation level becomes the crucial factor.

Time for Action

First and foremost, systemic risk mitigation must start. On the EU’s side, this primarily involves leveraging the various channels of soft and hard influence, particularly by gathering information. And the Commission has been anything but inactive. Not just since Musk’s recent surge in public attention, but already in its Decision C(2023)9137 final of 18 December 2023, the Commission initiated proceedings (Art. 66(1) DSA) regarding, among other things, compliance with the systemic risk obligations (Art. 34(1), (2), 35(1) DSA), including the provision of data access to researchers (Art. 40(12) DSA). In particular, the Commission has been unconvinced by X’s “Community Notes” and “Freedom of Speech, Not Freedom of Reach” systems, which in its view do not provide sufficient protection against negative consequences for electoral processes or the spread of harmful content such as hate speech (Rec. 7 ff.). Furthermore, the Commission found that X had failed to present an adequate “risk assessment of actual or foreseeable effects for the exercise of fundamental rights, […] tested with impacted groups, independent experts and civil society organisations” (Rec. 9).

Given the complexity of human-technology interactions, finding appropriate standards, thresholds, and benchmarks to assess systemic risks is difficult not only for the companies but also for the Commission’s oversight. This is why there are no preliminary findings on Art. 34 f. DSA yet. Non-compliance with researcher access or with the transparent presentation of advertisements (Art. 39 DSA), by contrast, is easier to assess. And even potential violations of the systemic risk mitigation duties may be addressed sooner rather than later, especially after the recent (arguably too hesitant) “retention order” and the request for internal documentation on the recommender systems and their current changes. If X does not follow through with serious commitments, the Commission may issue a non-compliance decision, which could result in fines of up to 6% of the company’s total worldwide annual turnover (Art. 73 f. DSA) or, as a last resort, even a shutdown (Art. 82(1), 51(3) DSA). X may not be as straightforward a case as TikTok or Microsoft, which either did not launch a new function (TikTok Lite) or met the Commission’s demands (a new generative AI feature for Bing). Nevertheless, despite the highly politicized case(s), systemic risk mitigation under the DSA has moved from law in the books to law in action. Musk should be well aware of this.

The European Opportunity

We now have the tools, and they are even being used. However – and this is more of a call to action – it is not just about technology! For the same reasons that make the Commission the pragmatic and feasible choice – in particular its functional and expert-led structure and potential regulatory economies of scale – it lacks proper democratic legitimization. An advanced accountability concept that produces “better” outputs cannot be everything, especially given the numerous pluralistic and fundamental rights issues at stake. Given the immense power of these platforms and their owners, decisions made about them will also have a significant, namely a systemic, society-wide impact.

Quite apart from the potential path dependencies of a technically oriented authority (cf. Cohen, p. 173; on the AI Act Almada/Petit, pp. 21 ff.), which is even de jure entangled with the companies, systemic risks stemming from the regulation itself could replace those of the platforms. To avoid this, the Commission must involve as many voices and societal actors as possible (cf. Art. 11(1)-(3) TEU, Art. 15(1) TFEU) in the concretization and materialization of (particularly) the systemic obligations. Likewise, the European Parliament and the Council, as the pillars of European democratic legitimization (Art. 10(2) TEU), must actively support this process. At the same time, national parliaments, researchers, and other civil society actors must be aware of and engage in the (re)transformation of the digital sphere – through the systemic platforms but also through official action. As Husovec put it, this requires “money and effort.” Another code of conduct on hate speech (even as a mitigation measure, Art. 35(1)(h) DSA) or (rather vague) guidelines on systemic risks in electoral processes (Art. 35(3) DSA) may provide some support, but they can never replace the necessary democratic involvement.

Eventually, given the strong legal framework and the Commission’s willingness and capability to act, Elon Musk may have unintentionally created the perfect opportunity for the EU to find and pursue its declared “European way for the digital transformation”.



SUGGESTED CITATION  Uhlenbusch, Julian: Elon Musk, the Systemic Risk, VerfBlog, 2025/2/07, https://verfassungsblog.de/elon-musk-the-systemic-risk/, DOI: 10.59704/bbb5b45066581219.
