Musk, Techbrocracy, and Free Speech
In a previous post I contemplated what it would mean for (then) Twitter (now X) if Elon Musk bought it and turned it into a free speech utopia. A simpler time, in which I feared the creation of a ‘Two Twitters’ solution, where the newly acquired platform would look significantly different in the US than in the EU. Under this scenario, I also argued that Twitter could not simply withdraw from co-regulatory instruments such as the Code of Practice on Disinformation because ‘consumers have an interest in a well-moderated platform.’ Almost three years and a bot-ridden cesspool of a platform later, I have been proven wrong, but the final point I raised still stands: should we consider the free speech impacts one billionaire social media platform owner can have?
In this blogpost, I situate and address Musk’s position within the broader EU debate on freedom of expression. The purpose of this symposium is to elucidate what makes Musk, his influence, and his provocations to the EU legal order problematic under EU law, and, should we consider his influence unwanted, harmful or illegal, whether EU law can provide answers to it. This post centres on three points: (i) Musk’s changes to X’s content moderation process, (ii) Musk’s use of X to amplify select political candidates, and (iii) Musk’s ownership of Starlink. It ends with a note on how this fits into a grander theme, dubbed by commentators such as Paul Bernal the ‘techbrocracy’.
Musk’s moderation withdrawal
The first aspect of Musk’s role in shaping free speech in the European Union is visible through X’s moderation policies and practices. Under Musk’s auspices, X has made severe cuts to its moderation teams, disbanded trust and safety initiatives, withdrawn from EU co-regulatory instruments, and adopted an overall more permissive approach to content moderation. This has severe implications for the discourse on X. Empirical studies show that discourse has coarsened, with the platform hosting more harmful and even illegal content, contrary to Musk’s original promise to align X’s policies with EU law. X’s moderation of illegal content is arguably limited: analysis of the DSA Transparency Database shows that X performs little moderation compared to other social media platforms.

EU law offers tools to combat this. The Digital Services Act (hereinafter DSA) provides clear avenues for authorities to act against illegal content, for example in Article 9, and it also gives platforms incentives to combat ‘lawful but awful’ content: through Article 7, through the systemic risk mitigation duties of Article 35, which can cover such content, and through its Codes of Conduct (Articles 45-47). X is currently under investigation by the EU Commission over its content moderation practices and transparency obligations. In the meantime, illegal speech roams the platform freely. Illegal speech has the potential to limit free speech: in a hate speech-ridden environment, for example, affected minorities are unlikely to be able to express themselves effectively. They will be more likely to leave the platform or abstain from sharing their opinion – the marketplace of ideas has its limitations. Disinformation poses a similar risk, affecting people’s right to be informed. Combating such illegal content exceeds the enforcement capacities of the responsible authorities, underlining our reliance on social media platforms to ‘do the right thing’.

Scholars, including myself, have often theorised that platforms will try to abide by regulators’ wishes to avoid further regulation or liability, and that doing so can lead to over-removal of content as platforms err on the side of caution. In the case of X in the EU, the opposite has occurred. Musk boasts of his disregard for the DSA in – admittedly entertaining – exchanges on X with former EU Commissioner Thierry Breton, whilst shaming other platforms for showing deference to regulators. The DSA provides tools to combat violations of freedom of expression by X, both on an individual and a systemic level, but so far this has not changed X’s landscape, raising the deeper and inconvenient question of whether Musk has outgrown the force of the regulator, or simply whether the DSA is not fit for purpose.
Musk has delivered on the promise of free speech absolutism, and has indeed created a virtually lawless public square. In doing so, he downplays his own role in amplifying certain voices on that square. Free speech does not mean free reach, as Musk himself acknowledges. Although Musk has scaled back content moderation efforts, X amplifies voices that align with his political standpoints. Empirical studies so far (such as here and here) have their limitations, since X is not an accurate representation of the electoral landscape, hosting a relatively large share of right-leaning users and politicians, but they suggest that right-wing candidates are favoured by the platform’s algorithm. If nothing else, Musk’s interviews with then presidential candidate Trump and, more recently, the German far-right candidate for chancellor Alice Weidel point to the same trend, both having been pushed through the platform’s notification system.

The amplification of certain voices is by default the demotion of others: the attention of social media users is limited. The question is whether the amplification of select politicians reduces the exposure of politicians who do not align with Musk’s vision, meaning that although their speech is not limited, fewer people will see it. This creates a scenario in which the public town square of free speech remains a public town square, only Elon is building market stalls for AfD and Reform UK politicians to operate on it. In principle, this does not necessarily depart from existing practices, such as newspapers interviewing political candidates. However, in an age where people rely on social media networks for information, viewpoint diversity becomes a concern when platform owners flock toward a limited number of political actors – a trend discussed in the final section.

As anticipated, under EU law these concerns about platform practices can be addressed through the DSA’s systemic risk assessment and mitigation obligations in Articles 34 and 35, and can also be targeted through Article 27 on recommender system transparency. Yet the standards set in these provisions are somewhat vague and open to multiple interpretations. This leaves room for platforms and enforcement authorities to interpret them as they see fit. It also means that enforcing them in practice requires significant study and data access on the part of enforcement authorities, and that actual sanctioning will take time to materialise, especially in today’s political context, in which the EU appears hesitant to challenge the incoming US administration.
Freedom of speech and (satellite-based) internet access
Another point relating to free speech can be raised in the context of Musk’s Starlink enterprise, which is part of SpaceX. Starlink provides satellite internet connectivity, which is of vital importance in regions where other internet infrastructure has been destroyed. The service has made headlines because of its role in providing not only Ukrainian citizens but also the Ukrainian army with an internet connection. Musk has been criticised for not extending Starlink coverage to Russian-occupied territory, and defends this by claiming that Starlink was not meant for war and that he was seeking to avoid conflict with US sanctions on Crimea. Whatever the truth of this, it raises concerns about Starlink’s impact, all the more so given the Ukrainian armed forces’ apparent reliance on SpaceX for internet access.

It is beyond the scope of a blogpost to consider the geopolitical implications of a politically opportunistic owner controlling internet access in combat areas, but one can ponder the free speech implications. Internet access is clearly linked to freedom of expression in the European Court of Human Rights’ case law: free speech involves the right to express oneself, but also the right to be informed. Depriving people of internet access can therefore interfere with their right to freedom of expression. This means that the implications of SpaceX being used as a political pressure point – for example, by depriving vulnerable regions of the world of their internet connection – can be enormous, not only from a geopolitical standpoint, but also from the perspective of citizens who cannot be informed or show the world what is happening in their country. So far, SpaceX has been a significant contributor to the Ukrainian cause, but this again raises the question of whether the leverage that comes with controlling critical internet connections that vulnerable people rely upon for expression and information should be placed in the hands of one man.
The era of techbrocracy
This leads us to the final point: the techbrocracy. The examples above have indicated how power with tremendous free speech implications is concentrated around Elon Musk. Whilst certainly an enigma, he is not the only person with such power: Jeff Bezos, Mark Zuckerberg, and Sam Altman, to name a few, are all immensely powerful individuals who, albeit to different extents, hold power that may affect the exercise of freedom of expression in the EU. These stalwarts of the techbrocracy have recently aligned themselves with the current dominant political preference, with Musk even acquiring unprecedented influence in Trump’s government. Following X’s example, Meta and Google are revisiting their content moderation policies and practices. They have rescinded cooperation with fact-checkers and revised community guidelines in favour of a more permissive policy on topics such as hate speech.

One can, on an individual level, make a case against the use of fact-checkers in favour of ‘community notes’, and have doubts about the alignment of some community guidelines and codes of conduct with European free speech values. But the collective departure from these moderation traditions in favour of political alignment leads to the cynical analysis that, in spite of all EU regulation and all principled opinions on free speech in content moderation, the techbrocracy still favours opportunistic political gain. The first signs of the techbrocratic rulers turning away from EU rule are showing. Musk has shown that, in the absence of stringent enforcement of EU law, it is possible to run a platform in the EU that does not moderate content. It comes as no surprise that both Meta and Google have shown themselves ready to follow in Elon’s footsteps. The question is whether they will sacrifice free speech values in the process. The inconvenient reality is that they sure can.