Elon Musk Wants to Buy Twitter to Create a Free Speech Utopia: Now What?
Elon Musk, the enigmatic CEO of Tesla, has made a public offer to buy 100% of Twitter’s shares at approximately 138% of each share’s market value (a premium of roughly 38%). In his letter of intent submitted to the U.S. Securities and Exchange Commission, Musk states that free speech is necessary in a democratic society and that he wishes to unlock Twitter’s full potential by bringing it under (his) private ownership. Musk presents himself as a free speech absolutist. He has frequently commented on Twitter about free speech on social media platforms, expressing his dismay with ‘west-coast big tech’ acting as the de facto arbiter of free speech (a growing body of literature suggests that it is). In a TED talk following his bid, he announced some of the changes he would make after acquiring Twitter. If Musk were to own Twitter, he would likely steer the platform in a different direction: minimising content moderation and narrowing the applicable community guidelines so that Twitter’s speech codes align with the limits of the First Amendment. Further, he wishes to publish Twitter’s algorithm on GitHub. From a scientific perspective this is very interesting: scholars have long called for greater transparency from social media platforms about their algorithms (see for example here and here). Such insight would enable better research into issues such as filter bubbles and the amplification of hate speech.
The prospect of Musk’s free speech Utopia has drawn criticism, for example from Berkeley professor Robert Reich and former UN Special Rapporteur David Kaye, who call Musk’s free speech ideology regressive and dangerous. Some even fear a Voldemortian return of former President Donald Trump to the platform. In itself, the notion of a billionaire buying a platform because he finds the rules governing that de facto town square (see also Packingham v. North Carolina, 137 S. Ct. 1730 (2017)) restrictive and undemocratic seems paradoxical at best. Constitutionally, it raises an interesting question: if a billionaire wants to change the rules of speech on the ‘new public squares’ by acquiring a social media platform, can he – and should he be able to?
Twitter as the American Dream
In the U.S., Musk is certainly able to do so. Essentially, platforms are free to moderate however they please under Section 230 of the Communications Decency Act (see e.g. Domen v. Vimeo, No. 20-616 (2d Cir. 2021)). They cannot be held liable for content posted on their platforms (Section 230(c)(1)), nor for their moderation practices (Section 230(c)(2)), provided they moderate in good faith. Government involvement in content moderation is barred to a significant degree, and the protection offered by Section 230 is said to be broader than that of the First Amendment – and, unlike the First Amendment, it extends to private companies. Some categories of content are nevertheless prohibited by law, such as under the Fight Online Sex Trafficking Act (FOSTA) and the Stop Enabling Sex Traffickers Act (SESTA), as well as the intellectual property regime of Section 512 of the Digital Millennium Copyright Act.
For Twitter, Musk would likely curtail the applicable civic integrity policies, which prohibit using Twitter to suppress or mislead users in their democratic participation. Further, Twitter’s Hateful Conduct Policy and Covid-19 Misinformation Policy are far more restrictive than the protection the First Amendment affords those types of speech. This would indeed create a more “open” social media platform in the United States, akin to what Donald Trump sought to achieve with his platform Truth Social. There have been some legislative initiatives to curb the near-unlimited power bestowed upon platforms by Section 230. Republican-sponsored bills seek to limit the power platforms have in moderating content, demanding that they moderate on a ‘content-neutral’ basis (see for example Texas H.B. 20 and Florida Senate Bill 7072). Democrat-sponsored bills take the opposite approach, aiming to limit disinformation around elections and to address the amplification of illegal content by algorithms. However, none has successfully curbed platform power, leaving no barrier to Musk realising his ‘free speech arena’.
In the European Union, it is unlikely that Musk can take a similar approach. Firstly, it is worth noting that Musk mentioned in his TED talk that the rules for speech on Twitter should align with the laws of each country. For Europe, that approach yields a different result than in the United States. Currently, most Member States of the European Union are more restrictive in which types of expression enjoy protection under their respective freedom of expression provisions, both nationally and, in a European context, under the Charter and the European Convention on Human Rights (ECHR). This becomes problematic in the context of the pending Digital Services Act (DSA). In Recital 12, the European Parliament’s amendment proposes that “illegal content” be defined on the principle that “what is illegal offline should also be illegal online”, including hate speech, terrorist content, discriminatory content, or content referring to illegal acts under Union or national law. These are types of speech that enjoy free speech protection – to some degree – in the United States, but do not typically fall under the freedom of expression protection of Article 11 of the Charter or Article 10 ECHR. Some of these types of speech are even considered illegal in certain countries; see for example the German Gesetz zur Bekämpfung des Rechtsextremismus und der Hasskriminalität. Platforms in the European Union would therefore be obliged to act expeditiously to remove illegal content once made aware of it, or risk being held liable for that content under Article 5(1)(a) and (b) of the proposed DSA. Although in principle this need not change Twitter’s user policy – it might be fine with risking that liability – in practice it is likely to lead to more stringent moderation practices. A similar notice-and-takedown obligation was laid down in § 3 Abs. 2 and 3 of the German Netzwerkdurchsetzungsgesetz (NetzDG).
A feared consequence of that system is ‘overblocking’: platforms removing more content than necessary because the financial consequences of doing too little to combat illegal content are dire (fines of up to 50 million euros under § 4 Abs. 1 and 2 NetzDG). That fear turned out to be only partly justified, and the reality more nuanced, but critics of the NetzDG nevertheless still fear that it adversely – even unconstitutionally – impacts freedom of expression. This fear resonates in the treatment of similar legislative proposals in France and Austria. The European Parliament’s amendments to the Digital Services Act attempt to minimise the impact of content moderation on freedom of expression under the Charter (Recital 38; an obligation to respect that right in the terms of service under Article 12(1) DSA; and an obligation to monitor for potential systemic harms to that freedom under Article 26(1)(b)), yet the protection afforded to different types of speech remains much narrower than in the United States.
The Two Twitter Solution
To best operationalise the free speech arena, Musk would need to differentiate between the European speech rules and the American ones. It is evident from Recital 31 of the proposed DSA that orders to act against illegal content should be as limited as possible in territorial scope. This practice is called ‘geoblocking’ (see, for example, the geoblocking of Russian propaganda). If Musk succeeds in acquiring Twitter and creates the free speech Utopia many fear, European law will likely necessitate the creation of two different Twitter spheres: the American variety, in which content moderation is minimised and Twitter becomes a wild west limited only by the constraints of the First Amendment; and the European variety, in which little will change compared to the status quo of content moderation, which will adapt to the proposed DSA like every other platform. In the European sphere, Twitter will likely still abide by notice-and-takedown requests, which will probably be limited in territorial scope, in the spirit of both Musk’s intentions and the proposed DSA. Aside from this, it is not inconceivable that Twitter could withdraw as a signatory to a number of European co-regulatory instruments, such as the Code of Practice on Disinformation. That in itself would be a strong statement, one surely received unfavourably by the European Union, and such a withdrawal might well cause a strong PR backlash with dire economic consequences. As many correctly observe, it would be a major regression in the drive towards platforms taking more responsibility for the online sphere they create. From a business perspective, too, the creation of a free speech-focused platform would be questionable. Tarleton Gillespie rightly notes in his book Custodians of the Internet that consumers have an interest in a well-moderated platform, that platforms therefore have a commercial interest in providing one, and that community guidelines across platforms are consequently converging.
It is unlikely that a platform riddled with hate speech, conspiracy theories and misinformation will draw a larger audience than a well-moderated one, beyond a niche group of enthusiasts. It is difficult to see why a business-savvy entrepreneur like Musk would opt for this variety.
The desirability of letting one man decide the speech codes for a public forum with nearly half a billion users is certainly questionable. In an acquisition process such as this one, the question arises whether – and how – fundamental rights concerns should be factored in. Social media platforms have an enormous impact on how we express ourselves nowadays. Putting all that power into the hands of one person raises serious concerns, which warrant a deeper look than simply weighing the risk of abuse of economic power. Again, Section 230 would be a barrier to this: platforms are nearly immune to government involvement in their moderation practices. However, if more billionaires want to buy their own public fora, this standpoint is worth reconsidering in the future.
I am trying to understand what the problem is, but I am failing to find one. The question should be ‘so what?’. When Bezos acquired the Washington Post – making zero commitment to free speech – very few people complained about a billionaire controlling speech or information. Now we have a different billionaire basically saying “I will apply the law, not my personal beliefs as to what constitutes free speech”, and this is somehow supposed to be problematic? The US has survived so far under those ‘Free Speech Utopia’ rules; what exactly is going to change now? Instead of applying only to the government, the First Amendment will also apply to one of the platforms. Is anyone seriously suggesting that applying the First Amendment – meaning simply applying the positive law – to a platform actively used by some 38 million people daily in a country of 329.5 million people is somehow harmful? If so, how? Elon Musk will voluntarily not (!) delete the opinions he disagrees with or considers unacceptable, but will apply the First Amendment instead. If people think the legislative protection is insufficient, they can exercise their democratic right to vote for those who want to change the legislation. Otherwise, outsourcing the application of the right to free speech to private companies (and explicitly allowing them to institute their own ‘banned speech’ rules), letting them impose their own vision of what free speech is, sounds despotic and dystopian. More generally, my impression is that there is a confusion in the West between what free speech is all about and what undesirable ends may come from a faithful application of that right. Alas, if we are going to be principled, then there is no alternative to applying rules consistently.