18 October 2023

A Step Forward in Fighting Online Antisemitism

The Contribution of the EU’s Digital Services Act (DSA)

Any examination of the current state of antisemitism must consider the central role of online platforms, as these spaces are now the principal vehicles for the dissemination of antisemitic sentiments. Social networks such as Facebook, microblogging services such as X (formerly Twitter), and messenger services such as Telegram have significantly facilitated – if not promoted – the spread of antisemitism. Alarmingly, this trend continues to gain momentum: the volume of antisemitic messages on X more than doubled after Elon Musk’s takeover in October 2022. Moreover, following the recent Hamas terrorist attacks against Israel, there is evidence that X is being used to disseminate antisemitic hate speech.

Against this backdrop, this blog post analyses the legal framework for combatting online antisemitism in the EU and the regulatory approaches taken so far. It addresses the new Digital Services Act (DSA), Regulation (EU) 2022/2065, highlighting some of the provisions that might become particularly important in the fight against antisemitism. The DSA improves protection against online hate speech in general, and antisemitism in particular, by introducing procedural and transparency obligations. However, it does not provide any substantive standards against which the illegality of such manifestations can be assessed. To effectively reduce online antisemitism in Europe, we need to think further, as outlined below.

Online Antisemitism

Virtually every imaginable expression of antisemitism circulates freely across online platforms, ranging from timeworn antisemitic tropes and conspiracy theories to overt Holocaust denial and incitement to violence, even though the latter constitute criminal offences in most EU Member States.1) In the majority of cases, however, antisemitic online content is non-violent and not subject to criminal prosecution under national law.

The EU and its Member States struggle to develop clear rules for dealing with such content. So far, EU law provides no guidance on when antisemitic online speech is illegal – except where antisemitic terrorist content is disseminated online (see Regulation (EU) 2021/784 and Directive (EU) 2017/541) – leaving it to the Member States to address the issue. Moreover, most national bans on antisemitic online speech contain terms whose meaning hinges heavily on interpretation, so that every potentially incriminating online utterance must be assessed in context. In practice, it is therefore difficult to distinguish between permitted and illegal antisemitic online content. As a result, antisemitic content can very often be characterized as “lawful but awful” and thus defies removal. The prohibitions of online antisemitism contained in the Terms of Service of online providers, on the other hand, go beyond the national statutory prohibitions in some cases, but are not always adhered to, as discussed below.

Amplification of Antisemitism by Platforms

An unfortunate consequence of the presumed anonymity offered by online platforms is that users assume they will face no consequences and therefore articulate antisemitic hostilities in ways that would be unthinkable in a live setting or in authored works. Furthermore, for economic reasons, the algorithms used by online platforms amplify rather than suppress controversial content, because such content entices users to spend more time on the platform.
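To make this economic logic concrete, the following minimal sketch in Python shows how an engagement-optimised ranker can systematically push controversial content to the top of a feed. All class names, fields, and weights are hypothetical assumptions for illustration only; they do not reproduce any platform’s actual algorithm.

```python
# Minimal, hypothetical sketch of engagement-based feed ranking.
# Field names and weights are illustrative assumptions only; no
# platform's actual ranking algorithm is reproduced here.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_dwell_seconds: float  # expected time users spend on the post
    predicted_replies: float        # expected replies, often outrage-driven
    predicted_shares: float         # expected shares or reposts

def engagement_score(post: Post) -> float:
    # The ranker optimises for attention, not for content quality: posts
    # that provoke long dwell times and heated replies rank higher.
    return (0.5 * post.predicted_dwell_seconds
            + 2.0 * post.predicted_replies
            + 1.5 * post.predicted_shares)

posts = [
    Post("benign holiday photo", 3.0, 0.1, 0.2),
    Post("inflammatory conspiracy post", 12.0, 4.0, 3.0),
]

feed = sorted(posts, key=engagement_score, reverse=True)
print([p.text for p in feed])  # the inflammatory post ranks first
```

Nothing in such a scoring function distinguishes legitimate controversy from hate speech: as long as antisemitic content generates engagement, an attention-maximising ranker will amplify it.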

Growing antisemitism on these platforms is all the more alarming given that social media has become the dominant source of news for a significant segment of the population. Hence, enacting legal means to combat or limit the online dissemination of antisemitism is essential to protect informed freedom of opinion and public discourse, both of which are closely linked to the EU’s values as described in Article 2 of the Treaty on European Union (TEU).

Effective Self-Regulation?

The EU and its Member States long deferred countering online hate, and online antisemitism in particular, relying instead on the platforms to regulate themselves. Platforms specify in their Terms of Service, which form part of their contractual relations with users, the content they consider inappropriate and subject to deletion. Typically, platforms prohibit any “hateful conduct”, whether overtly violent or not, on the basis of race, ethnicity, or religious affiliation. They also ban hateful references to the Holocaust – but not its denial, or even antisemitism per se – as is clear, for example, from the “X Rules” that form part of X’s (formerly Twitter’s) Terms of Service. The situation is somewhat different with Meta, which in 2020 introduced more precise and stricter rules in this respect, explicitly banning dehumanising speech or imagery (e.g. equating Jewish people with rats or pigs) as well as Holocaust denial and harmful stereotypes. One should therefore not make a blanket judgement and lump together all online providers’ measures to (self-)regulate antisemitism.

Nevertheless, these internal rules have so far not been effective in countering antisemitism. Either they do not exist in a well-defined form, as with X, or they are not applied consistently, as tends to be the case with Meta. In this regard, a 2021 study by the Center for Countering Digital Hate found that online platforms respond to only 16% of reported cases of antisemitism on their services and delete only a tiny fraction of such content. Moreover, these numbers decrease significantly when paying users are involved, as demonstrated by research on hateful posts by X Premium (formerly Twitter Blue) subscribers. However, Article 14 (4) DSA now requires online platforms to adhere to their own Terms of Service more strictly than before, which will increase their significance in the fight against antisemitic online hate speech, as will be shown later in this post.

E-Commerce Directive and Member State Legislation

Self-regulation by platforms has therefore, up until now, proven ineffective in combatting antisemitism. Governmental regulation has likewise faced difficulties in holding platforms responsible for antisemitic content.

The legal status quo of platform regulation at the EU level has so far been defined by the E-Commerce Directive, which closely mirrors Section 230 of the United States Communications Decency Act of 1996 by essentially introducing liability relief for platform operators. According to Article 14 of the E-Commerce Directive, providers of intermediary services are liable for illegal content only if they have actual knowledge of the illegality, or if the illegality is apparent and they remain inactive. However, there is no obligation for service providers to acquire this knowledge independently. Passivity or wilful blindness is thus rewarded with a general reduction of liability.

Some Member States introduced legislation to combat hateful content and improve upon the E‑Commerce Directive’s framework. The German Network Enforcement Act of 2017 (Netzwerkdurchsetzungsgesetz, NetzDG) and Austria’s similar Communication Platforms Act of 2020 (Kommunikationsplattformengesetz, KoPl-G) are noteworthy examples of national anti-hate speech legislation. They define “illegal/unlawful content” and oblige intermediaries to establish an effective review procedure. However, NGOs campaigning against antisemitic online hate, such as HateAid and the European Union of Jewish Students, note that even in Germany, where the positive effect of the NetzDG should be apparent, users who report antisemitic content usually receive no response or only automated negative reactions from online platforms. Moreover, these national initiatives might violate the Country of Origin Principle of the E-Commerce Directive and are therefore likely in breach of the Directive (see here and here). Ultimately, these national laws will become obsolete due to the harmonizing effect of the DSA. In Germany, a proposal for a national Digital Services Law (Digitales-Dienste-Gesetz, DDG) to replace the NetzDG is already on the table.

DSA and Online Antisemitism

The EU recognised that the current legal framework for combatting online hate is insufficient and adopted the DSA, which will be directly applicable across the EU as of February 2024. The European Commission has designated services with at least 45 million average monthly active recipients as “Very Large Online Platforms” (VLOPs) and “Very Large Online Search Engines” (VLOSEs), whose obligations under the DSA already apply as of the end of August 2023. Following their designation as VLOPs, platforms such as X, Facebook, TikTok and YouTube are now required to comply fully with the provisions introduced by the DSA (see Article 92 DSA). It appears, at first glance, that the Commission takes its powers to enforce these obligations (Articles 65 ff. DSA) quite seriously: on 12 October 2023, it sent X its first formal request for information under the DSA, concerning the spread of illegal content, in particular terrorist content and hate speech.

The DSA purports to counter unlawful content by setting transparency standards as well as introducing procedures and legal protection concerning online hate speech, lies, and disinformation spread through platforms. However, the DSA only leads to full harmonization in procedural matters, while remaining silent regarding specific EU standards on what constitutes illegal online content. Nonetheless, it does define “illegal content” in Article 3 (h) very broadly as “any information that, in itself or in relation to an activity, (…) is not in compliance with Union law or the law of any Member State (…), irrespective of the precise subject matter or nature of that law” (see also Recital 12 DSA). In the absence of an express ban on antisemitic content, either in primary or secondary EU law, it is left to the Member States – and the online providers by way of self-regulation – to address this issue. Still, Article 3 (h) expands the narrow understanding of illegal content, which was previously limited to criminal offences. Now, it explicitly includes breaches of any law in its definition of illegal content, which is a step forward in combatting hateful content.

More importantly, Article 3 (h) DSA may modify the Country of Origin Principle enshrined in Article 3 of the E-Commerce Directive, which, under Article 2 (3) DSA, remains relevant. Article 3 (h) DSA applies whenever content violates the law of any Member State; the question of illegality may therefore no longer depend exclusively on the law of the Member State where the online provider resides.2)

It is, therefore, unnecessary to refer to the complex exemption from the Country of Origin Principle laid down in Article 3 (4) of the E-Commerce Directive, which allows a country of destination to take measures against a service provider established in another Member State for reasons of public policy (see here and here). Article 3 (h) DSA, as well as Recital 38 DSA, supports the view that stricter national provisions, such as the prohibition of Holocaust denial, in the country of destination of an online service may apply to a service provider residing in a different Member State (see here, pp. 130-132). In his recent opinion regarding the compatibility of the Austrian KoPl-G with the Country of Origin Principle codified in the E-Commerce Directive, Advocate General Maciej Szpunar seems to endorse this interpretation (see para. 72).

Article 16 DSA – Notice and Takedown Procedure

The DSA improves upon the legal regime established by the E-Commerce Directive, not least due to the introduction of a mandatory notice and takedown procedure in Article 16 DSA (see here, p. 1010). Under this provision, providers of hosting services, including online platforms, throughout the EU must put in place online mechanisms enabling persons or entities to report content they consider unlawful. These notices must be reviewed, and content that is confirmed to be illegal must be removed.
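For illustration, the minimum elements of such a notice under Article 16 (2) DSA, and the ensuing review duty, can be paraphrased in a small Python sketch. This is a sketch under our own assumptions: the Regulation prescribes the substance of a notice and diligent processing, not any data format, and all class and function names below are hypothetical.

```python
# Sketch of the minimum elements of an Article 16 (2) DSA notice and a
# simplified review loop. Names and structure are our own assumptions;
# the DSA prescribes the substance, not any particular implementation.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Article16Notice:
    explanation: str               # substantiated reasons why the content is illegal (lit. a)
    content_url: str               # exact electronic location of the content (lit. b)
    notifier_name: Optional[str]   # name of the notifier (lit. c, with exceptions)
    notifier_email: Optional[str]  # email address of the notifier (lit. c)
    good_faith: bool               # statement that the notice is accurate and complete (lit. d)

def process_notice(notice: Article16Notice,
                   is_illegal: Callable[[Article16Notice], bool]) -> str:
    # Article 16 (6) DSA: notices must be processed in a timely, diligent,
    # non-arbitrary and objective manner; the decision must be communicated
    # to the notifier (Article 16 (5)) with a statement of reasons (Article 17).
    if is_illegal(notice):
        return "remove content or disable access"
    return "keep content online"

# Hypothetical usage:
notice = Article16Notice(
    explanation="Holocaust denial, punishable under § 130 StGB",
    content_url="https://example.com/post/123",
    notifier_name="Jane Doe",
    notifier_email="jane@example.com",
    good_faith=True,
)
print(process_notice(notice, is_illegal=lambda n: True))  # stand-in for legal review
```

The decisive legal point sits outside the sketch: a notice containing these elements gives the provider the kind of knowledge that removes the liability shield if it then fails to act, which is what gives the procedure its teeth.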

The procedures prescribed by Articles 16, 20, and 22 DSA allow individuals and organizations to flag hate speech, particularly antisemitism, bringing a large volume of such content to the attention of online service providers in the EU, which are obliged to review it. Compliance with this duty of review is reinforced by the fact that platform operators become liable for reported illegal content if they fail to act expeditiously (Article 6 DSA).

Articles 34 and 35 DSA – Online Antisemitism as a Systemic Risk

The obligation of online providers to conduct systemic risk assessments is often praised as a special feature of the DSA: an attempt to overcome the limitations of individual remedial mechanisms in content moderation by focusing on service providers’ “curation practices” (see here, p. 170). Articles 34 and 35 oblige platforms to assess and mitigate systemic risks stemming from the design and functioning of their services. Widespread antisemitism, which differs from ordinary hate speech particularly due to the global annihilation fantasies associated with it, may well pose such a systemic risk through its negative effects on freedom of expression (for example, the silencing effect), public discourse, and the democratic process in general, all of which the DSA expressly seeks to protect (see Recitals 9, 76, 79, and 82). A detailed analysis of these provisions, however, is beyond the scope of this blog post.

Article 14 DSA – Taking Terms of Service Seriously

Article 14 introduces another novelty. According to this provision, providers of intermediary services are required to disclose in their Terms of Service any restrictions they impose on user-generated content. Furthermore, under Article 14 (4) DSA, they must apply these restrictions “in a diligent, objective and proportionate manner”, with due regard to “the rights and legitimate interests of all parties involved”.

All providers, therefore, after years of turning a blind eye, must finally take their own Terms of Service seriously. Until now, litigation has been dominated – particularly from the standpoint of platform operators – by disputes between users and platforms over alleged overblocking. The DSA could help overcome this asymmetry by placing stronger emphasis on the rights and legitimate interests of all parties, including aggrieved parties who are not necessarily users. The Act codifies case law on the E-Commerce Directive calling for “a balance (…) between the different interests” of all parties concerned and for an interpretation consistent “with the fundamental rights involved”, as advocated by Advocate General Maciej Szpunar (see para. 34; less explicitly ECJ, Case C-18/18, Glawischnig-Piesczek, para. 43).

From the wording of Article 14 (4), it is not entirely clear whether the EU legislator intends to impose a horizontal effect of EU fundamental rights on the relationship between online platforms, users, and third parties. If the Charter of Fundamental Rights has a horizontal effect on the application of providers’ Terms of Service, it is rather an indirect one (see here, pp. 901-903). In any case, by applying their Terms of Service strictly and in alignment with fundamental rights, providers of intermediary services will be required to give greater consideration to the protection of users whose personal rights are violated by antisemitic online content. Furthermore, they are obliged to genuinely follow up on complaints about offensive antisemitic content lodged under their Terms of Service.

Particularly interesting in this context is the lawsuit filed by the NGOs HateAid and EUJS against Twitter on 24 January 2023 before the Berlin Regional Court. The purpose of this test case, which already reflects the shift in the perception of the underlying fundamental rights relationship now expressed in Article 14 (4), is to clarify whether users have a legal right to enforce the Terms of Service of online platforms and, on that basis, to demand the removal of online content that incites antisemitic hatred. The case has not yet been decided.

Conclusion – Defining Antisemitic Online Content as “Illegal” Per Se?

Overall, the DSA is an improvement on the status quo and may contribute to a reduction of illegal hate speech on social media. It is also encouraging that the Commission is already enforcing its provisions against VLOPs with regard to hate messages.

However, the DSA fails to provide the tools to effectively reduce online antisemitism in Europe on a large scale. This would require additional legal steps, such as the establishment of specific monitoring obligations for platforms pertaining to antisemitic online content, or the introduction of substantive law provisions regarding “illegal content” under Article 3 (h). For example, if the EU clarified that certain antisemitic online content is, in principle, “illegal”, even if it is not a criminal offence, platform operators would be more inclined to delete such content.

However, such a strict regulatory approach could well lead to legal conflicts, as exemplified by the Documenta fifteen scandal. In the discussion on the tension between antisemitism and artistic freedom that followed last year’s Documenta art exhibition, calls for a general legal prohibition of antisemitic artworks were met with scepticism. Rather, it was recommended that alleged antisemitism be dealt with on a case-by-case basis. Such an approach may be understandable, at least from the perspective of (German) constitutional law, but it can hardly be viewed as a solution for effectively combatting the mass dissemination of antisemitic hate on the internet. In this respect, further legislative modification may be required, for example by expanding Article 3 (h) to declare antisemitic content “illegal”. Such content would then, prima facie, have to be removed from online platforms.

Objections on grounds of legal certainty in dealing with the legal term “antisemitism” can, however, be addressed by noting that the EU already provides a definition of antisemitism that could serve as a point of reference in this context. The endorsement of this definition, though embedded in soft law, clearly indicates the EU’s view that online manifestations meeting the definition’s criteria, even if they do not amount to criminal offences, are objectionable and should therefore be condemned.

References
1) Under § 130 of the German Criminal Code, for instance, incitement to hatred against a group on the basis of national, ethnic, religious, or racial characteristics in a manner capable of disturbing the public peace (Volksverhetzung) is a criminal offence. Similarly, the Austrian Criminal Code prohibits incitement to hatred (§ 283 StGB) and the disparagement of religious doctrines as well as other forms of behaviour likely to arouse justified indignation (§ 188 StGB). Additionally, under the Austrian Prohibition Act 1947 (Verbotsgesetz 1947), any form of re-engagement in National Socialist activity is subject to criminal prosecution.
2) See Hofmann, ‘Article 3’, in: Hofmann/Raue (eds.), Digital Services Act: Gesetz über digitale Dienste (2023), para. 81.

SUGGESTED CITATION  Schroeder, Werner; Reider, Leonard: A Step Forward in Fighting Online Antisemitism: The Contribution of the EU’s Digital Services Act (DSA), VerfBlog, 2023/10/18, https://verfassungsblog.de/a-step-forward-in-fighting-online-antisemitism/, DOI: 10.59704/4635febfdf48976a.

One Comment

Gunther Jikeli, Thu 19 Oct 2023 at 18:12

Thank you for the excellent overview and discussion of the potential impact of the Digital Services Act. In our Social Media Research Lab, we haven’t seen much change on the platforms yet. The Terms of Service are still rarely enforced by the platforms.
