This article belongs to the debate » 9/11 und der öffentliche Diskurs
14 March 2022

Terrorist content online and threats to freedom of expression

From legal restrictions to choreographed content moderation

I. Freedom of expression and threats to national security

Article 19 of the International Covenant on Civil and Political Rights (ICCPR) protects the right to freedom of expression as a universal right and strictly limits the powers of states to impose restrictions and conditions on its exercise. However, paragraph 3 of that article refers to the protection of “national security or of public order” as one of the purposes that may justify such limitations, provided that the principles of legality and necessity are respected.

Many states have relied on these general stipulations of international law to introduce into their counterterrorism legislation specific provisions criminalizing the dissemination of ideas or opinions that might incite, endorse, or stimulate the commission of terrorist acts. This is very sensitive territory. Drawing the line between extreme, controversial, or offensive yet politically motivated speech (which is therefore particularly protected under human rights standards) and expressions that may cause unacceptable harm or danger can be particularly difficult in many contexts. Moreover, the impact of a wrong assessment in this field, in terms of the free and open dissemination and discussion of ideas of all kinds, can be particularly damaging.

Radical and extremist speech is generally protected under freedom of expression clauses, and it can only be restricted under exceptional, necessary, and proportionate circumstances. As the UN rapporteurs on freedom of expression have underscored, the concepts of “violent extremism” and “extremism” should not be used as the basis for restricting freedom of expression unless they are defined clearly and are “demonstrably necessary and proportionate to protect, in particular, the rights of others, national security or public order.” The UN Special Rapporteur on the promotion and protection of human rights and fundamental freedoms while countering terrorism has also emphasized the need to limit the criminalization of expression to cases in which there is a “message to the public with the intention of inciting the commission of a terrorist crime, provided that such conduct, whether it advocates a terrorist crime or otherwise, leads to a risk of one or more crimes of such a nature being committed”.

The fact that certain forms of radical or extremist speech are legal does not mean that they are not potentially harmful or cannot encourage processes of radicalisation that may lead, under certain circumstances and in connection with subsequent events, to violent actions. However, it is also important to underscore that we still know remarkably little about when extremist speech (whether legal or illegal) leads to violence and how to prevent that from happening. This last aspect will be further elaborated in the following sections.

II. Terrorist content online and the role of online platforms

From a human rights perspective, users of online platforms are generally protected by the right to freedom of expression when posting content online. However, such content is also subject to a series of private moderation rules, community standards or terms of service (ToS) defined by the platforms themselves. These ToS establish obligations, limits, and conditions beyond applicable legal provisions, and they are generally enforced across all users, regardless of their location and the jurisdiction they are subject to. This is particularly relevant regarding terrorist content online, as, in parallel with national counterterrorism legislation and international standards, platforms have formulated their own specific policies in this sensitive area.

These platform-internal efforts are the result of both internal and external pressures. After the 2019 terrorist attacks against two mosques in Christchurch, New Zealand, the Prime Minister of New Zealand, together with the French President, convened other political leaders and representatives of the tech industry to adopt the “Christchurch Call to eliminate terrorist and violent extremist content online”. The Call is based on a commitment by governments and tech companies to eliminate terrorist and violent extremist content online and outlines “collective, voluntary commitments from Governments and online service providers intended to address the issue of terrorist and violent extremist content online and to prevent the abuse of the internet as occurred in and after the Christchurch attacks”. It is important to underscore the “voluntary” nature of companies’ pledge regarding the moderation of terrorist and violent extremist content (TVEC). The Call is therefore not mainly about enforcing and interpreting existing legislation (a task that belongs to State authorities) but about creating proper internal mechanisms aimed at effectively preventing the exploitation of social media platforms for purposes such as recruitment, dissemination of propaganda, communication, and mobilization.

This and similar measures also mean that platforms are committed to adopting measures vis-à-vis legal-but-harmful content. As the debate on the Online Harms proposal in the UK has shown, this is a particularly sensitive and controversial category. In this context, transparency and accountability need to be seen as fundamental preconditions for guaranteeing that freedom of expression and the reporting of matters of public interest (including, for example, human rights violations) are not curtailed. As a matter of fact, big platforms already have a long record of mistaken or harmful moderation decisions in this area in different parts of the world.1)

A recent OECD publication on transparency reporting on TVEC online shows that the degree of transparency and clarity in the top 50 content-sharing services’ TVEC-related policies and procedures has improved over recent years, although the nature of the information provided varies from one company to another. There is still a lack of uniformity in how TVEC and related concepts are defined, what is reported, and which measurements and metrics are used. As the report also notes, TVEC-related laws and regulations in force or under consideration are not necessarily consistent (including in their basic definitions), which also affects the way private actors approach these matters.

An important factor to be added to this equation is connected to the “voluntary” role that States can play in this area, according to the Call. As recently explained by Daphne Keller, around the world, law enforcement bodies known as Internet Referral Units (IRUs) ask online platforms to delete posts, videos, photos, and comments posted by their users. Such requests are not based on the existence of clear illegality or the need to enforce TVEC legal provisions, but on the alleged violation of platforms’ internal content rules. Platforms therefore comply by citing their own discretionary ToS as the basis for their actions. Legal intermediary liability provisions (for example in the European Union) establish that, in order to retain immunity, platforms must not have actual knowledge of illegal activity or information; having received a referral might therefore trigger liability if the piece of content in question later proves to be illegal as well. Another fundamental element in this context is that users are not informed of the government’s involvement, and such restrictions are perceived exclusively as part of the contractual private dynamics between users and companies. This also means that courts will have little or no role in reviewing such requests and the measures adopted.

Last but not least, the need to coordinate efforts around the moderation of this type of content led to the creation of a body that is still not widely known or discussed: the Global Internet Forum to Counter Terrorism (GIFCT). GIFCT’s aim is to prevent terrorists and violent extremists from exploiting digital platforms by working together and sharing technological and operational elements of each member’s individual efforts. GIFCT is currently governed by an Operating Board made up of members from the founding companies (Facebook, Microsoft, Twitter, and YouTube) and is advised by an Independent Advisory Committee made up of representatives from civil society, government, and intergovernmental organizations. The organization has made some improvements in terms of governance since its creation, in order to increase the presence of experts and civil society. It now also has an academic research arm and a partnership with Tech Against Terrorism. GIFCT’s most important pillar is the Hash-Sharing Database, which enables the sharing of “hashes” (or “digital fingerprints”) of known terrorist images and videos between GIFCT member companies based on a specific taxonomy. It is important to note, however, that the original scope of the hash-sharing database was limited to content related to organizations on the United Nations Security Council’s Consolidated Sanctions List. Following the terrorist attacks in Christchurch, in which the perpetrator livestreamed his attack, GIFCT expanded the taxonomy in order to enable hash-sharing of content from such attacks where violent propaganda is produced.
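To illustrate the basic logic of hash-based matching, the sketch below shows, in deliberately simplified form, how a service might compare uploads against a shared list of fingerprints. It is a minimal illustration only: it uses an exact cryptographic hash (SHA-256) for readability, whereas systems like the GIFCT database rely on perceptual hashing that tolerates re-encoding and minor edits, and none of the function or variable names here correspond to any real API.

```python
import hashlib

# Hypothetical stand-in for a shared database of fingerprints of known content.
# In practice such a database would hold perceptual hashes contributed by
# participating companies, not plain SHA-256 digests.
known_hashes: set[str] = {
    "d2a84f4b8b650937ec8f73cd8be2c74add5a911ba64df27458ed8229da804a26",  # placeholder entry
}

def fingerprint(file_bytes: bytes) -> str:
    """Compute a simple exact-match fingerprint for an uploaded file."""
    return hashlib.sha256(file_bytes).hexdigest()

def matches_known_content(file_bytes: bytes, database: set[str]) -> bool:
    """Return True if the upload's fingerprint appears in the shared database."""
    return fingerprint(file_bytes) in database

# Example: route a matching upload to human review instead of publishing it.
upload = b"...bytes of an uploaded image or video..."
if matches_known_content(upload, known_hashes):
    print("Match found: withhold publication and route to review")
else:
    print("No match: continue with ordinary moderation")
```

The point of such a design is that a match can be detected without companies exchanging the underlying images or videos themselves, which is also why the taxonomy and the quality of the original labelling decisions matter so much for the speech consequences described below.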

Thus, despite the changes mentioned above and its increased transparency, GIFCT remains subject to criticism, particularly on freedom of expression grounds.

Firstly, despite the collaborative and mainly private nature of the GIFCT, it is important to bear in mind that this initiative was created as a result of pressure from the governments of relevant Western countries, in a context where companies were being accused of “not doing enough” and clear “threats” of regulation were being made.

Secondly, a platform like GIFCT clearly raises serious concerns about possible extralegal censorship. As already mentioned, individual platforms do not lose their capacity to take their own decisions regarding this type of content. However, the existence of a database of this kind facilitates and automates very sensitive and context-dependent decisions. This tool may be particularly attractive for small platforms with fewer content moderation resources. In any case, it is clear that this powerful system has the capacity to swiftly curtail speech in the complete absence of even minimal private or public procedural and appeal safeguards. An additional outcome is that opaque and unaccountable criteria become the basis for a coordinated and unified private regulation of speech, thus creating a clear version of what Evelyn Douek has termed content cartels: monopolies over the shaping of public discourse based on arrangements between the different platforms engaging in content moderation. In such a context, the privatization of speech regulation becomes systematic and widespread, exponentially increasing accountability deficits and the harms caused by (unavoidable) mistakes.

Thirdly, the somewhat reduced presence of governments in the governing structure of the GIFCT still raises unanswered questions regarding the actual power and influence of these actors.

III. The Regulation on addressing the dissemination of terrorist content online

Apart from the already mentioned, unharmonized national legislation, the Regulation on addressing the dissemination of terrorist content online of 29 April 2021 constitutes a step forward within the EU. I have analyzed several aspects of this Regulation in more detail elsewhere. In the present context, only the issues related to the main topic of this article will be briefly presented.

The Regulation empowers national authorities to issue removal orders requiring hosting service providers to remove terrorist content, or to disable access to it, in all Member States. Here we are not talking about content that violates the ToS, but about information that contravenes national counterterrorism legislation. Such orders must be executed by providers in any event within one hour of receipt, thus forcing them to act extremely rapidly. Even though ex post appeal mechanisms are obviously contemplated, this procedure de facto deprives platforms and users of any chance to avoid the immediate implementation of the order. Article 5 contains a set of proactive measures that decisively change the role of intermediaries in this area, particularly their content monitoring responsibilities: providers are obliged to take specific measures to protect their services against the dissemination of terrorist content to the public, especially where they have already been exposed to this kind of content. State bodies will also have the power to review such measures and request their adaptation to the objectives of the Regulation. Although the Regulation does not allow authorities to impose an obligation to use automated tools, the specific nature of the obligations and responsibilities it contains may de facto lead to the proactive use of this type of moderation technique.

With this legislation, Europe seems to be moving towards a progressive delegation of true law enforcement powers to private companies, depriving Internet users (and hosting service providers themselves) of the legal and procedural safeguards that have applied to this kind of decision until now. Moreover, intermediary platforms may progressively be pushed into a position where cautiously overbroad removal decisions are taken as the only way to avoid the high and somewhat vaguely defined penalties that may be imposed on them.

IV. Final reflections

In an interesting position paper, Tech Against Terrorism has explained that terrorists use an ecosystem of predominantly small platforms to communicate and disseminate propaganda; that they rely primarily on simple web-based tools such as paste, archiving, and file-mirroring sites; and that they increasingly use their own websites, alternative decentralized and encrypted platforms, and infrastructure providers to host, aggregate, and disseminate propaganda. Therefore, although often presented as the frontline of counterterrorism measures, big online platforms seem to play a relatively limited role in the dissemination of TVEC.

What can be said is that social media platforms like Facebook have in the past made terrorist acts and imagery particularly visible and noticeable, perhaps to a wider extent than our societies could tolerate. The intensive and choreographed intervention in how TVEC is regulated and moderated, at both the public and private levels, has had the effect of pushing this content into darker and less controllable corners of the online world.

In any case, any approach that does not properly consider and tackle the deep roots of radicalization and terrorist behavior offline is doomed to fail. A critical and comprehensive understanding of the current, limited tools for tackling the dissemination of TVEC online will not only improve the necessary protection of the human right to freedom of expression. It will also be key to properly defining the measures and policies needed to effectively protect all of us against the dangers of terrorist groups and activities.

References

1 See Jillian C. York, Karen Gullo, “Offline/Online Project Highlights How the Oppression Marginalized Communities Face in the Real World Follows Them Online”, Electronic Frontier Foundation, 6 March 2018 (https://www.eff.org/deeplinks/2018/03/offlineonline-project-highlights-how-oppression-marginalized-communities-face-real); Billy Perrigo, “These Tech Companies Managed to Eradicate ISIS Content. But They’re Also Erasing Crucial Evidence of War Crimes”, Time, 11 April 2020 (https://time.com/5798001/facebook-youtube-algorithms-extremism/?xid=tcoshare); and “When Content Moderation Hurts”, Mozilla, 4 May 2020 (https://foundation.mozilla.org/en/blog/when-content-moderation-hurts/).

SUGGESTED CITATION  Barata, Joan: Terrorist content online and threats to freedom of expression: From legal restrictions to choreographed content moderation, VerfBlog, 2022/3/14, https://verfassungsblog.de/os4-content-threats/, DOI: 10.17176/20220314-121205-0.
