14 February 2024
Absolute Truths and Absolutist Control
Last week, the Bombay High Court delivered a split verdict in Kunal Kamra v. Union of India on the constitutional validity of the Information Technology Rules, 2023. The rules install an institutional regime for determining which content relating to the Central Government is “fake, false or misleading” and for mandating its takedown by social media intermediaries. This regime was challenged on three main grounds: first, that it violates citizens’ free expression, since “fake, false, or misleading” speech is constitutionally protected; second, that privileging state-related information, so that it enters public discourse in a single, authoritative formulation, is an illegitimate and disproportionate measure; and third, that it violates natural justice by enabling the state to determine truth and falsity concerning itself. Continue reading >>
08 January 2024
Putting X’s Community Notes to the Test
All of the biggest social media platforms have a problem with disinformation. On X, formerly Twitter, in particular, a flood of false information was observed following the terrorist attack by Hamas on 7 October 2023 and the start of the war in Ukraine. The EU Commission therefore recently initiated formal proceedings against X under Art. 66 para. 1 of the Digital Services Act (DSA). One subject of the investigation is whether the platform is taking sufficient action against disinformation. Despite these stakes, X takes an approach different from that of all other platforms: as can be inferred from the X Transparency Report dated 3 November 2023, posted information is not subject to content moderation but is addressed solely through a new tool: Community Notes. Continue reading >>
08 January 2024
Community Notes Put to the Test
The biggest social media platforms have a problem with disinformation. On X, formerly Twitter, in particular, a flood of false information could be observed after the terrorist attack by Hamas on 7 October 2023 and the start of the war in Ukraine. The EU Commission therefore recently announced that it has initiated formal proceedings against X under Art. 66 para. 1 of the Digital Services Act (DSA). One subject of the investigation is whether the platform is taking sufficient action against this problem. X is putting everything on one card: as can be inferred from the X Transparency Report dated 3 November 2023, disinformation is not subject to so-called content moderation but is to be countered solely through the use of a new tool. This means that user content on X is checked for false information neither by algorithms nor by persons commissioned to do so by the operating company. Continue reading >>
18 October 2023
A Step Forward in Fighting Online Antisemitism
Online antisemitism is on the rise. Especially since the recent terror attack by Hamas in Southern Israel, platforms like X are (mis)used to propel antisemitism. Against this backdrop, this blog post analyses the legal framework for combatting online antisemitism in the EU and the regulatory approaches taken so far. It addresses the new Digital Services Act (DSA), highlighting some of the provisions that might become particularly important in the fight against antisemitism. The DSA improves protection against online hate speech in general and antisemitism in particular by introducing procedural and transparency obligations. However, it does not provide any substantive standards against which the illegality of such manifestations can be assessed. In order to effectively reduce online antisemitism in Europe, we need to think further, as outlined in the following blog post. Continue reading >>
23 September 2023
Be Careful What You Wish For
The European Court of Human Rights has issued some troubling statements on how it imagines content moderation. In May, the Court stated in Sanchez that “there can be little doubt that a minimum degree of subsequent moderation or automatic filtering would be desirable in order to identify clearly unlawful comments as quickly as possible”. Recently, it reiterated this position. This shows not only a surprising lack of awareness of the controversial debates surrounding the use of filter systems (in fact, there is quite a lot of doubt), but also an uncritical and alarming approach towards AI-based decision-making in complex human matters. Continue reading >>
31 July 2023
Why Misinformation, Disinformation and Hate Speech Should Not Be Treated Alike
How to deal with misinformation, disinformation and hate speech online is a highly topical issue. A UN policy brief presented in June 2023 aims to combat precisely these phenomena. However, it does not seem expedient to treat misinformation, disinformation and hate speech similarly or identically, as the UN draft currently envisages. This blog post therefore argues that misinformation, that is, unintentionally inaccurate statements, must at least be treated differently from deliberately false or injurious statements online. Continue reading >>
08 June 2023
YouTube Updates its Policy on Election Misinformation
Last Friday, YouTube announced that it ‘will stop removing content that advances false claims that widespread fraud, errors, or glitches occurred in the 2020 and other past US Presidential elections’. This development has upsides and downsides, a few of which are worth sketching out, and all of which further accentuate why the US constitutional framework for online platform regulation requires updating. Such an update must move beyond a governance approach that over-relies on good-faith self-regulation by the companies providing these intermediaries. Continue reading >>
10 May 2023
Taiwan’s Participatory Plans for Platform Governance
Platform regulation is not limited to Europe or the United States. Although much debate currently focuses on the latest news from Brussels, California, or Washington, important regulatory ideas are emerging elsewhere. One particularly consequential idea can be found in Taiwan. Simply put, Taiwan wants to democratize platform governance. Concretely, it sought to establish a dedicated body that would facilitate far-reaching civil society participation and enable ongoing citizen involvement in platform governance. This article explains what discourses about platform governance can learn from Taiwan and how a vibrant democratic discourse shapes platform governance beyond traditional regulatory models. Continue reading >>
27 February 2023
Action Recommended
The DSA will have a say in what measures social media platforms will have to implement with regard to the recommendation engines they deploy to curate people’s feeds and timelines. It is a departure from the previous narrow view of content moderation, and pays closer attention to risks stemming from the amplification of harmful content and the underlying design choices. But it is too early to celebrate. Continue reading >>
20 February 2023
Deleting for Diversity
With considerable effort, the Bundesverwaltungsgericht arrives at a general obligation for public service broadcasters to moderate content on their social media presences and to delete user comments that lack a sufficient connection to their programming. The 7th Senate passes up the chance to deliver a forward-looking judgment on the mandate of public service broadcasting in the platformized public sphere, even though the VG Leipzig, which heard the case at first instance, had pointed out a way to get there. Continue reading >>