05 October 2022

Filtering fundamental rights

DSM, DSA and algorithms in digital architectures

On platforms, the protection of fundamental rights is increasingly provided by algorithms. According to Art. 17 of the Directive on copyright and related rights in the Digital Single Market (DSM Directive), service providers must filter uploads for copyrighted material and, if necessary, block them even before they are published. Striking a fair balance between the freedom of communication and intellectual property, both enshrined in the Charter of Fundamental Rights of the European Union (CFR) and the European Convention on Human Rights (ECHR), is thus entrusted to algorithmic filtering systems. In a long-awaited ruling from April this year (Judgment of 26 April 2022), the CJEU gave its blessing to their use. Communication therefore needs to pass an automatic gatekeeper. With the Digital Services Act (DSA) at the door, copyright protection was probably only the first step with regard to automated decision-making. The CJEU’s ruling is therefore a useful starting point for assessing the legal ramifications of algorithmic filtering also in the context of the DSA, which was approved by the Council just yesterday.

The DSA, conceived by the Union legislator as the new constitution of the Internet, presupposes the use of algorithmic filtering. Although it does not directly oblige service providers to filter, numerous provisions on content moderation refer to the use of automatic tools (see Peukert). The current as well as the envisioned private policing of large platforms therefore relies on many little helpers, faster and cheaper than the employees at Facebook and Twitter, and even more so than the lawyers and judges in the member states. Human pre-examination has become impossible due to the sheer amount of user-generated content. Filters are an effective moderation tool and far cheaper than human review. But being fast is easier than being right: the usual method of applying European fundamental rights hinges on the proportionality test, which, at least at the current technological level, escapes automation. Fundamental rights cannot be filtered.

Upload filters: copyright protection only the first step?

With Art. 17 of the DSM Directive, a new liability regime for “online content-sharing service providers” (Art. 2 No. 6 DSM Directive) has found its way into copyright law: instead of removing unauthorized uses after they become aware of them, service providers are now obliged to prevent them. Without changing their business model or incurring prohibitive costs, they can fulfill this obligation only by using preventive IT tools for automatic content recognition and blocking, so-called “upload filters”. The action for annulment brought by the Republic of Poland was directed against this, based on a single plea: the fundamental right to freedom of expression and information (Article 11 CFR). CJEU Advocate General Saugmandsgaard Øe found that there was an interference with Art. 11 CFR, but held that Art. 17 DSM Directive itself contained protective mechanisms to safeguard the essence of the freedom of communication as well as the principle of proportionality (Opinion of Advocate General Saugmandsgaard Øe of 15 July 2021). The CJEU followed this argumentation in its decision (Judgment of 26 April 2022): the Union legislature had provided the special liability rules of Art. 17 DSM Directive with safeguards that ensure an appropriate balance between the right of users of the services to freedom of expression and information and the right to intellectual property protected by Art. 17(2) CFR. The court stressed the responsibility of the Member States, when implementing Art. 17 DSM Directive, to rely on an interpretation of the provision that is in conformity with fundamental rights. The actual implementations and their legal evaluation are still in flux. The German Act on the Copyright Liability of Online Content Sharing Service Providers of 31 May 2021 (UrhDaG), for example, is already contested before the German Federal Constitutional Court and will foreseeably also be brought before the CJEU.

Copyright law has known technical measures for its protection in the digital environment since the 1990s. Today, the conflict is not restricted to the use of filters. One only has to think of the CJEU case law on framing (VG Bild-Kunst/Stiftung Preußischer Kulturbesitz). The CJEU has confirmed that framing of copyright-protected content only constitutes an act of making available to the public within the meaning of Art. 3(1) of the Directive on the harmonisation of certain aspects of copyright and related rights in the information society (InfoSoc Directive) if it is carried out by circumventing technical protection measures. However, such cases concern “all-or-nothing” protective measures: they block the embedding of a copyright-protected work in a third-party website, but do not undertake an independent balancing decision between property rights and the freedom of communication. The fundamental rights situation on platforms vis-à-vis the use of filters is different: it can no longer be understood as “all-or-nothing” protection. It presupposes an automatic balancing of fundamental rights in difficult cases – for example, in order to determine whether the exception for parody, caricature and pastiche applies and permits the use of a third party’s work, a balancing of fundamental rights is required (CJEU, Deckmyn/Vandersteen). According to CJEU case law, when interpreting other exceptions and limitations relevant to platforms, such as the exception for quotations, conflicting fundamental rights must also be considered on a case-by-case basis (CJEU, Spiegel Online GmbH/Volker Beck). The DSM Directive, as interpreted by the CJEU, thus takes digital law enforcement to a new level: it de facto obliges fundamental rights to be balanced by algorithms.

Whether this required use of algorithmic filtering systems can realize a proportionate protection of intellectual property (see Lennartz/Möllers GRUR 2021, 1109, criticizing the German implementation as inadequate) or will lead to censorship under the guise of copyright protection (on this, see Kraetzig, forthcoming) shall be left open here. In any case, the CJEU ruling on Art. 17 of the DSM Directive could be the writing on the wall: automated protection mechanisms will leave the field of property protection and will increasingly determine fundamental rights conflicts in the digital environment in the future. The automatic processing of the conflict between the freedom of communication and intellectual property may therefore serve as a harbinger of what is to come for all online content.

Copyright law: filters yes, balancing no

It is no coincidence that copyright is the first area of law in which the use of filters is mandated. Compared to other areas of law, copyright makes it “easy” for algorithms: property has sharper edges than other fundamental rights because it is further shaped by legislation and doctrine. European copyright law does not provide for any consideration of fundamental rights beyond a narrow catalog of exceptions and limitations (see CJEU, Funke Medien NRW GmbH/Bundesrepublik Deutschland; CJEU, Spiegel Online GmbH/Volker Beck). Therefore, the algorithms “only” have to compare the original work and its possible exploitation and check whether an exception applies. The right holders provide the platforms with the necessary information for the comparison. Yet algorithms lack even the ability to differentiate between use cases – communication and consumption – that must be treated differently (see Lennartz, Digitale Filter zwischen Konsum und Kommunikation, EuGRZ 2022, 482, 487, forthcoming). In any case, they will not be able to determine whether one of the copyright exceptions and limitations applies without a certain margin of error – the danger of overblocking is real (see Grosse Ruse-Khan, 2020; Holznagel, 2021).
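
What such a comparison amounts to can be sketched in a few lines of code. The following Python sketch is purely illustrative – the names (fingerprint, reference_fingerprints, check_upload) are hypothetical and it stands in for no platform’s actual system – but it shows the binary nature of the matching: an upload either contains reference material supplied by the rightsholders or it does not. Whether a match is a parody, pastiche or quotation is invisible to the comparison itself.

```python
import hashlib

def fingerprint(chunk: bytes) -> str:
    """Reduce a chunk of content to a comparable fingerprint (here simply a hash)."""
    return hashlib.sha256(chunk).hexdigest()

# Hypothetical reference database supplied by rightsholders.
reference_fingerprints = {
    fingerprint(b"frame-data-of-protected-work"): "Work A (rightsholder X)",
}

def check_upload(chunks: list[bytes]) -> list[str]:
    """Return the protected works matched by an upload.

    The decision is purely binary: match or no match. Whether the use falls
    under an exception such as parody or quotation cannot be read off the
    comparison itself.
    """
    return [
        reference_fingerprints[fp]
        for fp in map(fingerprint, chunks)
        if fp in reference_fingerprints
    ]

print(check_upload([b"frame-data-of-protected-work", b"original-user-content"]))
# ['Work A (rightsholder X)'] -> flagged or blocked before publication
```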

Continental European copyright law at least confines the balancing of fundamental rights to legally codified, narrowly defined cases of conflict (Art. 5(3) InfoSoc Directive). At the same time, the use of filters to protect copyright seems less problematic than the direct filtering of speech, for example of political statements. After all, filtering for protected content concerns the use of another person’s words in one’s own speech. It does not make it impossible to express an opinion; it only makes it more difficult to substantiate or prove it with specific content.

Algorithmic filter systems are thus easier to implement in European copyright law than in common law jurisdictions with their fair-use doctrine, in which a case-by-case balancing of the conflicting interests is carried out. The same applies to other areas of law in which proportionality has a firm place (Tischbirek, 2017) – especially in cases regarding personality rights. In the German legal system, for example, whether the right of personality has been infringed is purely a balancing decision: since the right has no clear boundaries, its violation can only be determined on the basis of a balancing of interests. Therefore, the DSA is likely to push algorithms to their limits: it establishes horizontal duties of care for any “illegal content” regulated in Union law or the 27 member states’ legal systems (Art. 2(g) DSA) – which obviously includes the infringement of personality rights. If algorithms cannot guarantee a fair balance between conflicting fundamental rights in the context of copyright law, with its dogmatic approach that refuses to balance interests beyond narrowly defined exceptions, how should they be able to do so in areas of law that are open to balancing, such as those addressed by the DSA (see Peukert, 2021)?

Law enforcement through algorithms has limits

Different types of law can be schematized and implemented by algorithms to different degrees (Eidenmüller/Wagner, 2021). The more the legal programme can be written as “if …, then …” statements (see Wagner, Algorithmisierte Rechtsdurchsetzung, AcP 222 (2022), 56, 97), in the sense of subsuming facts under defined conditions, the easier the implementation. The algorithm is fed with variables: if certain conditions are met, then A has a claim against B. Algorithms have made enormous progress in recognizing patterns in text, images and behavior through machine learning and the use of Graphics Processing Units (GPUs). However, they cannot give these patterns any reason or meaning. Without empathy and a body, they are unable to take a human perspective (Möllers, Herr, Knecht und Maschine in der künftigen Rechtsphilosophie, in: Khurana/Quadflieg/Rebentisch/Setton/Raimondi (Hrsg.), Negativität: Kunst – Recht – Politik, 2018, S. 184). In addition, machine learning may lead to better results, but it also makes it more difficult to understand what an algorithm is really doing (Wischmeyer, 2020, p. 75). Algorithms depend more than humans on clear rules, both for their functioning and for human trust in them. Even if their efficiency continues to increase through technical progress, the problem will not be solved. Ultimately, the fundamental rights test is difficult because of the unusually high degree of decisional freedom it entails. And no matter how intelligent machines become: AI-based control of communication looks more like a cyberpunk nightmare than a tech utopia.
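
The “if …, then …” structure referred to here can be illustrated with a minimal, hypothetical sketch: if all legally defined conditions are met, then A has a claim against B. Nothing in this structure requires anything to be weighed.

```python
from dataclasses import dataclass

@dataclass
class Facts:
    # Hypothetical, schematic conditions of a claim of A against B.
    contract_concluded: bool
    performance_due: bool
    performance_rendered: bool

def a_has_claim_against_b(facts: Facts) -> bool:
    """If all conditions are met, then A has a claim against B.

    The legal programme is exhausted by the conjunction of its conditions –
    which is precisely what makes it tractable for an algorithm.
    """
    return (
        facts.contract_concluded
        and facts.performance_due
        and not facts.performance_rendered
    )

print(a_has_claim_against_b(Facts(True, True, False)))  # True: claim exists
```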

Left to us: algorithms cannot do proportionality

Every area of law knows simple and difficult cases, clear and vague norms. But there are also structural differences between legal fields. Every student knows that the various legal sub-disciplines follow radically different dogmatics: criminal law has its tightly knit concepts, constitutional law its wide areas of unspecified powers (see Lennartz, 2017). The vaguer the method, the more difficult it is to execute, for humans as well as for automatic systems. The most prominent example is the German and European dogmatic approach to fundamental rights, which depends less on conceptual distinctions than on ad hoc balancing. The use of automatic systems therefore faces specific problems when implementing claims that have a fundamental rights dimension and/or interfere with the fundamental rights of others (see Kraetzig/Lennartz, Grundrechtsschutz durch Algorithmus?, NJW 2022, 254).

The balancing of fundamental rights depends on a proportionality test (Urbina, A Critique of Proportionality and Balancing, 2017). This test, however, with its case-by-case balancing, cannot be written as “if …, then …” statements. In this respect, it is not possible to formulate instructions that assign a defined solution to a given situation. At present, algorithms are therefore not able to carry out a proportionality test. While they may be able to determine a legitimate aim and the suitability of an interference with the freedoms of communication, they are unable to examine a measure’s necessity, which requires a comparison with alternative measures. In any case, they must fail with regard to appropriateness as the core of the proportionality test – it is characterized by a multipolarity of rights as well as of the legal subjects affected. The proportionality test lacks a clear structure that could be written into an algorithm. Hence, the proportionality test developed for the analogue world cannot be translated into the language of algorithms: it resists translation into “if …, then …” statements.
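
The asymmetry between the stages of the test can be made visible in a schematic sketch (all names and conditions are hypothetical): the first stages can still be phrased as discrete checks, whereas the appropriateness stage is left here as an unimplementable stub, because there is no defined decision procedure with which to fill it.

```python
from dataclasses import dataclass

@dataclass
class Measure:
    # Hypothetical attributes of an interference with communication rights.
    aim: str
    furthers_aim: bool
    milder_alternatives: list

def proportionality(measure: Measure) -> bool:
    """Schematic proportionality test: only the first stages reduce to checks."""
    if measure.aim not in {"copyright protection", "protection of minors"}:
        return False                   # legitimate aim: determinable in principle
    if not measure.furthers_aim:
        return False                   # suitability: determinable in principle
    if measure.milder_alternatives:
        return False                   # necessity: already requires comparing alternatives
    return is_appropriate(measure)     # appropriateness: no rule left to encode

def is_appropriate(measure: Measure) -> bool:
    """Balancing stricto sensu: weighing the multipolar rights of everyone
    affected. There is no "if ..., then ..." instruction that assigns a
    defined outcome to a given situation."""
    raise NotImplementedError("case-by-case balancing resists encoding")
```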

Fundamental rights in digital architectures

If EU secondary law requires private actors to strike a balance between conflicting fundamental rights, it must provide the member states with an architecture in which a balancing of fundamental rights can be realized in their national legal systems. The DSM Directive fails in this respect: on the one hand, it demands the use of filters; on the other hand, it requires the realization of a fair balance between the protection of copyright and the conflicting fundamental communication rights of users. It thereby puts the member states in a dilemma under secondary law. Algorithms cannot (at least so far) perform a fundamental rights test with its principle of proportionality. Algorithmic checks will result in wrong decisions. One may or may not consider this necessary, given how explosively digital content can proliferate and how it cannot be retrieved once published. In any case, what the CJEU has approved for copyright law should not set an example for the protection of fundamental rights in the digital sphere.

Fundamental rights-friendly structures can be achieved without filters. The news feed, which is regularly no longer purely chronological but algorithmically structured, could be organized in a way that is not solely geared towards maximizing user engagement in order to drive economic returns, but in a way that reduces virality and escalation. In addition, fake accounts and bots should be identified as such in the interest of transparency; in specific constellations they should be prohibited (see Eifert, in: Hermstrüwer/Lüdemann (eds.), Der Schutz der Meinungsbildung im digitalen Zeitalter, 189, 195). Beyond such a regime of obligations for fundamental rights-friendly architectures, fast human complaint procedures must be expanded (see, in this direction with regard to fake news, Buchheim/Abiri, Tel. & Tech. L. Rev.). Without legal requirements, platforms will not implement such fundamental rights-friendly architectures, especially since the use of filtering systems is a much more cost-effective alternative (see Reda, 2022).
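
As a rough illustration of such an architecture-level intervention – one that filters nothing – consider the following sketch. All field names and weights (relevance_to_user, predicted_virality, the factors 2.0 and 0.5) are hypothetical; the point is only that a feed can discount predicted virality instead of rewarding it.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    relevance_to_user: float   # 0..1, e.g. derived from subscriptions and topics
    predicted_virality: float  # 0..1, e.g. expected size of the reshare cascade

def engagement_score(post: Post) -> float:
    """Engagement-maximizing ranking: virality is rewarded."""
    return post.relevance_to_user + 2.0 * post.predicted_virality

def dampened_score(post: Post) -> float:
    """Alternative ranking: virality is discounted rather than rewarded,
    slowing escalation without blocking any individual piece of content."""
    return post.relevance_to_user - 0.5 * post.predicted_virality

posts = [
    Post("calm-analysis", relevance_to_user=0.8, predicted_virality=0.1),
    Post("outrage-bait", relevance_to_user=0.4, predicted_virality=0.9),
]
print([p.post_id for p in sorted(posts, key=engagement_score, reverse=True)])  # outrage-bait first
print([p.post_id for p in sorted(posts, key=dampened_score, reverse=True)])    # calm-analysis first
```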

Clean Speech?

Regardless of whether or when automatic filters can perform fundamental rights balancing: just as in the analog world, complete law enforcement is not desirable. Striving for a sterilized communication environment via filtering ignores the practice of democracy. Discussions about “civil disobedience” have shown, despite all the controversies, that although deliberate rule-breaking may deserve sanctions, its significance cannot be reduced to that. A liberal political culture will have to live with rule-breaking (on norm violation and sanction in general, see Möllers, The Possibility of Norms); suppressing it technically is hardly a gain in freedom. Although infringements should be properly sanctioned, they should not be legally suppressed ex ante.

Much of the concern in politics and academia about what is perceived as questionable or even reprehensible behavior in social networks, and its imagined consequences for the functioning of democracy, may in the end be due to the newfound transparency of these networks. Nowadays, it is easier to listen to the crowd – and to be shocked. However, this does not tell us whether the platform, as a new forum of communication, is the reason for offending behavior, or whether it merely makes visible what was hidden before. Behavior on platforms will always be an expression of the culture in which the communicating citizens live. In the end, both democracy and communication probably work just fine without Big Tech as big brother and the European legislature as a nervous governess.


SUGGESTED CITATION  Lennartz, Jannis; Kraetzig, Viktoria: Filtering fundamental rights: DSM, DSA and algorithms in digital architectures, VerfBlog, 2022/10/05, https://verfassungsblog.de/filtering-fundamental-rights/, DOI: 10.17176/20221005-230853-0.

One Comment

  1. Paul Friedl Wed 5 Oct 2022 at 17:31

    Three points:
    1) “Big data”-type, Machine Learning-driven content moderation systems absolutely do not rely on “if…, then…” logics. It is thus quite seriously mistaken to write that “[The proportionality test], with its case-by-case balancing, cannot be written in „if, …then“ variables”, that “at present, algorithms are therefore not able to carry out a proportionality test” or that “[algorithms] in any case, must fail with regard to appropriateness as the core of the proportionality test [as] the proportionality test lacks a clear structure that could be written into an algorithm [and] a translation into „if,…then“ variables.”

    2) Wouldn’t a consideration whether we’re “preempting deviance”, because we’re allowing for “complete enforcement” need to take into account that we don’t know what “total enforcement” looks like? Like, how are we to know whether what say Facebook’s algorithms are doing is indeed a “complete enforcement” of its community standards. What is more: there are and always will be (interesting) practices of deviance even under “algorithmic enforcement”. Look at how people intentionally misspell words to smuggle them past filters etc. etc..

    3) Isn’t it simply irrelevant whether “the platform as a new forum of communication is the reason for an offending behavior” or not? As a matter of fact, many people now have much bigger audiences and, again as a matter of fact, this introduces new forms of communication and new problems (people can offend, manipulate, bully or inform, discuss with, support individuals with whom they simply couldn’t interact with before).
