16 May 2022

Between Filters and Fundamental Rights

How the Court of Justice saved Article 17 in C-401/19 - Poland v. Parliament and Council

On 26 April 2022, the Court of Justice of the EU (CJEU or Court) delivered its much-awaited judgment in Case C-401/19 – Poland v Parliament and Council. The case concerns the validity of Article 17 of the Copyright in the Digital Single Market Directive (CDSMD) in light of fundamental rights. The judgment marks the climax of a turbulent journey in the area of copyright law, with potential implications for the future of platform regulation and content moderation in EU law.

The legislative process of the CDSMD was mired in controversy and heavily lobbied from all sides. A major point of debate focused on the so-called value gap or “upload filters” provision: then Article 13, now Article 17. The debate featured civil society protests, opposition by digital rights NGOs and Internet luminaries, and multiple expert statements by researchers. Painting with a broad brush, the main criticism of the provision was that it imposed a new liability regime for user-generated content sharing platforms (e.g. YouTube and Facebook) that effectively mandated the adoption of content recognition and filtering tools, leading to over-blocking and chilling users’ freedom of expression online.

The CDSMD, including Article 17, was approved in April 2019. Article 17 applies to online content sharing service providers (OCSSPs), a sub-set of hosting service providers. According to the new regime, OCSSPs that carry out acts of communication to the public when they give access to copyright-protected content uploaded by their users are directly liable for those uploads. Taking center stage in this judgment is the liability exemption mechanism, which contains a series of cumulative “best efforts” obligations to: (a) obtain an authorisation; (b) ensure unavailability of specific protected content; and (c) put in place notice-and-take-down and notice-and-stay-down mechanisms.

In May 2019, the Republic of Poland filed an action for annulment with the Court challenging the validity of the preventive measures required by Article 17(4)(b) and (c), in fine, in light of the right to freedom of expression and information in Article 11 of the Charter of Fundamental Rights of the EU (Charter). In the alternative, should these provisions not be severable from Article 17 as a whole, the Court was asked to annul Article 17 in its entirety. The hearing took place in November 2020 and the Advocate General (AG) Opinion was published in July 2021 (see comments here and here). In parallel, pursuant to Article 17(10) CDSMD, the Commission carried out Stakeholder Dialogues and published its Guidance on Article 17 on 4 June 2021, a mere working day before the transposition deadline (see comments here, here and here).

Finally, at the same time as Member States were busy transposing Article 17, the Digital Services Act (DSA) was negotiated and approved. As noted elsewhere (here and here), key provisions of the DSA will apply to the platforms covered by Article 17 CDSMD.

It is against this incredibly complex regulatory and judicial puzzle that the Court’s judgment in Case C-401/19 must be understood.

The Judgment

As a preliminary remark, it is worth mentioning that the potential limitation to fundamental rights resulting from Article 17(4) CDSMD is attributable to the EU legislature. Therefore, the core issue before the Court was whether the EU legislature, when designing Article 17, included enough safeguards to ensure its compliance with fundamental rights. This question is different from – and does not prejudge any future examination of – national transpositions of Article 17 or measures determined by OCSSPs to comply with that regime [para. 71]. Since the judgment has already been commented on extensively (e.g. here, here, here, here), I provide only a brief summary of the Court’s findings and arguments.

Regarding the substance of the dispute, the Court confirmed that Article 17 requires OCSSPs to carry out a prior review of uploaded content in cases where rights holders have provided “relevant and necessary information”, as required by paragraph (4)(b) [para. 53]. Depending “on the number of files uploaded and the type of protected subject matter in question, and within the limits set out in Article 17(5)”, such prior review of uploads will require the use of automatic recognition and filtering tools. Consequently, for platforms with the largest user base (e.g. YouTube and Facebook), Article 17(4) requires the use of so-called “upload filters”.

Because this prior review and filtering likely restricts an important means of disseminating online content, the specific liability regime in Article 17(4) leads to a limitation on the exercise of the right to freedom of expression and information of users of OCSSPs, as guaranteed by Article 11 of the Charter [paras. 55, 58, 82] and Article 10 ECHR. However, this limitation is justified under Article 52(1) Charter, as it is provided for by law and respects the essence of those rights and freedoms [paras. 63 et seq.]. Importantly, it is justified in relation to the legitimate objective of Article 17 CDSMD, i.e. ensuring a high level of protection for rights holders under Article 17(2) Charter [para. 69]. In this context, the Court emphasizes that the liability mechanism under scrutiny is more appropriate and effective to achieve this aim than an alternative version of that mechanism without paragraphs (b) and (c), in fine [para. 84].

The Court then advances six arguments as to why the limitation imposed by Article 17(4) is justified and does not disproportionately restrict the right to freedom of expression and information of users of OCSSPs [paras. 84 et seq.]. In essence, the Court carefully construes key aspects of paragraph (4) and their interplay with the safeguards and mitigation measures in paragraphs (5) to (9), establishing how the provision must be interpreted in a manner that respects the fundamental rights of users and strikes a fair balance between competing rights and interests. Many of the judgment’s most significant implications stem from these arguments, some of which are reviewed in the following section.

On this basis, the Court concludes that Article 17 includes appropriate safeguards to ensure the right to freedom of expression and information of users, and a fair balance between that right and the right to intellectual property [para. 98]. However, Member States must implement the provision in a fundamental rights compliant manner. Furthermore, the authorities and courts of the Member States must ensure “that they do not act on the basis of an interpretation of the provision which would be in conflict with those fundamental rights or with the other general principles of EU law, such as the principle of proportionality” [para. 99].

What filters may come

The first important implication of the judgment is that the Court recognizes that Article 17(7) contains an obligation of result. This means that Member States must ensure that the exceptions it covers (quotation, criticism, review, caricature, parody and pastiche) are respected despite the preventive measures in paragraph (4), which are qualified as mere “best efforts” obligations. The different nature of these obligations, underscored by the fundamental rights basis of the exceptions, indicates a normative hierarchy between the higher-level obligation in paragraph (7) and the lower-level obligation in paragraph (4). This point, already recognized by the AG and in the Commission’s Guidance, is reinforced by the Court’s recognition – in line with a 2019 academic recommendation – that the mandatory exceptions, coupled with the safeguards in paragraph (9), are “user rights”, not mere defences [paras. 86–88]. Although the legal ramifications of this (affirmative) “rights” qualification will have to be developed over time (on this topic, see here), the Court puts this controversy to rest and sets the stage for the subsequent interpretation of the liability regime in Article 17.

The second and related main implication of the judgment is that the Court rejects interpretations of Article 17 that rely solely on ex post complaint and redress mechanisms as a means of ensuring the application of user rights [paras. 85–95]. That was, for instance, the position defended by France, Spain and Portugal during the hearing before the Court. Instead, the judgment clarifies that Member States’ laws must first and foremost limit the possibility of deploying ex ante filtering measures; assuming that occurs, the additional application of ex post safeguards is an adequate means of addressing remaining over-blocking issues. This conclusion should be welcomed, especially in light of existing evidence that complaint and redress mechanisms are seldom used by users (see e.g. here). For instance, the recent YouTube Transparency Report shows that out of more than 700 million Content ID claims made in the first half of 2021, only 0.5% were disputed by the uploader.
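To give a sense of scale, a back-of-the-envelope calculation with those rounded figures (an approximation for illustration, not exact report data) shows just how few automated claims are ever contested:

```python
# Rough arithmetic using the rounded figures cited above
# (approximations from YouTube's Copyright Transparency Report, H1 2021).
content_id_claims = 700_000_000   # Content ID claims in the first half of 2021
dispute_rate = 0.005              # roughly 0.5% disputed by uploaders

disputed = content_id_claims * dispute_rate
print(f"Disputed claims:   ~{disputed:,.0f}")                       # ~3,500,000
print(f"Undisputed claims: ~{content_id_claims - disputed:,.0f}")   # ~696,500,000
```

Even a very small share of wrongful blocks hidden among the hundreds of millions of unchallenged claims would far exceed the volume that complaint and redress mechanisms ever actually process.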

But what, then, is the scope of permissible filtering under Article 17?

Here, it is important to contrast the judgment with the Commission’s Guidance and the AG Opinion. The Guidance states that automated filtering and blocking measures are “in principle” only admissible for two categories of content: (1) “manifestly infringing” and (2) “earmarked”. Outside these categories, uploaded content “should in principle go online and may be subject to an ex post human review when rightsholders oppose by sending a notice” (p. 20). Although the Guidance could do a better job of defining “manifestly infringing”, the concept nevertheless aligns with CJEU case law, especially Glawischnig-Piesczek’s reference to “identical or equivalent” content. Conversely, the novel and fuzzy category of “earmarked” content is meant to “be limited to cases of high risks of significant economic harm, which ought to be properly justified by rightholders”; it refers to content that “is particularly time sensitive”, such as “pre-released music or films or highlights of recent broadcasts of sports events” but not excluding “[o]ther types of content” (p. 22 and fn. 34).

This categorisation, especially the endorsement of filtering of “earmarked” content that is not manifestly infringing, has been subject to significant criticism (e.g. here and here). It was also explicitly rejected by AG Øe in his Opinion (para. 223; commentary here).

Against this background, the Court states unequivocally that only filtering/blocking systems that can distinguish lawful from unlawful content without the need for its “independent assessment” by OCSSPs are admissible; only then will these measures not lead to the imposition of a prohibited general monitoring obligation under Article 17(8) CDSMD [paras. 85–86, 90–92, applying inter alia by analogy C‑18/18 Glawischnig-Piesczek, paras. 41–46].

Furthermore, these filters must be able to ensure the exercise of user rights to upload content that consists of quotation, criticism, review, caricature, parody, or pastiche. On this point, it is noteworthy that the judgment [para. 85] explicitly endorses by reference the AG Opinion (paras. 164, 165, 191–193). In those passages, AG Øe delimits the scope of permissible filtering and blocking measures deployed by OCSSPs, stating inter alia that “they must not have the objective or the effect of preventing such legitimate uses”, that providers must “consider the collateral effect of the filtering measures they implement”, and that they must “take into account, ex ante, respect for users’ rights”.

But how can this be ensured? As the Guidance notes, existing content recognition tools used by larger platforms are based on fingerprinting (other technologies identified include hashing, watermarking, use of metadata and keyword search). Such tools are sometimes developed in-house (e.g., YouTube’s Content ID or Meta’s Rights Manager) and sometimes acquired or licensed from third parties (e.g., Audible Magic or Pex). However, as major platforms admitted during the stakeholder dialogues, their tools cannot assess the contextual uses required by the user rights in Article 17(7).
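To illustrate why that is so, the following is a deliberately simplified sketch of fingerprint-based matching. The function names, chunking and example data are my own illustrative assumptions, not the actual design of Content ID, Rights Manager, Audible Magic or Pex; real tools use robust perceptual fingerprints rather than exact hashes. The point is only that matching yields a similarity score against a reference catalogue and says nothing about the context of the use:

```python
import hashlib

def fingerprint(chunks):
    """Toy 'fingerprint': hash fixed-size chunks of a media stream.
    Real tools use perceptual features that survive re-encoding, cropping, etc."""
    return {hashlib.sha256(chunk).hexdigest() for chunk in chunks}

def match_score(upload_chunks, reference_index):
    """Share of the upload's chunks found in the rightsholder reference index."""
    fp = fingerprint(upload_chunks)
    return sum(1 for h in fp if h in reference_index) / max(len(fp), 1)

# Reference catalogue supplied by a rightsholder vs. a user upload reusing parts of it:
reference_index = fingerprint([b"chorus", b"verse", b"bridge"])
user_upload = [b"user commentary", b"chorus", b"verse"]

print(f"Match score: {match_score(user_upload, reference_index):.2f}")  # 0.67
# A high score signals reproduction of the work, but nothing in it tells the system
# whether the upload is a quotation, review, parody or other lawful use.
```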

In my view, when combining the Court’s statements with the previous case law and current market and technological reality, the logical conclusion is that only content that is “obviously” or “manifestly” infringing – or equivalent content – may be subject to ex ante filtering measures. Beyond those cases, for instance as regards purely “earmarked content”, it is difficult to see how the use of ex ante content filtering tools is consistent with the judgment’s requirements.
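In operational terms, that conclusion could translate into a decision rule along the following lines. This is a hedged sketch under my own assumptions: the thresholds, fields and signals are invented for illustration and are not prescribed by the judgment, the Guidance or any platform.

```python
from dataclasses import dataclass

@dataclass
class Upload:
    match_score: float      # similarity to a reference work (0.0 to 1.0)
    matched_portion: float  # share of the upload that reproduces the work
    context_signals: bool   # e.g. added commentary, short excerpt, altered audio/video

def ex_ante_decision(u: Upload, threshold: float = 0.95) -> str:
    # Block automatically only what is, on its face, a (near-)complete reproduction
    # with no indication of a transformative or quoting use ("manifestly infringing").
    if u.match_score >= threshold and u.matched_portion >= threshold and not u.context_signals:
        return "block"
    # Everything else benefits from a presumption of lawfulness: it stays online and
    # may be reviewed ex post following a rightsholder notice (cf. Article 17(4)(c), (9)).
    return "publish_and_route_to_ex_post_review"

print(ex_ante_decision(Upload(match_score=0.99, matched_portion=0.98, context_signals=False)))
print(ex_ante_decision(Upload(match_score=0.99, matched_portion=0.30, context_signals=True)))
```

Which thresholds are defensible, and which contextual signals current tools can reliably detect, are precisely the questions the judgment leaves to national implementation and practice.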

How precisely to define the categories of “obviously” or “manifestly” infringing content is up for discussion. The AG Opinion offers interpretative guidelines on how to limit the application of filters to manifestly infringing or “equivalent” content (paras. 196 ff.), with the consequence that all other uploads should benefit from a “presumption of lawfulness” and be subject to the ex ante and ex post safeguards embedded in Article 17, notably judicial review (para. 193). In particular, the AG emphasized the legislature’s main aim of avoiding over-blocking by securing a low rate of “false positives”. Still, what counts as an acceptable error rate for content filtering tools will also vary according to the technology and market paradigm at the moment of assessment.
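What counts as an acceptable error rate could, at least in principle, be monitored empirically. The sketch below assumes a hypothetical audit of a sample of automated blocking decisions (the counts are invented) and simply computes the over-blocking rate that the “false positives” discussion is concerned with:

```python
def over_blocking_rate(blocked_lawful: int, blocked_total: int) -> float:
    """Share of automated blocks that hit lawful uploads (false positives)."""
    return blocked_lawful / blocked_total if blocked_total else 0.0

# Hypothetical audited sample of automated blocks (illustrative numbers only):
sample_blocked_total = 10_000
sample_blocked_lawful = 240   # later found to be lawful (e.g. parody, quotation)

rate = over_blocking_rate(sample_blocked_lawful, sample_blocked_total)
print(f"False-positive rate in sample: {rate:.1%}")
# 2.4%; whether such a rate is acceptable is the normative question left open.
```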

It also remains to be seen whether this reasoning applies more broadly to other types of illegal content beyond copyright infringement. If it does, it might help to shape the scope of prohibited general monitoring obligations versus permissible “specific” monitoring, with relevance for future discussions on the DSA. In drawing these lines, caution should be exercised in applying the “equivalent” standard in Glawischnig-Piesczek, which likely requires a much stricter interpretation for the filtering of audiovisual content on OCSSPs than for textual defamatory posts on Facebook.

Conclusions and Outlook

In conclusion, although Article 17 survives, its validity is subject to the strict application of safeguards that ensure the right to freedom of expression and information of the users of OCSSPs. In designing the outline of these safeguards, the Court clarified the normative prevalence of freedom of expression-based user rights over preventive measures, rejected the application of ex ante filtering measures that cannot distinguish lawful from unlawful content without the need for its independent assessment by OCSSPs, and (somewhat) clarified the scope of permissible filtering in light of fundamental rights. In my view, the application of the judgment to the market reality and practices of content filtering tools means that ex ante measures of this type are only permissible for “obviously”, “manifestly” or “equivalent” infringing content.

As regards immediate consequences for Member States, the judgment calls into question the validity of (portions of) national implementations that rely solely or predominantly on ex post safeguards without also restricting the scope of permissible filtering (e.g. Spain and Italy), or that have incorporated mechanisms such as “earmarking” (e.g. Austria). A trickier question is what effect the judgment will have on the significant number of national laws that opted for verbatim transpositions of Article 17. On this matter, some commentators argue that such transpositions might be insufficient without also including specific ex ante safeguards as required by the Court. Conversely, others argue that a “minimalistic transposition” is the best way to ensure compliance with EU law and respect for the prescriptive nature of Article 17. Only time will tell.

However, since the Commission Guidance, the AG Opinion and the Court have all now recognized the special nature of the right of communication to the public and the specific liability regime in Article 17, including guidelines on how to interpret it, verbatim transpositions without further guidance leave national authorities and courts in a difficult position. This is particularly so since more detailed national transpositions – especially the German UrhDaG (e.g. containing safeguards in the form of “uses presumably authorized by law”) – have to some extent been vindicated and will push platforms to adjust their content moderation systems accordingly, arguably for the entire EU territory. As such, more sophisticated laws might in practice dictate what copyright content moderation under Article 17 looks like across the EU.

For the Commission, there will be a need to review and amend its Guidance. At the very least, it should eliminate references to the permissibility of filtering of “earmarked” content, while providing a clearer definition of “manifestly infringing” content, better aligned with the AG Opinion. In a context where Member States and OCSSPs are struggling with national implementations, this could be a golden opportunity for the Commission to contribute to real harmonization and limit the risk of fragmentation.

Finally, as regards the future of intermediary liability and the DSA, this judgment might play a role in further defining the scope of prohibited general monitoring obligations versus permissible “specific” monitoring. To what extent that is the case, however, remains to be seen. What seems clear is that this judgment only closes the chapter on the annulment of Article 17. Next up: preliminary references!

This research is part of the following projects: the reCreating Europe project, which has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 870626; the author’s VENI Project “Responsible Algorithms: How to Safeguard Freedom of Expression Online” funded by the Dutch Research Council (grant number: VI.Veni.201R.036).

SUGGESTED CITATION  Quintais, João Pedro: Between Filters and Fundamental Rights: How the Court of Justice saved Article 17 in C-401/19 - Poland v. Parliament and Council, VerfBlog, 2022/5/16, https://verfassungsblog.de/filters-poland/, DOI: 10.17176/20220516-182406-0.
