02 March 2021

The Digital Services Act and the Reproduction of Old Confusions

Obligations, Liabilities and Safeguards in Content Moderation

In mid-December 2020, amid great expectations, EU Commissioners Margrethe Vestager and Thierry Breton presented two large legislative proposals aimed at defining a new regulatory regime applicable to internet intermediaries. One of them is the Digital Services Act (DSA), in the form of a Regulation, which will establish a series of fundamental rules and principles regarding, essentially, the way intermediaries participate in the publication and distribution of online content. It focuses especially (but not only) on content hosting and sharing platforms, such as Facebook, Twitter, YouTube or TikTok.

While the DSA is intended to refit the 20-year-old E-Commerce Directive, it reproduces a central confusion of its predecessor: a lack of knowledge or awareness of illegality remains a precondition for enjoying the liability exemptions, yet the DSA encourages platforms’ proactive investigation of hosted content, which might trigger precisely that knowledge or awareness. The inclusion of a Section 230-like ‘good Samaritan clause’, meant to facilitate proactive, own-initiative investigations by platforms, complicates matters further and will likely incentivize the over-removal of hosted content.

Intermediary Liability: From the E-Commerce Directive to the DSA

The E-Commerce Directive contains, among other relevant aspects, the general intermediary liability regime applicable to hosting services at the EU level. In order to retain immunity, platforms must neither have actual knowledge of illegal activity or information nor be aware of facts or circumstances from which an illegal activity or information is apparent. Upon obtaining such knowledge or awareness, platforms must act expeditiously to remove or to disable access to the illegal content (Article 14 E-Commerce Directive).

The case law of the EU Court of Justice (ECJ) has provided criteria to determine when such knowledge and/or awareness exists. As established in L’Oréal, the rule set out in Article 14(1)(a) of the E-Commerce Directive “must be interpreted as covering every situation in which the provider concerned becomes aware, in one way or another, of such facts or circumstances”. The Court also limits liability to cases where the intermediary “plays an active role of such a kind as to give it knowledge of, or control” over the hosted content. In other words, intermediaries enjoy the liability immunities inasmuch as they perform a role of a mere technical, automatic and passive nature. Despite the ECJ’s efforts in a very limited number of cases, it remains challenging to establish criteria according to which intermediaries’ interventions can clearly be classified as active or passive. Plenty of interventions or activities, particularly regarding content moderation, remain in a grey area.

In any case, voluntary proactive measures to monitor, detect and remove illegal content do not necessarily lead online platforms to lose their liability exemptions. Certain types of voluntary measures are, in fact, promoted by several EU documents and policies. Particularly relevant are the 2016 ‘Code of conduct on countering illegal hate speech online’ – launched with and signed by Facebook, Microsoft, Twitter and YouTube –, the 2017 Commission Communication on tackling illegal content online, and the 2018 Recommendation on measures to effectively tackle illegal content online. In addition, some recently adopted legislation incorporates new obligations regarding the adoption of proactive measures – foremost the 2019 Copyright Directive.

All this being said, as a general principle, intermediaries in the EU can become liable when the implementation of voluntary measures gives them actual or constructive knowledge of a particular illegality that they nevertheless overlook, thereby stripping them of immunity.

Further, making the protection conditional on a blurry definition of “passivity” incentivizes a hands-off approach that may result both in an increased quantity of online illegalities and in the failure to satisfy users who prefer not to be exposed to objectionable or irrelevant material. The E-Commerce Directive thus does not incorporate a proper good Samaritan clause, as I show in previous writing. The principle finds one of its earliest and most-acknowledged embodiments in Section 230(c) of the Communications Decency Act, which forms part of the United States Telecommunications Act of 1996. When platforms are granted full immunity for (almost) all the content they handle, as under Section 230(c), the law fosters the adoption and discretionary enforcement of private policies regarding illegal and other types of offensive or harmful content, effectively giving those policies precedence over the law.

Liability regimes under the DSA

Will the DSA significantly alter this legal landscape? Has the DSA established a good Samaritan clause that would protect any measure taken in good faith by platforms to deal with illegal and other forms of objectionable content?

The DSA does not repeal the basic provisions established under the E-Commerce Directive. In fact, it contains identical provisions regarding hosting service providers in its Article 5, thus keeping the core of the current conditional intermediary liability regime untouched. This being said, the DSA incorporates new regulatory “layers”, which may lead to even more challenging interpretation issues that will probably need to be addressed by the Court in Luxembourg as well.

Apart from the provisions included in Article 5, Article 14(3) establishes that notices submitted through notice and action mechanisms and fulfilling certain criteria give rise to actual knowledge or awareness. In addition, Article 6 clarifies that intermediaries may not lose their liability protections “solely because they carry out voluntary own initiative investigations or other activities aimed at detecting, identifying and removing, or disabling of access to, illegal content, or take the necessary measures to comply with the requirements of Union law, including those set out in this Regulation”.

Important clues regarding liability provisions under the DSA are also to be found in Recital 22, according to which providers can acquire actual knowledge and awareness through “own-initiative investigations or notices submitted to it by individuals or entities in accordance with this Regulation in so far as those notices are sufficiently precise and adequately substantiated”. This line has intrigued scholars: does it imply, for example, that investigations originating in unsubstantiated claims are not a sufficient basis for knowledge? Secondly, Recital 25 reinforces and elaborates that the mere fact that providers undertake investigative activities “does not lead to the unavailability of the exemptions from liability set out in this Regulation, provided those activities are carried out in good faith and in a diligent manner”. In order to be shielded from liability, investigations must aim at “detecting, identifying and acting against illegal content” or at complying with “the requirements of Union law, including those set out in this Regulation as regards the implementation of their terms and conditions”. Recital 25 also acknowledges the “residual” applicability of the dichotomy between active and passive/neutral roles, in particular in cases where the mentioned exemptions do not apply.

Therefore, without prejudice to the applicability of the general regime – Article 14 E-Commerce Directive, now Article 5 DSA-Draft – the DSA has introduced additional provisions that appear to encourage, to a certain extent, the adoption and implementation of proactive content moderation policies by platforms, which may also significantly affect the delineation of liability. Prima facie protection is granted to certain voluntary initiatives taken by platforms, rendering the general regime based on the combination of knowledge/awareness and a lack of neutrality/passivity inapplicable in a relevant number of cases. However, this triggers further interpretative problems.

Interpretative Problems

Own-initiative investigations remain protected by the immunity shields when they “solely” aim at two main objectives: (i) dealing with illegal content or (ii) complying with other obligations intermediaries may have according to the DSA itself and other relevant EU legislation. What are these obligations, which stem from the law but do not necessarily concern illegal content?

Particular attention should be paid to the obligation for very large platforms (those with at least 45 million users in the EU) to “put in place reasonable, proportionate and effective mitigation measures, tailored to […] specific systemic risks”, per Article 27 DSA-Draft in connection with Article 26. Systemic risks are conceptualised, among others, as the dissemination of illegal content, as negative effects for the exercise of the fundamental rights to freedom of expression and information, and as actual or foreseeable negative effects on the protection of civic discourse, or on electoral processes and public security, via “intentional manipulation of their service, including by means of inauthentic use or automated exploitation of the service”. These provisions are extremely open and vague – particularly the regulatory obligations for platforms vis-à-vis legal-but-harmful content (Article 26(1)(c)). At the same time, there is a clear risk of reciprocal contradictions or tensions – especially between the protection of free expression and the mitigation of risks (Article 26(1)(b)). Moreover, despite the reference to reasonableness and proportionality in Article 27, the nature and scope of the specific mitigation measures are left to the discretion of platforms and, in the last instance, to decisions by national regulatory bodies that are still not clearly identified, operating under the overall oversight of the European Commission.

Further, how should “solely” be interpreted in this context? This adverb seems to limit the immunity protections to cases where platforms have not undertaken any activity, beyond the mentioned own-initiative investigations, that would indicate specific knowledge of a concrete piece of content. In other words, the DSA does not cover other possible actions or measures that may lead the competent authority to establish the existence of actual knowledge or awareness. Possible examples include the receipt of an order to provide information under Article 9, or a third-party notice that, although not properly substantiated, has nevertheless led the platform to consider a specific piece of content.

Effect on Content Moderation Policies

The DSA thus contains a series of interconnected provisions that not only affect platforms’ conditional liability, but also have a clear impact on the adoption and enforcement of content moderation policies. These provisions are based on the following main rules:

  1. The DSA maintains the E-Commerce Directive’s general prohibition of the imposition of general monitoring obligations (Article 7).
  2. Platforms may be obliged to establish and implement content moderation policies in order to deal with illegal and certain types of objectionable content, subject to development and oversight by national regulatory bodies in coordination with the Commission (Articles 26 and 27).
  3. The good Samaritan clause included in Article 6 provides partial liability protection to platforms engaging in own-initiative investigations or other activities aimed at detecting, identifying and removing, or disabling of access to, illegal content, as well as to platforms taking the necessary measures to comply with the requirements of Union law, according to what is established in Articles 26 and 27.
  4. However, the good Samaritan clause does not cover activities other than own-initiative investigations. These are situations where the intermediary acquires actual or specific (and thus not presumed or constructive) knowledge of an illegality, provided that the intermediary plays an active role of such a kind as to give it knowledge of, or control over, the hosted content, as per the parameters already established by the ECJ.

In order to produce its intended results, a good Samaritan clause needs to be accurately crafted and to avoid any possible counter-incentive. The current wording of the DSA reproduces the difficulties of the E-Commerce Directive. One can still conclude that the more platforms play an active role in moderating the content they host, the more likely they are to stumble upon but overlook a particular illegality, thereby incurring a greater risk of liability. Article 6 could thus lead to more removals, as it would be safer for hosting providers engaging in proactive monitoring to remove more content in order to avoid liability.

In this context, particular problems arise with respect to measures taken by platforms on the basis of their own Terms of Service (ToS). ToS may go beyond the fulfilment of the obligations included in the DSA (and other pieces of legislation) regarding the mitigation of certain risks or, more broadly, the dissemination of legal but harmful content. In the “flexible” model of structural obligations defined by the DSA – a model where obligations are defined in an open-ended way – platforms risk penalties for the incomplete fulfilment of their obligations, while at the same time they may be held liable for ToS-based measures going beyond such obligations if those measures lead, once again, to the acquisition of knowledge of a piece of illegal content.

Final thoughts

The DSA proposal contains, no doubt, many interesting and innovative provisions. In many cases, they represent a reinforcement of the rights of users and speakers vis-à-vis online intermediaries. However, the confusion regarding the notions of actual knowledge and awareness and their relationship with a possible “active” role played by platforms has not been solved and lives on in the sometimes vague language of the proposal. The importance of having a liability regime that provides sufficient legal certainty to platforms and users suggests that these issues will likely remain the object of thorough attention and lengthy debate during the different phases of the adoption of the Regulation.