31 August 2021

Five Reasons to be Skeptical About the DSA

In an effort to establish a “safe, predictable and trusted online environment” for the EU, the Digital Services Act (DSA) proposal sets out an extensive catalogue of due diligence obligations for online intermediaries, coupled with tight enforcement rules. Whereas the power of Big Tech over the digital public sphere is indeed a reason for concern, a freedom of expression perspective on the DSA proposal reveals that it is problematic in several respects. It partly reinforces Big Tech’s control over communication and, moreover, fights fire with fire by establishing a powerful public/private bureaucracy able to monitor and potentially manipulate online communication trends.

What Freedom of Contract?

The first reason to be skeptical about the DSA is its relation to freedom of contract. The proposal not only covers illegal content of all kinds but also “information incompatible with […] terms and conditions” of intermediaries (Art. 2(g)(p)(q) DSA). Addressees of the DSA have to be transparent about their contractual speech restrictions (Arts. 12(1), 13(1)(b), 15(2)(e), 20(4) DSA), and they have to apply and enforce these in a “diligent, objective and proportionate manner” (Art. 12(2) DSA).

On the one hand, this scope of application is a correct acknowledgment of the fact – brought to light inter alia by experiences with the German Network Enforcement Act (NetzDG) – that the vast majority of content moderation measures adopted by Big Tech are based on their terms and conditions, which they apply on a global scale. If the EU wants to rein in this practice effectively, it thus has to address terms and conditions. On the other hand, the DSA provisions on this point are highly problematic in that they are insufficiently tied to the power of their addressees. None of them is targeted specifically at very large online platforms (VLOPs). Arts. 12 and 15 DSA even apply to micro or small conduit, caching and hosting services. Whereas courts may eventually draw distinctions between the content moderation of startups and that of VLOPs, Art. 12(2) DSA will, in the meantime, put an additional burden on SMEs while granting broad discretion to VLOPs. Whereas the former have to cope with an additional financial burden in competing with Big Tech, the latter remain free to define what may and may not be said on their platforms – note that Art. 12(2) DSA only regulates the application and enforcement of speech restrictions, not their content.

Such an undifferentiated rule is untenable from the perspective of freedom of contract. According to the German Federal Constitutional Court, the patron of Drittwirkung, in principle all persons are free to choose when, with whom and under what circumstances they want to enter into contracts. Only under “specific circumstances” does the right to equality have horizontal effects between private actors. Whether Facebook’s and other VLOPs’ community standards present such “specific circumstances” has not yet been settled. What is clear, though, is that the contractual relationships between SME intermediaries and their users are – beyond consumer protection and antidiscrimination laws – subject to party autonomy. As a consequence, Art. 12(2) DSA should be moved to the VLOP section of the DSA.

Preventing Vague Risks

The second reason to be skeptical about the DSA is its risk prevention approach. Art. 26 DSA obliges VLOPs to constantly identify, analyse and assess “any significant systemic risks” stemming from the functioning and use of their services in the Union. According to Art. 27 DSA, they have to put in place reasonable, proportionate and effective measures to mitigate these risks, for example by adapting their algorithmic content moderation or recommender systems. Again, the DSA proposal delegates wide-ranging and highly sensitive decisions to Big Tech.

Of particular concern in this context is Art. 26(1)(c) DSA, which orders VLOPs to assess the risk of

“intentional manipulation of their service, including by means of inauthentic use or automated exploitation of the service, with an actual or foreseeable negative effect on the protection of public health, minors, civic discourse, or actual or foreseeable effects related to electoral processes and public security”.

This extremely broad and vague provision covers not only fake accounts and bots spreading illegal content (recital 57) but all kinds of “intentional manipulations” of a VLOP service with a “foreseeable negative effect” on “civic discourse” or with effects (of any kind) “related to electoral processes and public security”. Such “manipulations” need neither be illegal nor violate the terms and conditions in force when they occur. This follows from Art. 27(1)(a) DSA, according to which VLOPs may have to adapt their terms and conditions in order to manage a systemic risk. Further questions arise: What information is to be banned from the EU Internet via the notion of “civic discourse”? Is exercising freedom of expression online a systemic risk to be mitigated? Rephrasing Art. 26(1)(c) DSA will hardly provide sufficient clarity. Instead, the provision should be deleted.

Over-blocking

The third reason to be skeptical about the DSA is a classical freedom of expression concern: over-blocking. The size of this problem is difficult to ascertain, but the study by Liesching et al. on the practical effects of the German Network Enforcement Act and numerous decisions of German courts ordering Facebook and other platforms to put back posts that had been deleted for violating community standards show that over-blocking is real. The put-back obligation of Art. 17(3) s. 2 of the DSA proposal is an implicit acknowledgment of this phenomenon.

Nonetheless, the DSA proposal integrates the private practice of automated content blocking into its compliance regime (cf. Arts. 14(6), 15(2)(c), 17(5), 23(1)(c) DSA). It remains a fundamental contradiction of the DSA that algorithmic decision-making is both a reason for its proposal and a measure accepted, if not required, by it. Worse still, the DSA will create compliance duties with regard to any type of illegal content. The broader the scope of application of the DSA, and the more hard cases at the borderline between legality and illegality it covers, the higher the risk of false positives.

When it comes to algorithmic enforcement of copyright on sharing platforms such as YouTube, the Commission apparently shares this skepticism. In its June 2021 guidance on the implementation of Art. 17 of the 2019 Digital Single Market Directive (DSMD), the Commission states that “automated blocking […] should in principle be limited to manifestly infringing uploads”, whereas “uploads, which are not manifestly infringing, should in principle go online and may be subject to an ex post human review when rightsholders oppose by sending a notice”. Advocate General Saugmandsgaard Øe similarly finds that Art. 17 DSMD is compatible with the right to freedom of expression and information only if interpreted to the effect that “ambiguous” content is not subject to preventive blocking measures. In line with these cautious approaches, the German act transposing Art. 17 DSMD takes great pains to avoid “disproportionate blocking by automated procedures”.

The ensuing question for the DSA is this: If automated decision-making does not ensure a balance of all fundamental rights at stake in the area of copyright, why should it be appropriate for all types of illegal content – including not only copyright infringements but also very sensitive matters such as allegedly defamatory political speech? The answer: it is not, and the DSA should therefore allow automated decision-making only in cases of manifestly illegal content.
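To make the proposed rule concrete, here is a minimal, purely illustrative sketch of what a “manifestly illegal only” rule for automated moderation could look like as decision logic. The classifier, the threshold value and the notice flag are assumptions made for illustration; none of them is prescribed by the DSA proposal or the Commission guidance.

```python
# Illustrative sketch only: a "manifestly illegal only" automation rule.
# The classifier, the threshold and the notice flag are assumptions,
# not anything prescribed by the DSA proposal or the Commission guidance.

MANIFEST_THRESHOLD = 0.95  # assumed confidence above which content counts as "manifestly" illegal


def moderate(item: dict, classifier, review_queue: list) -> str:
    """Block automatically only if content is manifestly illegal; everything else
    goes (or stays) online, at most subject to ex post human review."""
    score = classifier(item)  # assumed to return a probability of illegality between 0 and 1
    if score >= MANIFEST_THRESHOLD:
        return "blocked"  # automated blocking limited to manifest cases
    if item.get("notice_received"):
        review_queue.append(item)  # ambiguous content is reviewed by a human after a notice
    return "online"
```

The point of the sketch is merely the asymmetry argued for above: automation may remove, on its own, only what it is highly confident is illegal; doubtful cases remain online and are left to human reviewers.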

Regulating “Harmful” Content, in Particular “Disinformation”

The fourth reason to be skeptical about the DSA is that it is not limited to fighting illegal information but also addresses the issue of “harmful” content, in particular “disinformation”, which is generally understood as

“verifiably false or misleading information that is created, presented and disseminated for economic gain or to intentionally deceive the public, and may cause public harm”.

It is true that the DSA proposal neither defines nor directly regulates, in the form of removal obligations, this “delicate area with severe implications for the protection of freedom of expression” (DSA explanatory memorandum). However, recital 5 already refers to the role of intermediaries in the “spread of unlawful or otherwise harmful information and activities”. And several recitals mention “disinformation” as one reason for obliging VLOPs to be transparent about the advertisements they display (Art. 30 DSA with recital 63), for future codes of conduct (Art. 35 DSA with recital 68) and for crisis protocols to be facilitated by the Commission in response to “extraordinary circumstances affecting public security or public health” (Art. 37(2)(a) DSA with recital 71). More generally, the DSA proposal is part and parcel of EU anti-disinformation policies. Recital 69 refers to the 2018 Code of Practice on Disinformation, and in its May 2021 Guidance on strengthening that Code, the Commission promotes its suggestions as an “early opportunity for stakeholders to design appropriate measures in view of the adoption of the proposed DSA”.

The difficulty with this public-private fight against “disinformation” became abundantly clear in the course of the COVID-19 pandemic. In order to get the facts right and tackle COVID-19 disinformation, the Commission initiated a monitoring and reporting program under which platforms are “asked” to make their respective policies and actions public. As part of this program, Facebook reported in February 2021 that, “following consultations with leading health organizations”, including the WHO, it would remove the “debunked” claim that “COVID-19 is man-made or manufactured”. On May 26, 2021, Facebook informed the public, however, that

“in light of ongoing investigations into the origin of COVID-19 and in consultation with public health experts, we will no longer remove the claim that COVID-19 is man-made or manufactured from our apps.”

An adequate comment on this affair can be found in Hannah Arendt’s 1971 essay “Lying in Politics”. The “right to unmanipulated factual information”, posits Arendt, is the “most essential political freedom”, without which “all freedom of opinion becomes a cruel hoax”. I agree, and therefore the DSA should not establish any direct or indirect removal obligations concerning “disinformation” or other “harmful” yet legal content.

Establishment of a Communication Oversight Bureaucracy

The fifth and final problem with the DSA proposal is that it would create a bureaucracy with the power to supervise not only DSA compliance but also general communication trends. If the aim is to cabin Big Tech’s power, then establishing a public/private superstructure with even more power and new information asymmetries is not the solution.

The DSA bureaucracy consists of several interconnected state and non-state actors. The central player in this network is the Commission, which estimates that it needs 50 additional full-time positions to manage its various DSA-related tasks. In addition, each Member State shall designate a Digital Services Coordinator (DSC, Art. 38 DSA). The 27 DSCs form an independent advisory group “on the supervision” of intermediaries, named the “European Board for Digital Services” (EBDS), which is, again, chaired by the Commission (Arts. 47-48 DSA). The Commission, the national DSCs and the EBDS operate in a coordinated manner (Arts. 45-46 DSA).

On the DSA addressee side, VLOPs have to appoint one or more DSA compliance officers, who have to ensure inter alia that VLOPs cooperate with authorities (Art. 32 DSA). Further civil society actors complement the DSA bureaucracy, namely entities awarded the privileged status of a “trusted flagger” by a DSC (Art. 19 DSA), organisations performing independent DSA audits (Art. 28 DSA) and transnational bodies developing and implementing voluntary industry standards (Art. 34 DSA).

At first sight, these actors are merely there to enforce the DSA. In light of the all-encompassing scope of application of the DSA, however, fulfilling this task requires oversight of all communication transmitted or stored by intermediaries. And indeed, Art. 49(1)(e) in conjunction with recital 89 s. 2 mandates the EBDS to identify and analyse “emerging general trends in the development of digital services in the Union”.

Such a bird’s-eye view of online communication in the EU presupposes enormous amounts of up-to-date information, which the Commission and the 27 national DSCs will indeed be able to gather under the DSA. Firstly, intermediaries, and VLOPs in particular, have to make much relevant information available to the public, namely the speech restrictions they apply (Art. 12(1) DSA), the number of their active users (Art. 23(2) and (3) DSA), the main parameters of their recommender systems (Art. 29(1) DSA) and further data to be included in transparency reports (Arts. 13, 23, 33 DSA). Hosting providers of whatever size have to publish all blocking decisions and their respective legislative or contractual grounds in a publicly accessible database managed by the Commission (Art. 15(4) DSA). Finally, VLOPs have to compile and make publicly available a real-time repository of all commercial and non-commercial, including political, advertisements (Arts. 2(n), 30, 36 DSA). According to recital 63, these ad repositories are meant to “facilitate supervision and research into emerging risks”, including “manipulative techniques and disinformation”.
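To illustrate how granular this publicly gathered picture could become, the following sketch shows what a single entry in the Art. 15(4) DSA database of blocking decisions might minimally contain. The field names and types are assumptions made for illustration, not taken from the proposal.

```python
# Illustrative sketch only: a possible record structure for one entry in the
# Art. 15(4) DSA database of blocking decisions. Field names and types are
# assumptions, not taken from the proposal.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional


@dataclass
class BlockingDecision:
    provider: str                      # hosting service that took the decision
    decided_at: datetime               # when the decision was taken
    action: str                        # e.g. "removal" or "disabling of access"
    legal_ground: Optional[str]        # statutory provision relied on, if any
    contractual_ground: Optional[str]  # clause of the terms and conditions relied on, if any
    automated_detection: bool          # whether the content was detected by automated means
    automated_decision: bool           # whether the decision itself was taken automatically
    statement_of_reasons: str          # explanation given to the affected user
```

Aggregated across every hosting provider in the Union, records of this kind would give the Commission an unusually detailed, centralised view of content moderation in the EU, which is precisely the kind of information concentration discussed below.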

Secondly, competent authorities will be supplied with much additional, granular information about what is going on online. Copies of all orders to act against illegal content and to provide information are transmitted through an “information sharing system” (ISS) to the Commission, other DSCs and the EBDS (Arts. 8(3), 9(3), 67 DSA). Intermediaries receiving such orders are obliged to inform the issuing authority of the effect given to the order, specifying what action was taken and when (Arts. 8(1), 9(1) DSA). The proposal is silent as to whether these compliance reports may be channelled through the DSA ISS. It is in any event likely that problems in this context will be addressed as “emerging trends” in EBDS meetings. Finally, the parts removed from the public transparency reports of VLOPs for reasons of confidentiality, security or “harm” to recipients still have to be submitted to the competent DSC and the Commission (Art. 33(3) DSA).

Last but not least, national DSCs and the Commission may order VLOPs to provide access to, and explanations relating to, their databases and algorithms, including “data on the accuracy, functioning and testing of algorithmic systems for content moderation, recommender systems or advertising systems, or data on processes and outputs of content moderation or of internal complaint-handling systems” (Arts. 31(1), 57(1) s. 2 DSA with recital 64). The Commission expects that it will conduct two such in-depth analyses for every VLOP every year.

Taken together, the DSA proposal would turn the tables on who knows what online. Whereas Big Tech nowadays possesses more information about online communication than public authorities, the latter would in the future occupy the top of the information hierarchy – across all intermediaries. Such a panoptical position creates new risks of manipulation and misuse, which the proposal does not address at all. For example, the information sharing and further correspondence between the Commission and national authorities will be kept secret (cf. Art. 63(4) DSA). EBDS meetings will take place behind closed doors. A public communicative sphere with such intense yet opaque involvement of executive authorities does not, however, deserve the trust the DSA proposal is meant to foster in the first place.


SUGGESTED CITATION  Peukert, Alexander: Five Reasons to be Skeptical About the DSA, VerfBlog, 2021/8/31, https://verfassungsblog.de/power-dsa-dma-04/, DOI: 10.17176/20210831-233126-0.

