07 November 2022

The EU’s new Digital Services Act and the Rest of the World

The European Union’s Digital Services Act (DSA) is a major milestone in the history of platform regulation. Other governments are now asking themselves what the DSA’s passage means for them. This post will briefly discuss that question, with a focus on platforms like Facebook or YouTube and their smaller would-be rivals.

Direct Global Impact

The Good: Transparency, Fair Processes, Improved Platform Practices

The DSA will have major spillover effects for the rest of the world. In some cases, this will lead to real benefits for users, mostly in the form of platform features or internal systems built for the DSA but deployed globally. For example, platforms’ more clearly articulated speech policies under Article 14 and better explanations of algorithms under Articles 27 and 38 will improve understanding both inside and outside the EU. The largest platforms will likely also deploy some specific user protection measures globally, such as improved tools for communicating with the “accusers” and the “accused” in notice-and-action systems. Positive changes made as part of very large online platforms’ (VLOPs’) risk mitigation efforts under Article 35 seem likely to be global, as will more indirect benefits resulting from things like improved researcher access to data under Article 40.

The Bad: Fundamental Rights Risks and Competition Burdens

Not all of the DSA’s spillover effects will be beneficial, however. The harms will be harder to identify, but I believe they will be real. One set of risks involves Internet users’ rights. Civil society groups have raised the alarm, for example, about future back-room negotiations between regulators and platforms as part of Article 36 “crisis response mechanisms” or Article 35 “risk mitigation” measures. If history is any indication, platforms like YouTube, Facebook, Microsoft, and Twitter – the companies that negotiated the Hate Speech Code of Conduct and created a controversial upload filtering system for terrorist content – will readily make concessions to European regulators in order to protect their own businesses. The resulting standards have been publicly criticized by civil society groups for bypassing democratic processes and forfeiting users’ fundamental rights. Whatever we think of the current set of regulators and platform representatives who will negotiate comparable new agreements under the DSA, we should be wary of granting too much discretion and power over fundamental rights to their successors.

The other predictable global harm will be to competition. The DSA burdens even very small platforms with obligations that today’s incumbents never shouldered, or else took on only much later in their development. Facebook, for example, first released a transparency report in 2013, when it was worth $139 billion. It first allowed users to appeal removals of photos, videos, and posts (but not comments) in 2018, when the company was worth $374 billion and had some 35,000 employees. Newer market entrants will take on similar obligations at a much earlier stage: once they reach just €10 million in annual turnover and fifty employees. (These are the platforms above the DSA’s “small or micro” category. A chart listing which DSA obligations will affect companies of different sizes is here.)

The DSA also requires transparency and user notice-and-appeal operations on a scale that even the largest incumbents have never attempted. YouTube, for example, currently allows appeals for the roughly 9 million videos it removes every three months. But it does not yet do what the DSA will require: offering appeals for the additional billion comments it removes in the same time period. That’s more than a hundred-fold expansion. YouTube will presumably spend the money to extend appeals and other DSA requirements to comments – a category of online speech that can be important in attracting and retaining users, but that is often high in quantity and low in quality. Academic researchers attempting to assess the sentiment or political valence of YouTube comments, for example, have complained that they are frequently “irrelevant”, “trivial”, and “tedious” to review. For smaller companies, simply eliminating comments may be the more affordable choice. A second, and perhaps even more important, change in the scale of operations under the DSA will likely come from its extension of transparency and notice-and-appeal operations to content that is demoted or rendered less visible, rather than removed.

Other DSA obligations, like Article 21’s out-of-court dispute resolution requirement for disagreements about content moderation, are largely untested. The threat of outside review may incentivize better moderation by platforms in the first place. And settlement bodies will surely remedy incorrect moderation decisions in many cases. But there are also many ways they could go awry, including by providing conflicting outcomes that encourage forum-shopping by users and create pervasive fragmentation and inconsistency in the interpretation of platforms’ community standards. All platforms above the DSA’s “small or micro” category will have to participate in this experimental system. They will also have to fund it – paying their own costs and those of users who prevail in disputes.

As I discussed in a previous post, it is not clear that requiring platforms with just a few hundred employees to build out detailed and cumbersome new content moderation, “due process”, and transparency capabilities will have upsides sufficient to justify the barriers to market entry these burdens will create. If we want smaller platforms to one day rival today’s giants, perhaps we should not treat them like Meta or Google so early in their growth. In this respect, too, the DSA will have worldwide effect. Companies like Facebook and Twitter grew by being globally available and expanding gradually in regions where significant user bases developed. Their successors will not have this flexibility. Investors and entrepreneurs around the world will factor in the now-substantial compliance costs that come with attracting EU users before they even consider launching new platform businesses.

The Future: Uncertain

Those are my predictions. The DSA’s future is uniquely difficult to game out, though. The DSA superficially resembles another major regulation, the GDPR, particularly in its standardized compliance practices and reliance on regulatory action. But while the GDPR built on long-established legal structures, platform practices, and regulatory relationships, the DSA’s mechanisms and systems have been, until now, theoretical or tested only at much smaller scale.

That makes the DSA, like any other cutting-edge tool or system, something of an experiment. Some of its innovations will probably be great successes. Others will not. If Article 17 truly requires platforms to notify users every time their content is demoted or otherwise restricted in visibility, for example, users may rapidly tire of the resulting flood of notices. Or platforms may refrain from deploying beneficial measures to, for example, demote “borderline” content in order to avoid costs and hassle. That would leave users in the EU more exposed to potential disinformation, racial slurs, and other harmful content. The DSA’s unprecedented and extensive appeal mechanisms, similarly, will have some predictable benefits. But it could also turn out that users who avail themselves of measures like Article 21’s out-of-court dispute mechanisms are disproportionately far-right trolls, crackpots, and contrarians. At a minimum, research suggests they may be mostly men. That would leave us in need of different tools to protect the rights of online speakers who are marginalized or simply less assertive, as well as readers and viewers whose rights to access information have been harmed by improper takedowns. As a final example, the Commission may build its planned, unprecedented database under Article 24, hosting billions of notices about platforms’ content moderation decisions, only to discover both high costs and important limitations. This may occur in particular if the platforms’ removal of any personally identifiable information means that researchers using the database often have no idea what content was actually removed, or which users were actually affected.

Impact on National Laws Around the World

Lawmakers around the world are champing at the bit to enact their own new platform regulations. My suggestion to them would be to wait a few years before enacting laws that look like the DSA. There is plenty of other regulatory work to be done in the meantime. The U.S., for example, is in dire need of a real federal privacy law. We could also use basic legal reforms to enable “adversarial interoperability” or “competitive compatibility” – allowing new technologies to interoperate with, build on, and draw users away from today’s incumbents. There is room for productive legal debate and reform relating to more ambitious “middleware” or “protocols, not platforms” approaches to content moderation, as well. Any “DSA 2.0” in other countries will be better if it builds on the demonstrated successes and inevitable failures of particular DSA provisions, after that law has actually been launched and operationalized.

A few more specific lessons from the DSA deserve notice in other countries.

Internal company “due process” changes

To the DSA drafters’ credit, many of the law’s rules in areas like content moderation and transparency reflect longstanding asks from global civil society. The DSA also avoided problematic “turnaround time” requirements of the sort enacted in Germany or required under the EU Terrorist Content Regulation and proposed in other countries including Nigeria, which would require takedown within 24 hours of notice. Lawmakers in other countries should take heed of the DSA’s approach, but also be aware of the potential harms from unnecessary international fragmentation in laws’ details. Platforms of any size, but particularly small ones, would struggle with similar-but-not-identical requirements across borders – with resulting waste of operational resources, damage to competition, and risk of further Internet balkanization. One tool to address this concern might be the modular model proposed by former U.S. FCC Commissioner Susan Ness and Chris Riley. Following that approach, lawmakers might select some standardized legal language or requirements for consistency across borders, while adopting their own rules where there are grounds for national divergence.

Regulatory relationships

Left-leaning thinkers in the U.S. have long been attracted to the idea of creating new regulatory bodies, or empowering existing ones, to assume roles similar to those held by the Commission and national Digital Services Coordinators (DSCs) under the DSA. Absent significant change in the U.S. Congressional balance of power, that does not seem likely to happen. Any U.S. “DSA 2.0” would likely lack that very important component of the EU’s new system. The same may be true – and perhaps should be true – in many other parts of the world. Activists in some Latin American countries, for example, have long cautioned against empowering regulators in this manner. Indian experts have similarly been critical of the role assumed, and rules proposed, by that country’s Ministry of Electronics and Information Technology.

Platform removal obligations for “lawful but harmful” speech

A major concern in platform regulation, both inside and outside the EU, is the impact of speech that is legal but causes harm. This category of “lawful but awful” speech exists, in some form, within any human rights-compliant legal system. The DSA chose not to regulate such speech directly by prescribing new content prohibitions to be enforced by platforms, but instead to regulate the systems and processes by which platforms enforce their own Community Guidelines or other speech rules. That avoids major human rights questions that would arise from laws restricting previously lawful speech. I think it is also wise for reasons of administrability and fair process, as I have discussed elsewhere. But some countries may be tempted to instead follow the UK, where lawmakers have now spent several years in an on-again / off-again flirtation with regulating “harmful” speech.

“Must-carry” obligations

Courts in countries from Germany to Brazil have ordered platforms to reinstate content that the companies themselves deemed unlawful or violative of their Terms of Service. Lawmakers in Poland, Mexico, and elsewhere around the world have considered legislation to create carriage obligations. Legislators have also enacted (Australia) or considered (U.S., UK) de jure or de facto carriage requirements for specific content, usually relating to news or elections. Few U.S. experts would have considered such obligations feasible until very recently, when very strange and crudely crafted “must-carry” laws were enacted in two states: Texas and Florida. The resulting litigation has sent an epoch-defining First Amendment question hurtling toward America’s newly reckless, conservative-dominated Supreme Court. Other countries’ incremental creep toward carriage mandates for major platforms may abruptly be bypassed by tremendous changes in the U.S.

Conclusion

The DSA is a far better law than most that have been proposed in other parts of the world. I have encouraged U.S. lawmakers to emulate it in many respects. But lawmakers around the world should view it as a starting point, rather than an end point, in considering potential regulations in their own countries. That means looking at the law’s substantial strengths, but also asking how to do better.


SUGGESTED CITATION  Keller, Daphne: The EU’s new Digital Services Act and the Rest of the World, VerfBlog, 2022/11/07, https://verfassungsblog.de/dsa-rest-of-world/, DOI: 10.17176/20221107-215642-0.


