Will the DSA work?
On money and effort
The Digital Services Act (DSA) is an ambitious project. It constrains private power to protect the freedom of individuals. Arguably, it rests on the ordoliberal idea that if competition does not discipline private power enough to safeguard individual freedoms, the state must step in and prescribe basic rules of the game to constrain it; competition can do the rest. Wisely, the DSA shifts away from the unproductive debate about liability for third-party content as the only policy lever to achieve change. Instead, it moves the conversation to accountability for how the systems that enable risks are designed.
The DSA has many components but, in essence, it is a digital due process regulation bundled with risk-management tools. It creates universal due process guarantees, brings transparency to private decision-making and institutionalizes constant risk management by larger players. Europeans gain enforceable procedural rights owed to them by the private parties operating the digital ecosystem. Regulators gain tools to hold those providers accountable for what science tells us goes wrong with their designs. Victims, NGOs, and industries gain tools to enforce their rights more effectively at scale.
But will these tools work?
The rulebook is there. It is a tremendous achievement. But setting the rules of the game does not mean we also master its outcomes. The real struggle begins now.
My main concern about the DSA lies in what is also its strength: it relies on societal structures that the law can only foresee and incentivize but cannot build; only people can. These structures, such as local organisations analysing threats, consumer groups helping content creators, and communities of researchers, are the only ones that can give life to the DSA’s tools. They need to be built bottom-up and, sometimes, locally in each Member State. If their creation fails, the regulatory promises might turn out to be a glorious aspiration. How can we avoid that?
Here is my to-do list.
We need a vibrant community of specialized trusted flaggers, consumer associations, dispute resolution bodies, content moderation professionals and content creators. We need their joint efforts to standardize what makes sense. We need to educate Europeans about their new rights and scientists about their newly gained tools to conduct research. We need to invest money and energy. And finally, we need a mixture of private and public enforcement to make the DSA a success story.
Let me address each of these points one by one.
Local trusted organisations
Why are local organisations key? Let me illustrate this with the example of trusted flaggers (Article 22). The DSA grants them preferential treatment when they notify problematic content, but only if they have a track record of quality, that is, precision in targeting what is illegal.
The unique European challenge for the DSA is that such organisations must almost always be local. Excellent German consumer organisations are unlikely to help Spanish-speaking consumers. Devoted Dutch anti-hate speech groups will not have the skills to find and notify Romanian content. Skilled Estonian groups fighting hybrid threats will not help in Slovakia. Thus, capacity needs to be built up across all Member States to fight all the unique challenges of today, ranging from various forms of extremism, terrorism and war propaganda to hate speech and beyond.
For quality and predictability to emerge from chaos, local organisations are indispensable. Without trusted flaggers, there will be fewer trusted notices and fewer good decisions. But we should also be honest. If such local players are to deliver quality enforcement, they cannot operate on a shoestring budget. Various actors must help facilitate their work: service providers, by offering the right interfaces and helping to standardize how notices are exchanged (Articles 16(1), 44(1)); local authorities and citizens, by investing in and supporting their work; researchers, by providing the right tools and insights.
It should be in everyone’s interest to help such organisations improve and grow. No one was helped by the wasteful practices of the past, when the low quality of notices was a crime without punishment. The DSA is a chance to reward betterment: helpful notifiers get carrots (Articles 21(5), 22(1) and 44(1)); bad actors get sticks (Articles 23(2), 22(6) and 21(5)).
It is not enough to invest in regulators. Member States and citizens must also invest in their local civil society. The legislators were warned about this very early on. We cannot look away from the problem of funding and simply hope for the best.
Individuals must actively use the tools
This brings me to the role of individuals. I do not expect that all Europeans will read the DSA provisions before going to sleep — though, as the famous GDPR app shows, it can be therapeutic.
But it is inescapable that only Europeans who are aware of their rights can properly enforce them. The DSA assumes active individuals. It gives recipients of some services the right to understand how content moderation decisions are made, to obtain an explanation of each decision, to appeal it internally within the service and to get a second opinion from experts (Articles 17, 20, 21). Users and notifiers can also ask for help from consumer associations and other non-profit groups (Article 86). They gain various transparency tools to demystify how impactful digital services operate and make content moderation decisions. For instance, they can browse through explanations given to those whose content is removed (Article 17(5)) and view aggregate statistics (Articles 15, 24, 42); for VLOPs, parents will be able to read reports about how those platforms mitigate risks posed to their children (Article 42(4)). Granted, average parents will not read such reports. But many devoted journalists (and perhaps some academics) can read them for them.
The DSA tries to overcome the prototypical problem of user apathy by empowering users to defend themselves through two means. First, it grants them a fast, cheap, and much more credible remedy than mere internal re-assessment. Second, it invites consumer associations to assist them when seeking redress.
If the internal content moderation processes fail, affected users or notifiers can obtain a second expert opinion from out-of-court dispute settlement bodies (Article 21). Where some people see “de facto courts”, I see a second external expert opinion. On the micro level, the dispute settlement bodies give users a fast and cheap remedy by letting them ask external experts. On the macro level, due to its payment structure, this regime incentivizes providers to make fewer mistakes, because each mistake costs them money and reputation. When these small costs pile up, they might become significant enough to force changes in providers’ systems and rules.
And let’s be clear: the DSA does not take away all the power from platforms. For the most part, it does not limit what legal content providers can prohibit under their community guidelines; that is a power providers retain. Thus, if providers do not like how out-of-court bodies read their rules, they can change them and make them clearer. But once they put the rules in black and white, they cannot claim they are orange without actually changing them. The DSA limits only some grossly unfair policies (Article 14). Thus, if vague clauses serve only to enforce grossly unfair outcomes, there is now a stick that individuals can rely upon.
All the talk about out-of-court bodies as de facto courts, in my view, clouds the most important point of their existence. If regulators do a good job in monitoring and certifying these bodies (which is undoubtedly crucial), the DSA can constrain private power in a significant way without limiting a platform’s rulemaking. It promises individuals an interpretation of these rules by someone who has not written them and has no clear stake in individual outcomes. Impenetrable jargon in the terms and conditions will no longer be the provider’s advantage (Article 14). Moreover, the DSA embeds consumer associations in such processes (Article 86). It thereby allows more expertise to enter the conversation in the open, and even gives organisations tools to defend individuals who lack means and expertise, whether as notifiers or content creators (Articles 86 and 90).
To be sure, the goal of this tool is not to eradicate mistakes; to disagree is part of human nature. Even if it works, it can only minimize mistakes, reduce their arbitrariness, and improve the legitimacy of the underlying decisions. But for activities on such an industrial scale, this is probably the best outcome we can hope for.
Without active individuals who invoke their rights, none of this will work. Consumer and other groups can do a lot to empower individuals by making them aware of their rights.
The DSA’s heroes: Researchers
Whether the DSA’s risk management rulebook for very large online platforms (VLOPs) and very large online search engines (VLOSEs) will succeed is probably the most open question. But instead of emphasizing how amazing the regulators must be – which is surely true – I want to put the spotlight on my DSA heroes: the researchers.
While regulators are crucial, they will do little if they do not have sparring partners among researchers who help them to distinguish yellow press headlines from real causes of problems.
The DSA tries to create a tool to manage all kinds of risks. Systemic risks stemming from content moderation, recommender systems, advertising and other parts of service design must be assessed for how they contribute to the dissemination of illegal content and how they impact fundamental rights and certain other protected interests (Article 34). In trying to be future-proof, the risk management mechanism remains very broad and gives little detail about the methods by which systemic risks are to be investigated. Moreover, unlike in other, narrower sectors, the relevant risks here run to individuals, communities and society at large: basically everything we cherish.
Starting from scratch can overwhelm and disorientate regulators as to what the enforcement priorities should be and how to deal with them. And this is where researchers are key.
Researchers can help to identify what counts as a risk (Article 40(4)). In effect, they help to shape the agenda for regulators and providers of digital services. They also monitor and assess those risks, their causes and contributing factors, and suggest methods and tools to mitigate them. Their suggestions have direct relevance to providers’ compliance with the DSA. To do all this, they require special access, granted on a project basis, to data held by providers (Article 40(8)). Such access cannot be easily refused by providers or regulators (Article 40).
The DSA thus changes the norm: now researchers pick their projects and platforms, not the other way around. And once the research shows certain risks or their causes, or suggests a way forward, it cannot be ignored by providers or regulators (Article 40(4)).
But this tool’s Achilles heel also lies in funding. Researchers engaging in such data-intensive projects need money to conduct them properly and independently. If the only funding available for this research comes from the industry, even if only indirectly, we have a problem.
European academia needs specific grants for researchers who want to make use of the data opened up by the DSA. However, the financial support must be equally independent of the authorities that act as regulators. Researchers cannot act as a check on the abuse of state power exercised by regulators if they also need funding from the same authorities.
Because funding is indispensable for conducting risk-mitigation research, controlling the funding means controlling access to the data. Thus, if the only funding comes from regulators or the industry, we again risk that someone else will set the research priorities for us.
DSA as a baseline, not the only standard
When celebrating the due process rights in the DSA, we should not forget its blind spots. Infrastructure providers are subject to only very light due diligence obligations. Superusers, such as influencers, and trusted content creators, such as journalists, academics, and others, are not offered stronger rights although they might need them. But this per se is not an issue, as long as the DSA’s due diligence obligations are not perceived as the final word: a golden standard that may not be exceeded. Many, though not all, blind spots can be overcome by DSA-plus agreements or practices. To the extent that these are not anti-competitive, they should be encouraged.
Let me offer an example. To individual users acting as content creators, such as influencers, artists, bloggers, and hobbyists, the DSA grants rights to defend their life’s work. Their content will soon be protected by due process requirements against allegations made by others.
Until now, the incentives were mostly lined up in the opposite direction, that is, to remove content whenever there was a potential legal risk. The DSA prescribes the steps and processes for platforms to follow. If a notice is received, it must be examined, decided, and explained with care (Articles 14, 17, 20). This does not always mean by humans, but it does mean with an eye on the accuracy of aggregate decisions. But the DSA still assumes that most content is equally important and that all mistakes are equally problematic. For universal due process obligations, this stance is understandable. However, it should not imply that content creators cannot be afforded stronger procedural rights. Why shouldn’t investigative journalists in war zones be better protected against abuse or takedown?
While the DSA does not explicitly provide incentives for content creators to team up, if they set up organisations, they can collectively negotiate extra procedural rights for their content as trusted sources. The DSA provides a place for such agreements in the Codes of Conduct, which have regulatory relevance (Articles 45, 37(1), 35(1), and even Article 14).
The DSA should be a (baseline) standard, not the only standard. It should be a trigger for co-regulation and competition on top of the basic rules.
Private and public enforcement
Finally, and importantly, enforcement will not happen overnight. Public authorities need resources, and the European Commission needs strong partners in the Member States. There will always be too few officials chasing too many problems. But coordination among Member States can help to pool resources and avoid needless duplication. For instance, Russian war propaganda, which uses the same techniques across the continent, is surely better fought together, even if the local threats differ slightly. The regulatory initiatives around the GDPR suggest that cooperation of national authorities in the digital space is possible. There is no reason to doubt that the same can happen around the DSA. However, the vastness of the challenges covered by the DSA should not be underestimated. The DSA offers baseline expectations, toolkits, and vocabulary, but it will take many experts with different skills to construct a healthy digital public sphere.
That being said, I still think that without private enforcement, the DSA risks, at least in some areas, causing dissatisfaction similar to that with GDPR enforcement. The DSA has undoubtedly learnt from the GDPR’s shortcomings in many ways, such as in its institutional design and stronger risk-auditing systems. However, if I am right that there will always be too few officials chasing too many problems, the only way to complement limited public enforcement is for plaintiffs to go to the courts. The DSA facilitates only some private enforcement actions, such as those against unfair interfaces (Article 25 DSA) or those brought by non-profit groups (Article 90). The good news is that it does not fully pre-empt private enforcement at the national level. In my view, due diligence obligations can thus give rise to corresponding rights of individuals at the national level, and often they should.
There is an important role left for national parliaments, which can introduce explicit private claims making various due diligence obligations directly actionable. The DSA mostly deals with the public enforcement of due diligence obligations and is less concerned with how to convert them into legally enforceable claims of individuals. The care is owed to individuals, but it is less clear how individuals, rather than regulators, can enforce the DSA’s promises. For instance, if a user’s account is terminated against all the rules, the DSA only formulates expectations of care, not a private claim through which the affected individual can go to court. Sometimes the legal vehicle for this already exists in national law, such as in contract or tort; in other cases, it must be created.
Conclusion
I know that there is a lot of hope that the DSA can serve as a model abroad. But I think we first need to prove that it can work where it was drafted. This brief essay shows that there are many points where the DSA can fail: trusted flaggers, out-of-court settlement bodies, consumer organisations, researchers, national regulators and, obviously, the European Commission. But every point of failure is also an opportunity that mostly did not exist before the DSA. All these shiny tools have one thing in common: without investing our time and money, they cannot work as intended.