Rethinking Rights in Social Media Governance
Why fundamental rights are not enough to remedy the injustices of contemporary social media
Historically, EU internet regulation has focused on economic goals like copyright enforcement and market integration. This is starting to change, however. In the context of the broader ‘techlash’ against the power and exploitative practices of major platforms, EU lawmakers are increasingly emphasising ‘European values’ and fundamental rights protection. Recent EU platform regulations rely heavily on fundamental rights to protect individual interests against state and corporate overreach. In the Digital Services Act, the EU’s major upcoming reform of platform regulation, fundamental rights are even more strongly emphasised. In turn, most of the critical scholarship on these regulations judges them according to their compliance with fundamental rights, highlighting ways in which they might offer inadequate protection.
Without rejecting the importance of human rights to democracy, or the benefits of stronger fundamental rights protection in EU social media law, this near-exclusive focus on fundamental rights as the primary normative framework for social media regulation needs to be questioned. Issues like discrimination, stereotyping, surveillance, and arbitrary censorship are concerning in ways that cannot be fully understood or addressed in terms of legal rights, individual harms, and universal values. Relying only on human rights to guide both social media law and academic criticism thereof excludes other normative perspectives that place greater emphasis on collective and social interests. This is deeply limiting – especially for critical scholarship and activism that calls for the law to redress structural inequality.
Social media and social injustice
My own research focuses on an area which is not (yet) a major concern for policymakers in most countries, but is increasingly well-documented by academic and journalistic evidence: the ways platforms organise media content tend to reinforce and exacerbate structural social inequalities. For example, multiple studies and leaked documents indicate that Facebook disproportionately deletes content from women and people of colour, while failing to remove hate speech and harassment targeting these groups. Survey evidence and user accusations suggest similar problems on Instagram, YouTube and TikTok, while studies show that most platforms’ bans on sexual content lead to widespread arbitrary censorship of queer people, sex workers and other marginalised users.
Platforms’ advertising-based business models also reinforce inequalities. They don’t just require continual surveillance, which is inherently riskier for marginalised groups whose data is more likely to be used against them (for example, by law enforcement or lenders). They also require users to be profiled and classified in ways that are often simplistic or demeaning, as when platforms group users only by binary gender. This can produce direct discrimination, for example when job adverts are targeted by race or gender. It can also have more insidious effects, like promoting regressive stereotypes. For example, influencer expert Sophie Bishop points out that YouTube’s top influencer list – which is heavily shaped by its targeting and recommendation algorithms – gives the impression that women are universally interested in makeup and shopping, while men like music and video games.
There is growing public awareness and criticism of how technology can reinforce inequality, and issues like these are increasingly recognised by EU policymakers. So how should the law prevent discrimination and redress inequality? Looking at statements and regulatory initiatives from EU institutions, as well as most legal scholarship in this area, the answer is clear: stronger fundamental rights protection. The consensus is that rights like freedom of expression, privacy, non-discrimination and human dignity represent the core values on which the EU is founded and on which we can all agree. They protect and give shape to the public interest and the interests of vulnerable groups, as a counterweight to the economic interests of platform companies and the dangers of excessive state control.
The problem is that human rights are not just a synonym for everything good in the world. They are a legal framework which offers a particular way of identifying and litigating problems: they protect the interests of identifiable individuals who can point to identifiable actions or decisions that have harmed them. Of course, as advocates of human rights in content moderation have argued, human rights are not only about individual legal entitlements: they also provide a language we can use more broadly to identify shared values and to articulate and discuss problems and solutions.
But in both of these roles – as a legal framework and as a language for political discourse – human rights are not neutral. This point has been made by decades of scholarship in critical legal studies, feminist political theory, history, postcolonial theory, and law and political economy. Human rights structure our thinking and our legal institutions in particular ways, promoting certain values and ideologies, drawing attention to some issues and obscuring others.
The limits of rights frameworks
Thinking about social media in terms of individual rights influences how the law addresses discrimination and inequality. EU platform regulation consistently emphasises legal protections for individual rights as a way to prevent excesses of state and corporate power. For example, the key safeguard against over-broad censorship under the Copyright Directive and Terrorist Content Regulation is that users can appeal to platforms to reinstate their content, and platforms are obliged to have regard to their rights. The Digital Services Act will extend these procedural protections to all content moderation. But as a strategy to promote free and equal online discourse, this is profoundly flawed.
First, the significance of such individual remedies is in practice extremely limited. In copyright law, where they have long been available, studies consistently show that people hardly ever use them, for many reasons: they are time-consuming, intimidating (no one wants to risk being sued by Warner Music) and poorly understood. Where they are used, they will – like most individual legal rights – disproportionately benefit more privileged users who have the time, information and resources to use them. Such rights are also structurally incapable of representing all of the diffuse interests at stake: users whose content is removed might occasionally get it reinstated, but these remedies don’t protect the potentially millions of users prevented from seeing the content, or the broader public interest in free online discourse.
Finally, these individual remedies let people challenge particular decisions, but not the broader principles, systems and biases behind them. For example, queer users whose content is censored could challenge removals of particular posts that clearly don’t violate platform policies, but such appeals procedures don’t let them challenge the reasonableness of banning all sexual content, or the widespread biases that make fellow users and moderators more likely to see queer self-expression as explicit.
More broadly, thinking in terms of individual rights disproportionately directs our attention to issues which fit this framing. The biggest focus for regulators so far has been content moderation, which obviously restricts individuals’ freedom of expression. There has been little attention to equally important issues which bear on equality and inclusion, but are not easily articulated in terms of individual harm. For example, as Anna Lauren Hoffmann argues, questions about how tech platforms shape social and cultural norms – such as by promoting content which reinforces gender stereotypes – are not captured by the conceptual framework of rights and discrimination.
Queer users, people of colour and sex workers also frequently accuse platforms of ‘shadowbanning’ content – continuing to host it, but not showing it to an audience. It’s clearly concerning when platforms systematically suppress marginalised voices, but this also doesn’t fit comfortably within a rights framework. There is no identifiable baseline level of algorithmic amplification that people could have a right to. Platforms’ content recommendations are hugely complex – every user sees different content, ranked differently – and based on secretive, constantly-changing criteria, making it almost impossible for users to prove they were unfairly demoted compared to others.
Moreover, recommendation systems have social impacts even where no identifiable individual is harmed. For example, Zeynep Tufekci has suggested that Facebook promotes ‘feel-good’ content that invites likes and shares over challenging political topics like Black Lives Matter protests. Platforms’ choices about what content to promote are a legitimate topic of political concern, but they aren’t easily discussed in terms of fundamental rights – and consequently, EU law has so far largely ignored them.
Fundamental rights in the DSA
This may change somewhat when the Digital Services Act comes into force, creating a raft of new obligations for the biggest platforms. Among other things, they will have to investigate systemic risks to fundamental rights like non-discrimination and freedom of expression, and take mitigation measures. Thus, fundamental rights will not only have the limited protection of individual remedies – platforms will also be legally obliged to respect them. Recommendation systems and other design decisions with systemic effects are explicitly encompassed by these provisions.
However, while the impact of these provisions remains to be seen, there are reasons to keep our expectations in check. What rights like non-discrimination require in specific situations is often uncertain – EU discrimination law is notoriously complex and context-dependent – and the novel concept of ‘systemic risks’ to rights is even more unclear. What does it mean for someone’s rights to be at risk, and how widespread must this risk be to be ‘systemic’? These questions will be answered, in the first instance, by the platforms themselves, who will decide how they conduct their risk assessments.
Scholarship on corporate compliance with equality and privacy law suggests that when corporations are responsible for identifying and mitigating risks, they typically construct and define those risks in the ways that most align with their own interests and least interfere with their existing business practices. As evelyn douek suggests, if we require platforms to use human rights language without mandating substantive changes, the likely outcome is that they will use this language to showcase superficial reforms and justify business decisions they would have made anyway.
Limited ambitions
This points to a final problem with human rights discourse: it can serve to legitimate harmful state and corporate activities and obscure underlying structural injustices, by focusing attention on decisions that visibly and severely harm individuals and suggesting that, if these went away, everything would be fine. This focus on individual harms systematically distracts attention from background political-economic factors which influence how such decisions are made, and who is most vulnerable to them.
For example, decades of research in the political economy of the media have shown that advertiser-funded media systems strongly influence how content is produced and organised and whose voices are heard – typically favouring elite interests, underserving the working class and promoting feel-good, depoliticised content over political debate and confrontation. Inequalities in social media are crucially shaped by platforms’ business models, which require users to be continually surveilled and classified according to their value as consumers, and incentivise the suppression of controversial or non-mainstream content that might not put people in the mood to shop. If our biggest demand for reform is that social media companies comply with human rights, we overlook these bigger questions about who should own, fund and control online media.
Changing platforms’ business models would require political struggle: it would go against decades of neoliberal internet policies and the interests of some of the world’s most valuable companies – and all their shareholders. The seemingly apolitical, consensual language of fundamental rights obscures this reality, falsely promising solutions that everyone will agree with. But even if platforms respect the outer limits of acceptability that fundamental rights law is designed to provide, they will still primarily be guided by profit and the interests of their real clients, advertisers – not the need to actively create a more inclusive and egalitarian media system. Can’t we imagine more ambitious and progressive aspirations for social media governance?