The debate about social media contracts has emphasised the power of Big Tech giants to set their own standards and rules to govern their digital spaces. It is not surprising, then, that social media contracts have primarily been discussed from the perspective of the processing of personal data for targeted advertising, or as instruments to set standards of content moderation. Despite the relevance of these points, this focus diverts attention from the users themselves, particularly those capable of monetising content within their communities, such as influencers, including by spreading harmful content.
The possibility of profiting from the creation and sharing of online content has driven the growth of content monetisation practices, which are increasingly led by influencers, cultural entrepreneurs or, more generally, professional content creators. Like corporations and governments as users, and unlike average users, influencers do not enter platforms solely with the spirit and expectation of finding a community to share opinions or ideas with. Rather, influencers spend most of their time in these digital spaces monetising their online presence, often learning how to exploit the logic of social media to increase their own profit. From traditional monetisation schemes, such as sponsored ads, to new sources of income generated by social media, influencers aim to capture users’ attention and engagement through their content. Towards this end, harmful content, such as disinformation, can be instrumentalised to maximise views and engagement, and therefore monetisation, as underlined by the case of disinformation for hire about Covid-19 vaccines.
The possibility of profiting from the dissemination of harmful content that triggers views, engagement, and ultimately monetisation does not only concern the contractual relationship between social media and influencers. It also affects how other users enjoy digital spaces and expect to be protected based on the commitments of social media to tackle the spread of harmful content, for instance by demoting or removing it. The monetisation of harmful content by influencers should be a trigger, first, to expand the role of consumer law as a form of content regulation fostering transparency and, second, to propose a new regulatory approach to mitigate the imbalance of power between influencers and users in social media spaces.
Harmful But Not Illegal
The answer to the challenges raised by the spread of harmful content is not straightforward. The monetisation of content is not only about making profits from creating and sharing it; it also concerns the exercise of constitutional rights, primarily freedom of expression. Even when sharing disinformation, users are still exercising their right to freedom of expression without violating legal norms. Likewise, spreading hateful statements against unprotected groups may be unpleasant but not necessarily illegal. More generally, what is morally wrong is not necessarily illegal, thus supporting “a right to do wrong”. This constitutional approach leads to tolerating even unpleasant or unkind statements, as underlined in Handyside v. The United Kingdom, and it would also extend to influencers. As a result, harmful content does not always qualify as illegal content.
Still, these considerations do not mean that legal content cannot be harmful. The exploitation of free speech for the purposes of content monetisation challenges the protection and limits of that very constitutional right. Relying on harsh statements to stimulate more engagement, and therefore more revenue, need not always qualify as criminal conduct, but it may still make social media spaces less safe for users. Likewise, promoting a certain diet or lifestyle is not illegal per se, but it could cause users distress and foster addiction. Even influencing public opinion by relying on political speech to hide content monetisation strategies does not qualify as illegal conduct, but it can be harmful to democracy.
These examples underline the critical question raised by the monetisation of harmful content on social media: should constitutional democracies, and social media contracts, tolerate content monetisation strategies based on the sharing of harmful content? Generally, constitutional democracies reject the idea of profiting from illegal conduct, such as selling weapons or drugs, and social media have indeed introduced policies on the monetisation of illegal or harmful content, as underlined by the case of YouTube.
This challenge has captured the attention of policy-makers. At least from a European perspective, approaches to tackling illegal content have been complemented by an increasing responsibilisation of social media to also address harmful content. The Digital Services Act, for example, expressly refers to harmful content when it comes to online advertising as a source of “significant risks, ranging from advertisements that are themselves illegal content, to contributing to financial incentives for the publication or amplification of illegal or otherwise harmful content and activities online, or the discriminatory presentation of advertisements with an impact on the equal treatment and opportunities of citizens” (Recital 68). Likewise, the amendments to the Audiovisual Media Services Directive require video-sharing platforms to take appropriate measures to address content that could impair minors and the general public (Article 28b). Outside the Union, the United Kingdom is still discussing its Online Safety Bill, which focuses on online harms and the duties of care of online platforms to tackle such content.
Commercial But Not Political
The challenges of content monetisation are not limited to whether social media contracts should reject the economic exploitation of harmful (but legal) content; they also concern whether the exploitation of political speech for (hidden) commercial purposes should be tolerated. The spread of harmful content by influencers requires looking at these actors as not neutral or, at least, not equal to other users. When opinions, or more generally speech, are potentially polluted by economic interests, the primary challenge is not only to ensure that incentives are disclosed, but also to avoid the exploitation of harmful content for the purposes of content monetisation.
Not all users are equal in social media spaces, just as they are not in the offline world. Corporations and governments can exercise broader influence than a single individual user. Likewise, political figures enjoy broader protection of their speech and a lower expectation of privacy. Media outlets participate in social media spaces as users, even if they differ from other users in being professionally active in reporting events. This framework also applies to influencers who, for instance, pursue climate change activism (i.e. eco-influencers), thus engaging with political speech and addressing topics in the public interest. These users, nevertheless, play a different role in democratic societies compared to states, corporations or media outlets. For instance, media outlets create content to pursue a public interest function that is often regulated and driven by professional standards. Influencers are also content creators, but their (political) speech, and monetisation schemes, can be driven by purposes defined by their sponsors or by plain economic interests.
Such a lack of distinction is particularly relevant when looking at the increasing engagement of influencers in public interest subject areas. Users are not always aware of the commercial nature driving certain political speech by influencers. For instance, the sponsorship of a certain product could be concealed behind a discussion of that product’s sustainability. Likewise, the same approach could involve unfair commercial practices, for example when businesses pay influencers to negatively compare the quality of competitors’ products and services. The blurring of the lines between commercial speech and political speech is not new, but it is potentially harmful to democracy. Indeed, constitutional democracies struggle with regulating commercial speech when it is drawn into the scope of political speech (the so-called magnetic effect), as underlined by the European proposal for a Regulation on the transparency of political advertising and the European Media Freedom Act.
European institutions have partially considered the different roles and positions of users in social media spaces. The Digital Services Act introduces a new approach to content moderation. In particular, it shapes contractual relationships, for instance by introducing the new mechanism of trusted flaggers: users with recognised expertise in a certain area. According to Article 22, online platforms shall take the necessary steps to ensure that notices submitted by trusted flaggers are prioritised. In this way, the Digital Services Act recognises that some actors are different from other users, thus giving priority to some complaints over others. Even though the case of trusted flaggers does not directly limit the monetisation of harmful content, it points to a potential way of distinguishing users in social media spaces.
Free But Not Fair
The challenges raised by monetisation in social media spaces lead to the question of whether contract law can provide a solution to limit the spread of harmful content. This means reflecting on the relationship between contractual freedom and fairness, which goes beyond the single vertical relationship between the user and the platform. Indeed, the monetisation of harmful content touches on how, from a horizontal perspective, social media can ensure the safe use of their services. In other words, the perspective moves from the individual to the collective dimension of social media spaces. Contractual obligations between users and social media are not only related to the bilateral user-platform relationship, but are also connected to the contracts of other social media users, who expect these platforms to provide a safe environment.
Social media contracts are expressions of contractual freedom: of social media when designing their standards, and of users when deciding to participate in social media spaces. The monetisation of harmful content shifts the perspective from freedom to fairness, highlighting the potential role of social media contracts in protecting users exposed to online harms driven by content monetisation practices. It would indeed be unfair if users were subject to harmful content in social media spaces while other users, primarily influencers, profited from it through content monetisation practices.
This unfairness in social media contracts can be considered a trigger for the Union to protect users’ rights in social media spaces, also considering the role of consumers in the European Charter of Fundamental Rights. Consumer law can also be considered a form of content regulation. By setting standards and obligations, it shapes how content and procedures should look to consumers in social media spaces. This approach can also be extended to social media contracts and their relationship with influencers. For instance, mandatory rules on transparency can lead social media contracts to provide more information about the moderation and monetisation of influencers’ content. This approach could also limit the imbalances between social media and influencers. Therefore, consumer law can provide critical guidance for social media to design tailored contracts that reflect fairness in the multi-layered governance of their digital spaces.
Nonetheless, consumer law also faces challenges when dealing with the relationship between users and influencers. One of the primary questions is whether users should be considered consumers not only in relation to social media, but also in relation to influencers. The relationship between influencers and users is not always a matter of consumer law, considering that influencers do not always fall within its scope. As a result, a solution could be based on a different approach. For instance, the platform-to-business regulation can provide a regulatory model for adjusting imbalances of power in social media spaces, looking not only at the vertical relationship between social media and users, but also at the relationship between influencers and users.
This framework defines a trajectory for limiting the monetisation of harmful content in social media spaces by expanding the role of consumer law as an instrument to mitigate contractual unfairness between influencers and users. It can shape a new regulatory approach to horizontal contractual relationships on social media, leading platforms to address the spread of harmful content not only in the relationship between influencers and average users, but also in the user contracts of privileged influencer users.