This article belongs to the debate » The Rule of Law versus the Rule of the Algorithm
29 March 2022

Regulating Recommending: Legal and Policy Directions for Governing Platforms

Digital platforms have strategically positioned themselves as intermediaries between individuals, businesses, organisations, governments, and others. Platform companies frequently adopt business models based around extensively tracking user behaviour and using that information to supply targeted advertising, algorithmically personalise services, and grow user engagement, revenue, and market position. While platform capitalism can be immensely profitable, the problems this brings are increasingly stark. As we have argued elsewhere, it’s time to regulate recommending.

Recommending in Platform Capitalism

To personalise services in pursuit of user engagement with their platforms – a key metric for platforms seeking to maximise the audience for targeted advertising – platform companies typically use recommender systems to decide what their users should see. This allows platforms to algorithmically select the media, products, or services shown to individuals or groups according to the system’s determination of their relevance, interest, or importance to those users – a practice called ‘recommending’. Beyond content-oriented platforms, recommender systems are increasingly found in many other areas to suggest products and services, including in finance, healthcare, and retail, where algorithmic ordering can bring its own problems and complications.

The effect of widespread recommending by platform companies is that much online space is algorithmically constructed. Such is its ubiquity that you’ll likely come across some form of recommending wherever you go online. Google, for example, uses recommender systems across its services, including to personalise search results so as to surface links that bring revenue to Google. Social platforms like YouTube, Facebook, Reddit, Twitter, TikTok, and Instagram use recommender systems to provide a personalised feed of content for each user, as they seek to keep users engaged and drive advertising revenue. Netflix and Spotify use recommender systems to present personalised selections of content to users, as well as recommendations for further content, to keep users watching, listening, and subscribing. Amazon uses recommender systems to predict user interests and induce them to buy products from Amazon rather than elsewhere.

At a non-technical level, we can identify three forms of recommending (illustrated in the brief sketch following this list):

  1. ‘Open recommending’ involves recommendations from a pool of content which is user-generated or aggregated without being specifically selected by the platform (though platforms may include their own content alongside that produced by users). Google, YouTube, Facebook, Reddit, Instagram, TikTok, and Amazon all use open recommending. On YouTube, for example, any user-uploaded video is by default brought into the recommender system.
  2. ‘Curated recommending’ selects from a pool of content which is curated, approved, or otherwise chosen by the platform rather than provided directly by users or automatically brought in from elsewhere. Netflix is a popular example of a curated system, as videos in its library are selected by Netflix (alongside content produced by Netflix itself). Curated recommending is often used where more traditional forms of media requiring licensing are involved, such as music, films, or TV shows.
  3. ‘Closed recommending’ selects from a pool of content produced or commissioned by the platform itself – for example, a news organisation providing its users with a personalised feed of its own stories and articles.
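To make the distinction concrete, the following minimal sketch (in Python, with entirely hypothetical names and structures, not any platform’s actual implementation) shows the three forms as differing only in which pool of candidate content the recommender is permitted to draw from:

```python
from dataclasses import dataclass

@dataclass
class Item:
    id: str
    source: str     # "user", "licensed", or "platform" (hypothetical labels)
    approved: bool  # whether the platform has curated/approved the item

def candidate_pool(items, mode):
    """Illustrative only: the three forms differ in which items may be recommended."""
    if mode == "open":
        # Anything uploaded or aggregated is eligible by default (e.g. user-uploaded videos).
        return items
    if mode == "curated":
        # Only items the platform has licensed, approved, or otherwise chosen.
        return [i for i in items if i.approved]
    if mode == "closed":
        # Only the platform's own or commissioned content.
        return [i for i in items if i.source == "platform"]
    raise ValueError(f"unknown mode: {mode}")

pool = [
    Item("v1", source="user", approved=False),      # user upload
    Item("v2", source="licensed", approved=True),   # licensed film chosen by the platform
    Item("v3", source="platform", approved=True),   # platform's own content
]
print([i.id for i in candidate_pool(pool, "open")])     # ['v1', 'v2', 'v3']
print([i.id for i in candidate_pool(pool, "curated")])  # ['v2', 'v3']
print([i.id for i in candidate_pool(pool, "closed")])   # ['v3']
```

The key point the sketch makes is structural: in open recommending, material enters the candidate pool by default, without any prior selection by the platform.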

Open recommending is the biggest contributor to systemic societal problems (such as disinformation, violent extremism, and hate speech) as well as to potential harms of a more individual nature (such as those around mental health). There is substantial and growing evidence from academic research, journalistic investigations, and elsewhere that open recommending plays a major role in the spread of material promoting far-right extremism,1) disinformation,2) and conspiracy theories3) (including anti-vaxxer conspiracy theories, even before the Covid-19 pandemic). The metrics and rankings underpinning recommending can also be manipulated and inflated by automated accounts (bots) intended to push such content to a larger audience artificially.

Engagement and its Consequences

These and other problems relate fundamentally to the role of recommending in platform companies’ business models. Though platform companies generally frame recommending as personalising services to benefit users by showing them what they want to see, recommending ultimately serves the interests of the companies themselves: encouraging users to stay engaged with their platform or to make a purchase, bringing revenue, and building market position. Platform companies are not lying when they say that they use recommender systems for personalisation, but that framing obscures the purpose of personalisation: showing people the content the platform predicts will bring the greatest user engagement. Put another way, rather than showing people what they want to see, recommending shows people what the platform wants them to see (though, of course, these may often overlap).

Prioritising engagement can produce two interrelated feedback loops, on either side of platforms, that together distort online spaces. One feedback loop involves users being shown content related to that which they and other ‘similar’ users have previously viewed, liked, or shared. The more users interact with that content, the more likely it is to be recommended in the same way, potentially resulting in more viewing, liking, and sharing (and thus engagement with the platform), leading the system to promote that and similar content further still. The second feedback loop involves producers – or creators – of content, whose revenue from platforms’ monetisation programmes is tied to engagement metrics. This incentivises production of the controversial, polarising, shocking, or extreme content that recommending prioritises. As the recommender system and the user-side feedback loop inflate engagement metrics, the incentive to produce this kind of content grows further.
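The user-side loop can be illustrated with a deliberately toy simulation (a sketch under assumed dynamics, not a model of any real platform): when items are recommended in proportion to the engagement they have already accumulated, content with even a modest ‘engagingness’ advantage compounds that advantage over time.

```python
import random

# Toy simulation of an engagement-driven feedback loop (illustrative assumptions only).
# Each item has a fixed probability that a shown user engages with it; the recommender
# shows items in proportion to the engagement they have already accumulated, so early
# engagement compounds.

random.seed(0)
items = {"measured_report": 0.02, "shocking_claim": 0.08}  # probability a shown user engages
engagement = {name: 1 for name in items}                   # start with equal, minimal engagement

for step in range(50_000):
    total = sum(engagement.values())
    # Recommend proportionally to accumulated engagement (the feedback loop).
    r = random.uniform(0, total)
    shown = "measured_report" if r < engagement["measured_report"] else "shocking_claim"
    if random.random() < items[shown]:
        engagement[shown] += 1

print(engagement)  # the more 'engaging' item ends up recommended, and engaged with, far more often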

Platform companies may argue that their systems are content neutral – that they don’t actively select for controversial or shocking material. They’ll also say that the roots of these issues lie far beyond platforms, in socio-economic or political causes. Both claims are, broadly speaking, true. But content neutrality in theory, combined with engagement as a priority in practice, produces undesirable effects: controversial, shocking, or potentially harmful content is often the most ‘engaging’. Theoretical content neutrality thus often means systematically promoting material with a corrosive effect on society.

The Regulatory State of Play

Despite these systemic problems with open recommending, policy and regulatory discussions have generally focused more on what users say, the content they produce and share, and how they interact with one another. Yet regulating users’ content and interactions brings significant freedom of expression concerns that cannot easily be addressed, as well as difficult questions about why actions that would be legal offline should be illegal online. Moreover, individual items of content often aren’t really a public policy problem. Arguably, it doesn’t matter much if a video promoting an extremist conspiracy theory is uploaded to YouTube if it’s only seen by 10 people. What matters is whether that video reaches a large audience and whether it is placed in a context that reinforces it.

If a platform’s recommender system works to inflate the audience for the conspiracy theory video such that it is seen by 10 million people, then that is more of a concern. If the recommender system places it alongside other, similar content that together works to reinforce or legitimise its message, then that is potentially a problem worthy of intervention. Regulatory efforts against content itself are therefore often misguided, both on freedom of expression grounds and because they target the wrong thing. Instead, the dissemination of content – how it is given audience and context by platforms’ recommender systems – is arguably a more pressing concern.

Regulatory proposals have thus far failed to adequately address recommending, however. Much has been made of the European Commission’s proposals for digital issues in three key areas: the Digital Services Act (DSA), the Digital Markets Act (DMA), and the Artificial Intelligence Act (AI Act). Though the DMA does include provisions against self-preferencing (the practice of platforms recommending their own products and services ahead of others), the DSA doesn’t go much beyond lukewarm risk assessments and minimal transparency obligations for platforms’ recommender systems. And, though recommenders often involve some ‘AI’, platforms’ recommending is generally outside the AI Act’s obligations for providers of ‘high-risk’ AI systems. Beyond the EU, the UK’s Online Safety Bill has similarly received much attention, yet its largely incoherent proposals ultimately focus too much on controlling users and their communications – even in the guise of regulating platform companies – and risk doing more harm than good.

Some Directions for Regulation

To confront some of these problems with recommending, we make several regulatory proposals. First, chronological or other non-selective ordering of content feeds should be the default, with algorithmic recommending available to users only on an opt-in basis. It should be at least as easy for users to opt back out of algorithmic feeds as to opt in. Platform companies should have specific, detailed transparency requirements around recommending user content, with obligations to share information with users, regulators, and others, as appropriate.

We recognise that the problems with recommending are systemic in nature, inherent to platform companies’ business models, and cannot be addressed by individual users. In any case, opt-in mechanisms and user controls are limited solutions that will be manipulated by platform companies, much as they manipulate data protection controls to ensure access to users’ personal data for their targeted advertising systems. The law therefore needs to go further than simply offering users choice and control, to also provide substantive protections for individuals and society.

To that end, platform companies must be required to put other priorities ahead of engagement. They should face specific obligations to reduce their algorithmic dissemination of certain material – such as material containing hate speech, dangerous conspiracy theories, violent extremism, or incitement to violence, or promoting suicide, self-harm, or eating disorders – by excluding it from their recommender systems. Detecting this kind of material is difficult, so occasional failures are to be expected; where classifications are uncertain, platforms should err on the side of excluding material from recommending. But where platform companies show themselves to be systematically incapable of undertaking open recommending responsibly by minimising the algorithmic dissemination of this kind of material, or repeatedly unwilling to do so, they should simply be prohibited from open recommending at all. Where they continue despite such a prohibition, they should face fines and other penalties.
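In implementation terms, such an obligation amounts to a filter sitting between content classification and recommendation. The following is a minimal sketch with hypothetical category names, scores, and thresholds – not any platform’s actual pipeline – and, to be clear, exclusion from recommending is not removal of the content itself:

```python
# Illustrative sketch only: a pre-recommendation filter that excludes items a classifier
# flags for categories a platform is obliged not to disseminate algorithmically.
# Category names, scores, and thresholds are hypothetical.

EXCLUDE_ABOVE = 0.7   # confident classification: exclude from recommending
REVIEW_ABOVE = 0.4    # uncertain band: how to treat it is a policy choice

def recommendable(item_scores: dict[str, float], exclude_uncertain: bool = True) -> bool:
    """Return True if an item may enter the recommender's candidate pool."""
    worst = max(item_scores.values(), default=0.0)
    if worst >= EXCLUDE_ABOVE:
        return False                  # clearly within a prohibited category
    if worst >= REVIEW_ABOVE:
        return not exclude_uncertain  # uncertain classification
    return True

# Example: scores from a (hypothetical) classifier for one item
scores = {"hate_speech": 0.55, "violent_extremism": 0.10}
print(recommendable(scores))  # False under the default policy of excluding uncertain items
```

The `exclude_uncertain` parameter marks the policy question discussed above: whether uncertainly classified material stays out of the candidate pool or is recommended anyway.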

Crucially – because platform companies often don’t do much until the incentives or risks push them far enough – content recommended while under such a prohibition should fall outside any content-related liability shielding (under the E-Commerce Directive or the forthcoming DSA). The basic principle is that the law should not shield a company from liability for doing something that the law itself prohibits. There should also be other enforcement actions building up to prohibition (in much the same way as under the GDPR), as well as powers for regulators to audit and review the design, development, use, and functioning of platforms’ algorithms and to require other detailed information from platform companies about their systems and processes.

Consequences and Context

Of course, there are still freedom of expression concerns raised by regulating recommending in this way. But they are quite different to – and arguably less serious than – the kinds of freedom of expression problems with directly regulating users’ content and interactions. In effect, the distinction is between regulating what users say and how they relate to one another, on one hand, and, as we propose, regulating how platforms choose what to show to users, on the other. People as users of platforms must remain free to say and do undesirable things online as they are offline, within the bounds of the right to freedom of expression, but there is no good reason why a platform’s algorithms should artificially inflate their audience.

Our proposals set a high bar for platform companies and their systems to clear. Yet none of this would mean platforms couldn’t in principle continue to recommend to users (on an opt-in basis), though they would need to pay more attention to potential harms than to engagement, and recommending would potentially carry greater risk for them. However, it’s unlikely that most platforms, as they currently are, would or could comply with what we propose today, and many might find themselves prohibited from open recommending. Some platform companies may need to substantially reconfigure their platforms and their services. It might even be the case that some leading platforms can never meet this standard. Even if they could, there would undoubtedly be a hit to platform companies’ bottom lines, and they may say that they simply process too much content to be able to recommend responsibly. Nevertheless, this is the minimum we should insist on.

Platform companies that have strategically sought positions of significance and influence cannot now shirk the responsibilities that should come with it. Nor should the interests of their private profit come before the overriding interests of society and the public good. If platforms process too much information to meet their responsibilities, then they should process less. If they are too big, then they should be smaller. If their service moves too fast, then it should slow down. If their recommender systems cannot be used responsibly, then they should not be used at all.

While regulating platform companies’ recommending is necessary, it is in no way sufficient to address all the problems with platforms. Ultimately, these complex problems require multifaceted responses from a range of legal and policy areas to intervene against platform companies’ business models and the online platform ecosystem more generally: competition law, data protection and governance, structural changes towards decentralisation, better mechanisms for oversight and audit, and more.

Of course, most of the problems with recommender systems and online platforms are rooted deep in social, political, and economic problems and are unlikely to be solved simply by regulating platform companies. For that, structural social, political, and economic reform is needed, far beyond platforms, to rebuild societies and communities. Nor should we ignore the outsized role of traditional media outlets in seeding and spreading disinformation and conspiracy theories. Indeed, it would be facile to claim that regulating recommending can solve society’s problems. But we should recognise the role that platforms play in exacerbating those problems and act, before it is too late.

References

1 Derek O’Callaghan, Derek Greene, Maura Conway, Joe Carthy, Pádraig Cunningham (2014) ‘Down the (White) Rabbit Hole: The Extreme Right and Online Recommender Systems’, Social Science Computer Review, 33(4), pp.459-478. Available at https://doi.org/10.1177%2F0894439314555329; Mark Ledwich and Anna Zaitsev (2020) ‘Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization’, First Monday, 25(3). Available at https://firstmonday.org/ojs/index.php/fm/article/view/10419; Joe Whittaker, Sean Looney, Alastair Reed, and Fabio Votta (2021) ‘Recommender systems and the amplification of extremist content’, Internet Policy Review, 10(2). Available at https://doi.org/10.14763/2021.2.1565; Jonas Kaiser (2018) ‘How YouTube helps to unite the Right’, Alexander von Humboldt Institute for Internet and Society – Digital Society Blog. Available at https://www.hiig.de/en/how-youtube-helps-to-unite-the-right; Mozilla Foundation (2021) ‘YouTube Regrets: Report’. Available at https://foundation.mozilla.org/en/youtube/findings; Kelly Weill (2018) ‘How YouTube Built a Radicalization Machine for the Far-Right’, The Daily Beast. Available at https://www.thedailybeast.com/how-youtube-pulled-these-men-down-a-vortex-of-far-right-hate; Morgan Keith (2021) ‘From transphobia to Ted Kaczynski: How TikTok’s algorithm enables far-right self-radicalization’, Business Insider. Available at https://www.businessinsider.com/transphobia-ted-kaczynski-tiktok-algorithm-right-wing-self-radicalization-2021-11
2 Soroush Vosoughi, Deb Roy, and Sinan Aral (2018) ‘The spread of true and false news online’, Science, 359(6380), pp.1146-1151. Available at https://doi.org/10.1126/science.aap9559; Samantha Bradshaw and Philip N Howard (2018) ‘Why Does Junk News Spread So Quickly Across Social Media? Algorithms, Advertising and Exposure in Public Life’, Oxford Internet Institute / Knight Foundation. Available at https://comprop.oii.ox.ac.uk/research/working-papers/why-does-junk-news-spread-so-quickly-across-social-media; Elise Thomas (2021) ‘Recommended Reading: Amazon’s algorithms, conspiracy theories and extremist literature’, Institute for Strategic Dialogue. Available at https://www.isdglobal.org/isd-publications/recommended-reading-amazons-algorithms-conspiracy-theories-and-extremist-literature; The Royal Society (2022) ‘The online information environment: Understanding how the internet shapes people’s engagement with scientific information’. Available at https://royalsociety.org/topics-policy/projects/online-information-environment; Paul Lewis (2018) ‘“Fiction is outperforming reality”: how YouTube’s algorithm distorts truth’, The Guardian. Available at https://www.theguardian.com/technology/2018/feb/02/how-youtubes-algorithm-distorts-truth
3 John C Paolillo (2018) ‘The Flat Earth Phenomenon on YouTube’, First Monday, 23(12). Available at https://firstmonday.org/ojs/index.php/fm/article/view/8251/7693; Julia Carrie Wong (2019) ‘How Facebook and YouTube help spread anti-vaxxer propaganda’, The Guardian. Available at https://www.theguardian.com/media/2019/feb/01/facebook-youtube-anti-vaccination-misinformation-social-media; Matt Reynolds (2019) ‘Think Facebook has an anti-vaxxer problem? You should see Amazon’; Julia Carrie Wong (2020) ‘Down the rabbit hole: how QAnon conspiracies thrive on Facebook’, The Guardian. Available at https://www.theguardian.com/technology/2020/jun/25/qanon-facebook-conspiracy-theories-algorithm; Marc Faddoul, Guillaume Chaslot, and Hany Farid (2020) ‘A longitudinal analysis of YouTube’s promotion of conspiracy videos’, arXiv preprint. Available at https://arxiv.org/abs/2003.03318; Elise Thomas (2021) ‘Recommended Reading: Amazon’s algorithms, conspiracy theories and extremist literature’, Institute for Strategic Dialogue. Available at https://www.isdglobal.org/isd-publications/recommended-reading-amazons-algorithms-conspiracy-theories-and-extremist-literature

SUGGESTED CITATION  Cobbe, Jennifer; Singh, Jat: Regulating Recommending: Legal and Policy Directions for Governing Platforms, VerfBlog, 2022/3/29, https://verfassungsblog.de/roa-regulating-recommending/, DOI: 10.17176/20220329-131208-0.
