Algorithm Centrism in the DSA’s Regulation of Recommender Systems
The regulation of recommender systems is often framed as an issue of algorithmic governance. In this post I argue that this focus on recommender algorithms can be restrictive, and show how recommender systems might be regulated in a broader sense. This systemic view pays closer attention to recommendation outputs (i.e. recommendations) and inputs (i.e. user behavior), and not just to processing logics.
This post builds on arguments developed in my 2020 paper ‘The Soap Box as a Black Box’, published in the European Journal of Law and Technology. After revisiting my basic critique of algorithm-centric regulation, I apply it to the newly proposed Digital Services Act (DSA), the EU’s first major attempt to regulate recommending. As we will see, some of its key provisions on recommender systems (Articles 29 and 15) are open to a critique of algorithm centrism. Others show a more promising, systemic approach (e.g. Articles 26-27 and 30).
Opacity of Outputs: How Personalization Obscures Recommendation Outcomes
The dominant metaphor of algorithmic governance is that of a ‘black box’: a process whose inputs and outputs may be known, but whose internal algorithmic logics remain secret. This problem of ‘algorithmic transparency’ garnered attention with the rise of machine-learning technologies, whose algorithms defy human comprehension due to their extreme complexity as well as intentional corporate secrecy (see Burrell 2016). Fields as disparate as medicine, judicial sentencing, and media governance now face this problem of algorithmic transparency: is it technically, legally, and commercially feasible to peer inside the black box and understand the algorithms that govern us?
Most recommender systems are black boxes (excepting, for instance, relatively simple ranking logics such as reverse chronology). But I want to emphasize that recommendations are also opaque in other ways. The black box metaphor does not go far enough, because with recommender systems not only the algorithm but even the basic outputs are obscure. This point may not be intuitive. After all, individual users can still see what recommendations (i.e. outputs) they receive. But since these recommendations are personalized to each user, the aggregate outputs remain opaque at a systemic level. With much effort, dedicated researchers can try to cobble together a systemic understanding by surveying large samples of users, but platforms have thwarted these efforts with technical and legal obstruction (as Amélie Heldt, Mattias Kettemann and I discussed in an earlier Verfassungsblog post here). The result, in short, is that we lose the collective ability to see what others are seeing. Leaving aside why recommender systems show the public what they do, we can barely even grasp what they are showing.
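A toy example may help make this concrete. In the hypothetical sketch below, each user can inspect their own personalized feed, but the systemic picture (which posts reached how many users) only emerges from the full table of feeds, data that the platform alone holds. All user and post names are invented for illustration.

```python
# Hypothetical illustration: personalised outputs are only legible in aggregate.
from collections import Counter

personalised_feeds = {            # platform-side data: one ranked feed per user
    "user_1": ["post_a", "post_b"],
    "user_2": ["post_a", "post_c"],
    "user_3": ["post_c", "post_a"],
}

# Any individual user sees only their own row:
print(personalised_feeds["user_1"])        # ['post_a', 'post_b']

# The systemic output (aggregate reach) is only visible with the full table:
reach = Counter(post for feed in personalised_feeds.values() for post in feed)
print(reach.most_common())                 # [('post_a', 3), ('post_c', 2), ('post_b', 1)]
```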
Complexity of Inputs: Recommenders as Complex Sociotechnical Systems
The black box metaphor also risks understating the complexity of user behavior as a primary input in recommender systems. There is great complexity not only in the algorithmic processing of users’ behavior, but also in the basic scale, variance and dynamism of this behavior. Even with relatively simple algorithms such as reverse chronology, the human factor can make recommenders deeply complex and unpredictable. (Sophisticated recommenders can incorporate countless other types of input besides, but here I focus on user behavior as the dominant influence in most commercial, engagement-optimized systems.) Once we appreciate the significance, complexity and agency of users within the sociotechnical process of recommending, the algorithm becomes less central to diagnoses and solutions.
Users influence recommenders in various ways. For starters, users upload the content available for recommendation. Users also furnish crucial engagement signals such as clicking, liking, upvoting, commenting, following, subscribing, and so forth, which teach the algorithm what content to prioritize. What makes these algorithm-user interactions truly complex is that, over time, they influence each other mutually and recursively: user behavior serves as an input for machine-learning models to learn from and adapt to, and users are in turn shaped in their habits, routines, and networks by the algorithm’s offerings. Experts warn that popular understandings tend to overestimate the control exercised by algorithms, and underestimate that of user communities (e.g. Rieder, Matamoros-Fernández & Coromina 2018; Munger & Phillips 2020).
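A deliberately simplified sketch may illustrate this recursion. The signal names, weights and the random ‘user’ below are invented for the purpose of illustration; no actual platform’s code or parameters are implied. The point is only that what gets engaged with gets recommended, and what gets recommended is what users can engage with next.

```python
# Minimal, hypothetical sketch of an engagement-driven feedback loop.
import random
from collections import defaultdict

# Invented engagement weights, chosen for illustration only.
ENGAGEMENT_WEIGHTS = {"click": 1.0, "like": 2.0, "comment": 3.0, "share": 4.0}

scores = defaultdict(float)  # item_id -> running relevance score

def record_engagement(item_id: str, signal: str) -> None:
    """User behaviour feeds the ranker: each signal nudges the item's score."""
    scores[item_id] += ENGAGEMENT_WEIGHTS.get(signal, 0.0)

def recommend(candidates: list[str], k: int = 3) -> list[str]:
    """The ranker feeds back into user behaviour: high-scoring items surface first."""
    return sorted(candidates, key=lambda item: scores[item], reverse=True)[:k]

# One run of the recursive loop: early, partly random engagement becomes
# entrenched over time, producing a path-dependent outcome.
catalogue = ["post_a", "post_b", "post_c", "post_d"]
for _ in range(100):
    shown = recommend(catalogue)
    engaged = random.choice(shown)   # users mostly engage with what they are shown
    record_engagement(engaged, random.choice(list(ENGAGEMENT_WEIGHTS)))

print(recommend(catalogue))
```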
One important takeaway from this sociotechnical view of recommendations is that we should not be overly optimistic about our capacity to discern causality, much less corporate intent, when linking observed recommendation outcomes to specific algorithmic design choices. Even the engineers who design these systems, with unfettered access to the relevant data, have difficulty grasping the full effect of their choices. The recursive interactions between algorithms and users make for a complex, path-dependent system, in which the bare algorithmic logics never tell the full story. Ideally, in the long term, close scrutiny of algorithms would allow us to understand recommender design choices and hold platforms accountable for them. For the time being, by necessity, it may be more feasible to hold platforms responsible not for algorithmic logics but for systemic outcomes. For this, the essential first step remains an adequate view of recommendation outputs, and how these change over time.
Another takeaway for regulators is that recommender system outcomes are shaped not only by the ranking algorithm per se, but by the broader array of features and affordances that shape content production and engagement. For instance, platforms must make crucial design choices about whether to permit only positive engagements (e.g. ‘liking’ or ‘upvoting’) or also negative engagements (e.g. ‘disliking’ or ‘downvoting’). When the issue is framed as one of algorithms, such input features may be taken as a given, and the debate devolves into haggling over their relative priority. For instance, Facebook’s News Feed algorithm has been criticized for giving greater priority to ‘angry’ reactions than to positive reactions such as ‘likes’. But this algorithm-centric critique risks overlooking the more fundamental, constitutive design question of whether Facebook ought to have added emotional reactions in the first place. A shift in focus from algorithms to recommender systems brings these constitutive choices back into view: the creation and selection of recommendation inputs, and how platforms afford user content production and engagement.
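The distinction between the two levels of design choice can be made concrete with a small, hypothetical sketch: the algorithm-centric debate tweaks the weights attached to existing signals, while the constitutive choice determines which signals exist as inputs at all. The weights and reaction names below are invented for illustration and are not Facebook’s actual values.

```python
# Algorithm-centric debate: haggling over the relative weights of existing signals.
reaction_weights = {"like": 1.0, "angry": 5.0}   # critique: why weight 'angry' higher?

# Constitutive design choice: whether the 'angry' signal exists as an input at all.
available_reactions = {"like"}                   # a platform without emotional reacts
# available_reactions = {"like", "angry"}        # a platform that affords them

def engagement_score(reactions: dict[str, int]) -> float:
    """Only signals the platform chose to afford can ever enter the ranking."""
    return sum(reaction_weights[r] * n
               for r, n in reactions.items()
               if r in available_reactions)

print(engagement_score({"like": 10, "angry": 3}))  # angry reacts simply don't count here
```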
Algorithm-centrism in the DSA
The proposed Digital Services Act (DSA) would regulate recommender systems through several duties, many of which reflect the same preoccupation with algorithms over outputs. Several of its provisions, I believe, are open to a critique of algorithm-centrism, including its rules on recommendation audiences (Article 29 DSA) and uploaders (Articles 15 & 17). A more promising systemic approach can, however, be seen in the rules on systemic risks (Articles 26 & 27) and ad archives (Article 30).
Articles 15 & 17 on Statements of Reasons and Appeals
Whilst Article 29 DSA (discussed below) protects users in their role as recommendation audiences, recent amendments would also protect users as recommendation candidates. Both the Parliament and Council versions of Article 15 DSA, on the “statement of reasons”, demand that intermediaries notify and explain to users their content moderation decisions, including modifications to the visibility of content such as “demotion”, also known as downranking. Relatedly, the revised versions of Article 17 DSA would allow users to appeal such content moderation decisions. One might say these rules resemble a form of due process for downranking: duties to give reasons and to hear appeals.
Articles 15 and 17 speak to a very real concern. Undisclosed and unaccountable suppression of user content, often referred to as ‘shadow banning’, is feared by many social media users. But I worry that an algorithm-centric perspective may undermine the DSA’s proposed solutions. The essential problem I see is that downranking or “demotion” can occur in many different ways, in terms of its algorithmic operationalization, making it difficult to define effectively for due process purposes. Recommender governance involves countless design choices that affect user ranking – which of them are so significant that they demand due process? The line is not easily drawn. The archetypical case of demotion might involve a human moderator singling out a specific account or post and imposing a substantial percentage-point reduction (let’s say, 50%?) in its likelihood of being recommended. But demotion can also be effectuated through more generic and indirect measures, such as blacklisting certain phrases, hashtags, formats, outgoing URLs, and so forth. Such measures can affect a relatively small group of users, or many millions across the entire service. Ought these users then be notified? Another way of putting the problem is to ask which aspects of the ranking algorithm are part of the ordinary ranking routine, and which constitute a downranking intervention. As long as we cannot agree on a baseline of normal or ordinary treatment – quite a challenge indeed in the technically complex and politically fraught field of recommendation governance – a definition of ‘downranking’ may remain just as elusive.
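To illustrate why the line is hard to draw, consider a hypothetical sketch of two ways ‘demotion’ might be operationalized inside a ranking function. The item IDs, multipliers and blacklisted phrase are invented for illustration; the point is only that both code paths lower visibility, yet only one resembles a discrete ‘decision’ about a specific piece of content.

```python
# Hypothetical sketch: two operationalisations of 'demotion'.
DEMOTED_ITEMS = {"post_123": 0.5}          # archetype: a targeted 50% reduction
BLACKLISTED_PHRASES = {"miracle cure"}     # generic measure: hits unknown many users

def adjusted_score(item_id: str, text: str, base_score: float) -> float:
    score = base_score
    # Targeted intervention: clearly 'demotion', arguably owed a statement of reasons.
    score *= DEMOTED_ITEMS.get(item_id, 1.0)
    # Generic intervention: also lowers visibility, but is it a 'decision' about
    # this user's content, or just part of the ordinary ranking routine?
    if any(phrase in text.lower() for phrase in BLACKLISTED_PHRASES):
        score *= 0.2
    return score

print(adjusted_score("post_123", "ordinary text", 1.0))         # 0.5 (targeted demotion)
print(adjusted_score("post_999", "try this miracle cure", 1.0)) # 0.2 (generic measure)
```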
Due process rights for downranking aren’t just technically challenging. They also risk overlooking the more fundamental problem in which concerns about ‘shadow banning’ are rooted: the general lack of information as to how users are positioned in recommender systems.1) At present, most users receive scant information besides aggregate view counts (the exception, of course, being advertisers who pay good money for the privilege of receiving detailed analytics). This lack of information makes it exceedingly difficult for users to determine whether they are being disadvantaged at all. Are they being recommended often, or are users finding them through other channels such as subscriptions, external links or web searches? These facts are of course known to the platform, but they are kept as a resource to be monetized, hidden from the very users they concern. An influencer quoted in recent reporting by Josephine Lulamae for AlgorithmWatch goes to the core of the matter: “Why do they make it so difficult for us to see how our content is performing?” The DSA does not address this more basic uncertainty; it dives directly into the arcane complexities of specific ranking logics and foregoes the basic, prior question of outputs.
Article 29 on Recommender Systems
Article 29 of the proposed DSA, titled ‘Recommender Systems’, requires Very Large Online Platforms (VLOPs) to inform users about their recommender systems. They must disclose “the main parameters used in their recommender systems, as well as any options for the recipients of the service to modify or influence those main parameters”. There is also an element of user choice, in that these options must include “at least one option which is not based on profiling”.
This is perhaps the best example of algorithm-centrism. Article 29 appeals to notions of user empowerment, autonomy, and informed choice, but it does so through a very narrow understanding of user choice as a matter of selecting algorithmic weightings (“to modify or influence those main parameters”). Based on past experience, such as data protection’s infamous ‘cookie walls’, we already have reason to believe that such complex and abstract choice features are unlikely to receive much attention or engagement from the average user, much less from the vulnerable user who may need them the most. At the same time, I fear that Article 29’s algorithmic focus overlooks how users actually engage with and customize their recommendations in practice: not by choosing between abstract algorithmic logics, but simply by engaging with content outputs; by subscribing to, liking or following sources they prefer; and by blocking or unsubscribing from content they wish to avoid. As any Twitter or Facebook user knows, the most defining feature of their experience is which accounts they choose to follow or befriend. As discussed, these constitutive choices about the affordance of different forms of content engagement fall by the wayside when we focus solely on ‘the algorithm’.
Can we be more creative in leveraging how users actually engage with content and recommendation outputs in practice? We might demand, for instance, that platforms offer users the ability to block recommendations from specific sources, or to prioritize recommendations from others. In fact, many platforms already do precisely this. With that in mind, I would suggest that users already enjoy an impressive array of options to customize their content feeds. It may be premature for the DSA to go much further in regulating these highly specific technical details, but let it be clear how modest – and in my view unrealistic – the present proposal for algorithmic parameter choices is. It fails to appreciate how users already exercise choice in practice. Once we look past the algorithm as the sole locus of control, the “problem” of audience empowerment is cast in an entirely new light, and a broader set of existing features and design choices comes into view.
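As a minimal sketch of what such output-level choice can look like in practice, consider the following hypothetical feed filter, in which followed and blocked sources – rather than abstract algorithmic parameters – determine what surfaces. All names and scores are invented for illustration.

```python
# Hypothetical sketch: user choice exercised on outputs (follow/block), not parameters.
from dataclasses import dataclass

@dataclass
class Item:
    source: str
    score: float

def personal_feed(candidates: list[Item], followed: set[str], blocked: set[str]) -> list[Item]:
    """Drop blocked sources, then surface followed sources ahead of everything else."""
    visible = [c for c in candidates if c.source not in blocked]
    return sorted(visible,
                  key=lambda c: (c.source in followed, c.score),
                  reverse=True)

feed = personal_feed(
    [Item("news_site", 0.9), Item("spam_page", 0.8), Item("friend", 0.4)],
    followed={"friend"},
    blocked={"spam_page"},
)
print([item.source for item in feed])  # ['friend', 'news_site']
```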
Towards Systemic Regulation
The above has shown the preoccupation with algorithms in key DSA provisions on recommender systems, and their relative disinterest in both outputs and inputs. Thankfully, other provisions reflect a more systemic view. For Articles 26 and 27 on ‘systemic risks’, the title says it all. These provisions demand that VLOPs monitor their recommender systems for (inter alia) threats to certain fundamental rights and other public interests. They must take appropriate mitigation measures, and report on these actions. For all their other possible flaws, at least these articles refer to recommender “systems” in a broad sense and do not unduly limit themselves to algorithms. Ideally they would bring into regulatory view some of the more ambitious, systemic solutions discussed above, such as improved output transparency for uploaders and heightened scrutiny of platform content engagement features.
Another bright spot is Article 30 on ad archives, which is firmly focused on outputs. Under this provision, VLOPs are not merely required to explain the algorithmic targeting of their advertisements, but also to document at a systemic level which ads they carry, and whom these have reached (a measure which, my research suggests, has already had measurable impact in self-regulation). Finally, Article 31 on research access, though it is not expressly aimed at recommender systems, could also help us gain a better understanding of these systems.
All of the above boils down to a simple plea: let’s try to walk before we run. We – audiences, speakers, watchdogs, regulators – should start with what platforms are recommending, before diving into why.
References
1) I credit Prof. Joris van Hoboken with this insight, shared with me in conversation.