The Shape of Personalisation to Come
Whether Done in the Interest of Consumers or Traders, Personalisation Requires a (Platform) Governance Perspective
Change can happen unexpectedly fast in the business of social media. While targeted advertising is still a money-making machine for social media platforms, its motor has begun to sputter. Since Apple in 2021 gave its users control over whether apps can track their behaviour, data for targeting has become scarcer. Subscription-based business models are on the rise. Alternative ways of monetisation, such as creating content that is little else but advertising, are blossoming.
The adaptation of monetisation strategies does not mean the end of big-data analytics as a core value proposition behind social media. The advertising-based business model created an ecosystem which merged economic engineering with data science. As Viljoen et al. state in their critical take on the platform economy, platforms deploy mechanism design to select an outcome and then reverse-engineer rules and conditions to achieve it. Mechanism design was developed in economic theory to arrange settings in which participants reveal their preferences. Auctions, which are now widely used to distribute online advertising, are one such mechanism: they can incentivise advertisers to bid the price they are actually willing to pay for placing their ad.
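To make that incentive property concrete, consider a minimal sketch in Python of a second-price (Vickrey) auction, the textbook model behind much online ad allocation; the bidder names and valuations here are hypothetical, and real ad auctions are considerably more complex. Because the winner pays the runner-up’s bid rather than her own, bidding one’s true willingness to pay is the dominant strategy.

```python
# Minimal sketch of a second-price (Vickrey) auction for a single ad slot.
# Bidders and valuations are hypothetical; the point is the incentive
# property: since the winner pays the second-highest bid, neither
# overstating nor understating one's true valuation can improve the payoff.

def second_price_auction(bids):
    """bids: dict mapping bidder -> bid amount. Returns (winner, price paid)."""
    ranked = sorted(bids.items(), key=lambda item: item[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1]  # the winner pays the second-highest bid
    return winner, price

# Each advertiser's true willingness to pay for the ad impression:
true_values = {"A": 2.50, "B": 1.80, "C": 1.20}

# Truthful bidding: A wins and pays B's bid (1.80), keeping a surplus of 0.70.
print(second_price_auction(true_values))  # ('A', 1.8)

# Shading the bid below the true value only risks losing a profitable slot:
shaded = {"A": 1.50, "B": 1.80, "C": 1.20}
print(second_price_auction(shaded))  # ('B', 1.5) -- A forfeits its surplus
```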
In theory, mechanism design serves social welfare by optimising the allocation of value in society. As Viljoen et al. argue, in its real-world application, the allocation of options to consumers optimises platforms’ revenue and/or informational advantage. Mechanism design has been turned from a tool for revealing preferences into one for inferring preferences from behavioural data or statistical associations. However, data on people’s past choices does not always reveal their preferences. Inferences can be wrong, and the large-scale effects of a content-distribution algorithm trained on biased data are significant. Failing to identify preferences correctly is one issue; deliberate manipulation is another. Not every algorithmic recommendation for content and not every targeted advert aims to meet consumers’ “true” preferences; some merely aim to take advantage of consumers’ failures to act rationally. With artificial intelligence (“AI”), the potential is even greater for companies to discover and exploit biases and vulnerabilities of which consumers themselves are not aware.
The point of this dive into the economic engineering of personalised environments on digital platforms is to highlight the intentional creation of algorithmically curated choice sets for consumers. How can the law ensure the fairness of these choice sets?
In this short essay, I consider the law’s potential for reform to address personalisation done in the interest of platforms, advertisers, and traders. I contrast this effort with the option of revolutionising the law by personalising its content in the interest of consumers. In the history of science, it has been said that the ultimate test for sticking to a paradigm is whether it is a good guide for future problems. Applied to the theoretical debates about personalisation and the law, the need for reform or revolution should be measured against the problems stemming from the unleashing of personalisation technologies.
From the Consumer Society to the AI Society
Since the 1950s, Western societies have developed from consumer societies into service societies and, more recently, into digital societies. Today we seem to be on the verge of morphing into AI societies. So far, consumer law has accompanied these changes through reform, sticking to its information and reasonable consumer paradigms. The information paradigm stipulates that consumers will make rational decisions if only they receive enough information. The benchmark against which violations of consumer law are measured is an average consumer who is reasonably well informed, observant, and circumspect in the choices she makes. Behavioural law and economics has, of course, long questioned consumers’ ability to decide rationally, pointing to limits in their cognitive capacity as well as the emotions and motivations involved in making a decision. In light of these behavioural insights, keeping the reasonable consumer paradigm alive is quite literally an expensive normative passion for the law, demonstrably leading consumers to overspend.
Bracketing these behavioural insights, reforming the law under the existing paradigms was arguably sufficient for the move from the consumer to the service society. Confronted with the stark information asymmetries of the digital and especially the AI society, reform may no longer be enough to guide consumers through the problems created by personalisation. Revolution looks increasingly attractive. For the paradigms of consumer law, the future is therefore now.
Pushing the Limits of the Current Legal Order (Reform)
Looking at today’s problems with digital consumer vulnerability, we may ask whether reforming the current legal status quo could make digital markets fairer. Alternatively, we could try to use the existing paradigms to undo personalisation done in the interest of platforms, advertisers, and traders. The first approach is evolutionary; the second is conservative. I argue that both could be beneficial for consumers.
An Evolutionary Approach
Let us begin with the evolutionary approach. It has been shown before that, even within the traditional elements of the law, there is room for reform and modest granularity. I have argued elsewhere that when consumers are targeted solely on the basis of their behavioural profile, judges could apply consumer law in a more granular manner by tightening the average consumer test. Instead of considering how an idealised, reasonable consumer would react to targeted ads, judges could assume that the targeted consumer behaves just as predicted by the data-based behavioural profile which advertisers created of her.
This may be called a negative form of personalising the law: taking the personalisation done in the interest of advertisers at face value. If a consumer receives targeted advertising based on an inference about her current mood, say because she just lost her job, would it not then be fair to assume that her anxious emotional state in fact makes her as susceptible to commercial messaging as predicted?
The idea is not to argue that judges should always give up their discretion and assume consumers behave as predicted by targeted ads. The particularities of the choice environment should have a moderating influence. In a concentrated market environment, in which consumer profiles are distributed via ad-intermediaries with great market power, the likelihood of a targeted consumer seeing adverts which are not based on the same exploitative personal profile is diminished. In such a situation, restoring fairness via a tighter average consumer test appears normatively satisfying. The evolutionary nature of this approach lies in further granularising consumer law in cases where an exploitative advert meets a concentrated market for ad-intermediary services.
A Conservative Approach
What could a conservative approach look like? Elsewhere, I have argued for adding noise to targeted ads, that is, for randomly exposing consumers to non-targeted, hence noisy, adverts. This approach is conservative because noisy environments re-establish the preconditions of reasonable consumer choice. They do not, however, guarantee reasonable outcomes: even within non- or less-personalised choice sets, people will act irrationally.
Rather than incorporating the personalised prediction, adding noise to targeting would counter the economic engineering done by digital platforms with a regulatory metric. In our study, we developed an index – the Concentration-after-Personalisation Index (CAPI) – which allows the detection of concentration in consumers’ exposure to targeted adverts. Going beyond the assumptions of the evolutionary approach mentioned above, we showed how easily consumers can be exclusively targeted by exploitative advertisers, even absent market power. Adding noise requires, however, a normative choice as to how much noise should be added to balance advertisers’ and consumers’ interests. In the paper, we show that the optimal degree of noise can be calculated using the CAPI.
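The precise definition of the CAPI is given in the study itself; as a rough illustration only, the following Python sketch approximates concentration in one consumer’s ad exposure with a Herfindahl-style index – my simplification, not necessarily the authors’ formula – and shows how mixing a share of randomly drawn, non-targeted adverts into the exposure sequence dilutes that concentration. All advertiser names and parameters are hypothetical.

```python
import random
from collections import Counter

# Illustrative sketch only: the actual CAPI is defined in the cited study.
# Concentration of one consumer's ad exposure is approximated here with a
# Herfindahl-style index over the advertisers behind the adverts she sees.

def concentration(ads):
    """Sum of squared advertiser shares in an exposure sequence (1.0 = a single advertiser)."""
    counts = Counter(ads)
    total = len(ads)
    return sum((n / total) ** 2 for n in counts.values())

def add_noise(targeted_ads, ad_pool, noise_share, seed=0):
    """Replace a fraction of targeted impressions with randomly drawn, non-targeted ads."""
    rng = random.Random(seed)
    return [rng.choice(ad_pool) if rng.random() < noise_share else ad
            for ad in targeted_ads]

# A consumer exclusively targeted by one (possibly exploitative) advertiser:
exposure = ["X"] * 20
pool = ["A", "B", "C", "D", "E"]  # hypothetical non-targeted advertisers

print(concentration(exposure))                        # 1.0 -- full concentration
print(concentration(add_noise(exposure, pool, 0.3)))  # < 1.0 after diluting

# The regulatory question is then the normative one raised above:
# how large a noise_share fairly balances advertisers' and consumers' interests.
```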
In both approaches, personalisation is still done in the interest of platforms, advertisers, and traders. The law would merely react and either incorporate (the evolutionary approach) or dilute (the conservative approach) the personalised prediction or choice set. In both cases, consumer law enforcement would depend on information about targeting metrics and ad distribution which is largely held by ad-intermediaries. Given the infrastructure of online advertising, the adding of noise would also have to be done by ad-intermediaries. The suggested reforms of the consumer law status quo would thus not perfectly guide us through current problems with the (market) power of platforms. The question is whether personalised law – depicted by some as the “future of law” – would fare better.
Choosing a New Paradigm (Revolution)
The revolutionary paradigm shift of personalised law would implement the behavioural insights unearthed by big-data technology, only now with the interest of the consumer in mind. Information disclosed to consumers could be tailored to their behavioural needs, thus personalising the content of the law itself. This would mean giving up on the average reasonable consumer and thereby answering the criticism levelled at it by behavioural researchers. Personalised law would, however, allow us to keep the information paradigm alive: consumers could receive personalised qualities and quantities of information, enabling them to make more rational decisions.
There is a difference in design between the reformative suggestions above and the idea of personalised law considered here. The former mitigate personalisation done in the interest of advertisers and traders; above, I called this a negative approach. Personalised law, by contrast, would mean personalisation done in the interest of consumers. It would change the benchmark of consumer law, and one may thus call it a positive approach.
If personalisation were to be done with the legal interests of both consumers and traders/advertisers in mind, the complexity of the law would inevitably increase, as Grigoleit has pointed out. The need for new legal tools to calibrate these interests quantitatively would rise. With the CAPI mentioned above, we showed how a quantitative index can indeed be used to determine an optimal degree of noise, considering the competing interests of consumers and advertisers.
This essay does not aim to take a decisive stance for or against personalised law. Instead, it concludes by considering how well personalised law would guide us through future problems, hence applying the test of scientific history for sticking with a paradigm or adopting a new one. This obviously involves some future-gazing, as truly personalised law is yet to come. The problems to consider are thus really the future’s future problems.
The Future’s Future Problems
Some of the future’s future problems will likely be created by personalised law itself. For example, it could give away information about the decision-making profile of a consumer and thereby become an additional data source for inferential analytics. Assume that you have four weeks, and I have two, to withdraw from a contract. Aggregated with the withdrawal periods granted across the population, this difference may give advertisers and traders insights into our respective abilities to make economically sound decisions.
We should be wary of assuming that personalisation will work equally well for all consumers. Personalised laws may misfire just like personalised ads. If the law is targeted inaccurately and such failure creates vulnerabilities, consumers end up being made vulnerable by the law itself. Should this happen, it may lower the acceptance of personalised law amongst consumers. Those who suffer disadvantages through personalisation may not be able to understand why the law treats them differently from the people they know.
Some of the future’s problems, however, will likely look very much like the problems we are facing today. Adding to the point just made about misfiring: just like the inferential analytics done in the interest of platforms, advertisers, and traders, tailoring laws based on behavioural data could suffer from bias. In a recent study, Agan et al. show that data on people’s past choices does not necessarily reflect their preferences and thus produces a biased data set. They demonstrate the large-scale, negative effects of training a recommender algorithm on such biased data. Drawing on the difference between “thinking fast and thinking slow”, the study shows that a set of past snap choices contains a higher degree of bias than a set of deliberative choices. Personalising laws would therefore require careful consideration of the circumstances of those past consumer decisions that are supposed to form the data basis of personalisation.
It would, however, be heroic to assume that unbiased data is readily available for consumer law issues. For example, to generate insights about consumers’ decision-making processes when shopping online, personalised law would have to draw on data about their past purchases. Such data will mostly be generated on private platforms. As mentioned above, platforms are designed to optimise revenue and informational control. They infer preferences and future behaviour in their own interest. For social media platforms, it may very well be more important to discover what makes users click on an ad or engage with content than to find out what users’ “true” preferences are.
Whichever interests shape the future of personalisation – those of consumers or those of platforms, advertisers, and traders – as long as the necessary data is generated on digital platforms, several core problems of platform governance remain salient: informational imbalances, concentration, and the economic engineering of choice sets.
These problems cannot all be addressed by consumer law. With its Digital Services Act (DSA), the European Union (EU) recently chose to regulate platforms somewhat more tightly but left the state of consumer law largely untouched. At the same time, the DSA frequently refers to consumer protection. The remit of its liability regime for hosting services draws upon the average consumer paradigm (Article 6(3) DSA). Changing consumer law by giving up on the average consumer could thus granularise platforms’ liabilities. And while Article 25 DSA protects users’ “informed decisions” from manipulative or distorting interface design, it excludes practices covered by consumer law. Even a conservative reform measure such as adding noise to targeting may thus have to be implemented via consumer law, although it merely reinstates the preconditions of reasonable consumer choice through platform governance.
It appears that personalising consumer law properly would require conditions of data generation on platforms that consumer law alone cannot guarantee. At the same time, the DSA, as a centrepiece of EU platform governance, demarcates its remit by reference to the old paradigms of consumer law. Whether we will see reform or revolution in the law may thus depend on where the current legal paradigms first cease to guide well through the problems of personalisation: consumer law or platform governance law.