The Rule of Law versus the Rule of the Algorithm
Employers do not usually prosecute droves of their own employees for eerily similar alleged offences without asking questions. The United Kingdom's Post Office was an exception in that regard. Between 2010 and 2014, it accused many of its own staff of theft, fraud, or false accounting, and 700 of them were ultimately convicted by the courts. The Post Office's Chief Executive did not insist on an analysis of the strange criminal pattern emerging in her company. She did not believe her employees who protested their innocence; she trusted the algorithms. Wrongly so, as it later turned out: the software had a glitch. That discovery came too late for many employees, who had lost their jobs, money, time, reputation and dignity. Some went to prison.
The rule of the algorithm won in the company and in the courts, at least until the truth was discovered. How do we prevent such miscarriages of justice in the future? Algorithms are far more pervasive and far more complex than they were in the early 2010s – a trend that will only continue.
It turns out that legal experts are sceptical that justice can prevail without intervention. When we chose the title of this symposium, we thought it might be controversial. We expected that at least some of the authors would argue that algorithmic threats to the rule of law were solvable, or that responsibly implemented algorithms could even help the delivery of justice. None of them did.
In the series of articles that we will present over the next days, we find no techno-optimism. That should give everybody pause – especially the advocates of algorithmic solutions for every problem.
The articles
In this symposium, we will read about fundamental questions concerning the compatibility of algorithms and the rule of law: about algorithms used to apply the rule of law (in legal tech or in administrative decision-making) and about rule-of-law concerns raised by the use of algorithms in any field.
We will start with colleagues working for the EU, which is eyeing the regulation of AI via its AI Act, among other instruments. The AI Act foregrounds “the risks associated with certain uses of such technology” for individuals and society, and “their potentially significant impact on democracy, rule of law, individual freedoms as well as the right to an effective remedy and to a fair trial”. The regulation of complex technologies is not a new challenge for European policy-makers, but the pervasiveness of AI in contemporary society requires a much wider discussion, especially as the risks this technology poses seem to be the central theme of the upcoming legislation. Paul Nemitz and Eike Gräf stress that, in addition to the discussion on how to use algorithmic systems lawfully, the question of when it is beneficial to use them deserves more attention. Catelijne Muller, Christofer Talvitie and Noah Schöppl develop this question further, towards a proactive and anticipatory approach that consistently returns to a question zero: “what kind of society do we want to live in, and where does AI truly help us achieve that?”
With the advent of ‘legal tech’, Laurence Diver and Pauline McBride are concerned about the erosion of the role of human decisions in an environment being taken over by AI: legal decision-making is underpinned by discretion, creativity, and dissent – qualities not inherent to AI. In turn, human flourishing and agency, a function of the rule of law often represented by the protection of ‘autonomy’, are stifled as algorithms mediate individuals’ freedom to choose their actions. Within this context, Stanley Greenstein offers a more fundamental critique, arguing that AI essentially evades the legal concepts of contestability and accountability.
‘Top-down’ regulation alone may not be the best or only way to align algorithms with the rule of law and address the risks they pose. Complementary proactive contestation via litigation, leading from the bottom up, can provide the basis for effective transparency and better accountability in AI decision-making, argue Perry Keller and Archie Drake. Ana Valdivia and Javier de la Cueva describe an attempt at proactive contestation in the Spanish context, using it as an example of the ‘paradox of efficiency’: states implement algorithmic systems for the sake of efficiency without being aware that ideologically charged technology should not propose or develop solutions contra legem. Jacob Livingston Slosser, Henrik Palmer Olsen and Thomas Troels Hildebrandt propose a Turing-Test-like model of administrative decision-making in which AI and humans produce competing decisions and a human must judge between them.
Understanding, transparency and explainability are core themes throughout the articles – for example, the transparency and explainability of ‘black box’ recommender algorithms. Paddy Leerssen asks whether we are too narrowly preoccupied with the algorithm itself, and whether this focus distracts us from remedying undesirable outcomes and addressing environmental factors. Jennifer Cobbe and Jat Singh nevertheless make several proposals on how to regulate recommending: while platform users must remain free to say and do undesirable things online, they argue, there is no good reason why a platform’s algorithms should artificially inflate their audience. Dissenting slightly in the news recommender context, Sarah Eskens shows that it is largely unnecessary and, in any case, contrary to the rule of law to regulate how news media deploy recommender systems to select and rank the news for individual users. Should empirical evidence emerge that certain news recommender systems harm individual rights and societies, she proposes an alternative solution.
Finally, Shmyla Khan points out that conversations on the governance of AI are Eurocentric. We must ask ourselves whether the conclusions reached in the European context translate seamlessly elsewhere, or whether they raise new problems: “algorithmic transparency is difficult to practice in contexts where Rule of Law mechanisms are weak.” Anuj Puri adds that algorithms subvert the individualistic focus of the rule of law, which therefore requires a new collective formulation – a rule of law 2.0 – that can accommodate the fact that automated decision-making systems determine an individual’s fate in a collective setting.
We have learned a lot from these articles, and we hope that they will move the debate forward. As you will read, we need to keep asking the most fundamental questions. We should encourage utopian thinking in search of better solutions. At the same time, the years to come will allow us to become very concrete: with legislation like the AI Act or the Digital Services Act, the rule of law will be expressed through detailed articles, their interpretation and their application. We will be able to verify how platforms formulate policies on algorithmic choices.
We welcome that future, while recognising that the legal community’s expertise on how AI functions is generally low. For this regulatory future to deliver on our utopian vision, we will need more meetings of technical minds with legal minds.