This article belongs to the debate » The Rule of Law versus the Rule of the Algorithm
03 April 2022

Rule of Law, AI, and “the Individual”

The institutional safeguards formulated under the Rule of Law tend to focus on “an individual” or “the individual” who can be the bearer of the rights and protections it affords. This pre-digital formulation worked well in an era when law was the pre-eminent form of social regulation. Increasingly, however, individual interests are impacted not only on the basis of the actions and choices of the concerned individual, but also on the basis of data collected about her social context and that of other similarly situated individuals (Barocas and Nissenbaum 2014, Vold et al 2019). To reconcile these tensions, in this blog I argue for supplementing the existing individual protections recognized under the Rule of Law framework with a recognition of collective interests, in order to strengthen the Rule of Law in the age of AI.

Rule of Law

While analyzing the conceptual aspects of the Rule of Law, the definition expounded by Lord Bingham serves as a locus classicus:

[A]ll persons and authorities within the state, whether public or private, should be bound by and entitled to the benefit of laws publicly made, taking effect (generally) in the future and publicly administered in the courts.1)

The conceptual debates surrounding the Rule of Law usually revolve around thin and thick formulations, which tend to focus on formal and substantive aspects respectively (Skaaning 2010). The Venice Commission for the Council of Europe identified six elements of the Rule of Law in its 2011 report:

  1. Legality, including a transparent, accountable and democratic process for enacting law;
  2. Legal certainty;
  3. Prohibition of arbitrariness;
  4. Access to justice before independent and impartial courts, including judicial review of administrative acts;
  5. Respect for human rights; and
  6. Non-discrimination and equality before the law. (Venice Commission 2011)

The 2016 Rule of Law Checklist, adopted by the Venice Commission at its 106th Plenary Session, revised this list to five core elements while stating that “a strong regime of Rule of Law is vital to the protection of human rights” (Venice Commission 2016). However, none of these formulations focus on group interests or on the group aspects of individual interests. As Bedner states, “Not many defining the rule of law include group rights into their concept, as these are controversial to qualify as human rights” (Bedner 2010). The price of excluding group interests from the ambit of the Rule of Law is ultimately borne by individual interests as well.

AI

Beyond the existing formulations of the Rule of Law lie technological design choices that impact an individual’s interests not only on the basis of her own data but also that of others like her. For instance, as Kearns and Roth state in their book on the ethical algorithm, “[T]he “collaborative” in collaborative filtering refers to the fact that not only will your own data be used to make recommendations to you, but so will everyone else’s data.”2) The use of this “collaborative model” is not restricted to relatively trivial matters such as movie recommendations; it also extends to automated predictive systems that make decisions impacting individual rights and liberties on the basis of mass data collection.
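Kearns and Roth’s point can be made concrete with a minimal user-based collaborative filtering sketch. The ratings matrix and the similarity measure below are invented for illustration and do not describe any particular recommender system; what matters is that the prediction for one user is computed entirely from *other* users’ data.

```python
import numpy as np

# Hypothetical ratings matrix: rows are users, columns are items.
# 0 means "not yet rated".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
], dtype=float)

def predict(user, item, ratings):
    """Predict a missing rating from the ratings of *other* users,
    weighted by how similar their rating profiles are to this user's."""
    sims, weighted = [], []
    for other in range(ratings.shape[0]):
        if other == user or ratings[other, item] == 0:
            continue
        # Cosine similarity between the two users' full rating vectors.
        a, b = ratings[user], ratings[other]
        sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        sims.append(sim)
        weighted.append(sim * ratings[other, item])
    return sum(weighted) / sum(sims)

# User 0's prediction for item 2 is driven entirely by users 1 and 2.
print(round(predict(0, 2, ratings), 2))  # 1.73
```

Note that user 0 contributes nothing to her own prediction: her predicted rating is a similarity-weighted average of what everyone else did, which is precisely the “collaborative” mechanism the quotation describes.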

The use of machine learning-based prediction models to determine individual interests, like the statistical inference models that preceded them, can be challenged on grounds of accuracy as well as legitimacy (Underwood 1979). In this blog, I focus on questions of legitimacy, which are intricately linked to accounts of autonomy. From the perspective of political morality, many of the leading accounts of the Rule of Law rely on the intrinsic value of individual autonomy. In automated decision-making, however, data collection and decision-making are not targeted solely at a person’s individual existence but at her social context and that of other individuals like her. Hence, the account of individual autonomy that can offer the best safeguard against automated decision-making is relational autonomy, which arises out of an individual’s social context (Oshana 2020). The most severe consequences of AI bias, where the individualistic conception of the Rule of Law has been found lacking and merits an expansive reinterpretation steeped in collective interest, have been observed in predictive policing (Couchman 2019) and recidivism prediction systems (Larson et al 2016). In both cases, questions of individual rights and liberties are determined not solely on the basis of the individual’s actions but also on the larger social context. To tackle this expansive automated decision-making model, the Rule of Law must likewise expand its purview to account for collective interests in data collection, processing and automated decision-making.

Another relevant example in this regard pertains to the 2020 UK A-levels and GCSE grades controversy. Following the coronavirus lockdown in 2020, students did not sit exams; instead, teachers provided estimated grades, which were then revised by an algorithm on the basis of each school’s performance in previous years (BBC Explainers 2020, Burgess 2020). The algorithmic moderation, which was aimed at preventing grade inflation (Kelly 2021), resulted in almost 40% of students receiving lower than expected grades (Adams et al 2020). Following an uproar and protests by the students, the grades were restored to the evaluations awarded by the teachers (Kolkman 2020).
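The logic of such moderation can be illustrated with a deliberately simplified, hypothetical sketch. This is not Ofqual’s actual model (which reportedly also used rank orders and historical grade distributions); it only shows the structural point that a cohort-level input pulls an individual grade away from the individual assessment.

```python
def moderate(teacher_estimate, school_historical_mean, weight=0.6):
    """Hypothetical moderation: blend the teacher's estimate with the
    school's past performance. The higher the weight, the more a
    student's grade reflects her school's history rather than her own
    assessed work."""
    return weight * school_historical_mean + (1 - weight) * teacher_estimate

# A strong student (teacher estimate: 5, an A) at a historically
# weaker school (mean: 3, a C) is pulled down by data she had no
# part in generating.
print(round(moderate(5, 3), 2))  # 3.8
```

The design choice at issue is visible in the single `weight` parameter: any weight above zero makes an individual’s outcome partly a function of her cohort, which is exactly the collective determination that the existing Rule of Law safeguards do not address.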

While the grading controversy arose out of algorithmic determination and not an AI system (Burgess 2020), it provides useful insights into automated decision-making. The Rule of Law safeguards, such as the prohibition of arbitrariness and respect for equality and human rights, approach the inquiry from an individual-interests perspective. Wired’s article examining the grading controversy, discussing remarks from Binns, similarly states:

People want the decisions about their lives to be personal and not based on historical data over which they have no control. One size doesn’t fit all and people are mostly concerned with their own individual results and the fairness of them. That means each individual having their potential reflected in something and it not being the result of an aggregate. (Burgess 2020)

These remarks succinctly capture the tension between individual expectations of the Rule of Law and the collectively determined nature of automated decision-making. At the heart of this inquiry is the ethical dilemma of judging an individual on the basis of actions not attributable to her. At the system design level, the normative debates surrounding machine learning grapple with the resulting ethical issues via individual fairness models, which aim to ensure that similar individuals are treated similarly, and group fairness models, which aim at statistical parity between groups (Dwork et al 2012, Binns 2020). From a Rule of Law perspective, we need a novel formulation, a Rule of Law 2.0,3) that protects individual interests in their larger social context and also accounts for collective interests in relation to automated decision-making.
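The group-fairness notion referenced here (statistical parity) can be stated in a few lines of code. The function and the decision data below are hypothetical and purely illustrative, not drawn from any cited system:

```python
def statistical_parity_gap(decisions, groups):
    """Group fairness as statistical parity: the gap between the rates
    of favourable decisions (1 = favourable) received by two groups.
    A gap of 0 means both groups receive favourable outcomes at the
    same rate."""
    def rate(g):
        members = [d for d, grp in zip(decisions, groups) if grp == g]
        return sum(members) / len(members)
    return abs(rate("A") - rate("B"))

# Hypothetical decisions for six applicants, three from each group.
decisions = [1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "B", "B", "B"]
print(round(statistical_parity_gap(decisions, groups), 2))  # 0.33
```

Note what the metric ignores: it says nothing about whether any *particular* individual was treated like her similar peers, which is the separate individual-fairness criterion of Dwork et al. The two models can pull in opposite directions, mirroring the tension between individual and collective formulations of the Rule of Law.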

The Individual

In a previous paper articulating a theory of group privacy, I argued that the limitations of individual privacy arise out of a constricted conception of the individual (Puri 2021). To enhance the scope of individual privacy, we need to enhance our understanding of the individual and, for the purposes of legal regulation, include her social identity, which comprises her membership in social groups, in addition to her personal identity. I have also argued for the recognition of a collective interest in privacy to supplement individual privacy (Puri 2021). A similar argument can be made for the recognition of group interests in Rule of Law frameworks, in order to supplement individual interests. If the institutional safeguards designed to protect individual interests are being undermined on the basis of data inferences drawn at the group level (Barocas and Nissenbaum 2014),4) then for the Rule of Law to remain relevant, it is imperative to extend its ambit to protect group interests.

Limitations of the Argument

The existing formulation of the Rule of Law, which is aimed at maintaining the supremacy of law, placing constraints on executive discretion, prohibiting arbitrariness, ensuring access to justice, and protecting equality and respect for human rights, rests on underlying normative and epistemic assumptions. Some of the normative assumptions, as mentioned earlier, are geared towards individual interests. Others presume the predominance of law as the governing social norm, which is increasingly being challenged by automated decision-making. Related to these are epistemic suppositions in the form of the ability of decision-makers to provide explanations, thus facilitating access to justice and reducing arbitrariness. The group formulation proposed in this blog is aimed at the problem of the normative assumptions highlighted above; it does not seek to address the epistemic problem arising out of the algorithmic black box.

Conclusion: Rule of Law 2.0

I have argued in favour of the recognition of collective interest via policy in order to strengthen the Rule of Law in the age of automated decision-making. In articulating my vision for the Rule of Law 2.0, I rely on the observations of Van der Sloot and Van Schendel, who outline their vision for how procedural law should adapt to the transition to a data-driven society:

This change requires several adjustments to the legal regime, both to make the best possible use of the opportunities this change has to offer and to lay down safeguards against dangers and risks. To facilitate this process, a number of changes is needed to the current, individual-centred legal paradigm, such as laying down a protective regime for non-personal data, providing protection to public interests and societal harms and granting a bigger role for representative and collective actions and public interest litigation. (Van der Sloot and Van Schendel 2021). 

The way forward is a collective formulation of the Rule of Law. Automated decision-making systems determine an individual’s fate in a collective setting. In order to protect her interests, it is imperative that the institutional safeguards designed by the Rule of Law place a similar premium on an individual’s social context, as well as on the collective interest in relation to automated decision-making.

References
1 Tom Bingham, The Rule of Law 8 (2010).
2 Michael Kearns and Aaron Roth, The Ethical Algorithm: The Science of Socially Aware Algorithm Design 117 (2020).
3 I use the words “Rule of Law 2.0” to convey the need to modernize the Rule of Law to include collective interest in relation to automated decision-making. Cohen has argued for a “Rule of Law 2.0” in the context of the protection of fundamental human rights in the networked information era. See: Julie Cohen, Between Truth and Power: The Legal Constructions of Informational Capitalism 237, 266–268 (2019).
4 Linnet Taylor, Luciano Floridi, & Bart van der Sloot, Introduction: A New Perspective on Privacy, in GROUP PRIVACY: NEW CHALLENGES OF DATA TECHNOLOGIES 1, 16 (Linnet Taylor, Luciano Floridi, & Bart van der Sloot eds., 2017).

SUGGESTED CITATION  Puri, Anuj: Rule of Law, AI, and “the Individual”, VerfBlog, 2022/4/03, https://verfassungsblog.de/roa-individual/, DOI: 10.17176/20220403-131019-0.
