This article belongs to the debate » The Rule of Law versus the Rule of the Algorithm
30 March 2022

Proactive Contestation of AI Decision-making

Liberal democracies have an artificial intelligence problem. The disruptive impacts and complex harms of artificial intelligence (AI) decision-making, including intrusive surveillance, unjustifiable biases, and deceptive manipulation, matter in all societies, but they matter more in open, pluralist democracies, which depend on messy human accountability processes. AI decision-making systems are notoriously resistant to demands for external scrutiny. Their processes are not only cloaked by technical complexity and opacity, but are also frequently protected by broad assertions of trade secrecy and confidentiality rights (Liu et al., 2019).

This is surely the antithesis of meaningful self-government and its necessary commitment to openness in public decision-making (Huq, 2021). Instead, it offers the potential for integrated, systemic control over human life through the interwoven decision-making powers of state authorities and major tech businesses (Cohen, 2019). Unsurprisingly, there has been an outpouring of ideas about how liberal democracies should conceptualise and respond to the dangers posed by AI. Novel overarching concepts include surveillance capitalism, digital authoritarianism, techno-nationalism, techno-populism, and techno-paternalism. While conceptualised differently, they all identify AI decision-making as a core concern that bends towards the pervasive exercise of power immune from traditional checks and balances, as well as from direct political action. Plainly, the ‘techno-libertarian’ aspirations of the early internet era are now a distant, mocking memory.

A New Public Transparency Model

To put concepts into practice, we believe that the legal and regulatory response to AI decision-making requires a new model of public transparency that consciously reinforces the role of proactive contestation. In doing so, this model would seek to tighten the links between transparency, justification, and accountability. To provide the basis for effective transparency and better accountability in AI decision-making, the model draws on current debates to emphasise the visibility of AI systems, the explainability of their technical processes, and the contestability of their decisions, including the legal frameworks that determine the legitimacy of those decisions. Re-designing the requirements for public transparency in this way can be part of a renewal of human autonomy in an era of algorithmic management of human affairs.

Undoubtedly, there are formidable challenges to embedding this new public transparency model in substantive or procedural law. It would raise barriers to the introduction of AI into governmental and commercial services, which would slow innovation and growth. Additionally, the goal of visible, explainable, and contestable AI decision-making is not readily applied to AI systems in practice. AI decision-making encompasses much more than the use of algorithms and is better described in terms of socio-technical systems (Wieringa, 2020). That means considering development processes end to end, multiple iterations of system lifecycles, and humans involved in diffuse ways beyond direct use, as well as data sources and outputs. It is, consequently, difficult to design new regulatory oversight for these socio-technical systems, and it will be much more difficult still to establish strengthened public transparency rights (Cuéllar & Huq, 2022).

Old Public Transparency in Decline

Initially, legal and regulatory responses to the challenges of AI decision-making were cautious, emphasising the ‘ethical model’ of ‘soft self-regulation and codes of practice’. Governments are now increasingly looking towards AI risk regulation, yet continue to rely on existing legal rights and processes to ensure adequate public transparency and associated paths to accountability. Unfortunately, core elements of the established liberal democratic public transparency model are failing. Freedom of information (FOI) rights have, for example, played a definitive role in widening public oversight of the state in recent decades, but they are of diminishing effectiveness in the face of AI complexity and ‘black box’ opacities. FOI access rights, which are deliberately limited to existing information held by public authorities, are insufficient to achieve meaningful AI public transparency, which requires rights to explanation and contestation (Bloch-Wehba, 2021).

More recently, data protection law has acquired a central role in efforts to access information for public interest as well as private purposes. Under the GDPR, data subjects may assert rights of access to their own personal data held by public and private data controllers. Yet this powerful tool is limited to rights to existing information, without additional rights to explanation of decisions concerning the data subject, and, like FOI rights, it is subject to significant restrictions, including limits on access to trade secrets or other commercially confidential information. Given these limitations, attention has turned to Article 22 GDPR, which arguably supports rights to both explanation and contestation in relation to automated decision-making that has legal or other significant effects (Gellert, Bekkum and Zuiderveen Borgesius, 2021). In the sphere of information law, this degree of public empowerment is a novelty and could be crucially important in some cases. Nonetheless, the scope for Article 22 GDPR-based explanation and contestation rights is ultimately confined to the defined needs of data subjects, including objection to further processing, correction of inaccuracies and deletion of personal data. The GDPR was not designed to provide general public interest transparency tools, a limitation that data protection authorities recognise and enforce.

Consequently, effective public transparency for AI decision-making will often only be possible through civil litigation, including tort claims, and judicial review of acts by public authorities. Within their judicially controlled disclosure processes, rights to explanation and contestation can be pursued. This may also occur when parallel disclosure rules are used to defend against administrative penalties and criminal prosecutions. Ironically, this was the extent of public transparency rights before the advent of FOI and data protection access rights, which transformed public transparency in previous decades (Keller, 2019).

Achieving transparency through litigation is, however, constrained by long-standing structural and procedural controls. Decision-making by major businesses is, for example, not typically subject to the transparency and accountability standards applied to public authorities. Moreover, rights, duties and remedies pursued in the courts generally concern the impact of decisions on specific individuals rather than on groups or societal interests. It will be difficult to litigate AI decision-making that has aggregate harmful impacts on societies when the harms to specific individuals are varied and limited.

Proactive Public Interest Contestation

Despite the serious challenges of securing a new model of public transparency through established rights and remedies, contestation is developing through the efforts of public interest organisations. Working within these legal frameworks, proactive contestation efforts are challenging not just the harmful consequences of decisions, but also the setting and defining of legal rules and standards by public bodies, as well as of technical requirements and commercial norms by major commercial actors. Proactive contestation aims to secure favourable judicial and regulatory interpretations of rules and standards for decision-making, including definitions of harm and rules of liability. Alternatively, it may seek the invalidation of laws and regulations, or indeed of dominant commercial practices, to shift the rules that govern life in the digital era. Increasingly, proactive contestation involves challenges to AI decision-making (Drake et al., 2021).

Nonetheless, proactive contestation of norms of this kind requires a significant investment of resources and expertise. This means identifying and investing in a specific instance of harm in which the issues affecting the public generally are salient and a victory through the courts or a regulatory decision is likely. To be maximally effective, proactive contestation through litigation also needs to be allied with other public advocacy efforts. In such circumstances, few outside professionalised civil society organisations or networks will take up the challenge of proactive contestation of AI decision-making in the public interest. And even these organisations will often need to rely on crowdsourced funding for their representative or class action campaigns and for the high-level expertise needed to win in the courts (Tomlinson, 2019).

Innovations in organised public advocacy have received, at best, a cautious welcome from governments. There is, undoubtedly, recognition that individual citizens or consumers have limited capacities to use legal and regulatory processes effectively to contest AI decision-making. Civil society organisations are thus not only essential to public representation in digitised societies, but often useful in furthering the goals of regulatory authorities, who frequently lack the resources to achieve enforcement targets. Nonetheless, few governments welcome concerted external intervention in the formulation or delivery of their policies, especially where that intervention aims to place additional obligations or restrictions on public authorities and businesses.

This is certainly evident in the United Kingdom in relation to representative actions, which neither the government nor the courts have more than tepidly supported. In the European Union, where there has been greater support in principle for representative or class actions, key threshold requirements regarding proof of harm and standing can still be very effective in disabling seemingly broad representative or class action rights. In the United States, famously associated with major class actions, courts are, for example, increasingly reluctant to relax these threshold requirements (Solove & Citron, 2021). Indeed, while the GDPR has strongly influenced new U.S. consumer privacy legislation, such as the California Consumer Privacy Act, its private right of action has not been copied there.

Civil society organisations are also vulnerable to criticism of their claims to represent the public interest when engaging in proactive contestation. Not only are they self-appointed in this role, but they often work with profit-seeking law firms in representative or class actions. Their confrontational methods are, moreover, seemingly at odds with government assurances that the harms of AI decision-making are better addressed through risk regulation administered by expert bodies. Yet every important strand in the developing arsenal of AI governance comes with drawbacks, including, for example, algorithm auditing (Koshiyama, 2021). Effective AI governance lies in their combination.

Rule of Law and Democracy Arguments for Proactive Contestation

Proactive contestation, which may seek to overturn norms and practices, can be seen as a threat to the stability and consistency required by the rule of law. Nonetheless, there are strong rule of law arguments to support proactive public interest contestation of AI decision-making, including efforts to participate in rule and standard setting. The procedural rule of law encompasses rights essential to contestation: the rights to a hearing before an impartial tribunal, to present evidence and make legal arguments, and to a reasoned explanation for a decision, as well as the entitlement to be treated as an ‘active intelligence’ (Waldron, 2010). As Kaminski and Urban argue, ‘[a]llowing individuals to contest decisions reveals whether a decisional system is unfair, inconsistent, arbitrary, unpredictable, or irrational […] Contestation and its accompanying procedural protections, such as reason giving, require that a decision-maker demonstrate examinable commitment to an outcome and describe the reasons for it.’ The procedural rule of law is thus grounded in liberal democratic commitments to dignity, autonomy, and voice (Taekema, 2021).

These rule of law arguments are reinforced by democracy-based arguments for direct public participation in decision-making. Contestation is self-evidently a legitimate force for normative change in democratic societies, which should be wary of narrowing or closing off avenues for it. In recent decades, weaknesses in representative democracy have seen the emergence of alternative conceptions of democracy that support direct participation, such as participatory and deliberative democracy. In the face of technocratic styles of government, which abet the rise of AI decision-making in public administration, proactive contestation is consequently playing an important part in the re-thinking and renewal of democratic processes.

Conclusion

As AI decision-making becomes ubiquitous, there is an urgent need for a new model of public transparency based on rights of visibility, explainability and contestability. In particular, societies that are committed to the rule of law and democracy should commit themselves to enabling proactive public interest contestation of AI decision-making, even when it is disruptive. This will require innovations across substantive and procedural law to ensure public participation in the meaningful transparency and accountability of AI decision-making, ranging from expanded rights of action regarding AI harms to the renewal of FOI rules to incorporate selective rights to explanation. The alternative of techno-paternalism, in which the public is merely an object entitled to protection, as determined by the ‘well-functioning, big machine’ of the ‘Algorithmic Leviathan’, will be fatal to liberal democracies (König, 2020).


SUGGESTED CITATION  Keller, Perry; Drake, Archie: Proactive Contestation of AI Decision-making, VerfBlog, 2022/3/30, https://verfassungsblog.de/roa-proactive-contestation-of-ai-decision-making/, DOI: 10.17176/20220330-131217-0.
