05 October 2023

Automated Decision-Making and the Challenge of Implementing Existing Laws

Who loves the latest shiny thing? Children maybe? Depends on the kid. Cats and dogs perhaps? Again, probably depends. What about funders, publishers, and researchers? Now that is an easier question to answer. Whether in talks provided by the tax-exempt ‘cult of TED’, or in open letters calling for a moratorium, the attention digital technologies receive today is extensive, especially those that are labelled ‘artificial intelligence’. This noise comes with calls for a new ad hoc human right against being subject to automated decision-making (ADM). While there may be merit in adopting new laws dedicated to so-called AI, it is the procedural mechanisms that implement existing law that require strengthening.

The perceived need for new substantive rules to govern new technology is questionable at best, and distracting at worst. Here we would like to emphasise the importance of implementing existing law more effectively in order to better regulate ADM. Improving procedural capacities across the legal frameworks on data protection, non-discrimination, and human rights is imperative in this regard. This includes establishing adequate oversight mechanisms, requiring robust risk and impact assessments, and ensuring a participatory approach when designing systems that incorporate ADM. It is also crucial that lopsided regulatory impacts are avoided when applying the law in a given sector.

The need for better enforcement and oversight

With the European Union in the final stages of approving its AI Act, the United Kingdom proposing a ‘pro-innovation approach’ to AI regulation, and the United States publishing its Blueprint for an AI Bill of Rights, people can be forgiven for having the impression that new laws are urgently needed to better regulate ADM. While having tailored laws focused on ADM can have upsides, such as helping raise awareness about the problems and pitfalls of related technology, the existing laws already applicable to technologies that incorporate ADM are far from incapable of providing sufficient regulation. Blaming existing law for the drawbacks of ADM applications is easier than asking why the law struggles to address harms that are already happening and to keep pace with developments in the digital technology sector.

As Susie Alegre and Elizabeth Renieris help emphasise, idealising new law as the solution to the problems posed by ADM is counterproductive to helping the law catch up. It is past time to stop playing regulatory whack-a-mole with new technologies. The reality is that new laws can take considerable time to be drafted, debated, and approved, to enter into force, to be incorporated into domestic law (if international), and to be enforced. By the time a new law is operational, the technology it was aimed at regulating might have changed, perhaps significantly. Then there is the matter of interpretation. Consider how long it has taken to reach consensus on points of law that already apply to ADM, or on laws that have been around for decades, such as privacy, where there is still little agreement as to what they actually mean in practice. How would new rules be any different? Such rules may also merely duplicate what is already enforceable, not to mention that they would be products of further processes requiring investments of capital, resources, and time.

These concerns are pertinent to claims that, in order to better regulate ADM, there is a need to create a new human right against being subject to ADM. It is difficult to envisage what such a new rule would add to those already applicable to ADM across the legal frameworks governing data protection, non-discrimination, and human rights. Combined, rules from these bodies of law offer numerous ways of regulating ADM, in addition to providing recourse to individuals and groups for the harmful impacts of its use. Yet the extent to which these rules can do so effectively depends on their enforcement and oversight. Attention tends to fall on how substantive rules across these areas can be interfered with by the design, development, and deployment of ADM. There is comparatively less focus on the procedural mechanisms needed for substantive rules to have a tangible impact on ADM.

Requiring risk and impact assessments

Part of bolstering procedural capacities is ensuring there are means of gathering and scrutinising the necessary information relating to ADM systems. Evidence must be gathered and verified before applying the law to a given situation. This knowledge accretion is crucial to understanding how particular ADM systems operate, what measures exist to mitigate or eliminate possible harms, and what can reasonably be expected when they are deployed. This is why requiring thorough and robust risk and impact assessments for ADM is vital. The current draft of the EU AI Act stipulates that providers and deployers should conduct risk assessments and human rights impact assessments for systems considered to be ‘high risk’. Several commentators have also called for mandatory human rights impact assessments for any technology incorporating what may be classified as ‘AI’, even systems that may at first be labelled as ‘low risk’. It is the implementation of the risk and impact assessment process that can help identify the actual and potential harms of ADM systems, making timely interventions to avoid or mitigate harm easier. These procedures also facilitate the apportionment of responsibility when things do go wrong.

Participatory approaches to designing ADM systems

Participation of those impacted by systems incorporating ADM in their design and deployment is also a crucial aspect of governance. For example, if an employer wishes to implement a worker-surveillance system that incorporates ADM, such as to deter theft, then the workers who will be subject to that system should be involved in designing it – lest the code fail to account for the reality that individuals need to drink, eat, defecate, urinate, and rest while at work, including in order to do their jobs well. Although this overall problem is compounded by the numerified ranking of productivity and impact based on gameable metrics, now rife in many sectors, it can be alleviated. By incorporating into their design the insights of people with hands-on experience and knowledge of the context in which the ADM would operate, such systems have the potential to allow humans and computer code to function in harmony. Pre-deployment efforts of this sort are in the interests of providers and deployers of ADM systems, as they help avoid potentially costly and time-consuming issues arising in the future, such as being fined or sued.

Oversight mechanisms: Public, private, or both?

To ensure that existing legal frameworks, as well as any additional new regulations, are appropriately applied to ADM systems, there is a need for accompanying oversight mechanisms. Various calls have been made for an international AI council or other forums to ensure that AI is governed at the international level. Yet creating new bodies tasked with the oversight of ADM may fall into traps similar to those associated with creating new laws.

Private companies have publicly committed to adopting internal guidelines, reviewing their ADM systems, and even establishing their own oversight mechanisms. However, there is a limit to how much we should defer to the private sector when it comes to ensuring that ADM does not harm individuals, groups, and societies more generally. Making voluntary commitments from companies part of the regulatory framework poses risks. Combined with the lobbying prowess of large technology companies, such an approach could result in the rules being written by the very actors whose power they are aimed at governing. As Linda Griffin has noted, there are ‘inherent risks when technical capabilities and computing power are concentrated in private entities primarily guided by what is best for shareholders’. These interests are not the same as those of the public, which must be meaningfully integrated into the regulation of ADM.

This is why public bodies, such as legislatures, regulators, and courts, should play a central role in ensuring that the existing legal frameworks enshrining rules on data protection, non-discrimination, and human rights are appropriately accounted for by providers and deployers of ADM systems. Such bodies could take different forms, from the AI Office proposed in the draft EU AI Act to the Digital Regulation Cooperation Forum (DRCF) in the UK. What matters is that these bodies are democratically accountable and equipped with sufficient powers and capabilities to oversee the design and use of ADM systems, monitor compliance with existing laws, and ultimately ensure that providers and deployers minimise the risks, impacts, and potential harms to individuals and groups.

Avoiding lopsided regulatory impacts

Other pressing factors that bear on discussions and decisions about improving the procedural mechanisms for the effective application of substantive rules to ADM are capacity-building and resource allocation. Both require financing, so investment in the related machinery is key. Failure to increase the relevant budgets can contribute to the continued exposure of individuals and groups to the harms generated by ADM. However, funding is not the only problem.

Beyond capacity-building and resource allocation, if the procedural machinery is to give substantive rules real effect in regulating ADM, it is crucial to ensure that lopsided regulatory impacts do not occur. For instance, the complexity involved in trying to navigate the ‘regulatory thicket’ of data protection frameworks has resulted in ‘harm to small and medium enterprises (SMEs), especially in less developed countries, and a boon to large companies’. The introduction of new substantive rules can affect competition in a given market, sometimes to the detriment of businesses that are smaller than their potential competitors. With respect to the design, development, and deployment of ADM, the law needs to be implemented carefully to ensure that provider and deployer diversity is fostered. Creating conditions that do the opposite helps monopolies become entrenched. While the law is a means to conserve and consolidate power in the hands of its creators and beneficiaries, it also serves as a check on power. It is essential that the regulation of ADM balance these realities in ways that do not tilt the scales even further in favour of technology companies that already boast market dominance, and too often abuse it.

Shaping a promising future

Challenging issues such as those raised by the development and use of ADM sometimes carry with them a sense of defeatism and inevitability. Perhaps part of this feeling is owed to the problem alluded to above: automatically accepting the products of companies in the digital technology sector and assuming that the processes upon which they rely should and will remain as they are. But change is always possible. More attention needs to be directed away from the products produced by digital technology companies and towards the processes by which they are created. By focusing on the processes behind the products, and not only the products themselves, it becomes possible to piece together an accurate picture of what is being provided. This shift could assist effective and meaningful regulation. Furthermore, with the aim of shaping a future in which developments and applications of ADM are fair and just, the legal tools that already exist should be sharpened – at least in addition to, and perhaps even instead of, trying to create shiny new laws.

This article is based on research that received funding from the British Academy (grant no. BAR00550-BA00.01). For more of our thoughts on this subject, see Elena Abrusci and Richard Mackenzie-Gray Scott, ‘The questionable necessity of a new human right against being subject to automated decision-making’ (2023) 31 International Journal of Law & Information Technology 114-143 – available open access.


SUGGESTED CITATION  Mackenzie-Gray Scott, Richard; Abrusci, Elena: Automated Decision-Making and the Challenge of Implementing Existing Laws, VerfBlog, 2023/10/05, https://verfassungsblog.de/automated-decision-making-and-the-challenge-of-implementing-existing-laws/, DOI: 10.17176/20231005-233624-0.
