15 November 2023

Biden, Bletchley, and the emerging international law of AI

On 30 October 2023, US President Biden issued a ‘landmark’ Executive Order to ensure that ‘America leads the way in seizing the promise and managing the risks’ of AI. In the two weeks that have passed since, commentators across the world have analyzed its merits and drawbacks. Yet the aspect of this Executive Order that I find most interesting is not its substance but its timing. The Order was issued four days after the plenary session of the Council of Europe’s AI Committee (CAI), where an international treaty on AI is being negotiated; two days ahead of the UK’s AI Safety Summit, which resulted in the Bletchley Declaration calling for international cooperation to manage AI’s risks; four days ahead of the UN First Committee’s adoption of a draft resolution on lethal autonomous weapons; and on the very same day as the G7 leaders’ agreement on Guiding Principles and a Code of Conduct on AI.

Each of these global fora positions itself as a venue to discuss, create and shape international AI regulation, alongside national and supranational venues aspiring to similar ends. The European Union’s efforts to finalize the impending AI Act, a comprehensive regulation set to become ‘a blueprint’ for AI legislation across the world, can for instance be seen against the same background. The significance of Biden’s Executive Order can therefore only be understood by taking a step back and considering the growing global AI regulatory landscape. In this blogpost, I argue that an international law of AI is slowly starting to emerge, pushing countries to adopt their own position on this technology in the international regulatory arena before others do so for them. Biden’s Executive Order should hence be read with exactly this purpose in mind.

The race to AI regulation

A while ago, I wrote that the race to AI (in which the US, the EU, China and others are eagerly pursuing leadership over the technology) is gradually being complemented by a race to AI regulation. At that time, however, the regulatory race was primarily driven by soft law initiatives. The European Commission’s High-Level Expert Group had issued AI Ethics Guidelines (2019), Japan had published a guidance document titled ‘Social Principles of Human-centric AI’ (2019), and China had issued ‘Ethical Norms for New Generation Artificial Intelligence’ (2021). Since then, the stakes have changed significantly.

First, AI systems and their uptake have continued to increase in scale, along with media reporting on the subject (not always shying away from sensationalism, yet also shedding light on important AI misuses). As awareness of the technology’s risks matured, international organizations started putting AI higher on their agenda and announced their intention to propose new legal frameworks (not always devoid of the underlying desire to be the first to do so). Second, a growing number of regulators also explicitly proclaimed the need for new binding legislation. Shortly after the European Commission proposed a sweeping AI Act (2021), China adopted legal rules on algorithmic recommendation systems (2022), followed by rules targeting deep fakes (2023) and, most recently, interim measures for generative AI (2023). Canada, too, proposed binding rules in its Artificial Intelligence and Data Act (AIDA) in 2022, indicating a willingness to potentially accept an economic setback in exchange for better citizen protection. Third, the technology itself and its many manifestations also changed. The advent of ChatGPT and other generative applications, which enable the public at large to delegate text (and hence also thought) to AI, only underlined the sense of urgency surrounding the technology’s governance.

Today, the race to AI regulation is fiercer than ever, fueled by various motivations. The most obvious is the genuine concern that AI systems – despite all their beneficial potential – can also cause significant individual, collective and societal harm, which may impede the very goals of their adoption. Moreover, if people don’t trust the technology (or rather, if they don’t trust the technology’s developers and users), they won’t rely on it, thereby forsaking its benefits. Consequently, clear rules are needed to manage AI’s risks and to offer developers legal certainty while preserving their appetite for innovation. The question is, of course, how these two aspects ought to be balanced. In addition, regulators hope to gain a first-mover advantage by going in early with new AI rules. This could enable them to disseminate their AI standards in global fora, to export them to other jurisdictions (like the EU’s so-called ‘Brussels Effect’ in the privacy space, for instance), and to retain a competitive advantage for domestic actors by providing them with a head start in implementing the new rules. In theory, at least.

Catching up

In light of these developments (and given the Biden administration’s closer cooperation with the EU on technology and the protection of liberal democracy), the US could not afford to lag behind. In what seemed like an attempt to catch up in the race to AI regulation, the White House published a Blueprint for an AI Bill of Rights in 2022, followed by the more recent Executive Order ‘on Safe, Secure, and Trustworthy Artificial Intelligence’, which explicitly builds on the Blueprint. The Executive Order covers a lot of ground – from a mandate for developers of ‘the most powerful AI systems’ to share their safety test results and other critical information with the US government, to the establishment of standards and best practices to detect deepfakes, a call on Congress to pass bipartisan data privacy legislation, and principles to protect workers. Little of it, however, breaks new ground. That said, the Executive Order also puts forward a crucial goal on which the US does seem to deliver: ‘advancing American leadership abroad’ by ‘expanding bilateral, multilateral, and multistakeholder engagements to collaborate on AI’.

With this Order, the US hence stands firm in establishing itself as a crucial force in AI’s global regulatory arena, and as a key contributor to the emerging international law of AI. Last year, the US joined the CAI’s negotiations for an international treaty on AI based on the Council of Europe’s standards on human rights, democracy and the rule of law, and has since taken on a prominent role. Set to be finalized in the spring of 2024, this treaty could become the first binding international instrument on AI. In July 2023, the US also became – once again – a member of UNESCO, which previously adopted a Recommendation on the Ethics of AI. While this recommendation is non-binding, it calls for strong human rights-oriented protections – including a monitoring mechanism – and was adopted by 193 states. Furthermore, in February 2023, the US made a political declaration on the responsible military use of AI and autonomy, which it is persuading other countries to join (so far, with moderate success).

Yet beyond direct legal endeavors, the US is also indirectly influencing the tone of the international regulatory debate, as can be seen from its involvement in the UK’s AI Safety Summit. The Summit, which took place in Bletchley on 1 and 2 November 2023, was initially criticized for focusing overly on AI’s more hypothetical ‘existential threats’ to humanity rather than tackling the technology’s very real and current harms – a prioritization question that is currently dividing the international AI community. The UK government seemed to have opted for the former. In her 2023 State of the Union speech, Commission President Ursula von der Leyen likewise stressed ‘the risk of extinction from AI’, to the dismay of many actors who fear this might push the need to deal with existing injustices into the background (though the EU’s AI Act plainly centers on current harms). The US’ stance, by contrast, is rooted in the ‘current harms’ approach, as both Biden’s Executive Order and Kamala Harris’ speech in Bletchley underline. Without denying the need for global action against AI’s existential threats, she emphasized that there are also threats which cause harm already today and which, ‘to many people, also feel existential’. These threats pertain not only to discrimination and surveillance, but also to errors leading to medical misdiagnosis, the loss of healthcare benefits, or the flood of mis- and disinformation. Ultimately, the plea to broaden the UK’s understanding of ‘AI safety’ has also found its way into the final Bletchley Declaration, which focuses on short- and long-term risks alike.

Conclusion

There will likely be many more normative AI declarations, recommendations, and orders before the first binding international treaty on AI sees the light of day. Many obstacles and differences of opinion and priority still need to be overcome for this to happen. Yet without a doubt, an international law of AI is slowly taking shape, complementing as lex specialis the many international norms and rules that already apply today (such as, for instance, international humanitarian law when AI is used in military contexts). This occurs in parallel with the development of national AI regulation, which likewise comes in many colors and shapes, yet which unmistakably influences its international counterpart.

As yet, it is too early to ascertain what the value and effect of these initiatives will be, and whether the global efforts towards harmonizing certain AI norms will prevent or legitimize a regulatory race to the bottom by individual states. Yet with its Executive Order, even though its requirements are still nowhere near as comprehensive as those of the EU’s intricate upcoming AI Act, the US has effectively reclaimed leadership in the global regulatory arena for AI. Let us hope it uses this position well.


SUGGESTED CITATION  Smuha, Nathalie A.: Biden, Bletchley, and the emerging international law of AI, VerfBlog, 2023/11/15, https://verfassungsblog.de/biden-bletchley-and-the-emerging-international-law-of-ai/, DOI: 10.59704/e74941ad144ce5ff.
