19 December 2023

The EU’s Pacing Problem

Why crafting and enforcing AI regulation is hard

In 2011, legal scholar Gary Marchant argued that regulators are confronted by a pacing problem as “successive waves of technical change are washing over [societies]” at an exponential pace. Marchant referred to a lengthy list of specific advancements, including robotics and biotechnology – today, Artificial Intelligence (AI) would probably top his list. Advances in Large Language Models (LLMs) are accelerating progress in many other scientific fields, including in AI itself. Some suggest that AI will impact societies on a scale similar to the printing press, electricity, and the nuclear bomb. It might even be argued that AI has the potential to surpass these earlier technologies in impact – as the co-founders of the Center for Humane Technology, Aza Raskin and Tristan Harris, put it, “nukes don’t make stronger nukes, but AI makes stronger AI”. The speed at which technology is deployed is also increasing exponentially: it took Facebook 4.5 years to reach 100 million users; it took Instagram 2.5 years; and it took ChatGPT two months (roughly 0.17 years). Such developments threaten to open a gap between technological development and deployment on the one hand and legal-ethical oversight on the other. This gap is what has been named the pacing problem.

EU regulators face a pacing problem. This has been demonstrated several times during the legislative process of the AI Act itself: for example, the initial Commission proposal from 2021 did not include a definition of General Purpose AI (GPAI). The proposal did not anticipate the rise of Large Language Models like ChatGPT and GPT-4 but only addressed AI systems designed for specific purposes. This lacuna in the original proposal has haunted the EU Parliament, Council and Commission through the final weeks of the trilogue negotiations, where the inclusion of so-called Frontier Models was hotly contested.

The pacing problem can arguably be addressed in two ways: a) increasing the pace of regulatory oversight; b) reducing the pace of technological development and/or deployment. The first option is the less controversial of the two and more clearly within the control of the EU legislators. The second option faces political and industry opposition and is often dismissed as impossible, strategically unwise, or regressive – a narrative that has been supported and fuelled by industry players lobbying to exclude Foundation Models from the scope of the AI Act.

I will focus on the first option here. However, the options are not mutually exclusive, and I believe we are losing out on important policy levers by flatly disregarding the second option. Affecting the pace at which technology is developed and deployed into society may be possible, strategic, and essential to ensure human progress in the long term. The open letter of March 2023 calling for an interim international moratorium on training models more powerful than GPT-4 illustrates that large parts of industry and academia are concerned about the exponential advances in AI. The letter was signed by thousands of industry and academic experts along with civil society organizations. Policy experts and policymakers should take this call for action seriously and explore what it would look like in practice to integrate an emergency brake on the development and/or deployment of highly capable AI, should we turn out to need it in the near future.

Getting the EU legislative train up to speed

Delegation of power to the Commission

There are several methods of increasing the pace of regulatory oversight which have been on the drawing board in the process of drafting the AI Act. One way is to delegate broad powers to the European Commission to alter the scope of the High Risk regime under the proposed AI Act, as suggested in the European Parliament’s proposal of May 2023. Delegated acts are ‘non-legislative acts of general application to supplement or amend certain non-essential elements of the legislative act’ (Article 290 of the Treaty on the Functioning of the European Union, hereinafter TFEU). They allow the Commission to modify legislation without recourse to the cumbersome Ordinary Legislative Procedure (Article 294 TFEU). The High Risk category is arguably the most significant category of the Act, as it applies to a large pool of systems and imposes significant obligations on providers of High Risk systems. While we must await publication of the final agreement of the trilogue negotiations to know the final shape of these obligations, all three proposals for an AI Act – by the EU Council, Commission and Parliament – suggested that providers of high-risk systems must, for example, put in place a risk management system; meet data quality requirements; and complete a pre-market conformity self-assessment. The importance of the High Risk category is underscored by the fact that the AI Act presumes low risk: any systems falling outside the scope of the defined categories give rise to no obligations under the Act.

The scope of the High Risk regime is determined, among other things, by the areas and use-cases laid down in Annex III of the proposed Act. As an example, one of the eight areas listed in the Council proposal is ‘law enforcement’, and one of the use-cases listed under this area is assessing the risk of persons (re)offending. The Commission proposal delegates only limited power to the Commission, namely to amend the particular use-cases listed under each area in Annex III. In contrast, the Parliament proposal suggested delegating power to the Commission to add, remove, and amend both the areas and the use-cases listed in Annex III. This latter proposal would increase the adaptability of the High Risk scope by giving the Commission more room to make alterations to Annex III. There are also other, less formalistic approaches that increase the Commission’s wiggle room, for example the adoption of vague terms in Annex III, which would allow the Commission to clarify their meaning in future interpretative guidelines. The scope of the High Risk regime could also be shaped by non-binding recommendations, which often influence how terms are interpreted.

A strong AI Office

Another way of increasing the pace of regulatory oversight would be to have an agile, centralized oversight body supporting the implementation and enforcement of the AI Act, as recommended by The Future Society. They suggest implementing a tiered authority model in which a European AI Office oversees the most concerning regulated entities – for example, providers of general-purpose AI or of AI systems affecting more than 45 million people. Such a centralized body could feature, inter alia, a supervisory examination system in which regulatory authorities are in close and ongoing collaboration with providers of Foundation Models. It could also be granted supervision and enforcement competences, including the power to make binding decisions and to order interim measures in cases of urgency. Furthermore, the AI Office would establish public-private partnerships to gather iterative feedback from SMEs and provide support for prospective high-risk AI providers within the Union, fostering responsible innovation and combating the risk of monopolisation. The Future Society points to these design suggestions as the options that would make the AI Office outlined in the Parliament proposal function most effectively, efficiently, coherently, and legitimately. In light of the pacing problem, strengthening information flows from AI industries to regulators, and vice versa, is an important aspect of increasing the pace of regulatory oversight.

Potential backfiring when firing up the EU regulatory engine

One potential drawback of increasing the pace of regulatory oversight is a sacrifice of democratic legitimacy. Lack of input, output, and throughput legitimacy has historically been a point of critique of the European Union. Wide delegations of power enable the Commission to make decisions without the active approval of the Council of the European Union and the European Parliament – the two institutions in the EU regulatory framework designed to provide input legitimacy. However, delegated acts only enter into force if no objection has been expressed by either the Parliament or the Council, and the delegation can always be revoked by either institution. This ensures that democratic control is maintained, while output legitimacy may be increased through more timely adaptation of the legislation to rapidly changing realities. Democratic legitimacy may also be an issue with regard to the AI Office – this must be taken into account when shaping the actual office and its legal framework.

There is also a potential objection that the increased flexibility to adjust the scope of the High Risk regime will create legal uncertainty that may discourage innovation and investment in AI development in the EU. This concern is largely belied by the massive European investment in AI, including by the Commission itself. For example, the Commission is investing billions of euros in AI research and innovation through Horizon Europe, Digital Europe and the Recovery and Resilience Facility, alongside national investments across EU Member States. Furthermore, the Commission has led international dialogue on AI ethics with third countries, including in the G7; systematically monitored and coordinated AI-related developments among Member States; and set up the European AI Alliance – these are significant efforts to promote the development and uptake of AI in the Union.

While the two measures suggested here – wide delegation of power and a strong AI Office – probably fall short of the overhaul our political institutions need to deal with the exponentially increasing pace of technological development and deployment, they are within the scope of the proposals for the EU AI Act, and they would be steps in the right direction. Not much attention has been paid to the pacing problem in recent discussions of the Act, but with the final frantic minutes of the trilogues having passed this weekend, I can hardly wait to search for higher regulatory gears and technology emergency brakes in the historic and long-awaited EU AI Act.


SUGGESTED CITATION  Emborg, Tekla: The EU’s Pacing Problem: Why crafting and enforcing AI regulation is hard, VerfBlog, 2023/12/19, https://verfassungsblog.de/the-eus-pacing-problem/, DOI: 10.59704/39c185c884081a6b.
