08 April 2025

When Guidance Becomes Overreach

How the Forthcoming Code of Practice Threatens to Undermine the EU’s AI Act

From 2 August 2025, providers of so-called “General Purpose AI” (GPAI) models – such as GPT, DALL-E, Gemini and Midjourney – will face far-reaching obligations under the EU’s AI Act. The emergence of these large (multimodal) language models in late 2022 prompted EU lawmakers to hastily add GPAI provisions to the AI Act. GPAI model providers must provide technical documentation, implement a copyright policy, publish a summary of training content, and – for particularly powerful models that may pose systemic risks – undertake risk assessment and mitigation measures.

To demonstrate compliance, the AI Act allows providers to rely on a “Code of Practice”, currently being drafted by over 1,000 stakeholders under the auspices of the AI Office, and expected to be adopted by the European Commission before August 2025.

The following provides a critical analysis of the third draft of the Code of Practice. It argues that the drafting process conflicts with the procedural requirements of European law and that the draft itself exceeds the substantive obligations set by the AI Act. Given these concerns, the Commission should rigorously scrutinize the draft before adoption, as approving it without amendment would amount to an unconstitutional overreach of its implementing powers.

Co-regulation as the core strategy of the AI Act

The AI Act is based on the New Legislative Framework (NLF), which relies on co-regulation: a structured dialogue between regulators and industry to translate general legal obligations into technical standards. Instead of specifying technical details in legislation, the AI Act defines essential requirements and leaves the task of concretization to the European standardization organizations CEN and CENELEC through their joint committee, JTC21.

Harmonized standards provide legal certainty: Once adopted by the Commission, compliance with these standards creates a presumption of conformity with the AI Act. Although companies can, in theory, develop their own technical solutions, the administrative difficulties and additional costs involved usually lead them to follow standards. Recognizing these effects, the European Court of Justice has consistently ruled that harmonized standards form “part of EU law” and must be developed and published in accordance with the rule of law (Art. 2 TEU).

The Code of Practice as “part of EU law”

Although harmonized standards are envisaged for GPAI models, standardization efforts in this domain are still at an early stage. To bridge this gap, the AI Act introduces an interim instrument: the Code of Practice. Once adopted by the European Commission through an implementing act, compliance with the Code will grant a presumption of conformity under Art. 53(4)(2) and Art. 55(2)(2) AI Act – similar to harmonized standards. In theory, providers may choose not to rely on the Code and demonstrate compliance through alternative means. However, in practice the Code will likely shape the Commission’s interpretation and enforcement of GPAI obligations.

Given its legal and practical consequences, there is little doubt that the ECJ will also recognize the Code as “part of EU law”. Consequently, the Code must be developed in accordance with the rule of law (Art. 2 TEU) – both procedurally and substantively. However, this is not currently the case.

An unregulated process with 1,000 stakeholders

While the development of harmonized standards is governed by Regulation 1025/2012, the drafting of the Code of Practice relies solely on Article 56 AI Act, which vaguely authorizes the AI Office to invite stakeholders.

The result is a process with no structured rules, no transparency, and no democratic safeguards. Initially, the AI Office planned to draft the Code behind closed doors. In response to criticism, it swung to the opposite extreme, launching a consultation with nearly 1,000 stakeholders — coordinated by 10 experts, including some non-Europeans.

With an extremely compressed timeline and an unwieldy number of participants, the process has left little room for thoughtful deliberation or balanced input. Even more concerning, academics – many with no legal expertise or experience in technical standardization – are leading the drafting effort. Yet the Code, once adopted, will shape expectations regarding GPAI obligations and influence enforcement. Remarkably, this is happening without meaningful participation by standardization experts, without input from the European Parliament, and without oversight from Member States.

To be clear, this criticism is not intended to question the technical expertise of the chairs and stakeholders involved or their willingness to consider diverse perspectives. Rather, the key issue is that the drafting process is not governed by legal procedural rules but has become a top-down effort to regulate GPAI models within a short timeframe – all while discussions in ISO/IEC and CEN/CENELEC on GPAI standards are still at an early, mostly informal stage.

The Code of Practice as a Trojan horse to reshape the AI Act?

The substance of the draft is equally concerning. While its purpose is to help providers comply with existing obligations, the current draft goes beyond mere clarification – introducing new requirements not envisioned in the AI Act.

One example is the proposed role of “external evaluators” before releasing GPAI models with systemic risks, which is not provided for in the AI Act. The draft mandates providers to obtain external systemic risk assessments, including model evaluations, before placing their models on the market (commitment II.11). However, the AI Act itself (Art. 55(1)(a) and recital 114) does not impose this requirement – it only calls for adversarial testing of model evaluations, not independent external risk assessments.

Another example concerns copyright: measure I.2.4. of the draft requires GPAI model developers to make reasonable efforts to determine whether protected content was collected by a robots.txt-compliant crawler – an obligation not imposed by the AI Act either. Additionally, measure I.2.5. mandates that GPAI model providers take reasonable steps to mitigate the risk of downstream AI systems repeatedly generating copyright-infringing content and to prohibit such uses in their terms and conditions. However, these requirements are not found in the AI Act or the Copyright Directive 2019/790, which addresses only primary liability (i.e., the responsibility of GPAI model providers) and does not extend to secondary liability arising from text and data mining.

Again, the issue is not whether these requirements are reasonable, but that the Code’s sole purpose is to clarify the obligations of the AI Act, not to redefine them. Therefore, the Code must not be used as a Trojan horse to reshape the AI Act according to political preferences – bypassing democratic procedures.

Next steps: To adopt or not to adopt the draft Code of Practice

What happens next? The Code of Practice will only take effect if approved by the Commission through an implementing act under Art. 56(6) AI Act. Unlike delegated acts (Art. 290 TFEU), implementing acts (Art. 291 TFEU) do not empower the Commission to amend or supplement the basic legislation, i.e. the AI Act. As the European Court of Justice has repeatedly confirmed, implementing acts “may neither amend nor supplement the legislative act, even as to its non-essential elements”.

Thus, the Commission and the AI Board must not simply rubber-stamp the current draft. Instead, both should conduct a thorough, critical review to ensure that the proposed measures are indeed necessary for implementation and neither contradict nor go beyond the provisions of the AI Act.

Anything less would not only undermine the carefully negotiated political compromise between Parliament and Council in the AI Act but also lead to an unconstitutional overreach of the Commission’s implementing powers.


SUGGESTED CITATION  Ebers, Martin: When Guidance Becomes Overreach: How the forthcoming Code of Practice Threatens to Undermine the EU’s AI Act, VerfBlog, 2025/4/08, https://verfassungsblog.de/when-guidance-becomes-overreach-gpai-codeofpractice-aiact/, DOI: 10.59704/8581208bb6178802.
