17 November 2025

The Omnibus Package of the EU Commission

Or How to Kill Data Protection Fast

The European Commission is planning a fundamental overhaul of European digital regulation, and it wants to move quickly. On 19 November, it intends to publish draft legislation for an omnibus package designed to simplify and harmonise various legal acts and to reduce bureaucracy. The draft was leaked last week, and it makes for tough reading for everyone who values fundamental rights-oriented regulation. If the proposals obtain the necessary majorities in the legislative process, the cornerstones of data protection law would be fundamentally changed. This article focuses on the proposed amendments to the definitions in Articles 4 and 9 of the GDPR.

The discussion leading to the Omnibus

Since the Draghi Report, competitiveness and efficiency have been the apparent driving forces behind all European regulation. That said, efficiency in itself is not a meaningful value – one can also abolish democracies very efficiently. By contrast, democratic values and the protection of fundamental rights – from a normative point of view – are not subject to any efficiency constraint. Pursuing competitiveness, especially between EU and US-based Big Tech companies, via deregulation is a risky business: deregulation favours those who currently ignore the rules and punishes the entities that follow them.

There is broad agreement that the jumble of European regulations, especially in the digital sector, has become so complex that even experts can hardly keep track of them without explicit clarification of their interrelationships. At the same time, there is lively debate about how to address this shortcoming. Various aspects of reforming the GDPR have already been addressed here, here and here. However, none of these proposals has questioned the fundamental definitions of personal data and special categories of personal data. Instead, the discussion has focused on a risk-based approach to the GDPR, which would relieve smaller, supposedly lower-risk data processors while regulating larger ones more strictly.

I do not dispute that the GDPR entails considerable compliance costs and that SMEs (small and medium-sized enterprises) in particular are sometimes disproportionately burdened. It is convincing, reasonable and fair to differentiate according to the risks involved in data processing. But there is almost nothing of this approach in the Commission’s omnibus proposal. Instead, it attacks basic definitions that have existed for decades and are deeply rooted in the Charter of Fundamental Rights as well as in the jurisprudence of the CJEU.

Defining personal data as the core decision of the GDPR

Personal data under Article 4(1) means information relating to an identified or identifiable natural person. The classification relies on a risk assessment: anonymised data does not fall within the scope of the GDPR. But how high must the risk of re-identification be for data protection obligations to apply?

The current definition relies on objective criteria and context for identifiability that were established by the interpretation of Article 8 of the Charter of Fundamental Rights (CFR) and the case law of the CJEU. Data is not personal when it is not likely to be re-identified either by the controller or by any other person; this understanding traces back to the Data Protection Directive. In the Breyer case, the CJEU ruled that a means is unlikely to be used to identify a person if the risk of identification is de facto insignificant. This applies when identifying the person is prohibited by law or is impracticable, for example, because it would require a disproportionate amount of time, cost and labour. Furthermore, the court decided that identification is possible if the controller can access additional information, including via competent state authorities.

In case C-479/22 P, the CJEU examined whether information in a press release would allow identification by the general public, especially via online sources. The Court thus confirmed that the information required to identify the person does not have to be held by a single controller.

In SRB/EDPS, the CJEU developed a context-related understanding of personal data: entities whose knowledge the controller cannot use are irrelevant for assessing the identification risk, even if those third entities can identify the data subjects themselves. However, Article 13(1)(e) of the GDPR still obliges controllers to inform data subjects about potential recipients, even if the data later appears anonymous.

The Omnibus draft is changing the definition of personal data

The draft now states that data is not personal for an entity merely because another entity is reasonably likely to be able to identify the natural person: “Such information does not become personal for that entity merely because a subsequent recipient has means reasonably likely to be used to identify the natural person to whom the information relates.”

As a result, the proposal divides responsibility between different entities, linking the probability of re-identification to the respective data controller. At first glance, this seems reasonable: one cannot be responsible for something one does not control. It also seems to be in line with the recent SRB/EDPS decision of the CJEU.

However, that decision concerned a very specific scenario – the matching of IDs with comments – and does not support the conclusion that secondary identifiability is irrelevant: the court explicitly referred to the Breyer case, stating that for data to be classified as “personal data”, it is not necessary for all the information required to identify the person concerned to be in the hands of a single entity (par. 99). The draft, on the other hand, clearly states that the identification potential of the subsequent recipient is irrelevant, even where the means are reasonably likely to be used. Matters are complicated further by the fact that the assessment of the re-identification risk refers to a third party.

This division leads to a diffusion of responsibility. Entities without the technical capability to re-identify could argue that pseudonymised data always falls outside the GDPR, or could split up data processing procedures to avoid obligations. This blurring erodes data subjects’ rights: if the GDPR does not apply, data subjects have no rights, the principle of purpose limitation does not apply, and there is no legal basis to determine responsibility. If data subjects do not know whether their data is processed, they cannot exercise their rights.

It is unclear how this will foster the goals of competitiveness, efficiency and simplification. Advertising IDs, for example, enable the identification of individuals without assigning them to a name. An organisation that does not itself have the means of identification could forward this information outside the scope of the GDPR without having to inform the data subjects. Instead of introducing a risk-based adjustment for different types of data controllers, the draft applies uniformly – from Big Tech to local kindergartens.

Narrowing the understanding of special categories of data

The draft also changes Article 9 GDPR. It does not introduce risk categories but fundamentally changes the definition of “special categories of data” for all data processing operations. Currently, Article 9 provides special protection for personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, as well as for genetic data, biometric data, data concerning health, and data concerning a person’s sex life or sexual orientation.

In practice, this provision is difficult to handle: what data can “reveal” sensitive data? In the age of AI, almost any information can become relevant; only a few data points are needed to attribute sensitive characteristics to a person. Treating the often-cited example of a food delivery to a psychiatric hospital the same as processing by Meta or Google, which build extensive health profiles of their users, is not constructive.

Unfortunately, the draft does not address this problem. Instead, it lowers the overall level of protection for sensitive data: it only covers information that directly reveals the protected attributes of a specific data subject. First, this contradicts Article 6 of Convention 108 and current CJEU case law. Second, inference techniques used by AI systems would no longer fall within the scope of application. This does not make things any easier for smaller companies that rely on the processing of health data, for example, in the care sector. Meta, Google and Co., on the other hand, have had no need to collect sensitive data directly for decades but can simply infer it. It is completely unclear how these changes are intended to promote European competitiveness.

A free pass for AI training with sensitive data

The new Article 9(2)(k), in conjunction with the new paragraph 5, establishes an exception to the general prohibition of processing in Article 9(1): processing shall now be allowed in the context of the development and operation of an AI system as defined in the AI Act. Linking the GDPR to the AI Act is welcome, but the provision in Article 9(2)(k) leads to considerable uncertainty in relation to Article 10(5) of the AI Act, which permits the processing of sensitive data only under strict conditions for the purpose of bias detection and correction. The draft, however, allows general processing for AI training and operation.

The clear aim of lowering the level of protection in the omnibus proposal is particularly evident here. The AI Act already creates an exception to Article 9(1), and the draft extends this even further. Paragraph 5 calls for “appropriate” organisational and technical measures where AI is trained and operated with sensitive data, but this only adds legal uncertainty: the provision considers solely the perspective of the controller, without any proportionality assessment. Again, SMEs will struggle to define and evidence “appropriate” safeguards, while large players effectively receive a green light to train and operate AI with sensitive data.

The proposal favours AI over other technologies without assessing any risks. The reference to “training an AI system” is vague and cannot serve as a proper regulatory category; it effectively makes regulation and enforcement impossible. For example, the Sora video model – currently in the headlines for the mass production of torture videos of women – would fall under the new permission, while a deterministic algorithm in medical research would still require consent. This approach caricatures fundamental rights protection: the “AI” label becomes a shortcut to lighter rules.

These amendments sit alongside a proposed new Article 88c that would ground AI training in legitimate interests. I have explained here why, from a dogmatic perspective, a provision that requires consideration of individual cases is unsuitable for undifferentiated mass data processing.

Political implications

Parts of the proposal read like a retrospective legal legitimisation of existing business models that structurally violate data protection law but are so widespread in reality that consistent enforcement has become unrealistic – a gift to Big Tech. If this is ultimately the outcome of the democratic legislative process, so be it. But then we should acknowledge the implications of relinquishing the last remaining instruments of control.

Even in the US – often cited as a counterpoint to European regulatory zeal – critical voices about the lack of regulation of the AI industry are growing louder. The industry is not only disconnected from the real economy by dizzying sums of investor money; it also has a real impact on people’s lives. Numerous lawsuits are pending in various areas, and the voices fearing that the AI bubble will burst are multiplying. The blanket assertion of a “race” and the empty shells of “effectiveness” and “competitiveness” are becoming tiresome.

Europe will not catch up with the US in Big Tech – and has no reason to want to. Legal certainty, digital sovereignty, and a principled commitment to fundamental rights and democratic values are the true foundations of sustainable development. What Europe needs is investment in its markets, infrastructure, and SMEs, including industrial AI.

The way forward

There is much more to be said, and the draft also contains positive elements that could advance the Omnibus’s objectives; these merit analysis beyond the scope of this piece. But amending instruments so deeply rooted in the CFR and in primary law requires an evidence-based assessment. It needs an honest recognition that normative consequences and future externalities cannot be reduced to metrics. Now is the time for a serious debate that can lead to an informed, balanced legislative choice. Fear of “falling behind” is a poor guide. Sustainable growth, public-interest-oriented technology, and real-world innovation depend on expertise, foresight, and a clear-eyed assessment of risks and opportunities. Your chance, Europe!


SUGGESTED CITATION  Ruschemeier, Hannah: The Omnibus Package of the EU Commission: Or How to Kill Data Protection Fast, VerfBlog, 2025/11/17, https://verfassungsblog.de/the-omnibus-package-of-the-eu-commission/.




