European AI FOMO
The European Commission Sacrifices the Digital Acquis at the Altar of AI Hype
The last 12 months have seen an extraordinary shift in the European Commission’s approach to digital regulation. Faced with intense geopolitical uncertainty and concerns about EU competitiveness and security, the Commission is moving away from the espoused “Brussels Effect”, which positioned the EU as a global rule setter and placed fundamental rights at the centre of its strategy for digital regulation. At the heart of these concerns lies the much-hyped potential of so-called “artificial intelligence” (AI). One of the major forces shaping the Commission’s agenda in this policy shift may be described as AI FOMO (“fear of missing out”).
This AI FOMO is driving the Commission to launch a multitude of investment programmes and other initiatives to boost AI. Our primary focus, however, is on the proposed legislative reforms and their manifestation in deregulatory strategies. This post charts how such an agenda is shaping Commission policy and highlights how AI FOMO is driving deregulation in EU digital law.
A change in focus
The current agenda marks a major shift from previous Commissions, including President von der Leyen’s first cabinet. Since the passing of the GDPR in 2016 and a failed attempt to pass an ePrivacy Regulation, the Commission launched successive legislative efforts to regulate digital practices, services, and products. With varying degrees of ambition, the last Commission drew up an abundance of new legislation, including the Digital Services Act, Digital Markets Act, Data Governance Act, Data Act, AI Act, the since-withdrawn AI Liability Directive, and the Platform Work Directive.
Yet, roughly two months into its tenure, the new Commission seems to have hit the emergency brake. Drawing on two policy reports on the state of the single market and the competitiveness of the EU, by Enrico Letta and Mario Draghi respectively, the Commission is now fixated on exploring the “frontier” that is AI, and its approach to AI issues has shifted accordingly.
Indeed, an analysis of the Commission’s policy documentation and public addresses from 2025 onwards suggests that AI concerns, and particularly the desire to ensure that the EU does not miss out on economic growth speculated to be driven by AI developments, are motivating the dramatic policy shifts pursued since the beginning of 2025, especially via the Digital Omnibus. We suggest that ever since the Letta and Draghi reports, the Commission has been experiencing AI FOMO, driven by simplistic logics that pit regulation against competitiveness and innovation at the expense of substantive protections for individuals.
The Commission’s “new AI frontier”
President von der Leyen set the scene in her speech at the AI Action Summit in Paris in February 2025: parroting the marketing claims of AI companies, she opined that AI would soon achieve human-level reasoning. Such a claim is spurious, and yet she reiterated it at the Annual Budget Conference in May 2025, insisting on “embracing a way of life where AI is everywhere” and drawing heavy criticism from AI researchers.
And not long after, deregulation in the digital space began. The Commission quickly withdrew the AI Liability Directive, an attempt to harmonise the law on AI-related liability issues. But this first step was not seen as going far enough. At a conference organised by the Commission in September 2025 marking the one-year anniversary of the Draghi report, the namesake economist urged “doing what has not been done before and refusing to be held back by self-imposed limits”, as “Europe’s survival” was said to be at stake and the rest of the world had already broken long-standing taboos. Draghi denounced as complacency the rule-of-law procedures causing delays in the implementation of the deregulatory agenda. Consequently, he demanded a “radical simplification of the GDPR”, especially with regard to training AI models, and a pause on the application of the AI Act “until we better understand the drawbacks”. By “drawbacks” he did not, as one might think, mean the known issues of deploying high-risk AI systems in sensitive areas such as the health sector, but rather the effects of AI regulation designed to protect individuals from the harms associated with AI. On the heels of Draghi’s speech, in November 2025, the Commission set the course for the main part of its deregulation agenda in EU data law: the Digital Omnibus and the Digital Omnibus on AI, which not only scrap EU constitutional guardrails but are also on a collision course with major principles underlying EU data legislation.
AI FOMO meets the digital acquis
The Digital Omnibus and the Digital Omnibus on AI were published on 19 November 2025 and propose significant reforms of the EU’s digital rulebook. The Commission’s proposals are emblematic of its level of AI FOMO. Many of the proposed changes supporting AI training or deployment risk undermining fundamental rights standards, as well as the coherence of the digital acquis.
Two foundational changes are proposed to narrow the scope of application of the GDPR and broaden potential re-use of data, seemingly to make more data available for AI training and research purposes. First, the Commission proposes to amend the definition of personal data (Article 3(1) Digital Omnibus Proposal). This amendment broadly excludes pseudonymised data from the scope of the GDPR by adopting a relativistic approach to the definition: data is considered personal only for an entity that holds the information needed to identify a given data subject. Given how central the concept of personal data is to all aspects of data protection law, and the complexity and risks of rewriting it, it is hard to conceive of this as a mere “technical” reform.
Further, the Commission proposes to weaken the principle of purpose limitation by broadening the notion of scientific research to include commercial interests (Article 3(1)(b) Digital Omnibus Proposal) and then exempting further processing for scientific purposes from the principle of purpose limitation. This will fundamentally weaken data protection, as any lawful basis for processing data at the point of collection will open the door to further use of that data. Data-driven commercial research has led to major scandals in recent years (e.g., Facebook’s psychological testing or election interference by Cambridge Analytica), yet such lessons seem to have been forgotten in times of AI hype.
We also see measures explicitly seeking to facilitate AI training. A clarificatory article is proposed to confirm that the legitimate interests legal basis may be used to process personal data for training and AI development purposes (Article 3(15) Digital Omnibus Proposal), a position which is legally doubtful under the current GDPR regime. Additionally, the Commission has proposed a new specific legal basis allowing the scraping of special categories of personal data, i.e., particularly sensitive data such as political opinions, health and biometric data, or sexual orientation, for the training of AI models (Article 3(3) Digital Omnibus Proposal). Given that such special categories of personal data are otherwise subject to very strict legal protection, this constitutes a significant liberalisation of the law and, due to the highly sensitive nature of this kind of data, carries a particular risk of abuse and discrimination.
Finally, the AI Act, barely adopted, is also being reopened, and the proposed amendments show a distinctly deregulatory flavour. The Digital Omnibus on AI delays the application of various rules in the AI Act until the Commission adopts a corresponding decision (Article 1(31) Digital Omnibus on AI). AI literacy requirements have been watered down (Article 1(4) Digital Omnibus on AI). Significantly, the proposal also largely exempts high-risk models that are already on the market from the application of the AI Act (Article 1(30) Digital Omnibus on AI). Such an amnesty for categories of products already deemed by the EU legislature to carry high risk rolls out a welcome mat for developers to flood the EU market with abusive technologies before the AI Act’s rules come fully into effect.
Of course, the Digital Omnibus is only a proposal at this stage, though the Council has indicated broad agreement. Nevertheless, there are already significant reasons to be concerned about this direction of travel and its underlying motivations.
The fallout of AI FOMO
EU digital and data laws are certainly not perfect, and yet AI FOMO is a particularly worrying reason for reform. In its anxiety about missing out on AI, the Commission seems to be leading the EU in what it frames as a race against the US and China, alleged competitors following their own agendas, while letting those competitors define the benchmarks for success. If there is no clear goal other than jumping on the AI bandwagon, any assessment of this strategy will likely ignore the fallout for the rights of individuals and marginalised groups.
Unquestioningly following the move towards ever larger AI models, dependent on specific US chip technology, and under political pressure from the US administration, the Commission sees such urgency to act that it is willing to compromise essential elements of its digital regulation. All this comes long after researchers started warning of AI hype (see e.g., Raji et al., Bender et al., Widder and Hicks) and at a time when financial markets are increasingly nervous about an AI bubble, i.e., a massive overvaluation of chip manufacturers, cloud providers, and AI companies.
And when one looks at the concrete actions proposed, the much-vaunted European approach to AI is not discernible. Instead, the Commission resorts to platitudes such as “excellence” and “trust”, while its commitment to “ensuring AI works for people” offers only minimal support for workers adapting to AI and refers to precisely the regulatory framework the Digital Omnibus is undermining. A common thread in all of these legislative and accompanying investment initiatives is a clear focus on competitiveness and AI adoption, while the protection of fundamental rights becomes an afterthought, both rhetorically and substantively.
This may not be entirely surprising. Since the Data Protection Directive of 1995, EU data (protection) law has sought to balance market-making and fundamental rights objectives (cf. Article 1 DPD and Article 1 GDPR). Similarly, the AI Act aims to improve the internal market and thereby promote AI, while ensuring, inter alia, the protection of fundamental rights (Article 1(1) AI Act). Although it was only an imperfect compromise, EU data protection law has been a source of important individual protections and redress. What we seem to be seeing now is the Commission leaning into the worst of the EU’s neoliberal tendencies, working towards the depoliticisation of the market and the elevation of market priorities rather than countering prevailing paradigms. While the Commission superficially retains the veneer of fundamental rights protection, the contents of the Digital Omnibus betray this framing. Accordingly, it is perhaps not surprising that when introducing the Digital Omnibus, Executive Vice-President Henna Virkkunen and Commissioner Michael McGrath each emphasised three times that the new rules did not compromise fundamental rights guarantees. Virkkunen did so after laying out how the goal of the package was to boost EU competitiveness and reduce burdens on businesses, while McGrath argued that the changes to the GDPR were minimal. Such rhetoric cannot be sustained once the substance of the proposals is examined.
Despite its rhetoric about steering the development of AI systems built around European values, the Commission seems to see the EU at a critical turning point. Consumed with AI FOMO, it is ready to jettison fundamental rights protection and grant businesses greater discretion to circumvent such protections, so long as they use AI systems.