Italy’s Constitutional Gamble
Democratic Safeguards and Fragmentation Risks in Europe’s First National AI Law
On October 10, 2025, Italy enacted Law No. 132/2025 on artificial intelligence, becoming the first EU Member State to adopt comprehensive national legislation complementing the AI Act. This move raises a serious constitutional question: can Member States successfully navigate the tension between European harmonization and national constitutional identity in AI governance? Or does Italy’s pioneering approach risk triggering precisely the kind of regulatory fragmentation that the AI Act was designed to prevent?
This contribution argues that while Law 132/2025 introduces innovative constitutional safeguards – particularly regarding democratic integrity – it simultaneously exposes fundamental contradictions in the EU’s approach to digital sovereignty that may undermine both innovation and rights protection.
Between harmonization and fragmentation
Italy’s legislative initiative emerges at a moment of acute uncertainty in EU AI governance. After years of development, the AI Act still faces significant implementation challenges across Member States. By moving first, Italy tries to establish a national framework that anticipates EU requirements while asserting regulatory autonomy in areas left to state competence. This dual ambition, however, creates inherent tensions.
The clearest example is Article 1(2) of Law 132/2025, which commits to “interpretation and application in conformity with the EU regulation” while later provisions introduce distinctly national principles and sectoral rules. This creates a constitutional tightrope: too much national specificity risks erecting barriers within the digital single market; too little could fail to protect constitutional values that vary significantly among Member States. Recent scholarship on digital sovereignty and the AI Act highlights this dilemma – national implementation could either strengthen or fragment Europe’s regulatory architecture.
Italy’s approach reveals a deeper structural problem in EU AI governance. Although the AI Act is a directly applicable regulation, it leaves Member States substantial discretion in key areas – national security, sector-specific applications, and institutional design. This creates what might be termed “harmonized fragmentation” – a regulatory landscape where formal unity masks substantive divergence.
Constitutional protection in the digital age
The most constitutionally significant innovation in Law 132/2025 appears in Article 3(4), which prohibits the use of AI systems that may prejudice the conduct of institutional and political life according to democratic methods and the exercise of competencies and functions of territorial institutions based on principles of autonomy and subsidiarity, or that may prejudice the freedom of democratic debate from illicit interference.
This provision represents a genuine principle of constitutional defence – extending traditional fundamental rights protection into the collective dimension of democratic sovereignty. Unlike the AI Act’s focus on individual rights and market harmonization, Italy explicitly prioritizes the integrity of democratic institutions and processes. In an era of deepfakes, algorithmic manipulation of public opinion, and AI-enabled disinformation, this approach deserves serious consideration.
Yet this innovation also exposes critical questions: Who defines when AI “prejudices” democratic debate? What institutional mechanisms will operationalize this principle? And most importantly, does this provision create a constitutionally legitimate safeguard or an overbroad restriction that could freeze legitimate speech and innovation?
The Italian Constitutional Court’s jurisprudence on the protection of democratic pluralism and institutional integrity suggests that Article 3(4) draws from a well-established constitutional tradition. However, the application of these principles to AI systems – technologies that operate through opacity, scale, and speed fundamentally different from traditional media – remains untested. The risk is that without clear judicial guidance and transparent enforcement mechanisms, this “constitutional safeguard” could become either ineffective symbolism or a tool for discretionary governmental control.
Sovereignty through industrial policy
Articles 5 and 20 of Law 132/2025 reveal a second constitutional dimension: the assertion of digital sovereignty through industrial policy. Article 5 mandates state promotion of AI development specifically in Italy’s “micro, small, and medium enterprises” and facilitates access to “high-quality data” for startups developing AI systems. Article 20 designates AgID (Agency for Digital Italy) and ACN (National Cybersecurity Agency) – both special public administrations – as national authorities, centralizing AI governance within distinctly national institutional frameworks.
This turns AI regulation into an instrument of economic statecraft, reflecting Europe’s concern over technological dependence on non-EU actors and the dominance of the United States and China.
However, this sovereigntist orientation creates significant risks for the digital single market. If each Member State adopts similar industrial policies favouring “national champions”, the result could be 27 fragmented AI ecosystems with divergent standards, duplicated infrastructure, and barriers to cross-border deployment. Legal scholars have warned that this fragmentation risk is not hypothetical.
Furthermore, assigning broad powers to national authorities raises coordination and accountability concerns. While the AI Act establishes the European AI Office as a coordinating body, the effectiveness of this architecture depends on genuine cooperation between national authorities with potentially divergent priorities. Italy’s emphasis on national cybersecurity and digital development may create incentives for regulatory competition among Member States rather than harmonization.
The limits of sectoral specificity
Law 132/2025 takes a sectoral approach, with specific provisions for healthcare (Articles 7–10), labour (Articles 11–12), intellectual professions (Article 13), personal data in administration and justice (Articles 14–15), and criminal law (Article 26).
This granularity reflects Italy’s civil law tradition of comprehensive codification and sector-specific regulation.
Yet this approach reveals a fundamental limitation: technology does not respect the sectoral boundaries the law draws. AI systems increasingly operate across multiple domains: a healthcare algorithm may rely on labour market data, implicate personal data protection, and raise questions of professional liability. The sectoral fragmentation of Law 132/2025 risks creating regulatory gaps, overlapping competencies, and compliance burdens – especially for the small and medium enterprises it aims to support.
Moreover, delegating key details to executive decrees (Article 24) adds further uncertainty. Until these decrees are adopted, many provisions remain undefined. This technique – common in Italian law – stands in tension with the AI Act’s emphasis on legal certainty and predictability for AI developers and deployers.
A model for Europe?
Italy’s Law 132/2025 presents European policymakers with both an opportunity and a cautionary tale. Its innovative constitutional safeguards – particularly the principle of democratic protection and the explicit recognition of AI’s implications for institutional integrity – offer valuable insights for other Member States grappling with similar challenges. But the law also reveals the fundamental contradictions in the EU’s AI governance model. The AI Act’s architecture assumes that a risk-based, harmonized framework can accommodate national constitutional diversity while maintaining integration. Italy’s experience suggests this assumption may be optimistic. Industrial policy as sovereignty assertion, sectoral complexity, and fragmented institutions all point toward divergence, not convergence.
For these reasons, three critical questions will shape the future of Italian and European AI governance:
First, institutional capacity: Can Italy’s designated authorities – AgID and ACN – effectively coordinate surveillance, enforcement, and implementation across the complex landscape created by Law 132/2025? The success of Italy’s model depends not on legislative text alone but on administrative capacity, technical expertise, and political will. Without significant investment in these institutions, the law risks becoming aspirational rather than operational.
Second, constitutional adjudication: How will the Italian Constitutional Court interpret provisions like Article 3(4) when real cases arise? Judicial interpretation will determine whether this innovation enhances or undermines fundamental rights.
Third, European coordination: Will other Member States follow Italy’s path, creating 27 variations on AI governance, or will the European AI Office successfully coordinate national implementations toward genuine harmonization?
The answers to these questions will determine whether the AI Act achieves its goal of a unified European approach or instead legitimizes regulatory fragmentation.
Conclusion
Italy’s Law 132/2025 is a constitutional gamble: an attempt to assert national regulatory identity within a framework designed for harmonization. Its recognition that AI affects not just individuals, but democracy itself deserves serious attention across Europe.
Yet the law also reveals the limits of national AI legislation in an interconnected digital society. The tension between sovereignty and integration, and between constitutional diversity and market unity, cannot be resolved through legislative text alone. It requires ongoing coordination, judicial clarification, and political commitment to a genuinely European vision of trustworthy AI.
Italy’s success will depend on implementation – whether the principles of human oversight, transparency, and democratic safeguarding translate into effective practice. It will depend on whether national authorities can audit high-impact systems, and whether public institutions can protect democracy without choking innovation.
For other Member States the lesson is clear: national AI legislation is both necessary and dangerous. Necessary because the AI Act alone cannot address all constitutional and sectoral specificities. Dangerous because poorly coordinated national rules risk fragmenting precisely what Europe most needs: a unified, rights-respecting, and innovation-enabling digital space.
Italy has made its move – and it is a consequential one. The question now is who follows, and how.