
Our Take


Italy's AI Law: A National Framework Bounded by the EU AI Act

On October 10, 2025, Italy became the first EU member state to enact comprehensive national AI legislation when Law No. 132/2025 entered into force. The law establishes governance structures, sector-specific rules, and enforcement mechanisms for AI systems operating in Italy. However, its scope is explicitly bounded by EU Regulation 2024/1689 (the EU AI Act). Article 1(1) directs that the Italian law must align with and operate within the European AI regulatory structure, while Article 3(5) prohibits imposing obligations that exceed those already established under EU law. This interpretive framework arose from the European Commission's scrutiny during the TRIS notification process and fundamentally constrains how the law must be applied.


The law's most significant contribution lies in its governance architecture rather than in substantive obligations. Article 20 designates the Agency for Digital Italy (AgID) as the notifying authority and the National Cybersecurity Agency (ACN) as both market surveillance authority and single point of contact with EU institutions. Sector-specific authorities—including the Bank of Italy, CONSOB, and IVASS—retain market surveillance powers within their respective domains to the extent the AI Act permits. The Italian Data Protection Authority and the Communications Regulatory Authority (AGCOM) maintain their existing jurisdictions under the GDPR and the Digital Services Act. This multi-agency structure creates overlapping competencies whose boundaries will have to be worked out in enforcement practice.


Across regulated sectors, the law establishes a consistent principle: AI serves as a tool, not a replacement for human judgment. The judicial system illustrates this approach most clearly—while courts may employ AI for administrative and logistical functions, interpretive decisions, legal analysis, factual assessments, and judicial measures remain reserved to judges. The same boundary applies to professional services, where AI may provide support and auxiliary functions, but the intellectual work remains the responsibility of human professionals, who must inform clients about AI use.

In the employment context, Article 12 imposes transparency obligations, requiring employers to inform employees when AI systems are deployed, and the law establishes a National Observatory to monitor AI's effects on the labor market. Healthcare provisions permit AI assistance with prevention, diagnosis, and treatment while reserving ultimate decision-making authority to medical professionals and requiring that patients be informed. Article 8(2) separately addresses clinical research by classifying AI-driven processing of personal data in healthcare as serving a significant public interest under GDPR Article 9, thereby permitting secondary use of anonymized or pseudonymized health data to support AI system development.


The law also introduces enforcement mechanisms of its own, most notably in criminal law. Article 26(1)(c) adds Article 612-quater to the Italian Criminal Code, criminalizing the dissemination of AI-generated or AI-altered images, video, or voice content that misleads recipients about its authenticity and causes unjust harm, with penalties of one to five years' imprisonment. It further establishes aggravating circumstances for crimes committed with AI assistance and increases penalties for market manipulation offenses facilitated by AI under the Consolidated Law on Finance. Intellectual property provisions amending Law No. 633/1941 extend copyright protection to AI-assisted works only where they result from human creativity and expand the text and data mining exceptions to cover AI systems, including generative AI.


The law's full impact remains uncertain pending implementing decrees due by October 2026. Article 16 mandates decrees defining a comprehensive framework for data, algorithms, and AI training methodologies, including rights, obligations, remedies, and sanctions, while Article 24 authorizes the government to specify enforcement powers and to address the unlawful creation and use of AI systems. These decrees must adhere to the same interpretive constraint: consistency with the EU AI Act without imposing additional obligations. In the meantime, companies and public administrations face immediate compliance work, including auditing existing AI systems, assessing the legal bases for their use, evaluating transparency levels, and implementing risk mitigation measures. Separately, Italy has authorized up to €1 billion in investment to support innovation in AI, cybersecurity, and quantum computing. The implementation phase will prove critical in determining whether Italy achieves regulatory coherence with the European framework or simply adds complexity to the AI regulatory landscape.