
Our Take


EU Releases General-Purpose AI Code of Practice

On July 10, 2025, the EU published its Code of Practice for General-Purpose AI Models, a comprehensive, though not exhaustive, framework designed to guide Artificial Intelligence ("AI") providers in complying with the European Union’s AI Act. Its overarching objective is to foster a trustworthy, human-centric, and innovative AI ecosystem while ensuring the protection of health, safety, fundamental rights, democracy, and the environment. The Code is structured into three main chapters: Transparency, Copyright, and Safety and Security. Each chapter outlines specific commitments and measures for AI providers, aiming to ensure responsible development, deployment, and oversight of general-purpose AI models, especially those with systemic risks.

Transparency Chapter

The Transparency Chapter establishes obligations for providers to document, update, and share detailed information about their AI models. This includes technical specifications, training processes, data provenance, energy consumption, and usage policies. Providers must maintain a Model Documentation Form, which serves as a central repository for all required information. This documentation is intended for downstream providers, the AI Office, and national competent authorities, with strict confidentiality and intellectual property protections in place. 

Key measures include:

  • Drawing up and keeping up-to-date model documentation, including all relevant technical and operational details. 
  • Providing access to this documentation to the AI Office and downstream providers upon request, ensuring that downstream providers can understand and integrate the models responsibly. 
  • Ensuring the quality, integrity, and security of the documented information, following established protocols and technical standards. 

The transparency requirements are designed to facilitate oversight, enable compliance with legal obligations, and support the safe integration of AI models into various applications. 

Copyright Chapter

The Copyright Chapter focuses on ensuring that AI providers respect Union copyright law throughout the lifecycle of their models. Providers must implement a copyright policy that addresses lawful data use, respects rights reservations, and mitigates the risk of copyright infringement. 

Key commitments include:

  • Developing, maintaining, and implementing a copyright policy that complies with EU copyright and related rights law. 
  • Ensuring that only lawfully accessible content is used for training, and not circumventing technological protection measures such as paywalls. 
  • Identifying and complying with machine-readable rights reservations (e.g., via robots.txt or metadata) when crawling the web for training data. 
  • Implementing technical safeguards to prevent the generation of copyright-infringing outputs and prohibiting such uses in their terms and conditions. 
  • Establishing a point of contact and a mechanism for rights holders to lodge complaints regarding non-compliance. 
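The Code does not prescribe any particular tooling, but the machine-readable rights reservations it references can be honored with standard components. As a minimal sketch, a training-data crawler could consult a site's robots.txt before fetching content, using Python's standard-library `urllib.robotparser` (the bot name and paths below are hypothetical, for illustration only):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt in which a rights holder reserves /articles/
# against crawling by a (made-up) training-data bot.
rp = RobotFileParser()
rp.parse([
    "User-agent: AITrainingBot",
    "Disallow: /articles/",
])

# A compliant crawler checks each URL before fetching it for training data.
rp.can_fetch("AITrainingBot", "https://example.com/articles/post1")  # False: reserved
rp.can_fetch("AITrainingBot", "https://example.com/public/info")     # True: not reserved
```

Robots.txt is only one of the signals the Code contemplates; rights reservations may also be expressed in metadata or other machine-readable forms, which a provider's copyright policy would need to address as well.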

These measures are intended to protect the interests of rights holders while supporting the lawful and innovative use of data in AI development. 

Safety and Security Chapter

The Safety and Security Chapter is particularly focused on general-purpose AI models with systemic risks—those whose high-impact capabilities could significantly affect public health, safety, security, fundamental rights, or society at large. Providers are required to adopt a state-of-the-art Safety and Security Framework that encompasses risk management across the entire model lifecycle. 

Key elements include:

  • Creating, implementing, and regularly updating a risk management framework that identifies, analyzes, and mitigates systemic risks. 
  • Conducting structured risk assessments, including model evaluations, scenario development, and post-market monitoring.
  • Implementing safety mitigations (e.g., data filtering, output monitoring, staged access) and robust cybersecurity measures to protect model parameters and infrastructure. 
  • Allocating clear responsibilities for risk management within the organization and promoting a healthy risk culture. 
  • Reporting serious incidents to the AI Office and national authorities, and maintaining detailed documentation for oversight.

The chapter emphasizes continuous assessment, cooperation with external evaluators, and transparency in risk management processes to ensure that systemic risks remain acceptable and are effectively mitigated. 

Conclusion

Together, these three chapters appear to form a robust code of practice that operationalizes the EU’s vision for safe, transparent, and lawful AI. The Code balances innovation with accountability, while trying to safeguard the public interest. 

The Code of Practice helps industry comply with the AI Act's legal obligations on the safety, transparency, and copyright of general-purpose AI models.
