
Understanding the European Commission’s Code of Practice for General-Purpose AI Models, Part 1
By Yunus Bulut
Why AI Systemic Risk Demands Attention
The rapid evolution of artificial intelligence (AI) offers significant opportunities for industries and societies worldwide, enabling groundbreaking research and automating complex tasks. However, its transformative capabilities also pose systemic risks, impacting public health, safety, security, fundamental rights, and society as a whole.
To address these risks, the European Commission has introduced the Code of Practice for General-Purpose AI Models, a guiding document that complements the broader framework of the EU AI Act. This code is designed to foster innovation while ensuring that AI systems remain safe, secure, and trustworthy. It serves as a roadmap for AI providers to demonstrate compliance with regulations, mitigate systemic risks, and uphold fundamental rights and societal values.
In this blog, we’ll introduce the code’s overarching objectives, with a particular focus on the Safety and Security chapter, which plays a pivotal role in ensuring that general-purpose AI models are responsibly developed, deployed, and monitored. We are also excited to announce a series of blog posts dedicated to the code: each will delve deeper into a specific aspect, offering insights and clarifications to enhance understanding and promote responsible practices within the AI community.
Objectives of the Code of Practice for General-Purpose AI Models
The Code of Practice is not just a compliance checklist—it’s a proactive framework aimed at balancing innovation with accountability. Its key objectives include:
Improving Market Functioning: By creating harmonized standards, the Code helps AI providers navigate the EU’s internal market while fostering competition and innovation.
Promoting Human-Centric AI: The Code emphasizes the importance of developing AI systems that prioritize human well-being, fundamental rights, democracy, and environmental protection.
Ensuring Safety and Security: It provides actionable guidance to mitigate systemic risks, ensuring that AI models do not cause harm to users or society at large.
Supporting Compliance: While adherence to the Code is voluntary, it offers providers a structured approach to align with the obligations set forth by the EU AI Act.

By addressing these objectives, the Code seeks to build public trust in AI technologies and set a global precedent for responsible AI governance.
What Are Systemic Risks in AI?
Central to the Code is the concept of systemic risk: a type of risk specific to general-purpose AI models with high-impact capabilities. These risks are characterized by their ability to cause significant harm across multiple domains, including:
Public Health and Safety: Risks such as misinformation, harmful outputs, or models used in dangerous applications like bioweapons design.
Public Security: Risks stemming from cybersecurity vulnerabilities or malicious misuse of AI tools.
Fundamental Rights: Risks that undermine privacy, non-discrimination, or freedom of expression.
Society as a Whole: Risks that amplify social inequalities, disrupt democratic processes, or concentrate power in unethical ways.

The Code outlines structured processes for identifying, assessing, and mitigating these risks, ensuring they are addressed throughout the AI model’s lifecycle—from development to post-market monitoring.
Spotlight on the Safety and Security Chapter
One of the most critical components of the Code is the Safety and Security chapter, which provides detailed guidance for AI providers to manage systemic risks. This chapter focuses on:
Lifecycle Management: Providers must continuously assess and mitigate risks throughout the model lifecycle, including after deployment. Updates to risk management processes are encouraged to keep pace with emerging capabilities.
Contextual Risk Assessment: Risks are not evaluated in isolation; providers must consider the broader environment in which AI models operate, including system architecture, integrations, and computing resources.
Proportionality: The level of risk mitigation must match the severity of the systemic risks identified. For high-risk models, more robust safety and security measures are required.
Collaboration: AI providers are encouraged to collaborate with regulators, civil society, and other stakeholders to share evaluation methods and infrastructure, fostering efficiency and transparency.
Innovation in Safety and Security: The Code emphasizes advancing the state of the art in AI safety and security measures. Providers are encouraged to develop innovative techniques tailored to address specific risks while maintaining the beneficial capabilities of AI models.
By adhering to these principles, the Safety and Security chapter ensures that AI providers proactively manage risks, rather than responding reactively to incidents after they occur. This forward-looking approach is essential in the fast-paced and ever-evolving AI landscape.
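To make the proportionality principle above more concrete, here is a minimal sketch in Python of how a provider might map an assessed risk severity to a proportional set of mitigations. The tier names and measures below are hypothetical illustrations, not terminology or requirements taken from the Code itself.

```python
# Illustrative only: tiers and measures are hypothetical examples,
# not prescribed by the Code of Practice.
RISK_TIERS = {
    "low": [
        "standard pre-deployment evaluation",
    ],
    "moderate": [
        "standard pre-deployment evaluation",
        "adversarial (red-team) testing",
    ],
    "high": [
        "standard pre-deployment evaluation",
        "adversarial (red-team) testing",
        "staged deployment with enhanced post-market monitoring",
        "independent external review",
    ],
}

def required_measures(severity: str) -> list[str]:
    """Return mitigation measures proportional to the assessed severity."""
    if severity not in RISK_TIERS:
        raise ValueError(f"Unknown severity tier: {severity!r}")
    return RISK_TIERS[severity]
```

The point of the sketch is simply that higher-severity tiers strictly add measures on top of the lower tiers, mirroring the Code’s idea that mitigation effort must scale with identified risk.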
Key Commitments in the Safety and Security Chapter
The chapter sets forth a series of commitments for AI providers, which serve as actionable steps to meet the Code’s objectives. These include:
Developing a Safety and Security Framework: Providers must create, implement, and continuously update a framework outlining their risk management processes and measures.
Systemic Risk Identification and Analysis: Providers are required to identify systemic risks through structured processes and analyze them using rigorous evaluation methods.
Systemic Risk Mitigation: If risks are deemed unacceptable, providers must implement safety and security measures to reduce risks to acceptable levels before proceeding with model deployment.
Serious Incident Reporting: Providers must establish processes to track, document, and report serious incidents involving their models, enabling timely corrective actions.
Transparency and Accountability: Providers are encouraged to document their processes and share summaries of their frameworks and reports, as necessary, to enhance public trust while protecting sensitive commercial information.
Each of these commitments is accompanied by specific measures to ensure their effective implementation, fostering a culture of accountability and risk awareness.
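As a rough illustration of the serious-incident-reporting commitment above, the following Python sketch shows one way a provider might structure an internal incident record. All field and method names here are hypothetical; the Code does not prescribe a data format.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SeriousIncident:
    """Hypothetical record for tracking a serious incident.

    Field names are illustrative, not prescribed by the Code of Practice.
    """
    model_id: str
    description: str
    affected_domains: list[str]   # e.g. ["public health", "public security"]
    detected_at: datetime
    reported_to_regulator: bool = False
    corrective_actions: list[str] = field(default_factory=list)

    def report(self) -> None:
        # In practice this step would notify the relevant authority;
        # here we simply flag the record as reported.
        self.reported_to_regulator = True

# Example: track, report, and document a corrective action.
incident = SeriousIncident(
    model_id="gp-model-v2",
    description="Model produced output enabling a security exploit",
    affected_domains=["public security"],
    detected_at=datetime.now(timezone.utc),
)
incident.report()
incident.corrective_actions.append("patched safety filter; re-ran evaluations")
```

The design choice worth noting is that the record captures the full track-document-report-correct cycle in one place, so that timely corrective actions can be tied back to the incident that triggered them.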
Why the Code Matters for AI Providers and Stakeholders
The European Commission’s Code of Practice is more than a regulatory guideline—it represents a paradigm shift in how systemic risks in AI are understood and managed.
For AI providers, it offers clarity on expectations and provides tools to navigate complex compliance landscapes. For stakeholders, including governments, civil society, and end users, it serves as a benchmark for trustworthy AI development. By adhering to the Code, providers can:
Demonstrate Responsibility: Show commitment to developing human-centric AI models that prioritize safety and security.
Build Public Trust: Enhance transparency and accountability to foster confidence in AI technologies.
Reduce Regulatory Uncertainty: Align practices with the EU AI Act and other relevant laws, minimizing the risk of non-compliance.
Laying the Foundation for Safer AI
The Code of Practice for General-Purpose AI Models is a vital step toward ensuring that general-purpose AI models are safe, secure, and aligned with societal values. The Safety and Security chapter equips providers with the tools to proactively assess and mitigate systemic risks, paving the way for responsible AI innovation. As the AI landscape continues to evolve, adherence to frameworks like this will be crucial in balancing technological advancements with ethical, legal, and societal considerations.
In our next blog, we’ll take a closer look at the principles underpinning the Safety and Security chapter and explore how they guide risk management processes for general-purpose AI models. Stay tuned as we unpack these principles and dive deeper into the Code’s commitments in the upcoming posts in this series!
References
[1] https://digital-strategy.ec.europa.eu/en/policies/contents-code-gpai