
Systemic Risk Identification in EU Code of Practice

Jul 24


Understanding the European Commission’s Code of Practice for General-Purpose AI Models - Part 2


By Yunus Bulut


In our ongoing exploration of the Code of Practice for General-Purpose AI Models, we turn our focus to systemic risk identification, analysis, and acceptance determination. These processes are outlined in Commitments 2, 3, and 4, and they represent the backbone of responsible general-purpose AI governance. For users of general-purpose AI (GPAI) models, understanding these commitments is crucial for evaluating whether AI providers are meeting their obligations to identify, analyze, and manage systemic risks effectively.


This blog post unpacks the key elements of these commitments, explaining how systemic risks are identified, analyzed, and deemed acceptable. By the end, you’ll be better equipped to assess whether the providers you rely on are adhering to the standards outlined in the Code of Practice.

 


Commitment 2: Systemic Risk Identification


The first step in addressing systemic risks is identification. Commitment 2 requires AI providers to systematically identify risks that could stem from their models and propagate harm at scale, either directly or indirectly. The goal is to recognize risks early in the model lifecycle, enabling targeted mitigation efforts.


Key Processes


  1. Structured Risk Identification:

    • Providers must compile a list of potential systemic risks based on a predefined set of risk types (e.g., risks to public health, safety, fundamental rights, or societal stability). These risks are outlined in Appendix 1.1 of the Code.

    • Relevant information sources include:

      • Model-independent information (e.g., market analyses, literature reviews).

      • Insights from similar models and post-market monitoring.

      • Guidance from regulatory bodies, external experts, and other stakeholders.

  2. Systemic Risk Scenarios:

    • For each identified risk, providers must develop detailed systemic risk scenarios—hypothetical situations that illustrate how a risk might materialize. Scenarios help providers understand the pathways through which risks could emerge and spread.
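
The output of these two steps resembles a structured risk register: each entry records a risk type drawn from Appendix 1.1, the information sources supporting its identification, and the scenarios developed for it. A minimal sketch of such a register follows; the data structures and example values are invented for illustration and are not prescribed by the Code:

```python
from dataclasses import dataclass, field

@dataclass
class RiskScenario:
    description: str   # hypothetical situation in which the risk materializes
    pathway: str       # how the harm could emerge and spread

@dataclass
class SystemicRisk:
    risk_type: str                  # e.g. a category from Appendix 1.1
    sources: list[str]              # information sources behind the identification
    scenarios: list[RiskScenario] = field(default_factory=list)

# Illustrative register entry (all values invented for the example)
register = [
    SystemicRisk(
        risk_type="public health and safety",
        sources=["literature review", "post-market monitoring of similar models"],
        scenarios=[
            RiskScenario(
                description="Model produces actionable harmful instructions",
                pathway="direct misuse by malicious users at scale",
            )
        ],
    )
]
```

A register like this makes the later analysis and acceptance steps auditable: each scenario can be traced back to the risk type and evidence that motivated it.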


What You Should Look For


  • Providers should publish summaries of their systemic risk identification processes and scenarios.

  • Look for transparency in how potential risks are documented and justified.

  • Verify whether providers consider a wide range of risks, including emerging threats like misuse in critical infrastructure or harmful societal manipulation.

 


Commitment 3: Systemic Risk Analysis


Once risks are identified, the next step is analysis. Commitment 3 focuses on deeply understanding the nature, sources, and potential impacts of each systemic risk.


Key Elements


  1. Model-Independent Research: Providers gather external information, such as incident reports, forecasts, and market trends, to inform their analysis. This ensures that the analysis is not limited by the provider's internal perspective.

  2. Model Evaluations: Providers conduct state-of-the-art model evaluations, testing the model’s capabilities, propensities, and effects. Techniques include:

    • Red-teaming and adversarial testing to uncover vulnerabilities.

    • Simulations and stress tests to evaluate the model’s behavior in different scenarios.

    • Benchmarking against similar models to assess relative risks.

  3. Systemic Risk Modeling: Providers use structured modeling techniques to map out how specific risks could materialize. This includes understanding risk pathways, sources, and potential cascading effects.

  4. Risk Estimation: Providers estimate the likelihood and severity of harm for each systemic risk, expressed in formats such as risk matrices or probability distributions.

  5. Post-Market Monitoring: Even after deployment, providers must monitor real-world usage to detect new risks or changes in existing ones. Feedback mechanisms (e.g., reporting channels, community evaluations) are essential for this process.
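
The risk estimation step above combines likelihood and severity into a risk matrix. A few lines of Python can sketch the idea; the 3x3 ordinal scales and the band thresholds below are invented for illustration, since the Code does not prescribe specific scales or formats:

```python
# Illustrative ordinal scales for a simple 3x3 risk matrix
LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3}
SEVERITY = {"minor": 1, "significant": 2, "critical": 3}

def risk_score(likelihood: str, severity: str) -> int:
    """Locate a risk in the matrix as a likelihood-times-severity score."""
    return LIKELIHOOD[likelihood] * SEVERITY[severity]

def risk_band(score: int) -> str:
    """Map a matrix score to a qualitative band (thresholds are illustrative)."""
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"
```

Real providers may instead use probability distributions or other quantitative formats, as the Code allows; the matrix is simply the most common qualitative presentation.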


What You Should Look For


  • Evidence of Rigorous Evaluations: Does the provider disclose how their models were tested? Are their testing methods aligned with state-of-the-art practices?

  • Transparency in Modeling and Estimation: AI providers should share insights into how they model systemic risks and estimate their probability and severity. Look for detailed risk matrices, qualitative descriptions, or quantitative data that outline the risks associated with the model.

  • Post-Market Monitoring Mechanisms: Providers should have clear frameworks for monitoring models after deployment. These might include user feedback channels, incident reporting tools, or partnerships with external evaluators to identify emerging risks.

  • Independent External Evaluations: Independent assessments by third-party experts are an important indicator of a provider's commitment to unbiased systemic risk analysis. Ensure the provider includes external validation in their processes.

 


Commitment 4: Systemic Risk Acceptance Determination


The final step in the systemic risk management process is determining whether the identified and analyzed risks are acceptable. Commitment 4 ensures that AI providers establish clear criteria for systemic risk acceptance and make informed decisions about whether to proceed with development, market deployment, or use of the model.


Key Processes


  1. Defining Acceptance Criteria:

    • Providers must set measurable systemic risk tiers or other criteria that indicate acceptable levels of risk. These tiers are based on model capabilities, propensities, and other metrics.

    • Safety margins are incorporated to account for uncertainties and potential limitations in risk assessments and mitigations.

  2. Assessing Risks Against Criteria: Each identified systemic risk is evaluated against the predefined acceptance criteria. This includes considering mitigating factors like safeguards implemented during development.

  3. Decision to Proceed or Not:

    • If systemic risks are determined to be acceptable, providers can proceed with development, deployment, or use of the model.

    • If risks are deemed unacceptable, providers must take corrective actions, such as:

      • Restricting market availability (e.g., adjusting licenses or usage restrictions).

      • Implementing additional safety and security mitigations.

      • Conducting another round of systemic risk identification, analysis, and acceptance determination.
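
The decision logic described above can be sketched compactly: compare each estimated risk against its tier threshold, keep a safety margin for estimation uncertainty, and trigger corrective action when the margin is not met. The numeric threshold and margin below are purely illustrative assumptions, not values from the Code:

```python
def is_acceptable(estimated_risk: float,
                  tier_threshold: float,
                  safety_margin: float = 0.2) -> bool:
    """Accept only if the estimate stays below the tier threshold with
    room to spare for uncertainty (margin value is illustrative)."""
    return estimated_risk <= tier_threshold * (1 - safety_margin)

def determine(estimated_risk: float, tier_threshold: float) -> str:
    """Proceed, or fall back to the corrective actions listed above."""
    if is_acceptable(estimated_risk, tier_threshold):
        return "proceed"
    return "corrective action"  # restrict availability, add mitigations, re-assess
```

Note how the safety margin operationalizes the Code's requirement to account for limitations in the assessment itself: a risk estimated just under the threshold is still treated as unacceptable.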


What You Should Look For


  • Clear Risk Criteria: Providers should publish their systemic risk acceptance criteria. These criteria should be transparent, measurable, and justified based on the nature of the risks.

  • Documentation of Decisions: Providers should explain why they determined the risks to be acceptable and describe the safety margins they incorporated.

  • Corrective Measures for Unacceptable Risks: Verify whether the provider has a process for halting development or restricting deployment if risks exceed acceptable thresholds.

 


How Commitments 2, 3, and 4 Protect Users and Society


Together, Commitments 2, 3, and 4 form a comprehensive framework for systemic risk management. Here’s how these commitments benefit users and society:


  1. Proactive Risk Management: By identifying and analyzing risks early in the lifecycle, providers reduce the likelihood of harmful impacts after deployment.

  2. Transparency and Accountability: Clear documentation of systemic risks and decisions builds user trust and ensures providers are held accountable for their actions.

  3. Adaptability to Emerging Risks: Post-market monitoring and iterative risk assessments enable providers to respond to new threats as they arise.

  4. Alignment with Ethical Standards: The systemic risk acceptance process encourages providers to prioritize safety, security, and fundamental rights, ensuring models are deployed responsibly.

 


Conclusion


Commitments 2, 3, and 4 of the Code of Practice establish a robust framework for managing systemic risks in general-purpose AI models. As a user or stakeholder in the AI ecosystem, understanding them empowers you to advocate for responsible AI practices and hold providers accountable for systemic risk management.


For providers, these commitments represent an essential responsibility to protect users, society, and the broader AI ecosystem. As increasingly capable AI systems emerge, systemic risk management will remain a cornerstone of trustworthy and safe AI development.


In upcoming posts, we will delve into other commitments outlined in the Code of Practice, including safety mitigations (Commitment 5), security mitigations (Commitment 6), and serious incident reporting (Commitment 9). These commitments complement systemic risk management and collectively form the foundation of robust AI governance. Stay tuned as we continue to unpack the Code of Practice and explore actionable insights for both AI users and providers!
