
AI Meets Constitution: The Role of the Fundamental Rights Impact Assessment
By Anil Tahnmisoglu • 05/05/2025
The EU Artificial Intelligence Act (EU AI Act) is set to become the world’s most comprehensive legal framework for the development and deployment of AI systems. Among its many obligations, Article 27 introduces a new compliance milestone for certain deployers of Annex III high-risk AI systems: the Fundamental Rights Impact Assessment (FRIA).
This blog post outlines what a FRIA entails, who must conduct one, when it must be completed, and how it interacts with other existing assessments like the Data Protection Impact Assessment (DPIA) under the GDPR.

What Is a Fundamental Rights Impact Assessment?
A Fundamental Rights Impact Assessment is a structured, risk-based analysis of the potential impact that a high-risk AI system may have on individuals’ fundamental rights, as protected by the Charter of Fundamental Rights of the European Union. It is not a tool for eliminating risk, but a mechanism for identifying, evaluating, and mitigating rights-related risks prior to system deployment.
AI systems—particularly those that are opaque, autonomous, and capable of making decisions with real-world consequences—introduce risks such as discrimination, unfair treatment, or denial of access to essential services. The FRIA ensures these risks are properly documented and governed before the system is put into use.
Who Must Conduct a FRIA?
Under Article 27 of the EU AI Act, the obligation to carry out a Fundamental Rights Impact Assessment (FRIA) applies to specific categories of deployers using high-risk AI systems. These fall into two distinct groups, based on the nature of the deployer or the specific use case of the system:
Public Service Deployers
This category includes deployers that serve a public function and use Annex III high-risk AI systems, except systems used in the management of critical infrastructure (Annex III, point 2). It covers:
Deployers governed by public law, as defined in Article 2(4) of Directive 2014/24/EU. This includes entities established to meet needs in the general interest—typically public authorities—operating without a commercial purpose.
Private deployers providing public services, such as those operating in healthcare, social welfare, public transportation, or education.
These deployers must conduct a FRIA for any high-risk AI system they use (with the exception of systems covered under critical infrastructure).
Deployers of Specific High-Risk Systems
FRIA obligations also apply to any deployer, regardless of public or private status, if they use the following types of high-risk AI systems, as defined in Annex III:
AI systems used to evaluate the creditworthiness of natural persons or establish their credit scores (Annex III, 5(b)).
AI systems used to assess risk and pricing in life and health insurance for natural persons (Annex III, 5(c)).
In these cases, the obligation is triggered by the system’s function, not the nature of the deployer.
A Note on the Critical Infrastructure Exemption
Importantly, deployers of AI systems used in the management and operation of critical infrastructure (Annex III, point 2) are explicitly exempt from the requirement to conduct a Fundamental Rights Impact Assessment (FRIA) under Article 27.
This exemption recognizes that such systems—often used in contexts like energy, transport, water supply, and digital infrastructure—are typically subject to distinct sectoral safety regulations and strict technical oversight frameworks. These environments already mandate rigorous risk assessments related to system resilience, public safety, and infrastructure security, which may overlap with or supersede some concerns addressed by a FRIA.
However, it’s important to emphasize that this exemption applies only to the systems covered under Annex III, point 2, and not to other high-risk AI systems that might be used within critical sectors but fall under different points of Annex III (e.g., biometric identification or credit scoring).
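To make the layered applicability rule above easier to scan, here is a minimal Python sketch of the logic as summarized in this post. The enum values, field names, and the fria_required function are illustrative assumptions rather than an official classification, and the sketch is no substitute for a legal analysis of Article 27.

```python
from dataclasses import dataclass
from enum import Enum, auto

class AnnexIIIPoint(Enum):
    """Illustrative subset of Annex III categories discussed in this post."""
    CRITICAL_INFRASTRUCTURE = auto()   # Annex III, point 2
    CREDITWORTHINESS = auto()          # Annex III, point 5(b)
    LIFE_HEALTH_INSURANCE = auto()     # Annex III, point 5(c)
    OTHER_HIGH_RISK = auto()           # any other Annex III point

@dataclass
class Deployment:
    is_public_law_body: bool           # body governed by public law
    provides_public_services: bool     # private entity providing public services
    annex_iii_point: AnnexIIIPoint

def fria_required(d: Deployment) -> bool:
    """Rough reading of Article 27 as summarized above -- not legal advice."""
    # Function-based trigger: credit scoring and life/health insurance
    # risk assessment and pricing require a FRIA for any deployer.
    if d.annex_iii_point in {AnnexIIIPoint.CREDITWORTHINESS,
                             AnnexIIIPoint.LIFE_HEALTH_INSURANCE}:
        return True
    # Systems under Annex III, point 2 (critical infrastructure) are
    # exempt from the FRIA obligation.
    if d.annex_iii_point is AnnexIIIPoint.CRITICAL_INFRASTRUCTURE:
        return False
    # Otherwise the obligation attaches to public-service deployers.
    return d.is_public_law_body or d.provides_public_services
```

On this reading, a private bank using a credit-scoring system (Annex III, 5(b)) would trigger the obligation regardless of its status, while a grid operator using a system under Annex III, point 2 would not.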
When Must a FRIA Be Performed?
A FRIA must be conducted before the AI system is first put into use.
If the AI system provider has already completed a valid FRIA, deployers may reuse it—but only if it sufficiently captures the specific deployment context and associated risks. Where deployment introduces new or unique risks, a new or supplementary FRIA is necessary.
What Must a FRIA Include?
Pending the release of the official template and tool by the EU AI Office (as required under Article 27(5)), the following elements, based on the text of the Act, should be covered:
Deployment Context and Purpose
A detailed description of the processes in which the AI system will be used, and a clear definition of its intended purpose in that context.
Duration and Frequency of Use
Information on how long and how often the system will be used, which helps in assessing its longer-term impacts.
Affected Individuals and Groups
Identification of categories of natural persons likely to be directly or indirectly impacted by the AI system.
Specific Risks of Harm
Assessment of potential adverse effects the system may pose to the rights and freedoms of affected persons, taking into account the provider’s documentation and usage instructions.
Human Oversight Measures
Description of the supervision and intervention mechanisms in place, aligned with the provider’s guidance.
Planned Mitigation Measures
An outline of governance arrangements, complaint mechanisms, and escalation paths to follow if identified risks materialize.
This structure reflects the growing need to address not just technical robustness, but the societal and ethical dimensions of AI deployment.
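Pending the official template, some teams may find it helpful to capture these elements in a structured record, so that nothing is missed and the assessment can later be mapped onto the EU AI Office template. The Python sketch below uses assumed field names that simply mirror the elements listed above; it is not derived from any official format.

```python
from dataclasses import dataclass, field

@dataclass
class FriaRecord:
    """Working record of the Article 27 elements described above.
    Field names are illustrative, not taken from an official template."""
    deployment_context: str              # processes in which the system is used
    intended_purpose: str                # intended purpose in that context
    period_of_use: str                   # how long the system will be used
    frequency_of_use: str                # how often the system will be used
    affected_groups: list[str]           # categories of natural persons impacted
    risks_of_harm: list[str]             # specific risks to rights and freedoms
    human_oversight: str                 # supervision and intervention measures
    mitigation_measures: list[str] = field(default_factory=list)  # governance, complaints, escalation
```

Keeping such a record versioned alongside the provider’s documentation also makes it easier to show that the assessment was completed before first use.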
Interaction with DPIAs and Other Assessments
FRIAs should be viewed as complementary to existing risk assessments—especially Data Protection Impact Assessments (DPIAs) required under the GDPR.
According to Article 27(4) of the AI Act, if aspects of the FRIA are already covered by a DPIA (e.g. regarding data protection risks), the FRIA may build upon the DPIA rather than duplicate it. However, a DPIA will generally not be sufficient on its own to meet the FRIA requirement, as the latter covers a broader range of rights—including non-data-related rights such as access to healthcare, education, or fair treatment in legal proceedings.
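As a rough illustration of that “build upon, don’t duplicate” idea, the sketch below compares an existing DPIA’s coverage against the FRIA elements and returns what still needs a dedicated assessment. The element names and the simple set-based coverage model are assumptions made for illustration, not a prescribed methodology.

```python
# FRIA elements from Article 27 as summarized above (names are illustrative).
FRIA_ELEMENTS = {
    "deployment_context", "duration_and_frequency", "affected_groups",
    "risks_of_harm", "human_oversight", "mitigation_measures",
}

def remaining_fria_work(dpia_covered: set[str]) -> set[str]:
    """Return the FRIA elements not already addressed by an existing DPIA."""
    return FRIA_ELEMENTS - dpia_covered

# Example: a DPIA that documented the deployment context and data-related
# risks still leaves affected groups, oversight, and mitigation to assess.
todo = remaining_fria_work({"deployment_context", "risks_of_harm"})
```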
Obligation to Notify Market Surveillance Authorities
Once the assessment is complete, the deployer must notify the relevant Market Surveillance Authority (MSA) by submitting the filled-in FRIA template.
This notification step, while administrative, plays a crucial role in ensuring regulatory oversight and accountability. It allows MSAs to verify that high-risk AI systems are not only technically compliant but are being deployed in ways that respect fundamental rights.
Practical Considerations for Legal and Compliance Teams
To prepare for FRIA compliance:
Identify high-risk systems deployed within your organization.
Clarify your status: Are you a public body or private provider of public services?
Develop internal capacity to assess risks beyond data protection, including equality, justice, and human dignity.
Coordinate with AI providers to understand the system’s capabilities, intended use, and known risks.
Monitor updates from the EU AI Office regarding the FRIA template and automated tools.
Final Thoughts: Building a Rights-Based AI Governance Framework
The FRIA is not just a paperwork exercise. It represents a legal mechanism to ensure that AI systems deployed in sensitive areas are safe, fair, and rights-respecting. For legal and compliance professionals, this is an opportunity to move beyond checkbox compliance and help shape an AI governance strategy grounded in transparency, accountability, and societal impact.
As regulatory enforcement approaches, organizations that take early action—by developing robust and well-structured FRIA processes—will do more than meet their legal obligations. They will position themselves as leaders in responsible AI, demonstrating foresight, credibility, and a strong commitment to fundamental rights in high-risk deployments.
While the official FRIA template is yet to be released by the EU AI Office, Validaitor will offer a fully aligned, ready-to-use FRIA framework as soon as the official guidance becomes available. This solution is designed to help deployers ensure compliance with Article 27 of the EU AI Act while embedding responsible AI practices into their operations from the outset.