After years of intense negotiation, EU stakeholders have finally reached political agreement on the long-awaited EU Artificial Intelligence Act (“EU AI Act”), which was first published by the European Commission (“EC”) on 21 April 2021. Following the final trilogues, the Council of the EU released the text of the final provisional agreement on 26 January 2024 (“Final Provisional Text”), and this was endorsed by all 27 EU Member States on 2 February 2024. Following the plenary vote in the European Parliament on 13 March 2024 for the adoption of the Final Provisional Text, the EU AI Act is expected to be adopted in the next few weeks. Once adopted, the EU AI Act will enter into force 20 days after publication in the Official Journal of the European Union. When the final text arrives, organisations that come within the scope of the EU AI Act will need to ensure that their use of AI tools is fully mapped out and that governance and compliance schemes are implemented.
As most organisations do not develop AI systems themselves but rather deploy them for a wide range of purposes, the impact of the EU AI Act on “deployers” of AI systems has remained a key topic of interest for organisations. Below, we discuss the requirements imposed by the EU AI Act upon deployers and operators of high-risk AI systems, in particular, the requirement to conduct a Fundamental Rights Impact Assessment (“FRIA“).
In May 2023, when the European Parliament voted on its mandate to enter into trilogues with the other EU legislative bodies, a novel impact assessment for deployers of high-risk AI systems was included within the EU AI Act, building upon the EU’s existing acquis of similar impact assessments foreseen under the General Data Protection Regulation (“GDPR”) and the Digital Services Act. Although a point of contention throughout the trilogues, it is now clear that FRIAs for certain deployers and operators of high-risk AI systems will be included within the final text of the EU AI Act.
The FRIA aims to mitigate possible harms of high-risk AI systems to individuals’ fundamental rights, beyond the rather technical compliance requirements of the EU AI Act, such as the conformity assessments. As providers of AI systems may not always be able to foresee all potential deployment scenarios and the harms that AI systems may pose through bias or manipulation, the FRIA is a justification and accountability exercise that allows organisations to reflect on why, where, and how a high-risk AI system will be deployed. However, conducting FRIAs will be a challenge, and not all deployers of high-risk AI systems will be equipped to fully assess the risks of the systems they deploy.
The exact scope of those required to conduct FRIAs is not yet fully clear. Additionally, there are open questions as to whether all fundamental rights must be assessed under the FRIAs or whether organisations can limit the assessment to a group of fundamental rights that are most likely to be affected. Effective methodologies that successfully translate technical descriptions of high-risk AI systems into concrete analyses of rather abstract concepts, such as fundamental rights, have not yet been produced. Organisations will need to consider how to embed FRIAs into their existing compliance governance schemes.
Who needs to conduct the FRIA?
Article 29a of the Final Provisional Text – which is the most recent official source on the EU AI Act in the absence of the final text – sheds light on which organisations will need to conduct a FRIA “prior to deploying a high-risk AI system as defined in Article 6(2)”, and indicates that “deployers that are bodies governed by public law or private operators providing public services, and operators providing high-risk systems referred to in Annex III, point 5, (b) and (ca)” will need to conduct FRIAs and will also need to “notify the market surveillance authority of the results of the assessment(s)”. These groups were defined after an intense set of negotiations between the Council of the EU and the European Parliament and require a further deep dive to understand who comes within their scope.
It should be noted that market surveillance authorities, by virtue of article 47 of the Final Provisional Text, may lift the obligation to conduct a FRIA in cases of “exceptional reasons of public security or the protection of life and health of persons, environmental protection and the protection of key industrial and infrastructural assets”. Even in such cases, article 47.1 of the Final Provisional Text states that this derogation is only temporary and that completion of the relevant procedures (e.g. the conformity assessment and the FRIA) shall be undertaken “without undue delay”.
i) Deployers that are bodies governed by public law
The scope of “bodies governed by public law” may differ depending on the national laws of EU Member States. The Final Provisional Text does not define or provide any clarification on the concept of “bodies governed by public law”. However, the concept is defined in other EU legislation, such as EU Directive 2014/24, and is likely to include public authorities/bodies of any kind as well as those who are not public authorities/bodies but still provide services of general interest and are largely financed by state resources.
ii) Private operators providing public services
Article 3.8 of the Final Provisional Text states that the term “operator” covers the provider, the product manufacturer, the deployer, the authorised representative, the importer, or the distributor. This means that the obligation to conduct FRIAs for this group is not limited to deployers of high-risk AI systems but also covers all the other roles under the EU AI Act, such as providers, importers, and distributors.
However, the Final Provisional Text leaves the door open for interpretation with regard to what “providing public services” means, in the absence of a conclusive definition. Recital 58g of the Final Provisional Text states that “services important for individuals that are of public nature may also be provided by private entities. Private operators providing such services of public nature are linked to tasks in the public interest such as in the area of education, healthcare, social services, housing, administration of justice.”
Unlike “bodies governed by public law“, “public service” is not a defined term under EU law. However, it is stated in the EC’s Quality Framework for Services of General Interest in Europe (“Quality Framework“) that the term “public service” is generally “used in an ambiguous way” and “it can relate to the fact that a service is offered to the general public and/or in the public interest, or it can be used for the activity of entities in public ownership“. The EC later indicates that it avoids using this term.
That being said, the Quality Framework speaks of ‘services of general interest’ and ‘services of general economic interest’ instead of ‘public services’. Article 106 of the TFEU also refers to ‘services of general economic interest’ (without defining them). The EC’s Quality Framework correspondingly defines these concepts and lists examples of services of general interest and services of general economic interest, including:
- Postal services
- Banking services
- Transport services
- Energy services
- Electronic communications services
Additionally, the European Economic and Social Committee gives further examples of services of general interest: “They include areas such as housing, water and energy supply, waste and sewage disposal, public transport, health, social services, youth and family, culture and communication within society, including broadcasting, internet and telephony“.
Indeed, understanding “private operators providing public services” as private providers of services of general interest also creates a coherent picture with the first group of deployers (i.e. bodies governed by public law) who will need to conduct FRIAs, given the similarities in the scope of services covered.
Consequently, a vast group of private organisations in key sectors such as education, healthcare, social services, housing, and administration of justice (if not more) will fall under this group of operators who need to conduct FRIAs. The exact scope of this group requires a case-by-case analysis and further guidance from the authorities.
iii) Operators deploying high-risk systems referred to in Annex III, point 5, (b) and (ca)
High-risk AI systems listed in Annex III, point 5 (b) and (ca) include the following:
- “(b) AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud”
- “(ca) AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance”
Although the wording of Article 29a of the Final Provisional Text refers to “operators deploying high-risk system”, the term “operators” is likely to have been included to avoid repeating the terms ‘deployers’ and ‘deploying’ in the same sentence. Consequently, this group is likely to only cover deployers.
Recital 58g of the Final Provisional Text further states that this group covers at least banking and insurance entities. However, if other deployers were to deploy such systems, they would also fall under the scope of Article 29a of the Final Provisional Text and be required to conduct FRIAs.
How to conduct FRIAs?
What can we infer from the Final Provisional Text?
Article 29a.1 of the Final Provisional Text states that the FRIAs should include the following:
- “A description of the deployer’s processes in which the high-risk AI system will be used in line with its intended purpose;
- a description of the period of time and frequency in which each high-risk AI system is intended to be used;
- the categories of natural persons and groups likely to be affected by its use in the specific context;
- the specific risks of harm likely to impact the categories of persons or group of persons identified pursuant point (c), taking into account the information given by the provider pursuant to Article 13;
- a description of the implementation of human oversight measures, according to the instructions of use;
- the measures to be taken in case of the materialization of these risks, including their arrangements for internal governance and complaint mechanisms.”
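Purely as an illustration, the sketch below shows how a deployer might capture these six elements in a structured internal record. The class and field names are our own shorthand and are not taken from the Final Provisional Text, which prescribes the content of a FRIA but not its format.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class FriaRecord:
    """Illustrative internal record mirroring the elements of article 29a.1 (naming is ours)."""
    deployment_process: str             # (a) deployer's processes in which the system is used, in line with its intended purpose
    period_and_frequency: str           # (b) period of time and frequency of intended use
    affected_categories: List[str]      # (c) categories of natural persons and groups likely to be affected
    specific_risks_of_harm: List[str]   # (d) specific risks of harm to those categories, drawing on provider information under Article 13
    human_oversight_measures: str       # (e) implementation of human oversight measures, per the instructions of use
    risk_materialisation_measures: str  # (f) measures if risks materialise, incl. internal governance and complaint mechanisms
    notified_to_authority: bool = False # results of the assessment must be notified to the market surveillance authority

# Hypothetical example for a credit-scoring deployment (Annex III, point 5(b)):
fria = FriaRecord(
    deployment_process="Creditworthiness evaluation within the retail loan approval workflow",
    period_and_frequency="Continuous use; one assessment per loan application",
    affected_categories=["Loan applicants", "Guarantors"],
    specific_risks_of_harm=["Indirect discrimination against protected groups", "Erroneous refusals without explanation"],
    human_oversight_measures="A credit officer reviews every adverse decision before it is issued",
    risk_materialisation_measures="Escalation to the compliance team; applicant complaint channel",
)
```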
It is important to note that the FRIA will only need to be conducted for the first use of the high-risk AI system. Deployers may, “in similar cases, rely on previously conducted fundamental rights impact assessments or existing impact assessments carried out by provider” (article 29a.2 of the Final Provisional Text). However, if, during the use of the high-risk AI system, the deployer considers that any of the factors listed above have changed or are no longer up to date, the deployer must take the necessary steps to update the information.
Additionally, article 29a.5 of the Final Provisional Text states that the European AI Office will develop a template for a questionnaire (“Future FRIA Template Questionnaire”), including an automated tool, to enable deployers to comply with the obligations of article 29a of the Final Provisional Text in a simplified manner.
Lastly, article 29a.4 of the Final Provisional Text states that if any obligations in relation to the FRIAs are already met through a data protection impact assessment (“DPIA”) under the GDPR, the FRIA shall be conducted in conjunction with that DPIA. However, DPIAs are unlikely to cover all of the obligations listed in article 29a.1 of the Final Provisional Text. The FRIAs have a wider material scope: DPIAs focus (at least in practice) on one fundamental right, namely the right to data protection as set out in article 8 of the Charter of Fundamental Rights of the EU (“Fundamental Rights Charter”), whereas FRIAs need to address a vast number of fundamental rights, which can be challenging given the inherently complex nature of AI systems.
The challenges of implementation
The scope: The Fundamental Rights Charter is divided into six chapters covering different fundamental rights on dignity, freedoms, equality, solidarity, citizens’ rights, and justice. Each of these fundamental rights has a different focus, ranging from the freedom to conduct a business and the right to non-discrimination to the right to asylum. Taking into account the extensive case-law of the Court of Justice of the EU and the European Court of Human Rights on each of these rights, assessing the impact of high-risk AI systems on them requires in-depth knowledge. In addition, various pieces of applicable legislation may include specific provisions that protect a specific aspect of a fundamental right, such as legislation relating to media regulation (which may be relevant to the freedom of expression and of information) or legislation relating to medical treatment (which may encompass norms relevant to individual autonomy, privacy, or bodily integrity). In many cases, such legislation includes special requirements or criteria that must be complied with where there is a restriction of fundamental rights. Those requirements may also need to be taken into account under the FRIAs.
On the other hand, EU Member States may recognise additional fundamental rights beyond the Fundamental Rights Charter and/or interpret these fundamental rights with different nuances.
One possibility may be to limit the FRIAs to the fundamental rights that are most likely to be affected by deployment of high-risk AI systems (e.g. right to data protection, right to non-discrimination, right to equality). However, it remains to be seen whether deployers will be capable of appropriately ‘selecting’ the relevant fundamental rights.
Therefore, the European AI Office will need to address these challenges in its Future FRIA Template Questionnaire.
Governance: Organisations may already have compliance governance mechanisms in place that bring legal, IT, and business professionals together for impact assessments such as DPIAs, and they may be able to leverage these mechanisms for FRIAs. The first step will consist of a pre-FRIA screening to detect high-risk AI systems (which therefore require a FRIA to be completed, among other requirements arising from the EU AI Act). If an up-to-date and easy-to-understand list of high-risk AI systems is provided by legal professionals, such pre-FRIA screening may be completed by the relevant business departments who are most informed about the features of the specific AI systems. However, given the potential risk involved in incorrect classifications, this pre-FRIA screening will likely still require validation from legal professionals.

After establishing that a FRIA is needed, relevant input can be collected from IT and business professionals on the technical and business context of the deployment. Legal professionals, using the explanations provided by their colleagues, can then assess the effects of the deployment on fundamental rights, internally or with the help of specialised external counsel. However, in-house legal teams may not be fully equipped with the knowledge needed to analyse the effects of AI systems on human rights; they may therefore need further training or the help of external counsel who can provide different sets of know-how (e.g. IT, data protection, and human rights law). Additionally, there may be a need to involve the providers of high-risk AI systems for additional technical information or to access the ‘provider’ FRIAs on the specific AI system.
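To illustrate how such a pre-FRIA screening step might be operationalised in practice, the following minimal sketch triages a proposed deployment against the groups discussed above. The category labels and deployer attributes are hypothetical placeholders to be maintained with legal input; this is not an authoritative classification under the EU AI Act, and any positive result would still need legal validation.

```python
# Minimal pre-FRIA triage sketch; the category labels below are hypothetical
# placeholders to be maintained by legal teams, not an official taxonomy.
HIGH_RISK_CATEGORIES = {
    "creditworthiness_scoring",       # Annex III, point 5(b)
    "life_health_insurance_pricing",  # Annex III, point 5(ca)
    # ... further Annex III categories, as confirmed by legal review
}

ANNEX_III_5B_5CA = {"creditworthiness_scoring", "life_health_insurance_pricing"}

def fria_required(system_category: str,
                  deployer_is_public_body: bool,
                  deployer_provides_public_services: bool) -> bool:
    """First-pass screen: does this deployment likely trigger a FRIA?

    Mirrors the three groups under article 29a of the Final Provisional Text;
    a positive result should always be validated by legal professionals.
    """
    if system_category not in HIGH_RISK_CATEGORIES:
        return False  # FRIAs only apply to high-risk AI systems
    if system_category in ANNEX_III_5B_5CA:
        return True   # credit scoring and life/health insurance pricing trigger a FRIA for any deployer
    return deployer_is_public_body or deployer_provides_public_services

# Example: a private bank deploying a credit-scoring model
print(fria_required("creditworthiness_scoring", False, False))  # True -> start the FRIA workflow
```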
The governance mechanism will also depend on the granularity of the analysis to be conducted. The Future FRIA Template Questionnaire will likely indicate the required level of detail and how organisations shall prepare for this new assessment.
Establishing a solid governance framework for pre-FRIA screening and completing the Future FRIA Template Questionnaire are crucial as market surveillance authorities will need to be notified of the results of the FRIAs. This means that organisations may fall under regulatory scrutiny in the event that they do not comply with the FRIA-related obligations.
Existing best practices
Various stakeholders have already focused on assessing the impact of AI systems or algorithms on fundamental rights. The United Nations’ Human Rights Office of the High Commissioner has a dedicated project providing authoritative guidance and resources for implementing the United Nations Guiding Principles on Business and Human Rights in the technology sector. Although this project has not focused on creating a concrete methodology for assessing the impact of AI systems on human rights, it holds an important place as a global effort to tackle technology- and human rights-related matters.
Canada’s Algorithmic Impact Assessment tool, a mandatory risk assessment tool that supports public bodies in deploying automated decision-making tools, provides a specific methodology consisting of the following steps:
| Step | Content |
| --- | --- |
| Risk Areas | Project: project description and stage, reasons for automation, risk profile focusing on vulnerability of concerned individuals. System: capabilities of the system at hand (e.g. image/video/text generation, facial analysis, identification). Algorithm: how the algorithm reaches its outputs. Decision: classification and description of the decisions being automated. Impact: type of automation (full or partial, the degree of human involvement). Data: data source (e.g. method of collection, security classification) and data types (e.g. audio, text, image, or video). |
| Mitigation Areas | Consultation: internal and external stakeholders consulted, including privacy and legal advisors, digital policy teams, and subject matter experts in other sectors. De-risking and mitigation measures: data quality (representativeness of data sources, bias risks, transparency measures on data collection), procedural fairness (audit reports, recourse processes), privacy. |
| Impact Scores | All elements are scored based on the answers and a resulting impact level is found in four different categories (little to no impact, moderate impact, high impact, and very high impact). |
Depending on the impact level reached, specific requirements apply to the automated decision-making tool, based on Canada’s Directive on Automated Decision-Making.
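As a rough illustration of how the scoring step works, the sketch below rolls an aggregate questionnaire score up into one of the four impact levels. The percentage thresholds are illustrative placeholders only and are not the actual cut-offs set by the Directive on Automated Decision-Making.

```python
def impact_level(raw_score: int, max_score: int) -> str:
    """Map an aggregate Algorithmic Impact Assessment score to an impact level.

    The thresholds below are illustrative placeholders, not the Directive's
    actual cut-off values.
    """
    ratio = raw_score / max_score
    if ratio <= 0.25:
        return "Level I: little to no impact"
    if ratio <= 0.50:
        return "Level II: moderate impact"
    if ratio <= 0.75:
        return "Level III: high impact"
    return "Level IV: very high impact"

print(impact_level(42, 100))  # -> "Level II: moderate impact"
```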
At EU level, a multi-stakeholder project, ALIGNER, has published a customised FRIA template for the deployment of AI systems for law enforcement purposes. ALIGNER’s customised FRIA has a unique methodology: it sets out likely harms for various fundamental rights and asks the assessor of the AI system to explain whether the AI system could create each scenario, for example:
Fundamental Right: Presumption of innocence and right to an effective remedy and to a fair trial

| Challenge (an example of the likely harm) | Evaluation | Estimated Impact Level |
| --- | --- | --- |
| “The AI system produces an outcome that forces a reversal of burden of proof upon the suspect, by presenting itself as an absolute truth, practically depriving the defence of any chance to counter it” | | |
The downside of this methodology is the difficulty of foreseeing every possible harm that AI systems can cause, especially for deployments of generative AI or complex AI systems with a black-box nature, and for deployments for purposes other than law enforcement.
ALIGNER’s customised FRIA template also focuses on four different fundamental rights that are most likely to be affected by the deployment of AI systems for law enforcement purposes, rather than assessing a wider group of fundamental rights found within the Fundamental Rights Charter.
The Dutch Ministry of the Interior and Kingdom Relations has published a Fundamental Rights and Algorithms Impact Assessment template (“FRAIA”) to support decision-making by Dutch public authorities on the use and deployment of algorithms in the country. The FRAIA template is currently the most comprehensive impact assessment focusing on fundamental rights and AI systems, and its methodology and content may prove very useful for answering the Future FRIA Template Questionnaire under the EU AI Act.
A general summary of the methodology of the FRAIA is as follows:
| Step | Questions |
| --- | --- |
| Why? | Reason and problem definition: for which problem is the algorithm to provide a solution? What is the actual occasion or reason to use an algorithm? Why does it require an algorithm? Objective: what is the objective that the use of the algorithm needs to achieve? Public values: what are the public values that prompt the use of an algorithm? If there are several public values prompting the use of an algorithm, can they be ranked? Legal basis: what is the legal basis of the use of the algorithm and of the targeted decisions that will be made on the basis of the algorithm? Answering this question involves finding out whether there is a legal basis that explicitly and clearly allows for the use of an algorithm and renders this use sufficiently foreseeable. Stakeholders and responsibilities: which parties and persons are involved in the development/use/maintenance of the algorithm? |
| What (Input)? | Algorithm type: is it already (roughly) known what type of algorithm will be used? Data sources and quality: questions relating to data types, quality, reliability. Bias/assumptions in the data: questions relating to assumptions, biases, explicability of outputs, training data, data security, data supervision, data archiving. |
| What (Throughput)? | Algorithm type: questions relating to whether it is a self-learning or non-self-learning algorithm, justification of the choice of algorithm, other alternatives and their usefulness and appropriateness. Ownership and control: questions relating to agreements and contracts with the developer of the algorithm, ownership of input and output, management of the algorithm. Algorithm accuracy: questions relating to accuracy level of the algorithm, tests, risks of reproduction or amplification of biases, assumptions, percentage of wrong positives. Transparency and explainability: questions relating to clarity of algorithms, concerned groups of individuals, target groups. |
| How? | Decisions based on algorithmic output; the role of humans in the decision; effects of the algorithm; procedures for managing the algorithms; context of the deployment; transparency about the deployment (e.g. public communication); evaluation, auditing, and safeguarding. |
| Fundamental Rights | Fundamental right: does the algorithm affect (or threaten to affect) a fundamental right? Specific legislation: does specific legislation apply with respect to the fundamental right that needs to be considered? Defining seriousness: how seriously is this fundamental right infringed? Objectives: what social, political, or administrative objectives are aimed at by using the algorithm? Suitability: is using this specific algorithm a suitable tool to achieve these objectives? Necessity and subsidiarity: is using this specific algorithm necessary to achieve this objective, and are there no other or mitigating measures available to do so? Balancing and proportionality: at the end of the day, are the objectives sufficiently weighty to justify affecting fundamental rights? |
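As an illustration of how the FRAIA’s fundamental-rights step could be turned into a repeatable per-right checklist, the sketch below walks a single right through the seriousness, suitability, necessity, and proportionality questions. The class, field names, and the `justified` helper are our own simplification of the FRAIA steps, not an official implementation.

```python
from dataclasses import dataclass

@dataclass
class RightAssessment:
    """One FRAIA-style entry per fundamental right potentially affected (simplified, naming is ours)."""
    right: str                  # e.g. "Right to non-discrimination"
    specific_legislation: str   # sector-specific rules protecting this right, if any
    seriousness: str            # how seriously the right is (or may be) infringed
    objective: str              # social, political, or administrative objective pursued
    suitable: bool              # is the algorithm a suitable tool to achieve that objective?
    necessary: bool             # is it necessary, with no less intrusive alternative available?
    proportionate: bool         # are the objectives weighty enough to justify affecting the right?

    def justified(self) -> bool:
        """The interference is only defensible if all three balancing tests are met."""
        return self.suitable and self.necessary and self.proportionate
```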
The FRAIA template also goes beyond the Fundamental Rights Charter and provides a list of fundamental rights that need to be considered, including fundamental rights relating to the person, freedom-related fundamental rights, equality rights, and procedural rights (e.g. the prohibition of (body) searches, the right to leave the country, the right to funded legal aid). This creates a large list of rights that need to be assessed for each high-risk AI system.
The FRAIA template provides important guidance for FRIAs as it has been formally endorsed by the Dutch government – the government of an EU Member State – as an accountability tool.
All of these templates may provide insights to those who need to conduct FRIAs, as well as acting as a source of inspiration for the European AI Office when creating the Future FRIA Template Questionnaire.
Conclusion
Following the publication of the final text of the EU AI Act, organisations will need to consider whether they come within the scope of the groups of operators that need to conduct FRIAs when deploying high-risk AI systems. Organisations will then need to think about how to effectively tackle FRIAs. FRIAs will be one of the most important accountability and risk-assessment tools of the EU AI Act, and market surveillance authorities may leverage existing know-how from DPIAs and give the FRIAs particular importance, using them as a basis for future investigations and accountability assessments.