The eagerly awaited proposal for an Artificial Intelligence Act was finally published by the EU Commission following last week’s press leak of an outdated draft. The proposed Act introduces a first-of-its-kind, comprehensive legal framework for artificial intelligence (125 pages), which aims to harmonize existing rules on AI in the internal market in order to facilitate investment and innovation, while also protecting fundamental rights and ensuring compliance with safety requirements (including the protection of personal data).
As expected based on the Commission’s previous publications on the subject, the Act proposes horizontal legislation affecting a wide range of topics. In doing so, the Act draws extensively on existing regulatory frameworks and methodologies set out in the GDPR, the NIS Directive, the Product Safety Directive, the Unfair Commercial Practices Directive and the Market Surveillance Regulation.
The proposal focuses on setting out different types of restrictions on AI systems, depending on the risk posed by the AI systems concerned. While a first set of rules intends to prohibit selected AI practices, a second and major part of the Act regulates the placing on the market, putting into service and use of high-risk AI systems (including market monitoring and surveillance). A third set of rules concerns certain specific AI systems (such as AI systems intended to interact with natural persons and AI systems used to generate or manipulate image, audio or video content) and introduces new transparency obligations.
Finally, the Act also sets out a number of measures aimed at supporting innovation (and stimulating the uptake of AI in the EU), a governance framework with GDPR-style penalties, as well as a framework for the adoption of codes of conduct encouraging providers of non-high-risk AI systems to voluntarily apply the mandatory requirements applicable to high-risk AI systems.
While the proposal has already received criticism from stakeholders and governmental actors (including within the Commission) and must still go through the EU’s (lengthy) legislative process with likely amendments, it is clear that this Act will impose major obligations on providers, importers, distributors and users of AI systems which will require ample preparation time ahead of implementation.
1. Scope and definitions (Title I of the Act)
The proposed Act has a wide material and territorial scope of application and is intended to apply to a broad category of actors: (i) providers placing AI systems on the market or putting them into service in the EU (irrespective of whether those providers are established inside or outside the EU), (ii) users of AI systems located within the EU, and (iii) providers and users of AI systems located in a third country, where the output produced by the system is used in the Union. The Act also specifies that it does not affect the application of the provisions of the eCommerce Directive on the liability of intermediary service providers (which are to be replaced by the corresponding provisions of the Digital Services Act).
Key to the Act is, unsurprisingly, the definition of AI and of an “AI system.” The Commission proposes a definition of an AI system which is intended to be technology-neutral and future-proof, while nevertheless providing sufficient legal certainty. Under the Act, an AI system may encompass many products using modern software technologies and is defined as:
- software;
- developed with one or more of the techniques and approaches listed in Annex I to the Act (and which the Commission will be able to adapt over time by delegated acts). The techniques and approaches currently listed in Annex I include:
  - Machine learning approaches;
  - Logic- and knowledge-based approaches;
  - Statistical approaches, Bayesian estimation, search and optimization methods;
- which can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.
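To illustrate how broad this definition is, consider the purely hypothetical sketch below: a few lines of ordinary machine-learning code (here using the scikit-learn library) already constitute software developed with a machine learning approach that generates predictions for a human-defined objective, and would therefore arguably qualify as an AI system under the Act. The data, objective and library choice are assumptions made for illustration only.

    # Hypothetical illustration only: a trivial machine-learning model that would
    # arguably already meet the Act's definition of an "AI system" (software,
    # developed with a machine learning approach listed in Annex I, generating
    # predictions for a human-defined objective).
    from sklearn.linear_model import LogisticRegression

    # Invented example data: past loan applications (income, existing debt)
    # and whether they were repaid (1) or defaulted (0).
    X = [[30_000, 5_000], [60_000, 2_000], [25_000, 20_000], [80_000, 1_000]]
    y = [1, 1, 0, 1]

    model = LogisticRegression().fit(X, y)  # "machine learning approach" (Annex I)

    # The prediction may influence a real-world decision (eg whether to grant a
    # loan), ie an output "influencing the environments they interact with".
    print(model.predict([[40_000, 10_000]]))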
2. Prohibited Artificial Intelligence Practices (Title II of the Act)
The proposal prohibits certain AI practices that are considered to create an unacceptable risk (eg by violating fundamental rights); note that the Act does not prohibit specific AI systems, only specific AI practices. The prohibited practices, listed in article 5 of the Act, are:
- AI-based dark patterns (AI systems that deploy subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behavior in a manner that causes or is likely to cause that person or another person physical or psychological harm).
- AI-based micro-targeting (AI systems that exploit the vulnerabilities of a specific group of persons in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm).
- AI-based social-scoring (AI systems used by public authorities for the evaluation or classification of the trustworthiness of natural persons over a certain period of time based on their social behavior or personality characteristics, with the social score leading to detrimental/unfavorable treatment in social contexts unrelated to the context in which the data was gathered).
- The use of “real-time” remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, except (i) concerning targeted searches for specific potential victims of crime, (ii) to prevent a threat to life or a terrorist attack, or (iii) to detect, localize, identify or prosecute a perpetrator or suspect of certain serious crimes. In this regard, the nature of the situation and the consequences of the use (seriousness, probability and scale of the harm) should be taken into account, necessary and proportionate safeguards should be complied with, and a prior authorization by a judicial/administrative authority should be granted.
It should be noted that an earlier (unpublished) version of the Act did not prohibit the use of “remote biometric identification systems” but merely set out an authorization system. Conversely, that earlier version contained a specific prohibition of AI-based general-purpose surveillance, which has not as such been included in the published proposal.
3. High-risk AI Systems (Title III of the Act)
The proposal further includes specific rules concerning high-risk AI systems, which are the most strictly and comprehensively regulated AI systems under this Act.
a) The definition of a high-risk AI system
High-risk AI systems are defined in article 6 of the AI Act, through a system of references to existing EU legislation, and essentially relate to product safety risks. In fact, the classification of an AI system as high-risk is based on the intended purpose of the AI system, in line with existing product safety legislation (risk-based approach):
- A first category concerns AI systems intended to be used as a safety component of products (or which are themselves a product) that are subject to third-party ex-ante conformity assessment. These high-risk AI systems are listed in Annex II. In this category, a further distinction can be made between (i) high-risk AI systems related to products covered by the New Legislative Framework (NLF) legislation (such as machinery, medical devices, toys) and (ii) high-risk AI systems related to products covered by relevant Old Approach legislation (eg aviation, cars). Regarding the former, the idea is that the requirements for AI systems set out in the Act will be checked as part of the conformity assessment procedures under the relevant NLF legislation. Regarding the latter, the Act would not apply directly, but the ex-ante essential requirements for high-risk AI systems will need to be taken into account when adopting relevant implementing or delegated legislation under those acts.
- A second category concerns other standalone AI systems with mainly fundamental rights implications that are explicitly listed in Annex III. This list contains a limited number of AI systems whose risks have already materialized or are likely to materialize in the near future. The Commission may expand this list in the future, by delegated acts, to other AI systems presenting a sufficient degree of risk of harm within certain predefined areas (ie the areas set out in points 1 to 8 of Annex III), subject to the application of a set of criteria and a risk assessment methodology.
The degree of risk of harm that must be presented by an AI system in order to be added by the Commission to this list in Annex III concerns “a risk of harm to the health and safety or a risk of adverse impact on fundamental rights that is equivalent to or greater than the risk of harm posed by the high-risk AI systems already referred to in Annex III.”
The Act further sets out a number of criteria to be taken into account by the Commission, such as the intended purpose of the AI system, the extent of use, the extent of harm already caused, the potential extent (intensity and scope) of harm, the dependency and vulnerability of impacted persons, the reversibility of the outcome, and the redress measures and risk-mitigating measures available under EU law.
This approach thus allows the EU regulator to maintain a certain flexibility while nevertheless ensuring legal certainty, by subjecting any additional high-risk AI systems to a risk-based assessment to be undertaken by the Commission.
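Purely by way of illustration, the two-category classification described above can be summarized in the following sketch; all attribute and function names are hypothetical and do not correspond to any mechanism in the Act itself, which operates through legal definitions and the lists in Annexes II and III.

    # Hypothetical sketch of the high-risk classification logic described above.
    # Attribute names are invented for illustration only.
    from dataclasses import dataclass

    @dataclass
    class AISystem:
        safety_component_of_annex_ii_product: bool   # first category (Annex II)
        subject_to_third_party_assessment: bool
        listed_in_annex_iii: bool                     # second category (Annex III)

    def is_high_risk(system: AISystem) -> bool:
        # First category: safety component of (or itself) a product covered by
        # Annex II legislation and subject to third-party ex-ante conformity assessment.
        if (system.safety_component_of_annex_ii_product
                and system.subject_to_third_party_assessment):
            return True
        # Second category: standalone AI system explicitly listed in Annex III.
        return system.listed_in_annex_iii

    # Example: a standalone system listed in Annex III is classified as high-risk.
    print(is_high_risk(AISystem(False, False, True)))  # True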
b) Requirements applicable to high-risk AI systems
The Act imposes significant compliance obligations in relation to high-risk AI systems. Compliance with the requirements set out in the Act must take into account (i) the purpose of the AI system and (ii) the risk management system set out in article 9 (a continuous iterative process throughout the entire lifecycle of a high-risk AI system), and must be assessed before the placing on the market or putting into service of such high-risk AI systems via the conformity assessment procedures set out in the Act.
Among other things, the following compliance obligations apply:
- Training, validation and testing data sets shall be subject to appropriate data governance and management practices and must be relevant, representative, free of errors and complete, have the appropriate statistical properties, take into account geographical, behavioral or functional specificities, … (article 10).
- Complete and up-to-date technical documentation must be maintained (and drawn up by providers before the placing on the market/putting into service) in order to demonstrate compliance with the Act, and the outputs of the high-risk AI system must be verifiable and traceable throughout its lifecycle, notably by allowing the automatic generation of logs, which must be kept by providers when under their control (articles 11, 12 and 16); a simplified illustration of such logging follows this list.
- The transparency of the operation of the AI system must be ensured (so that users are able to understand and control the production of the output), and the high-risk AI system must be accompanied by clear documentation and instructions of use for the user, which must contain the information set out in article 13(3) of the Act.
- Human oversight of the high-risk AI system must be possible and ensured through appropriate measures (article 14).
- A high level of accuracy, robustness and security must consistently be ensured throughout the high-risk AI system’s lifecycle (article 15).
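As a purely illustrative sketch of the automatic logging referred to above, a provider might record each output of the system as a timestamped, machine-readable log entry so that outputs remain verifiable and traceable. The Act does not prescribe any particular logging format or fields; everything below is an assumption made for illustration.

    # Hypothetical sketch of automatic log generation for traceability (article 12).
    # The log file name, format and fields are invented for illustration only.
    import json
    import logging
    from datetime import datetime, timezone

    logging.basicConfig(filename="ai_system_events.log", level=logging.INFO,
                        format="%(message)s")

    def log_output(system_id: str, input_summary: dict, output: dict) -> None:
        """Record an output of the AI system as a timestamped, machine-readable entry."""
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "system_id": system_id,
            "input_summary": input_summary,
            "output": output,
        }
        logging.info(json.dumps(entry))

    # Example use: logging a single (invented) prediction event.
    log_output("credit-scoring-v1", {"applicant_id": "A-123"}, {"score": 0.82})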
Responsibility for ensuring compliance with these obligations lies primarily with the providers of high-risk AI systems. In addition to the above obligations, such providers must also, among other things:
- Set up, implement and maintain a post-market monitoring system (in a manner that is proportionate to the nature of the artificial intelligence technologies and the risks of the high-risk AI system) (article 61).
- Have a compliant (documented) quality management system in place.
- Ensure the system undergoes the relevant conformity assessment procedure (prior to the placing on the market/putting into service) and draw up an EU declaration of conformity.
- Register their high-risk AI system in the EU database on high-risk AI systems (prior to the placing on the market/putting into service) (articles 16 and 51).
- Immediately take corrective actions in relation to non-conforming high-risk AI systems (and inform the national competent authority of such non-compliance and the actions taken) (articles 21 and 22).
- Affix the CE marking to their high-risk AI systems to indicate conformity.
- Upon request of a national competent authority, demonstrate the conformity of the high-risk AI system (article 23).
- Report to the national market surveillance authorities any serious incident or any malfunctioning of the AI system which constitutes a breach of obligations under Union law intended to protect fundamental rights, immediately after they have established a causal link between that incident/malfunctioning and the AI system, and no later than 15 days after they become aware of that serious incident or malfunctioning (article 62).
- Appoint an authorized representative by written mandate where no importer can be identified (article 25).
Importers (article 26), distributors (article 27) and users of high-risk AI systems (article 29) are also subject to a number of compliance obligations, but these are generally substantially less extensive than those imposed on providers. For instance, users (i) must use high-risk AI systems in accordance with the instructions of use; (ii) must ensure that input data is relevant in view of the intended purpose of the high-risk AI system; (iii) must monitor the operation of the system and inform the provider/distributor of suspected risks and of serious incidents or malfunctioning; and (iv) must retain the logs automatically generated by that high-risk AI system, to the extent such logs are under their control.
c) Framework concerning notified bodies and notifying authorities and conformity assessment
The Act further sets out a rather complex framework of notified bodies and notifying authorities to be designated or established by Member States, and which are involved as independent third parties in the conformity assessment procedures relating to high-risk AI systems.
Lastly, the Act details the (relatively complex) conformity assessment procedures to be followed for each type of high-risk AI system. This conformity assessment approach aims to minimize the burden for economic operators as well as for notified bodies.
Concerning high-risk AI systems used as safety components of products (ie the first category of high-risk AI systems, listed in Annex II), systems subject to the New Legislative Framework legislation (eg machinery, toys, medical devices) benefit from a “simplified” regime: they are subject to the same compliance and enforcement mechanisms as the products of which they are a component, but those mechanisms must ensure compliance not only with the requirements established by the relevant sectorial legislation, but also with the compliance obligations established by the Act (as set out above).
Concerning standalone high-risk AI systems listed in Annex III, the Act establishes a new compliance and enforcement system, modeled on the New Legislative Framework legislation and implemented through internal control checks by the providers (except for remote biometric identification systems, which are subject to third-party conformity assessment).
4. Transparency obligations concerning certain specific AI systems (Title IV of the Act)
Certain AI systems covered by the proposal are neither prohibited nor necessarily high-risk (although they can be, in which case the requirements in relation to high-risk AI systems remain applicable), but are subject to a number of transparency obligations. These obligations include in particular:
- Providers must ensure that AI systems intended to interact with natural persons inform the natural persons that they are interacting with an AI system (except where this is obvious or in relation to the investigation of crimes).
- Natural persons who are subject to an emotion recognition system or a biometric categorization system must be informed thereof by the users of such systems.
- Users of AI systems generating so-called “deep fakes” must disclose that the content has been artificially created or manipulated (except where this is necessary to detect, prevent, investigate and prosecute criminal offences, or where it is necessary for the exercise of the right to freedom of expression and the right to freedom of the arts, subject to appropriate safeguards).
5. Provisions aimed at supporting innovation (Title V of the Act)
As mentioned above, the AI Act not only imposes restrictions on AI systems, but also contains a number of provisions aimed at supporting innovation and stimulating the uptake of AI within the EU. These include provisions on the establishment of AI regulatory sandboxes and on measures to reduce the regulatory burden for SMEs and startups.
(i) AI regulatory sandboxes
The Act provides that national competent authorities (from one or more Member States) or the EDPS may establish AI regulatory sandboxes providing a controlled environment that facilitates the development, validation and testing of innovative AI systems, under direct supervision and oversight by the competent authorities, before those AI systems are placed on the market or otherwise put into service.
However, the Act also provides that supervisory authorities will still be able to exercise their powers in relation to AI systems participating in such sandboxes, and that participants in such sandboxes shall remain liable for any harm inflicted on third parties as a result of the experimentation taking place in those sandboxes.
(ii) Measures to reduce the regulatory burden for SMEs/small-scale providers
The Act also obliges the national competent authorities to undertake a number of actions aimed at reducing the regulatory burden on so-called “small-scale providers” (ie SMEs): they must provide small-scale providers with priority access to the AI sandboxes, they must organize awareness-raising activities tailored to small-scale providers about the application of this Act, and they must establish a dedicated hub within the national competent authorities for communication with small-scale providers (including to provide guidance and respond to queries about the implementation of the Act).
The Digital Hubs and Testing Experimentation Facilities, which were included in a previous (unpublished) version of the Act, are not included in the official proposal of the Commission.
6. Governance, enforcement and sanctions (Titles VI to XII of the Act)
(i) European Artificial Intelligence Board
The Act (articles 56 and 57) provides for the establishment of a European Artificial Intelligence Board (EAIB), which is composed of (the heads of) the national supervisory authorities and the EDPS. The board will be tasked, among other things, with advising and assisting the Commission in relation to matters covered by the Act, in order to (i) contribute to the effective cooperation of the national supervisory authorities and the Commission, (ii) coordinate and contribute to guidance and analysis by the Commission, the national supervisory authorities and other competent authorities on emerging issues, and (iii) assist the national supervisory authorities and the Commission in ensuring the consistent application of the Act.
Both its tasks and its structure are thus reminiscent of the European Data Protection Board (EDPB) under the GDPR, although the advisory nature of the EAIB (advising the Commission) is specifically emphasized here.
In relation to this assistance to the Commission, the EAIB shall in particular collect and share expertise and best practices among EU Member States, contribute to uniform administrative practices in the EU Member States, and issue opinions, recommendations or written contributions on matters related to (the implementation of) the Act (article 58).
(ii) National competent authorities
Moreover, the Act also provides that EU Member States must (article 59) designate national competent authorities and, among them, a national supervisory authority, for the purpose of providing guidance and advice on the implementation of the Act, including to small-scale providers.
(iii) Enforcement
The Act also provides that there shall be market surveillance and control of AI systems covered by the Act, in accordance with Regulation (EU) 2019/1020 (article 63). The market surveillance authorities shall be granted full access to the training, validation and testing datasets used by the provider (article 64).
Where the market surveillance authority of an EU Member State has sufficient reason to believe that an AI system covered by the Act presents a risk to the health or safety or to the protection of fundamental rights of persons, it shall carry out an evaluation of the AI system concerned and, where necessary, require corrective actions (articles 65 and 57).
Similar to the GDPR, the Act also provides for the possibility to adopt codes of conduct. More precisely, the Commission and the EU Member States shall encourage and facilitate the drawing up of codes of conduct intended (i) to foster the voluntary application of the requirements applicable to high-risk AI systems to AI systems which are not high-risk, and (ii) to foster the voluntary application to AI systems of requirements relating, for example, to environmental sustainability, accessibility for persons with a disability, stakeholder participation in the design and development of the AI systems, and diversity of development teams.
(iv) Sanctions
Lastly, the Act provides for several sanctions applicable to certain infringements (the “whichever is higher” fining mechanism is illustrated after the list below):
- Infringements of the prohibited AI practices (article 5) and of the obligations in relation to data quality (article 10) are subject to administrative fines of up to EUR30 million or, if the offender is a company, up to 6% of the total worldwide annual turnover of the preceding financial year, whichever is higher.
- Non-compliance of the AI system with any requirements or obligations under the Act other than those mentioned above is subject to administrative fines of up to EUR20 million or, if the offender is a company, up to 4% of the total worldwide annual turnover of the preceding financial year, whichever is higher.
- The supply of incorrect, incomplete or false information to notified bodies and national competent authorities in reply to a request is subject to administrative fines up to EUR10 million, or if the offender is a company, up to 2% of the total worldwide annual turnover of the preceding financial year, whichever is higher.
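As a simple illustration of the “whichever is higher” mechanism, the fine cap for a company is the greater of the flat amount and the relevant percentage of worldwide annual turnover. The sketch below is purely illustrative and the turnover figure is invented.

    # Hypothetical illustration of the "whichever is higher" fine cap for companies.
    def maximum_fine(flat_cap_eur: float, turnover_pct: float, worldwide_turnover_eur: float) -> float:
        """Return the higher of the flat cap and the percentage of worldwide annual turnover."""
        return max(flat_cap_eur, turnover_pct * worldwide_turnover_eur)

    # Example: a prohibited-practice infringement (EUR 30 million or 6% of turnover)
    # by a company with an invented worldwide annual turnover of EUR 2 billion.
    print(maximum_fine(30_000_000, 0.06, 2_000_000_000))  # 120000000.0, ie a cap of EUR 120 million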
Moreover, the Act also provides that EU Member States must set out effective, proportionate, and dissuasive penalties (including administrative fines) applicable to infringements of the Act and must take all measures necessary to ensure that they are properly and effectively implemented.
It should be noted in this regard that the Act does not provide for a complaint mechanism for individuals subject to AI systems. Enforcement of the Act will thus lie exclusively with the competent authorities.
DLA Piper is actively monitoring this key regulation and related discussions regarding AI, and will be rolling out various guidance documentation, webinars and analyses in the coming weeks and months. An analysis of the Act from a UK perspective in particular will follow shortly.