According to recent reports, following months of discussions, members of the European Parliament reached a provisional political deal on the AI Act a few days ago. Neither a final vote of the European Parliament nor related official documentation is publicly available at this stage. However, the legislative process now appears to be approaching the trilogue procedure, and the AI Act may enter into force at the end of 2023.

Revised Definition of AI

According to the latest publicly available news about the discussions, the members of the European Parliament appear to have adopted a definition of AI which largely overlaps with the definition developed by the Organisation for Economic Co-operation and Development (OECD). As such, an AI system shall mean “a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate output such as predictions, recommendations, or decisions influencing physical or virtual environments”. Reportedly, additional text in the preamble shall make clear that the definition is “closely aligned with the work of international organizations working on artificial intelligence to ensure legal certainty, harmonization and wide acceptance”.[1]

General Purpose AI Systems and Foundation Models

Following passionate debates on how to deal with AI systems that do not have a specific purpose (so-called General Purpose AI), the members of the European Parliament reportedly confirmed the opinion of the European Council as set out in its General Approach on the draft Artificial Intelligence Act of 6 December 2022. Furthermore, the European Parliament agreed on a set of new rules for so-called Foundation Models or Generative AI. These are now defined as a sub-category of General Purpose AI but – in contrast to GPAI – are trained on broad data at scale to achieve a wide range of downstream tasks, including those for which they have not been specifically developed. Prominent examples would be ChatGPT and Stable Diffusion (a latent text-to-image diffusion model). Due to their tendency, and the related risk, to develop a life of their own, Foundation Models shall be governed by stricter rules than conventional General Purpose AI. This includes, for example, stricter requirements on risk management as well as extensive analysis and testing activities.

In addition, Generative AI shall be designed and developed in compliance with the laws applicable in the European Union and its fundamental rights, in particular freedom of speech[2], and, as such, is required to pass a fundamental rights impact assessment.[3]

Additional Layer for High-risk AI Classification

In terms of classifying high-risk AI, the members of the European Parliament aligned on an additional layer. According to the initial proposal, any AI solution pertaining to the critical categories or use cases listed in Annex III would be classified as high-risk AI and, as such, would have to comply with a stricter regime of requirements on risk management, transparency and data governance. According to the latest news available to the public, AI solutions listed in Annex III shall now only be classified as high-risk AI if the respective solution also poses a significant risk of harm to health, safety or fundamental rights. Such significant risk is defined as “a risk that is significant as a result of the combination of its severity, intensity, probability of occurrence, and duration of its effects, and its ability to affect an individual, a plurality of persons or to affect a particular group of persons.” The same applies to recommender systems of large online platform providers and to AI solutions used to manage critical infrastructures, such as water management systems or energy grids, if they entail severe environmental risks.[4] Examples of currently listed high-risk AI systems include:

  • critical infrastructures that could endanger the lives or health of individuals (e.g., traffic, transport);[5]
  • educational or vocational training, which may influence someone’s access to education and career path (e.g., scoring of exams);[6]
  • product safety features (e.g., AI application in robot-assisted surgery);[7] and
  • access to self-employment, worker management, and employment (e.g., CV sorting software for recruitment procedures).[8]

Revised Set of Prohibited Activities

According to the latest information available in the public on the provisional political deal on the AI Act, the following AI systems are prohibited:

  • AI systems that deploy subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behavior;
  • AI systems that exploit vulnerabilities of a specific group of persons due to their age, physical or mental disability;
  • ‘Real-time’ remote biometric identification systems in public spaces – such systems shall only be permitted ex post, limited to serious crimes, and subject to prior approval by the competent court; and
  • AI systems for emotional recognition in border management, at workplaces or in educational institutions.

Additional Safeguards and Sustainability Standards

According to the latest co-rapporteurs’ proposal, additional principles shall apply to AI in general. These principles include respect for human dignity as well as personal autonomy and oversight, technical robustness and safety, privacy and data governance, transparency, social and environmental well-being, diversity, non-discrimination and fairness. According to the co-rapporteurs, such principles shall be implemented through corresponding technical standards and documentation, but shall not stipulate additional requirements.[9]

The members of the European Parliament also aligned on extra safeguards against bias when an AI system processes sensitive data, such as sexual orientation or religious beliefs.[10]

Last but not least, high-risk AI systems shall keep records of their environmental footprint, and Foundation Models shall comply with European environmental standards.[11]

What’s next?

According to the latest news, votes in the Committee on Civil Liberties, Justice and Home Affairs (LIBE) as well as the Internal Market and Consumer Protection Committee (IMCO) are expected for 11 May 2023. A vote in the European Parliament is expected for 12 June 2023. The deliberations between the European Commission, the European Council and the European Parliament (trilogues) are then expected to start on 15 June 2023.

Update – 19 May 2023[12]

On 19 May 2023, the Internal Market Committee and the Civil Liberties Committee adopted a draft negotiating mandate with 84 votes in favour, 7 against and 12 abstentions. The adopted wording is officially available here.

The draft negotiating mandate is now heading towards plenary adoption, expected for 12–15 June 2023.[13]

Having said this, a final agreement is still possible in 2023 under the Spanish Council Presidency, although there are some challenges to overcome.

DLA Piper continues to monitor updates and developments to the EU’s AI Act and its impact on industry across the world. For further information or if you have any questions, please contact the authors or your usual DLA Piper contact.





[5] Annex III (2)(a) of the AI Act Proposal

[6] Annex III (3) of the AI Act Proposal

[7] Annex III (2)(a) of the AI Act Proposal

[8] Annex III (4) of the AI Act Proposal