After the EU Parliament adopted its position on the draft EU AI Act on 14 June 2023 with an overwhelming majority, the final step of the legislative process – the trilogue negotiations between the Parliament, the EU Council of Ministers and the European Commission – has since begun, with the Spanish presidency aiming to reach a deal by the end of November.[1] In this article we provide a condensed summary of the latest developments towards, and the potential roadblocks standing in the way of, the EU AI Act.

By July 2023, the parties involved had already reached political agreement on the less controversial negotiation topics, such as the obligations of providers and users of high-risk systems, conformity assessment bodies, and technical standards.[2] After a rather quiet summer intermezzo, the debate has intensified over the last few weeks, with time running out for the Spanish presidency to reach a final deal on the legislation. DLA Piper is actively monitoring the legislative debate, as more and more controversies shape the rapidly evolving environment around the EU AI Act.

Debate over foundation models

While it seemed that the negotiators were coming close to agreeing on the tiered approach to foundation models during the trilogue negotiations that took place on 24 October 2023[3], more recent developments have shown that, in fact, the opposite is the case. On 10 November 2023, negotiations eventually broke down after larger member states sought to retract the proposed approach for foundation models.

The contention centres on the appropriate regulatory approach for AI models such as OpenAI’s GPT-4, the technology behind the well-known ChatGPT. While there was initial agreement on establishing a tiered system for foundation models, members of the EU Parliament pushed for more stringent rules on the most influential providers (“high-impact foundation model providers”), particularly for models developed by companies outside of Europe.[4] The proposed criteria for determining the most impactful models included, among others, data sample size, model parameter count, computing resources, and performance benchmarks. The Commission would then be instructed to establish a methodology to assess these thresholds and adjust them in line with technological developments. However, this approach has met with increasing resistance from key European nations, especially France, Germany, and Italy. In a joint paper, the three countries suggested that “mandatory self-regulation through codes of conduct” should be used for foundation models instead of “untested norms”, as they consider it preferable to regulate the application of AI rather than the technology as such, in order to stay true to the EU AI Act’s technology-neutral and risk-based approach.[5] This opposition is echoed by organisations apprehensive about potential overregulation.[6]

As the ongoing disagreement might derail the entire AI legislative process if not resolved promptly, the Commission circulated a compromise concerning foundation models on 19 November 2023.[7] It distinguishes between “general-purpose AI models”, defined as an “AI model, including when trained with a large amount of data using self-supervision at scale, that is capable to [competently] perform a wide range of distinctive tasks regardless of the way the model is released on the market”, and “general-purpose AI systems”, defined as an “AI system based on an AI model that has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems”. The term “foundation models” is therefore no longer used and is replaced by “general-purpose AI models”. The document also stipulates that all general-purpose AI models must maintain current technical documentation, as outlined in model cards, a concept also mentioned by France, Germany and Italy. Likewise, the compromise draft provides for codes of practice, at least for transparency measures for general-purpose AI models, as a response to the joint paper referred to above. In a working document of the EU Parliament circulated on 24 November 2023, EU parliamentarians state that they can generally support the idea of codes of practice, but only if their purpose is to complement the horizontal transparency requirements set for all foundation models and if the codes of practice are drafted by small and medium-sized enterprises (SMEs), civil society and academia.[8]

Exemption categories for High Risk AI Systems

The approach to High Risk AI Systems has changed a little since our last report. Initially, all AI systems in certain critical use cases were labelled high-risk. However, a proposal circulated on 2 October 2023 introduced a system of potential exemptions from these stringent regulations for AI developers. Critics have argued that these exemptions could increase legal uncertainty, contrary to the objective of the EU AI Act, and have called for narrower and more specific designs for these exemptions.[9] The European Commission is therefore now tasked with defining a clear list of high-risk and lower-risk AI use cases and refining the criteria for exemptions. The Commission can add or remove exemptions based on solid evidence and the need to maintain consistent protection levels across the EU.[10]

Revised definition of AI

Furthermore, the definition of AI has been revised once again by the Organisation for Economic Co-operation and Development (OECD). The new definition is as follows: “An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that [can] influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment”.[11]

A significant change from the previous definition is the removal of the stipulation that objectives must be defined by humans, allowing for scenarios in which an AI system can acquire new objectives by itself. Furthermore, the OECD notes that design objectives can be augmented by user prompts during the AI system’s operation, which is typical of foundation models.

The motivation for revising the AI definition originates from the objective to align with international AI definitions, account for advancements over the past five years, improve technical precision and clarity, and ensure relevancy for the future. The new definition is expected to be incorporated into the EU AI Act as part of the ongoing trilogue negotiations.

Other current controversies

Several other key debates focus on biometric surveillance and AI’s use of copyrighted material, as seen in relation to models such as ChatGPT.[12] The goal of legislators is to ban the use of AI in biometric surveillance, yet a number of EU countries, with France at the forefront, are advocating for exemptions related to national security and military uses, and it appears that the Council has now adopted this approach.[13] Furthermore, lawmakers are pushing for AI regulations to address the use of copyrighted content by entities like OpenAI. On the other hand, EU member states contend that the current copyright regulations in the union are adequate for this purpose.

Other debates focus on the approach to biometric categorisation, emotion recognition, fundamental rights, workplace decision-making and sustainability obligations.[14]

What’s next?

It is apparent that many topics still need to be negotiated and finalised between the parties to the ongoing negotiations, especially in relation to foundation models. If a consensus is not reached in December, the outgoing Spanish presidency may lose motivation to pursue technical discussions, leaving the incoming Belgian presidency with only a few weeks to resolve the complexities of this extensive dossier before the European Parliament dissolves for the EU elections in June 2024. Likewise, a fundamental re-evaluation of the approach to foundation models might be too time-consuming to conclude in the remaining time.

Negotiations have been escalated to the highest political level in an attempt to resolve the impasse, resulting in the latest compromise draft. Even though significant disagreements remain, the political pressure to finalise the legislation might push the more reluctant lawmakers towards a compromise. It remains to be seen whether the negotiators will manage to agree on the compromise and finally seal the deal on the long-awaited EU AI legislation.

DLA Piper continues to monitor updates and developments to the EU’s AI Act and its impact on industry across the world. For further information or if you have any questions, please contact Thorsten Ammann or your usual DLA Piper contact.