With great fanfare, and a Herculean final series of negotiating sessions stretching long into the night, political agreement on the EU AI Act was reached on 8 December 2023. This marked a significant step towards the first major piece of legislation regulating AI and its uses coming into force. Getting systems to a point of compliance with the requirements of the AI Act will take time. Now is the time for businesses to get ahead of the game and put processes in place to ensure day-one compliance, particularly given the far-reaching effects the new law will have.

While the main pillars of the AI Act have now been agreed, the legislative journey to bring it into effect is still underway. The coming months will be dedicated to detailed technical discussions to refine the final text. As was the case for the EU Data Act and the EU Digital Services Act, these discussions are expected to be lengthy.

After this phase, the European Parliament and the Council of the EU must formally endorse the text. The Act will then be published in the Official Journal of the European Union and is expected to come into effect in the second quarter of 2024.

In this update we provide an overview of the key agreements reached during the negotiations, the next steps in the legislative process, and our recommendations for organisations using AI technologies as they work towards compliance during this transitional period.

Scope

The AI Act adopts the primary components of the OECD’s definition of AI, although it does not replicate it verbatim.[1] Software and algorithms that involve no training activity and do not produce outputs based on learned information fall outside its scope. A case-by-case assessment will nevertheless be necessary: the language is broad and does not require human intervention, a requirement which, had it been included, would have excluded generative AI systems.

Under the provisional agreement, free and open-source AI systems will benefit from a limited exemption from the regulation’s scope. The exemption will not cover open-source providers of high-risk systems, prohibited applications, AI solutions that pose a risk of manipulation, or general-purpose AI models with systemic risks. Transparency and copyright obligations will also continue to apply to providers of open-source AI systems.

GPAI/foundation models

Regarding one of the most contested issues, the regulation of general-purpose AI (GPAI) or foundation models, the negotiators agreed to adopt the European Parliament’s initial proposal of a tiered approach distinguishing between all GPAI models and models with potentially systemic risks. The inclusion of rules on foundation models came relatively late in the process, when the European Parliament incorporated them in the text it published in June 2023. The well-publicised objections of the French, German and Italian governments cast significant doubt over the fate of the Act as late as November 2023. They perceived any rules on GPAI as a potential barrier to innovation, potentially stifling the competitive advantage that certain successful EU AI developers might otherwise have, and suggested relying instead on voluntary codes of practice. In the final political agreement, a compromise was secured whereby such codes will play a supplementary role rather than replacing the substantive obligations.

All GPAI models will have to adhere to transparency requirements.[2] These include drawing up technical documentation, complying with EU copyright law (in particular, identifying and respecting cases where copyright holders have not allowed their data to be used for text and data mining, including web-scraping) and publishing detailed summaries of the content used for training. This disclosure obligation has the potential to give rise to significant litigation, as rights holders of materials used by AI systems could challenge their use under copyright and/or privacy legislation.

For GPAI models that are high-impact and carry systemic risk, European Parliament negotiators secured more rigorous requirements during the negotiations. Where a GPAI model exceeds a certain threshold of computational resources used in training, its provider will be required to conduct model evaluations, assess and mitigate systemic risks, conduct adversarial testing, report serious incidents to the European Commission, ensure cybersecurity, and report on the energy efficiency of the model.[3] At present, it is anticipated that the computation threshold will be set at 10²⁵ floating point operations, or ‘FLOPs’, which would capture only the very largest models. Research has shown that various ‘emergent properties’ manifest somewhere just above 10²⁴ FLOPs.[4] However, given trends towards using computing resources more efficiently, and emergent properties being trained into smaller models, this threshold may need to be reduced over time. The Commission will have the ability to update the threshold and add other criteria by way of delegated acts.
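To illustrate how such a compute threshold operates in practice, the sketch below estimates training compute using the widely cited scaling-law approximation that training a dense transformer costs roughly 6 × parameters × training tokens FLOPs. The approximation, the example figures and the threshold check are illustrative assumptions only; the Act itself does not prescribe a calculation method.

```python
# Rough estimate of training compute, to illustrate how the anticipated
# 10^25 FLOPs threshold might be checked. Uses the common scaling-law
# approximation that training a dense transformer costs about
# 6 * parameters * training_tokens FLOPs. All figures are illustrative.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # anticipated threshold, per the text above

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6.0 * n_parameters * n_training_tokens

def may_carry_systemic_risk(n_parameters: float, n_training_tokens: float) -> bool:
    """True if estimated compute meets or exceeds the anticipated threshold."""
    return estimated_training_flops(n_parameters, n_training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Example: a hypothetical 70bn-parameter model trained on 2tn tokens uses
# roughly 8.4e23 FLOPs, about an order of magnitude below the threshold.
if __name__ == "__main__":
    print(f"{estimated_training_flops(70e9, 2e12):.2e} FLOPs")
    print(may_carry_systemic_risk(70e9, 2e12))  # False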

Requirements for high-risk AI systems

Clear responsibilities were established for AI systems identified as high-risk, owing to their considerable potential to harm human safety and fundamental rights. For identification purposes, the negotiators agreed a set of filtering criteria designed to target only genuine high-risk applications; the areas agreed as sensitive include use cases in education, employment, critical infrastructure, essential private and public services, law enforcement, border control, democratic processes and the administration of justice.[5]

Developers of AI systems in this risk category will be required to carry out conformity assessments to ensure that the system meets the key requirements for trustworthy AI; namely data quality, documentation, traceability, transparency, human oversight, accuracy, cybersecurity and robustness. Developers will also be responsible for implementing quality and risk management systems (standards for which are currently under development by the International Organization for Standardization (ISO)). Deployers of such systems will also be subject to obligations, including ongoing monitoring, the reporting of malfunctions and, in some cases, conducting fundamental rights impact assessments. The details of the fundamental rights impact assessment await full clarification but, according to the EU Commission, it will consist of:

  • a description of the deployment processes relating to the high-risk AI system;
  • the period of time and frequency with which the high-risk AI system is intended to be used;
  • the categories of natural persons and groups likely to be affected by its use in the specific context;
  • the specific risks of harm likely to impact the affected categories of persons or groups of persons;
  • a description of the implementation of human oversight measures; and
  • the measures to be taken if the risks materialise.

The EU Commission has acknowledged that there may be some overlap with data protection impact assessments (DPIAs) under the EU General Data Protection Regulation (GDPR), but has stated that, given the broader scope of the AI Act beyond personal data, a specific assessment will still be needed.
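As a practical aid, the sketch below captures the Commission-described FRIA elements as a simple record structure that a deployer might use to track an assessment. The field names and structure are our own illustrative choices, not a template prescribed by the AI Act.

```python
from dataclasses import dataclass, field

# Illustrative record of the fundamental rights impact assessment (FRIA)
# elements described by the EU Commission. Field names are our own sketch.

@dataclass
class FundamentalRightsImpactAssessment:
    system_name: str
    deployment_process_description: str            # how the high-risk system is deployed
    intended_period_and_frequency_of_use: str      # when and how often it will be used
    affected_categories_of_persons: list[str] = field(default_factory=list)
    specific_risks_of_harm: list[str] = field(default_factory=list)
    human_oversight_measures: str = ""             # description of oversight in place
    measures_if_risks_materialise: str = ""        # mitigation steps if risks occur
```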

Exemptions for law enforcement

Negotiators reached a consensus on a set of safeguards and limited exceptions for deploying remote biometric identification (RBI) systems in public spaces for law enforcement purposes.[6] Such use is contingent upon prior judicial authorisation and is confined to a narrowly defined list of serious criminal offences. The use of “post-remote” RBI will be strictly limited to the targeted search for individuals convicted or suspected of serious criminal offences. “Real-time” use of RBI will have to comply with strict conditions, will be limited in time and location, and may be used only for specific purposes such as the prevention of a specific and present terrorist threat.

Prohibited practices

Negotiators also agreed a list of banned applications. The following will be completely prohibited:[7]

  • biometric categorisation systems that use sensitive characteristics (e.g. political, religious, philosophical beliefs, sexual orientation, race);
  • untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
  • emotion recognition in the workplace and educational institutions;
  • social scoring based on social behaviour or personal characteristics;
  • manipulation of human behaviour to circumvent free will;
  • exploitation of the vulnerabilities of people due to their age, disability, social or economic situation.

Agreeing the list of prohibited uses involved much back and forth between the institutions, with the Parliament initially introducing more extensive areas of prohibition than those that were finally agreed. The compromises reached in relation to the final list, as well as the law enforcement exemption for RBIs, indicate the intensity of the marathon trilogue negotiations that took place.

Sanctions

Failure to adhere to the regulation may result in penalties ranging from 35 million euros or 7% of global turnover down to 7.5 million euros or 1.5% of turnover, depending on the nature of the infringement and the size of the company.[8] More proportionate caps will apply to SMEs and startups. Even with regard to penalties, a balance has been struck between the need to regulate AI and the goal of not restricting the development of the technology in the EU. For the same reason, regulatory “sandboxes” are provided for, in which solutions can be tested while benefiting from a special regime.
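For the most serious infringements, the cap is expected to be the higher of the fixed amount and the turnover percentage. A minimal sketch of that calculation, using purely illustrative figures:

```python
# Illustrative calculation of the upper bound of an AI Act fine, assuming
# the cap is the higher of a fixed amount and a percentage of worldwide
# annual turnover (the structure reported for the provisional agreement).

def max_fine_eur(worldwide_turnover_eur: float,
                 fixed_cap_eur: float,
                 turnover_share: float) -> float:
    """Return the higher of the fixed cap and the turnover-based cap."""
    return max(fixed_cap_eur, turnover_share * worldwide_turnover_eur)

# Prohibited-practice tier (EUR 35m / 7%) for a company with EUR 2bn turnover:
print(f"{max_fine_eur(2_000_000_000, 35_000_000, 0.07):,.0f} EUR")  # 140,000,000 EUR
```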

What’s next?

EU policymakers initiated technical discussions during the week of 11 December 2023 to finalise the specifics of the legal wording. Once the final text of the AI Act is agreed, it must receive formal approval from both the Parliament and the Council before the current legislative term concludes in April 2024. The AI Act will then fully apply two years after its entry into force. However, the prohibitions will take effect six months after formal approval, and the provisions relating to GPAI models, the bodies responsible for conformity assessments and the GPAI governance framework will apply one year after formal approval. A new European AI Office, sitting within the Commission, will be tasked with coordinating the supervisory authorities of EU member states, supervising the implementation of the AI Act and enforcing the rules on GPAI.
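The staggered timeline can be sketched with simple date arithmetic. The entry-into-force date below is a hypothetical placeholder; the actual date will depend on publication in the Official Journal.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Add whole calendar months to a date (day clamped to month length)."""
    total = d.month - 1 + months
    year, month = d.year + total // 12, total % 12 + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    return date(year, month, min(d.day, days[month - 1]))

# Hypothetical entry-into-force date, for illustration only.
entry_into_force = date(2024, 6, 1)

for label, months in [("prohibitions apply", 6),
                      ("GPAI and governance provisions apply", 12),
                      ("full application", 24)]:
    print(f"{label}: {add_months(entry_into_force, months):%d %B %Y}")
```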

The Commission has also launched an “AI Pact” for organisations seeking to position themselves as early adopters of the regulation. The scheme is already engaging with interested parties, who will make declarations committing to work towards early compliance and create a community for exchanging best practices ahead of the implementation of the AI Act.

What should organisations do now?

Although the final text of the AI Act is yet to be settled, its general direction is evident, and organisations involved in developing, using or implementing AI solutions should begin their regulatory preparations now. This involves assessing whether AI use cases fall under one or more of the regulated categories, in particular determining whether the high-risk or GPAI categories apply, and understanding their own role and the allocation of responsibilities in the AI value chain for compliance purposes. Additionally, the transparency obligations for AI systems should be addressed, such as labelling limited-risk AI such as chatbots and sharing information on AI used in customer-facing systems, in line with existing EU consumer protection laws. Finally, an AI governance structure should be established, involving legal, business, technical and risk stakeholders, to start integrating regulatory requirements into products and service offerings.

At this stage, we recommend that businesses (i) map the technologies they use to understand whether or not they are caught by the AI Act (an illustrative first-pass triage sketch follows below); (ii) include AI Act compliance obligations in contracts with suppliers; (iii) implement solutions to ensure compliance with regulations relating to privacy and intellectual property; (iv) adopt internal policies and a governance system to regulate the use of AI; and (v) put in place an AI Act assessment solution.
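As a starting point for step (i), the sketch below shows a naive keyword-based first pass over an AI inventory, mapping use-case descriptions to the Act’s broad risk tiers. The tiers, keyword lists and function names are our own illustrative assumptions; actual classification requires legal analysis of the final text.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited practice"
    HIGH_RISK = "high-risk system"
    LIMITED_RISK = "limited risk: transparency obligations"
    MINIMAL_RISK = "minimal risk: no specific obligations"

# Illustrative keyword sets drawn from the categories discussed above;
# not an exhaustive or authoritative mapping.
PROHIBITED_KEYWORDS = {"social scoring", "untargeted facial scraping",
                       "workplace emotion recognition", "behavioural manipulation"}
HIGH_RISK_KEYWORDS = {"education", "employment", "critical infrastructure",
                      "law enforcement", "border control", "administration of justice"}
LIMITED_RISK_KEYWORDS = {"chatbot", "deepfake", "customer-facing generation"}

def triage_use_case(description: str) -> RiskTier:
    """First-pass classification of an AI use case by keyword match."""
    text = description.lower()
    if any(k in text for k in PROHIBITED_KEYWORDS):
        return RiskTier.PROHIBITED
    if any(k in text for k in HIGH_RISK_KEYWORDS):
        return RiskTier.HIGH_RISK
    if any(k in text for k in LIMITED_RISK_KEYWORDS):
        return RiskTier.LIMITED_RISK
    return RiskTier.MINIMAL_RISK

if __name__ == "__main__":
    # "employment" matches the high-risk set before the chatbot keyword.
    print(triage_use_case("CV-screening chatbot used in employment decisions"))
```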

DLA Piper continues to monitor updates and developments to the EU AI Act and its impact on industry across the world. For further information relating to the AI Act, or if you have any questions relating to this article, please contact any of the authors or your usual DLA Piper contact. You can also watch a recording of our recent webinar, Unveiling the Impact of the Trilogue on the EU AI Act.


[1] https://www.euractiv.com/section/artificial-intelligence/news/ai-act-eu-policymakers-nail-down-rules-on-ai-models-butt-heads-on-law-enforcement/

[2] https://www.europarl.europa.eu/news/en/press-room/20231206IPR15699/artificial-intelligence-act-deal-on-comprehensive-rules-for-trustworthy-ai

[3] https://www.europarl.europa.eu/news/en/press-room/20231206IPR15699/artificial-intelligence-act-deal-on-comprehensive-rules-for-trustworthy-ai

[4] See, for example, Wei et al, “Emergent Abilities of Large Language Models”, arXiv:2206.07682, https://arxiv.org/abs/2206.07682

[5] https://www.euractiv.com/section/artificial-intelligence/news/european-union-squares-the-circle-on-the-worlds-first-ai-rulebook/

[6] https://www.europarl.europa.eu/news/en/press-room/20231206IPR15699/artificial-intelligence-act-deal-on-comprehensive-rules-for-trustworthy-ai

[7] https://www.europarl.europa.eu/news/en/press-room/20231206IPR15699/artificial-intelligence-act-deal-on-comprehensive-rules-for-trustworthy-ai

[8] https://www.europarl.europa.eu/news/en/press-room/20231206IPR15699/artificial-intelligence-act-deal-on-comprehensive-rules-for-trustworthy-ai