On 21 April 2021, the European Commission proposed a Regulation on a European approach for Artificial Intelligence (AI Act) as well as an updated Coordinated Plan on Artificial Intelligence. The Commission's stated intention behind this approach is to guarantee the safety and fundamental rights of people and businesses, while strengthening Artificial Intelligence (AI) uptake, investment, and innovation across the European Union (EU). It is a continuation of the Commission's European Strategy on AI, the High-Level Expert Group on Artificial Intelligence's Guidelines for Trustworthy AI, and the Commission's White Paper on AI.
AI has an impact on various industries – established companies as well as start-ups incorporate AI technologies in their services to provide customers with new features and experiences. This also applies to the legal industry, where in particular law firms are adding software tools with AI components to their legal service offerings (so-called “Legal Tech”). Legal Tech is, for example, used for analysing compliance risks or streamlining corporate work in M&A transactions.
This article details the impacts of the proposed AI Act on Legal Tech that utilizes AI.*
Scope of application
The AI Act will lay down harmonised rules for the placing on the market, the putting into service and the use of AI systems in the EU (Art. 1 (a)).
It will apply to (Art. 2 (1))
- providers placing on the market or putting into service AI systems in the EU, irrespective of whether those providers are established within the EU or in a third country,
- users of AI systems located within the Union and
- providers and users of AI systems that are located in a third country, where the output produced by the system is used in the EU.
This means that the AI Act will also apply to Legal Tech providers that are established outside of the EU but serve users in the EU. It also means that companies and law firms utilizing Legal Tech may be required to comply with certain obligations under the AI Act.
The AI Act will not apply to private, non-professional users, because the definition of a user excludes the use of AI systems during a personal non-professional activity (Art. 3 (4)). So, the AI Act will not apply to end-users, for example persons using Legal Tech to get answers to their legal questions via a multiple-choice tool.
To avoid conflicts of territorial sovereignty and international law, the AI Act will also not apply to public authorities in a third country or to international organisations, where those authorities or organisations use AI systems in the framework of international agreements for law enforcement and judicial cooperation with the EU or with one or more Member States (Art. 2 (4)).
Definition of AI systems
The AI Act will apply to AI systems as defined in its Art. 3 (1).
AI system means software that is developed with one or more of the techniques and approaches listed in Annex I to the AI Act and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with (Art. 3 (1)).
The AI techniques and approaches listed in Annex I to the AI Act are:
- Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning,
- Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference, and deductive engines, (symbolic) reasoning and expert systems,
- Statistical approaches, Bayesian estimation, search, and optimization methods.
This already broad definition will cover a lot of Legal Tech solutions and is future-proofed by empowering the Commission to adopt delegated acts to update and further amend the list of relevant AI techniques and approaches (Art. 4). It is, however, very likely that not all Legal Tech solutions will be governed by the AI Act, as plenty of Legal Tech does not fulfil these criteria – for example, tools for process optimization or client communication.
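To illustrate how little is needed to meet this definition, consider a deliberately simple, hypothetical contract-review tool. The sketch below (in Python, using the scikit-learn library; the clauses, labels and risk categories are invented for illustration) relies on a plain supervised machine-learning pipeline and would therefore already use a technique listed in Annex I:

```python
# Hypothetical sketch: a minimal supervised machine-learning classifier that
# flags contract clauses as "high risk" or "low risk". Even this simple
# pipeline is a "machine learning approach" within the meaning of Annex I (a)
# and generates "predictions ... influencing the environments they interact
# with" (Art. 3 (1)) -- here, a lawyer's review priorities.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data, invented for illustration only.
clauses = [
    "The supplier's liability shall be unlimited.",
    "Either party may terminate with 30 days' notice.",
    "The customer indemnifies the supplier against all claims.",
    "Notices shall be sent to the addresses stated above.",
]
labels = ["high risk", "low risk", "high risk", "low risk"]

# Supervised learning: the model infers a decision rule from labelled examples.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(clauses, labels)

print(model.predict(["The vendor shall indemnify the buyer against all losses."]))
```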
Risk-based categories of AI systems
The proposed AI Act follows a risk-based approach with four levels of risk: unacceptable risk, high risk, limited risk, and minimal risk. While Legal Tech is not expressly mentioned in the proposal, a lot of Legal Tech solutions will fall into these distinct categories and thus be required to comply with new regulatory obligations:
Legal Tech that is considered unacceptable risk
A limited set of harmful uses of AI that contravene EU values will be prohibited by Art. 5 of the AI Act. This does not directly include any Legal Tech solutions, but it covers, inter alia, systems that cause physical or psychological harm, social scoring by governments, the exploitation of vulnerabilities of children, and the use of subliminal techniques. Legal Tech providers will be prohibited from utilizing these and the other AI practices listed in Art. 5 of the AI Act.
Legal Tech that is considered high-risk
Several possible Legal Tech solutions are listed as high-risk AI systems in Annex III to the AI Act, based on Art. 6 (2) of the AI Act. This is mainly Legal Tech used by public authorities (including law enforcement) and courts:
- Category 5 of high-risk AI systems concerns access to and enjoyment of essential public services and benefits and includes AI systems intended to be used by public authorities or on behalf of public authorities to evaluate the eligibility of natural persons for public assistance benefits and services, as well as to grant, reduce, revoke, or reclaim such benefits and services (lit. a.).
- Category 6 of high-risk AI systems concerns different areas of law enforcement and includes AI systems intended to be used by law enforcement authorities for evaluation of the reliability of evidence in the course of investigation or prosecution of criminal offences (lit. d.), as well as AI systems intended to be used for crime analytics regarding natural persons, allowing law enforcement authorities to search complex related and unrelated large data sets available in different data sources or in different data formats in order to identify unknown patterns or discover hidden relationships in the data (lit. g.).
- Category 7 of high-risk AI systems concerns migration, asylum and border control management and includes AI systems intended to assist competent public authorities for the examination of applications for asylum, visa and residence permits and associated complaints with regard to the eligibility of the natural persons applying for a status (lit. d.).
- Category 8 of high-risk AI systems, finally, concerns the administration of justice and democratic processes and covers AI systems intended to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts (lit. a.). Recital 40 to the Regulation clarifies that this does not extend to AI systems intended for purely ancillary administrative activities that do not affect the actual administration of justice in individual cases, such as anonymisation or pseudonymisation of judicial decisions, documents or data, communication between personnel, administrative tasks, or allocation of resources (a minimal sketch of such an ancillary tool follows this list).
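For illustration, a purely ancillary tool in the sense of recital 40 could be as simple as the following hypothetical Python sketch, which pseudonymises party names in a judicial decision before publication (the decision text and names are invented). A rule-based find-and-replace like this does not assist in researching, interpreting or applying the law and would therefore not fall under category 8:

```python
# Hypothetical sketch of an ancillary tool per recital 40: pseudonymising
# party names in a judicial decision. The decision text and the party names
# are invented for illustration only.
import re

def pseudonymise(decision_text: str, party_names: list[str]) -> str:
    """Replace each known party name with a neutral placeholder."""
    for i, name in enumerate(party_names, start=1):
        decision_text = re.sub(re.escape(name), f"Party {i}", decision_text)
    return decision_text

decision = "The court finds in favour of Erika Mustermann against Max Beispiel."
print(pseudonymise(decision, ["Erika Mustermann", "Max Beispiel"]))
```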
Art. 7 of the AI Act empowers the Commission to amend the list of high-risk AI systems in Annex III by adopting delegated acts. As indicated above, this will allow the Commission to keep the list in line with new developments and emerging technologies. It also means that Legal Tech providers and companies using AI will need to constantly evaluate any updates to the list and their impact on their business models.
Chapter 2 of the AI Act establishes extensive compliance requirements for high-risk AI systems and their providers. The obligations to be complied with include:
- Risk management systems (Art. 9),
- Data governance and management practices for training, validation, and testing data (Art. 10),
- Technical documentation (Art. 11 and 18) as specified in Annex IV to the AI Act,
- Record-keeping/logging (Art. 12 and 20),
- Transparency and provision of information to users and authorities (Art. 13 and 22, 23),
- Human oversight (Art. 14),
- Accuracy, robustness, and cybersecurity (Art. 15),
- Quality management systems (Art. 17),
- Conformity assessment (Art. 19) and drawing up a declaration of conformity (Art. 48) containing the information listed in Annex V to the AI Act, resulting in a CE marking being affixed to the product containing the AI system or, at least, to its documentation (Art. 49),
- Appointing an authorised representative in the EU (for providers established outside of the EU, Art. 25),
- Document retention (Art. 50),
- Post-market monitoring (Art. 61), and
- Incident reporting requirements (Art. 62).
In addition, the AI Act will also establish obligations for importers (Art. 26), distributors (Art. 27) and even users of high-risk AI systems (Art. 29).
Before a high-risk AI system is placed on the market, the provider will need to register it in a new EU database (Art. 51). The information to be submitted upon the registration is detailed in Annex VIII to the AI Act.
Legal Tech that is considered limited risk AI systems
For certain AI systems, specific transparency requirements are imposed, for example where there is a clear risk of manipulation: users should be aware that they are interacting with a machine.
In terms of Legal Tech, this applies to AI systems that are not high-risk and that are intended to interact with natural persons. The AI systems need to be designed and developed in such a way that natural persons are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use (Art. 52 (1) of the AI Act).
It is likely that most Legal Tech solutions used by companies, lawyers, and end-users (looking for answers to their legal questions) will fall into this category, e.g., legal chat bots, tools for document automation (based on an interactive questionnaire) and Legal Tech tools to automate decision trees. The AI Act is, however, not very specific on when the use of an AI system is “obvious” from the circumstances. Neither Art. 52 (1) of the AI Act nor its recitals set out detailed requirements; recital 70, for example, only describes the regulator's underlying idea.
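Pending further guidance, providers may therefore want to display an explicit notice in any event. The hypothetical Python sketch below shows one way a legal chatbot could surface the Art. 52 (1) information before any substantive interaction (the wording and function names are invented for illustration):

```python
# Hypothetical sketch of an Art. 52 (1) disclosure in a legal chatbot: the
# natural person is informed, before any substantive interaction, that they
# are interacting with an AI system. Names and wording are invented.
import datetime

AI_DISCLOSURE = (
    "Please note: you are chatting with an automated legal assistant "
    "(an AI system), not with a human lawyer."
)

def start_chat_session(user_id: str) -> list[str]:
    """Open a session; the disclosure is always the first message."""
    transcript = [AI_DISCLOSURE]
    # A timestamped record of the notice may help demonstrate compliance.
    print(f"[{datetime.datetime.now().isoformat()}] disclosure shown to {user_id}")
    return transcript

transcript = start_chat_session("example_user")
print(transcript[0])
```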
Legal Tech that is considered minimal risk AI systems
All other AI systems will be considered minimal risk AI systems. According to the Commission, this category covers most AI systems currently used in the EU, such as AI-enabled video games or spam filters.
In terms of Legal Tech, however, this can only be AI systems that are used outside of the areas mentioned above and that are not intended to interact with natural persons, e.g., fully automated Legal Tech solutions outside of high-risk areas that do not require human input. Those Legal Tech solutions will not be required to comply with additional obligations.
Obligations of companies, law firms and other users of AI systems in Legal Tech
Professional users of AI systems will also be covered by the AI Act. User means any natural or legal person, public authority, agency, or other body using an AI system under its authority (Art. 3 (4)). This includes companies (such as credit institutions, which are expressly mentioned in the AI Act) and law firms using Legal Tech.
As already indicated, users of AI systems will be obliged to comply with certain requirements if the AI systems used are categorized as high-risk. The AI Act differentiates between users of high-risk AI systems that will be considered as providers themselves (Art. 28) and (mere) users (Art. 29).
A user shall be considered a provider subject to the obligations of a provider (Art. 16) in any of the following circumstances:
- they place on the market or put into service a high-risk AI system under their name or trademark,
- they modify the intended purpose of a high-risk AI system already placed on the market or put into service,
- they make a substantial modification to the high-risk AI system.
This will be relevant for law firms that, e.g., use a white-label Legal Tech solution that is a high-risk AI system and market it (modified or not) under their own brand.
But even (mere) users of Legal Tech solutions that are high-risk AI systems will need to comply with certain requirements (see the sketch after this list), including:
- ensuring that input data is relevant in view of the intended purpose of the high-risk AI system (Art. 29 (3)),
- monitoring the operation of the high-risk AI system and informing the provider when certain risks are identified (Art. 29 (4)),
- keeping the logs automatically generated by the high-risk AI system (Art. 29 (5)), and
- including information on the AI system in their data protection impact assessment (Art. 29 (6)).
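The hypothetical Python sketch below illustrates how a law firm, as a (mere) user, might wire these duties into its workflow. The LegalTechSystem class, its interface and the confidence threshold are invented for illustration and stand in for whatever a vendor's high-risk system actually exposes:

```python
# Hypothetical sketch of the user-side duties under Art. 29 (4) and (5):
# monitor the operation of a high-risk system and keep the logs it generates.
# The LegalTechSystem class and its interface are invented for illustration.
import json
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("high_risk_ai_usage")

class LegalTechSystem:
    """Stand-in for a vendor-supplied high-risk AI system."""
    def evaluate(self, case_facts: str) -> dict:
        # A real system would return a recommendation plus its own event log.
        return {"recommendation": "review manually", "confidence": 0.42,
                "event_log": ["model v1.3 loaded", "input tokenised"]}

def run_and_keep_logs(system: LegalTechSystem, case_facts: str) -> dict:
    result = system.evaluate(case_facts)
    # Art. 29 (5): keep the logs automatically generated by the system.
    logger.info("system log: %s", json.dumps(result["event_log"]))
    # Art. 29 (4): monitor operation; inform the provider of identified risks.
    if result["confidence"] < 0.5:
        logger.warning("low confidence -- consider informing the provider")
    return result

run_and_keep_logs(LegalTechSystem(), "claimant seeks damages for ...")
```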
Member States’ responsibilities
The Member States will need to designate or establish a national supervisory authority (Art. 59 (2)) that acts as the notifying authority responsible for the conformity assessment (Art. 30) and as market surveillance authority (Art. 63). The authority will have access to data and documentation (Art. 64) and it will be empowered to take action against non-compliant AI systems that could ultimately lead to the withdrawal of the product from the market (Art. 65).
Member States will be able to establish AI regulatory sandboxes to facilitate the development, testing and validation of innovative AI systems for a limited time before their placement on the market (Art. 53 and 54). This will be an opportunity for developing new Legal Tech solutions, and Member States are required to provide priority access for small-scale providers and start-ups (Art. 55).
Member States will also be required to pursue non-compliance with the requirements of the AI Act and issue administrative fines of up to EUR 30,000,000 or, if the offender is a company, of up to 6 % of its total worldwide annual turnover for the preceding financial year (Art. 71). This clearly shows that the Commission wants to put a focus on this topic and enforce the AI Act vigorously. In addition, law firms planning to use Legal Tech as a new source of income need to manage the underlying risks – otherwise, new business models can quickly result in substantial operational and commercial risks.
Developments on an EU level
A European Artificial Intelligence Board (AI Board) will be established to provide advice and assistance to the European Commission (Art. 56). It will be composed of the national supervisory authorities and the European Data Protection Supervisor (Art. 57).
A new EU database for high-risk AI systems will also be established (Art. 60) and providers shall be encouraged to draw up codes of conduct for AI systems that are not high-risk (Art. 69).
The withdrawal of a non-compliant product from the market in one Member State can also lead to its withdrawal at EU level following a safeguard procedure (Art. 66). And the European Data Protection Supervisor may impose administrative fines on Union institutions, agencies and bodies falling within the scope of the AI Act (Art. 72).
Furthermore, the updated Coordinated Plan on Artificial Intelligence is intended to accelerate investments in AI, act on AI strategies and programmes, and align AI policy across the EU. This includes helping European companies by funding European Digital Innovation Hubs that are also open to support European Legal Tech providers. While the Coordinated Plan also does not expressly mention Legal Tech, it details high-impact sectors that are suitable for Legal Tech solutions. These include the public sector, which requires AI systems for, e.g., automatic translations, document review and classification in administrations as well as use in judicial proceedings (pages 46 to 49 of the Coordinated Plan), and law enforcement, migration and asylum, which require AI systems for, e.g., speeding up and improving proceedings (pages 49 to 51 of the Coordinated Plan).
Legislative process
The AI Act is a draft law proposed by the European Commission. In the next steps, the European Parliament and the Member States will need to adopt the Commission's proposal in the ordinary legislative procedure. Once adopted, the final AI Act will be a Regulation directly applicable across the EU. It will enter into force on the 20th day after its publication in the Official Journal of the European Union and will apply from two years after its entry into force (Art. 85), so Legal Tech providers will have at least two years to prepare for the new (then final) requirements that will be imposed upon them by the AI Act.
* Footnote:
The authors are Legal Tech Fellows and (Legal) Design Facilitators at DLA Piper’s Hamburg office.
For more general information on the AI Act, please find an overview in this blog post on the Privacy Matters blog by Andrew Dyson, Ewa Kurowska-Tober and Heidi Waem or in this blog post on the IPT Germany blog (in German) by France Vehar and Jan Pohle.
Explore the frontier of technology and its place in the future of enterprise at the fifth DLA Piper European Technology Summit on October 5, 2021, which includes a panel discussion on data analytics and tech in legal services. In this session, eminent industry thought leaders will explore how to meaningfully create mutual value between legal services buyers and service providers, challenging assumptions and embracing change while redefining problems and finding ways of making business quicker, easier and better. The panel will also showcase solutions DLA Piper has invested in for the benefit of its clients through the firm's Law& offering, which includes the application of artificial intelligence tools that integrate technology with commercial and legal knowhow. Visit the DLA Piper European Technology Summit 2021 website for more information and to register your interest.
Further insights on AI from DLA Piper are available via our Technology’s Legal Edge blog, our TechLaw Podcast and Video Series as well as other thought leadership channels all available via our DLA Piper Technology Sector team’s web page here.