In September 2022, the European Commission adopted a proposal for a directive on liability for damage caused by artificial intelligence, aimed at building trust in AI and encouraging investments across the internal market.

Civil liability and Artificial Intelligence: why specific legislation is needed

According to a recent survey, civil liability rules related to the use of artificial intelligence (“AI”) systems[1] are one of the top three external barriers to the use of AI by European companies (33% of the sample) and represent a primary obstacle for companies that have not yet adopted AI systems but intend to do so in the coming years (43% of the sample).

In response to this problem, on 28 September 2022, the European Commission (“Commission”) announced the adoption of two proposals aimed at improving and harmonising Europe’s liability rules to adapt them to the digital age, the circular economy, and the impact of global value chains. One of these proposals, and the subject of this article, concerns the introduction of a new directive on liability for artificial intelligence (“AI Directive”). Among other things, the AI Directive is intended to facilitate compensation for those who have suffered loss or damage arising from the use of AI systems.[2]

The adoption of the proposed AI Directive is the result of several steps taken collectively by the European institutions. The journey began in 2019 with the publication of the study “Liability for artificial intelligence and other emerging digital technologies”,[3] which sought to better understand the shortcomings of the current liability regime for those impacted by AI. It continued with the adoption by the Commission of the White Paper on AI (“White Paper”) on 19 February 2020, which set out policy options for reducing the risks of AI while increasing the uptake of its use.[4] It culminated in the public consultation on the various approaches offered by the White Paper, which closed on 10 January 2022.[5]

The purpose of the AI Directive is to address the growing deployment of AI and of the digital technologies enabled and regulated by the AI Act. While the AI Act proposed by the Commission in April 2021 (the “AI Act”)[6] focuses mainly on monitoring and preventing harm, the AI Directive aims at harmonising the liability regime applicable when AI systems – whether high-risk or low-risk – cause damage.[7]

According to the Commission, existing national liability rules, and fault-based rules in particular, are not suitable for handling liability actions for damage caused by AI-based products and services. The main limitation of these rules in their present form is their inability to account appropriately for the complexity, autonomy, and opacity that characterise many AI systems. In many cases, victims may find it difficult or excessively burdensome to identify who is liable, and therefore to prove the existence of the requirements for an action in tort under the current liability regime. Moreover, the supply chain of AI systems involves several actors, making the attribution of liability even more complex. In an accident involving a self-driving car, for instance, liability for the damage caused could fall on the driver, the designer of the self-driving system, the producer of the sensor software or of the sensors themselves, the vehicle manufacturer, the parties who provided the relevant data, cyber-attackers, a combination of these parties through contributory negligence, or someone else entirely. Further difficulties in the attribution of liability may arise from the fact that some AI systems are capable of autonomously modifying themselves, whether through the processing of new data (self-adaptation), the updates to which they are subject, or their continuous interaction with their surroundings, other systems, and data sources.

As well as strengthening protections for victims, the AI Directive aims to create a consistent approach to legislative intervention across the Member States. It is hoped that this will produce a landscape of legal certainty for companies that wish to develop or use AI within the internal market. Alongside this push for legal certainty, the Commission is also pursuing methods of creating a favourable and trust-based ecosystem for the development of, use of, and investment in AI systems. This has so far involved further safety-related legislative initiatives, such as the new “Machinery Regulation 2021”[8] and the review of the product liability directive (“Product Liability Directive”).[9] The relationship between the Product Liability Directive and the AI Directive is particularly interesting, given that they take complementary approaches to attributing liability. The former deals with strict producer liability, while the latter, as mentioned above, deals with tortious actions at a national level and aims at ensuring compensation for all possible damage and injuries.

The Commission estimates that the AI Directive will spur the growth of the European AI market, generating additional value of approximately EUR 500 million to EUR 1.1 billion.

The choice of a directive as legislative instrument and the two-step approach

In the explanatory memorandum accompanying the AI Directive, the Commission set out the reasons for choosing a directive as the legislative instrument. Non-binding soft law (e.g. recommendations) would not be complied with as intended and would therefore not have the desired effect. A set of rules directly applicable in all Member States (a regulation), by contrast, would be too rigid for the field of tortious liability, which is rooted in the specific and long-established legal traditions of each Member State. A directive therefore leaves the Member States more flexibility in its internal transposition while requiring stricter compliance than its soft-law counterparts.

The AI Directive is constructed on a two-step model. Its first aim is to adapt and coordinate the national liability regimes of the Member States for damage caused by AI, without completely overturning them. As described below, the text envisages an easing of the burden of proof for victims, including the proof of the causal link between the defendant’s fault and the act or omission, resulting from the use of an AI system, that gave rise to the damage.

The second phase then requires an assessment of the effectiveness of the measures taken during the first, in light of future technological, regulatory, and jurisprudential developments and of the need to harmonise other elements of national tort law, including the possible introduction of a strict liability regime and of compulsory insurance coverage.

The rebuttable presumption and the differentiated regimes for high-risk, low-risk and personal-use AI systems

As noted above, the AI Directive aims to ease the claimant’s burden of proof. This is achieved through two main instruments: i) a rebuttable presumption and ii) disclosure of information.

The former (Article 4(1)) is intended to make it easier for victims to prove the causal link between the fault of the defendant-injurer and the output produced by the AI system, or the failure of the AI system to produce the output, that gave rise to the damage. The AI Directive does not, therefore, go so far as to shift the burden of proof onto the defendant (e.g. suppliers or manufacturers of the AI system), as this is considered too burdensome (and may in fact stifle innovation and the adoption of AI-based products and services). According to some scholars, a shift of the burden of proof could even lead to excessive contentiousness, with a proliferation of litigation by victims against multiple potentially liable parties.[10]

Instead, the rebuttable presumption applies only if a national court finds it excessively difficult for the victim-claimant (or any other claimant) to prove the causal link and a number of specific conditions are met. These include that:

  • the claimant has proven, or the court has presumed, the fault of the defendant, or of a person for whose behaviour the defendant is responsible, consisting in the failure to comply with a duty of care intended to protect against the damage that has occurred (e.g. a duty of care under the AI Act or other national or European legislation);
  • it is reasonably likely that, based on the circumstances of each case, the defendant’s negligent conduct affected the output produced by the AI system or the AI system’s inability to produce an output; and
  • the claimant has proved that the output produced by the AI system or the AI system’s inability to produce an output caused the damage.

Furthermore, the AI Directive provides for a differentiated regime for high-risk AI systems, as qualified by the AI Act.[11] In actions against suppliers or users of such systems, the application of the presumption of a causal link is limited to cases of non-compliance with certain obligations under the AI Act (Article 4(2) and (3)). Moreover, the presumption of a causal link does not apply if the defendant proves that the claimant can reasonably access sufficient evidence and expertise to prove such a link (Article 4(4)).

The AI Directive also establishes a further differentiated regime for the situation where the AI system from which the alleged damage arose was used in the course of a personal, non-professional activity (Article 4(6)). In this case, the presumption of a causal link only applies if the defendant materially interfered with the conditions of operation of the AI system, or if the defendant was required and able to determine the conditions of operation of the AI system and failed to do so.
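For readers who find it easier to follow the cumulative test when it is set out schematically, the short Python sketch below models the conditions described above as a simple decision procedure. It is purely illustrative: the data structure, field names, and function are our own invention, the boolean inputs are crude stand-ins for what are in practice nuanced judicial assessments, and nothing in it reflects an official formulation of the test.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    """Illustrative (hypothetical) model of the facts relevant to the Article 4 presumption."""
    causal_link_excessively_hard: bool  # court finds proof of causation excessively difficult
    fault_established: bool             # defendant's fault proven or presumed (breach of a duty of care)
    fault_likely_affected_output: bool  # reasonably likely the fault influenced the (missing) output
    output_caused_damage: bool          # the output, or the failure to produce one, caused the damage
    high_risk_system: bool              # system qualifies as high-risk under the AI Act
    ai_act_obligation_breached: bool    # defendant breached a relevant AI Act obligation (Art. 4(2)-(3))
    claimant_can_access_evidence: bool  # claimant can reasonably access sufficient evidence (Art. 4(4))
    personal_use: bool                  # system used in a personal, non-professional activity (Art. 4(6))
    defendant_interfered: bool          # defendant materially interfered with (or failed to set) operating conditions

def presumption_of_causation_applies(c: Claim) -> bool:
    """Sketch of when the rebuttable presumption of a causal link would apply."""
    # Base cumulative conditions (Article 4(1))
    if not (c.causal_link_excessively_hard
            and c.fault_established
            and c.fault_likely_affected_output
            and c.output_caused_damage):
        return False
    # High-risk systems: presumption limited to AI Act non-compliance, and excluded
    # where the claimant can reasonably access sufficient evidence and expertise
    if c.high_risk_system and (not c.ai_act_obligation_breached or c.claimant_can_access_evidence):
        return False
    # Personal, non-professional use: presumption only where the defendant materially
    # interfered with, or was required and able to determine, the conditions of operation
    if c.personal_use and not c.defendant_interfered:
        return False
    return True
```

Even in this schematic form, the presumption remains rebuttable: a defendant against whom it applies can still displace it by proving that its fault did not cause the output (or the failure to produce an output) in question.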

The disclosure of evidence and the balance with trade secret protection

In addition to the rebuttable presumption of causation, with respect to high-risk AI systems the AI Directive gives national courts the power to order the disclosure of evidence by the provider, or by another person bound by the provider’s obligations, where the provider has refused to comply with the same request made by the victim-claimant (or any other person entitled to make it) (Article 3(1)). To obtain such an order, the claimant must present sufficient evidence to support the claim and must have made every proportionate effort to obtain the evidence from the defendant. In addition to disclosure, the AI Directive also provides that the claimant may request the preservation of evidence (Article 3(3)).

The AI Directive also requires national courts to consider the legitimate interests of all parties when determining whether an order for the disclosure or preservation of evidence is proportionate (Article 3(4)). Specifically, it makes express reference to the trade secret protections under EU Directive 2016/943 (the so-called “Trade Secret Directive”) and national transposing legislation, leaving national courts to make the delicate assessment of whether disclosure and preservation should prevail over the protection of secrets.

The AI Directive provides that, where the disclosure of a trade secret, or an alleged trade secret, is ordered in judicial proceedings, the relevant national courts are authorised, upon a duly reasoned request by a party or of their own motion, to adopt the specific measures necessary to preserve secrecy. Among such measures, for example, the Trade Secret Directive provides for the possibility of limiting access to documents, hearings, recordings, and transcripts to a smaller number of persons, and of redacting the sensitive parts of rulings.[12]

If the defendant does not comply with the disclosure or preservation order, a rebuttable presumption of non-compliance with a relevant duty of care applies (Article 3(5)).
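The disclosure mechanism of Article 3 can be sketched in the same illustrative style. As before, the function and its boolean inputs are our own simplifications, assumed purely for exposition; the plausibility and proportionality assessments are in reality discretionary matters for the national court.

```python
from enum import Enum, auto

class Outcome(Enum):
    """Illustrative outcomes of a request for a disclosure or preservation order."""
    NO_ORDER = auto()                 # the court declines to order disclosure
    ORDER_GRANTED = auto()            # disclosure ordered and complied with
    PRESUMED_BREACH_OF_DUTY = auto()  # rebuttable presumption under Article 3(5)

def disclosure_flow(provider_refused_request: bool,
                    claim_plausibly_supported: bool,
                    order_proportionate: bool,
                    defendant_complies: bool) -> Outcome:
    """Sketch of the Article 3 disclosure mechanism for high-risk AI systems."""
    # A court order presupposes that the provider refused the claimant's request and
    # that the claimant presented sufficient evidence to support the claim (Art. 3(1))
    if not (provider_refused_request and claim_plausibly_supported):
        return Outcome.NO_ORDER
    # The order must be proportionate, weighing the legitimate interests of all
    # parties, including trade secret protection (Art. 3(4))
    if not order_proportionate:
        return Outcome.NO_ORDER
    # Non-compliance triggers a rebuttable presumption of non-compliance
    # with the relevant duty of care (Art. 3(5))
    if not defendant_complies:
        return Outcome.PRESUMED_BREACH_OF_DUTY
    return Outcome.ORDER_GRANTED
```

The point the sketch makes explicit is that non-compliance with a disclosure or preservation order does not merely risk procedural sanctions: it reverses the evidential position, leaving the defendant to rebut a presumed breach of its duty of care.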

The Directive’s transposition and evaluation of the results achieved: a pragmatic approach

Once the AI Directive has been transposed in all Member States and five years have passed, the Commission will submit a report to the European Parliament, the Council, and the Economic and Social Committee assessing the achievement of the intended goals (Article 5). The Commission will also consider whether it is appropriate to provide for a strict liability regime for claims against operators of certain AI systems, and for compulsory insurance coverage.

The AI Directive does not directly address the debate concerning the possible assignment of an “electronic personality” to certain AI systems/robots with a very high level of autonomy (comparable to an entity’s legal personality), such that liability for the damage caused in these circumstances could be placed directly on the systems themselves.[13] This approach was, however, endorsed by the European Parliament in a 2017 resolution, which stated that the obligation to pay compensation for damage should in any case be transferred to the owner of the AI system/robot and discharged through a compulsory insurance scheme.[14] Such considerations are likely to be the subject of future debate as the developing liability regime matures alongside AI.

The current text of the proposed AI Directive will be open to stakeholder comments for a minimum period of eight weeks.[15] All comments received will be summarised by the Commission and presented to the European Parliament and the Council to feed into the legislative debate.

Find out more

For more information on AI and the emerging legal and regulatory standards, visit DLA Piper’s focus page on AI.

You can find a more detailed guide on the AI Act and what’s in store for AI in Europe in DLA Piper’s AI Regulation Handbook.

To assess your organisation’s maturity on its AI journey (and check where you stand against sector peers) you can use DLA Piper’s AI Scorebox tool.

You can find more on AI and the law at Technology’s Legal Edge, DLA Piper’s tech-sector blog.

DLA Piper continues to monitor updates and developments of AI and its impacts on industry in the UK, EU, and across the world. For further information or if you have any questions, please contact the authors or your usual DLA Piper contact.


[1] “Software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”, Art. 3(1) AI Act.

[2] Available here: https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/13601-Liability-rules-for-Artificial-Intelligence-The-Artificial-Intelligence-Liability-Directive-AILD-_en.

[3] Available here: https://op.europa.eu/it/publication-detail/-/publication/1c5e30be-1197-11ea-8c1f-01aa75ed71a1.

[4] White Paper on Artificial Intelligence – A European approach to excellence and trust (COM(2020) 65 final), https://ec.europa.eu/info/sites/default/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf, in which the Commission listed liability as one of the main risks associated with the use of AI systems. The White Paper was followed, on 9 June 2020, by the Council conclusions on “Shaping Europe’s Digital Future”, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52020XG0616%2801%29, and by the European Parliament resolution of 20 October 2020 with recommendations to the Commission on a civil liability regime for artificial intelligence (2020/2014(INL)), https://www.europarl.europa.eu/doceo/document/TA-9-2020-0276_EN.html.

[5] See https://www.technologyslegaledge.com/2022/01/liability-in-the-digital-and-ai-age-eu-looks-beyond-the-ai-act/. The consultation webpage is available here: https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/13601-Liability-rules-for-Artificial-Intelligence-The-Artificial-Intelligence-Liability-Directive-AILD-_en.

[6] Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206.

[7] According to the risk-based approach specific to the AI Act (see above, note 6).

[8] Proposal for a Regulation of the European Parliament and of the Council on machinery products https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0202. On this topic, see https://www.technologyslegaledge.com/2022/04/the-interplay-between-the-new-machinery-regulation-and-artificial-intelligence-iot-cybersecurity-and-the-human-machine-relationship/ and https://www.technologyslegaledge.com/2022/03/new-regulation-on-machinery-products-and-the-digital-transition-obligations-for-manufacturers-distributors-and-importers/.

[9] Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products, and the recent proposal for a directive published on 28 September 2022, available here: https://single-market-economy.ec.europa.eu/document/3193da9a-cecb-44ad-9a9c-7b6b23220bcd_en.

[10] E.g. B. Schütte et al., Damages Liability for Harm Caused by Artificial Intelligence – EU Law in Flux, Helsinki Legal Studies Research Paper no. 69, January 2021, p. 26.

[11] AI Act, Art. 6 et seq.

[12] Art. 9(2) of the Trade Secret Directive.

[13] Among the first scholars to propose this solution, Lawrence B. Solum, Legal Personhood for Artificial Intelligences, North Carolina Law Review, vol. 70, 1992. More recently, N. Petit, Law and Regulation of Artificial Intelligence and Robots: Conceptual Framework and Normative Implications, working paper, 2017; S. Beck, The Problem of Ascribing Legal Responsibility in the Case of Robotics, AI & Soc., 2016, 476 et seq.; D. C. Vladeck, Machines Without Principals, see above, 117 et seq.; C. Leroux et al., Suggestion for a Green Paper on Legal Issues in Robotics, Contribution to Deliverable D.3.2.1 on ELS Issues in Robotics, 2012, 58 et seq.

[14] European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), available here: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52017IP0051&rid=9.

[15] The consultation is available here: https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/13601-Liability-rules-for-Artificial-Intelligence-The-Artificial-Intelligence-Liability-Directive-AILD-_en.