On 28 September, the European Commission (“Commission”) announced that it had adopted two proposals aimed at bringing European liability rules into the digital age and addressing many of the new issues arising from novel technologies. The first updates the existing regime under the Product Liability Directive (“Liability Directive”), modernising the rules on strict liability of manufacturers for defective products. The second introduces the proposed AI Liability Directive (“AI Directive”), setting out a framework that allows victims of AI-related damage to seek compensation.
The adoption follows the public consultation that closed on 10 January 2022 and forms part of a coordinated European approach to preparing for the increasing prevalence of AI, the Commission having identified a number of specific challenges the technology poses to the current regulatory framework. Alongside the Commission’s proposed AI Regulation of April 2021 (“AI Act”), which focuses more on the monitoring and prevention of harm, the AI Directive aims to facilitate and harmonise a regime for addressing harm in cases where the involvement of AI makes liability difficult to establish.
What changes can we expect?
As noted above, the two proposals amend the existing regime and introduce a new set of rules focused on AI.
Product Liability Directive:
The changes to the existing Liability Directive modernise and reinforce the current well-established rules on strict liability, compensation, and damages. They do so through the following:
- Ensuring liability rules are made clear for companies that substantially modify products;
- Allowing compensation for damage when technologies, such as robots or drones, are made unsafe by software updates or by failures to address cybersecurity vulnerabilities;
- Creating a level playing field between EU and non-EU manufacturers by enabling consumers to seek compensation from importers of products to the EU as an additional avenue; and
- Putting consumers on an equal footing with manufacturers by: requiring manufacturers to disclose evidence, increasing flexibility in time restrictions for claims, and alleviating the burden of proof for victims in complex claims.
AI Liability Directive:
The new AI Directive aims to lay down a homogenous set of rules for access to information and to alleviate the burden of proof for damage caused by AI systems. In doing so, the AI Directive simplifies the legal process for victims in proving that a fault led to damage. This is achieved through two features:
- Where a fault has been established and a causal link to the AI is reasonably likely, a presumption of causality will address the difficulty victims face in explaining how the harm is linked to the fault (particularly in the case of complex AI systems, which are often perceived as ‘black boxes’); and
- The provision of more tools to seek reparations by introducing the right of access to evidence from companies and suppliers in cases where high-risk AI is involved.
As noted by the Commission, the new rules strike a balance between protecting consumers and fostering innovation by removing certain barriers to accessing compensation, while giving those in the AI sector the right to contest a liability claim based on the presumption of causality.
How does the AI Liability Directive fit with the Liability Directive?
The Commission notes that the two Directives work in tandem. The updated Liability Directive will apply to claims where damage has been caused by defective products, including damage to health, property, and data. The new AI Directive goes on to specifically target the regime where damage has been caused by an AI system.
A recurring similarity within both Directives is the introduction of certain tools easing the burden of proof (including the right to disclosure of evidence and rebuttal of presumptions) in aid of those seeking to claim damage. This has been done through similar wording across the Directives to ensure “consistency [in approach], regardless of the compensation route chosen”.
How does the AI Liability Directive fit with the proposed AI Act?
As noted above, and highlighted in a Q&A by the Commission, the AI Directive and the AI Act are “two sides of the same coin” and apply at different points in time to reinforce one another.
On the one hand, the AI Act introduces rules on the safety of development and use, designed to reduce the overall risk to stakeholders and prevent damage where possible. On the other, the AI Directive engages when risks that could not be eliminated result in damage to persons or property. It in effect provides a safety net for cases where the AI Act’s preventive measures fall short.
Both the AI Directive and the AI Act use the same definitions. This preserves the distinction between high-risk and non-high-risk AI, while building on the documentation and transparency requirements included within the AI Act. The result is a holistic approach to safety and regulation that allows claims to be brought with relative ease when things have gone wrong.
What happens next?
The Commission’s proposals will now need to be adopted by both the European Parliament and the Council.
Five years after the new rules take effect, the Commission will assess whether there is also a need for no-fault liability provisions for AI-related claims.
Find out more
DLA Piper will shortly publish an in-depth analysis of the draft AI Liability Directive.
For more information on AI and the emerging legal and regulatory standards, visit DLA Piper’s focus page on AI.
You can find a more detailed guide on the AI Act and what’s in store for AI in Europe in DLA Piper’s AI Regulation Handbook.
To assess your organisation’s maturity on its AI journey (and check where you stand against sector peers) you can use DLA Piper’s AI Scorebox tool.
You can find more on AI and the law at Technology’s Legal Edge, DLA Piper’s tech-sector blog.
DLA Piper continues to monitor updates and developments of AI and its impacts on industry in the UK and abroad. For further information or if you have any questions, please contact the authors or your usual DLA Piper contact.