As artificial intelligence ("AI") increasingly permeates the way we live, work and interact, the probability of loss and damage being caused to individuals and organisations by the use of AI is likely to increase. Those suffering loss and damage will likely seek compensation, and there are concerns that current liability rules are deficient in many jurisdictions.
This insight considers the proposed new liability regime in the EU and how this seeks to address fault-based liability in relation to the use of AI.
On 28 September 2022, the European Commission (the "Commission") introduced the draft Artificial Intelligence Liability Directive ("AI Liability Directive"). The proposal aims to "adapt private law to the needs of the transition to the digital economy" and to make it easier to bring claims for harm caused by AI systems and the use of AI. It addresses the specific difficulties of proving causality and fault in relation to AI systems and seeks to ensure that claimants suffering loss in fault-based scenarios will have recourse to damages or other appropriate remedies.
Current status: The draft AI Liability Directive still needs to be considered by the European Parliament and the Council of the European Union. Once negotiated and adopted, EU Member States will be required to transpose the terms of the AI Liability Directive into national law, likely within two years.
The AI Liability Directive is intended to complement the European Commission’s proposed Regulation on Artificial Intelligence (the "AI Act"), which will classify AI systems by risk and regulate them accordingly. It will also sit alongside proposals for a new Directive on Liability for Defective Products ("Revised PLD"), which will update the EU's product liability framework to better reflect the digital economy and will explicitly include AI products within its scope.
At a fundamental level, the AI Liability Directive is intended to make it easier to bring claims for harm caused by AI.
The proposed AI Liability Directive is part of a broader package of EU legal reforms aimed at regulating AI and other emerging technologies. As currently drafted, the AI Liability Directive is principally aimed at achieving three things.
The AI Liability Directive will apply to providers, operators and users of AI systems, with these terms having the same definitions as in the draft AI Act.
The AI Liability Directive has extraterritorial effect, and broadly captures providers and/or users of AI systems that are available or operate within the EU.
Key provisions / requirements
The key provisions of the AI Liability Directive are: (i) a power for national courts to order the disclosure of evidence about specific high-risk AI systems suspected of having caused damage; and (ii) a rebuttable presumption of a causal link between the defendant's fault and the output (or failure to produce an output) of the AI system that gave rise to the damage.
The AI Liability Directive does not, as currently drafted, address situations where an AI system causes damage but there is no obvious defective product or fault by either the provider or user. However, the European Commission will assess the need for no-fault strict liability rules five (5) years after the entry into force of the AI Liability Directive.
As the AI Liability Directive proposes civil liability rules, there are currently no apparent plans for any regulatory supervision beyond that already set out in the draft AI Act.
The AI Liability Directive will not apply retrospectively and will only apply to causes of action occurring after its (likely two-year) implementation period has expired.
The AI Liability Directive underscores the vital importance of compliance with the AI Act.
In preparation, organisations are advised to conduct thorough risk assessments, examining whether their AI use cases are likely to fall within the scope of the proposed AI Act and, if so, whether they might be categorised as 'high-risk' AI systems. Once these risk assessments are completed, appropriate governance and policies should be put in place to limit the risk of damage being caused by AI systems through incorrect use by, or inaction of, the organisation and its employees.
Businesses should also consider how they will be able to comply with potential disclosure requests, and what this information would look like. For complex AI systems, this should be factored into the development roadmap and governance structure well ahead of the AI Act or AI Liability Directive taking effect.
Outside of compliance with the AI Act and AI Liability Directive, businesses should also ensure that they have appropriate contractual protections in place in relation to the use of AI systems (in particular appropriate warranties and indemnities to cover potential risks when procuring AI systems).
The AI Liability Directive seemingly equates the responsibilities of AI system users and providers to those typically associated with more tangible technologies like industrial machinery. In other words, it proposes that an AI system user will be considered at fault if they misuse the system or disregard instructions. Similarly, a provider would be at fault if the AI system was improperly designed or developed, or if corrective measures were not taken to address identified defects.
However, given the intrinsic complexity and opacity of AI systems—some of which are colloquially termed "black boxes"—the efficacy of this approach in addressing an ever-evolving technological landscape remains uncertain.
Importantly, the AI Liability Directive does aim to establish a clear cause-and-effect relationship between the actions of an individual or organisation and the damage attributable to the AI system in fault-based scenarios. This means that, despite its potential shortcomings in addressing complex liability questions, the introduction of these rules would ensure that victims suffering loss or damage can obtain redress in the most straightforward cases.
Nonetheless, the AI Liability Directive may leave a gap in scenarios where a claimant cannot establish a clear link between the damage caused by the AI system and the defendant's fault, especially where the AI system operates as designed but still results in damage. This exclusion appears deliberate, as the draft proposals reference the European Commission's plan to reassess the need for no-fault liability rules five years after the AI Liability Directive's entry into force.