As AI technologies continue to advance and permeate various sectors, the European Union is taking proactive steps to regulate their impact on society. Two legislative initiatives aim to address the complexities surrounding liability for AI:

  1. The revised EU Product Liability Directive (Revised PLD)
  2. The EU AI Liability Directive (EU AILD)

Both directives complement the EU AI Act and are part of the EU’s broader strategy to create a comprehensive regulatory environment for AI.

Decoding the revised EU Product Liability Directive: strict liability

Manufacturers or developers could be held strictly liable for the harm their software or AI systems cause.

The Revised EU PLD governs non-contractual strict liability claims. That means harmed individuals can claim compensation for harm caused by a defective product without a prior contractual relationship with the manufacturer and without proving fault or negligence. The revision aims to modernise the framework governing liability for defective products, particularly in light of new technological advancements. While the original directive, established in 1985, focused primarily on physical goods, the latest amendments extend liability to software and AI-driven systems.

In future, the EU Product Liability Directive will apply not only to physical but also to digital goods.

The PLD classifies digital manufacturing files, AI systems, software, and AI-enabled products as ‘products’, ensuring that compensation is accessible for personal injury, property damage, or data loss resulting from a defective AI product. As digital products increasingly require post-market modifications, the PLD proposal also ensures that consumer rights extend to harm caused by defective modified products, holding remanufacturers and other businesses accountable.

Furthermore, the PLD aims to enhance consumer fairness by implementing procedural adjustments. It requires manufacturers to disclose relevant evidence during liability claims to ease consumers’ burden of proof and level the playing field.

Decoding the EU AI Liability Directive: fault-based liability

Due to AI’s complexity and lack of transparency, the AILD proposal seeks to harmonise non-contractual, fault-based liability rules specifically for AI-enabled products and services.

The AILD primarily aims to facilitate claims by harmed individuals or businesses, holding providers, persons subject to the obligations of a provider, or users of AI accountable for harm caused by AI, whether through malfunction, unintended consequences, or bias. The proposal also stipulates a rebuttable presumption of causality: where a claimant can establish fault and a causal link between the AI system’s performance and the damage is ‘reasonably likely’, the law presumes causation. The conditions for this presumption vary depending on whether the AI system is classified as ‘high risk’. For high-risk AI systems, courts may also order the disclosure of relevant evidence to facilitate fair claims.

The European Commission will monitor incidents involving AI systems and may implement further regulations if needed, potentially introducing strict liability for high-risk AI systems and mandatory insurance coverage.

Outlook on the EU liability directives related to AI

The European Parliament formally adopted the revised PLD in March 2024, and the Council followed on 10 October 2024. The directive will enter into force 20 days after its publication, and Member States will then have 24 months to transpose it into national law.

Meanwhile, the AILD proposal is progressing through the EU’s legislative processes, with the European Parliament and Council still working to finalise their positions.

Impact of the EU PLD and AILD

The EU legislators have sought to close the gaps in liability for AI and make it easier for harmed individuals to obtain compensation. The strict liability approach under the revised PLD may especially appeal to claimants. However, a claimant might prefer the AILD where an organisation’s breach of its duty of care does not result in a defective product, which the PLD requires. The AILD may also benefit claimants who wish to bring a claim against the user of an AI system rather than against the manufacturer of the AI product under the PLD.

The AILD also facilitates class actions and collective claims. By adding the AILD to the EU’s Representative Actions Directive (EU 2020/1828), the proposal allows collective consumer claims at the national level in EU Member States, potentially opening the door to more representative actions against AI-related harm.

Expect AI class actions to grow.

Global implications of the PLD and AILD

Similar to the EU AI Act, the PLD and AILD could carry far-reaching extraterritorial impacts beyond the EU. The EU still leads the way in regulating AI and could influence other nations towards a more standardised approach to AI development and usage. For products manufactured outside the EU, the revised PLD allows EU-based consumers to claim compensation from EU-based entities within the supply chain.

Actions you can take next

  • Enhance your understanding of the PLD and AILD by delving deeper into the legislation. You can read the revised EU Product Liability Directive (Revised PLD) and the EU AI Liability Directive (EU AILD).
  • Defend against claims by implementing robust recordkeeping. Both the PLD and the AILD call for more documentation and evidence tracking to support your defence.
  • Update insurance policies and contracts to account for the new liability definitions, and consider risk allocation carefully by asking for our assistance.
  • Explore the potential impacts on your industry by staying informed about the developments in AI legislation through our newsletter.
  • Start a conversation about AI governance in your organisation by asking us to help you with AI GRC.