AI & Law #4: AI, Liability and the European legal landscape

In today’s digital age, products have become significantly more complex. While new and emerging technologies offer societal and economic benefits, they also bring potential risks. Certain characteristics of these technologies render our current liability rules ineffective, creating legal uncertainty for businesses and making it harder for injured parties to obtain compensation for damages.

This is especially true for AI-based systems, which are characterized by autonomous conduct, continuous adjustment and limited predictability and transparency. With the prospect of a new liability regime, the European Commission wants to enhance trust in AI systems and boost AI innovation in the EU.

Two proposals

In September 2022, the European Commission launched two complementary proposals to change the current non-contractual liability rules:

  • A proposal for the revision of the Product Liability Directive (Revised PLD)[1], replacing the original Product Liability Directive, which dates back to 1985 (Original PLD); and
  • A proposal for a directive adapting non-contractual civil liability rules to AI (AILD)[2].

The Revised PLD aims at making the product liability rules fit for the digital age, for example by bringing software within their scope. The goal of the AILD is to guarantee that (natural or legal) persons harmed by AI systems (as defined in the draft AI act) enjoy a level of protection equivalent to that of persons harmed by other technologies in the EU.

These proposals complement the existing contractual liability rules at EU level (such as the Sale of Goods Directive and the Digital Content and Services Directive), on the basis of which consumers can hold their contracting party liable when products or (digital) services do not comply with their agreement or do not work properly.

Currently, the Revised PLD and the AILD are only proposals. They still need final approval at EU level and must be transposed into national law before they can actually take effect. Although changes are still possible, we already set out below some key insights into the main changes these proposals bring along.

The Revised PLD

The Original PLD introduced a no-fault (strict) liability system for producers. This means that the producer’s liability is not linked to any error or negligence on his part: he is liable for defective products, whether or not the defect is his fault. In order to be compensated under the PLD, the injured party (a natural person) needs to demonstrate that (i) the product is defective, (ii) he suffered damage and (iii) there is a causal link between the product’s defect and the damage.

The Revised PLD contains new clauses relating to liability for products such as software, whether embedded or stand-alone, including AI systems and AI-enabled goods. The three main changes concern the broadening of the notions of “defects” and “damages” and the alleviation of the burden of proof for the injured party.

  1. Defects: Under the Original PLD, the producer was only liable for defects which existed at the time the product was placed on the market or put into service. The Revised PLD provides for certain circumstances under which the producer remains liable for defects even if they arise after market launch, namely in case of (i) machine learning, (ii) software updates under the producer’s control[3] and (iii) failure to address cybersecurity vulnerabilities[4].
  2. Damages: The Revised PLD extends the types of damages for which compensation can be claimed from death, personal injury and material damage to (i) medically recognized psychological damage[5] and (ii) loss or corruption of data not used exclusively for professional purposes. For latent health problems, the liability period is extended from 10 to 15 years. Furthermore, the EUR 500 threshold for material damage no longer applies.
  3. Burden of proof: The principle remains that the injured party must show that the product is defective and that he suffered damage due to this defect (i.e. a causal link between defect and damage). However, the Revised PLD establishes two new principles:
  • Once the injured party has provided facts and evidence that are substantial enough to demonstrate the plausibility of the compensation claim, the producer is obliged to disclose the necessary information in court.[6]
  • The “presumption of defectiveness and causal link” entails that defectiveness is presumed when (i) a product does not meet the obligatory safety requirements, (ii) the producer fails to disclose information when obliged to, or (iii) an obvious malfunction caused the damage. The causal link is presumed when (i) the damage is of a kind typically consistent with the defect at hand, or (ii) establishing it is excessively difficult due to technical or scientific complexity, as with ‘black box’ AI systems.

From a practical point of view, we therefore expect that producers will have to be extra diligent in the design, monitoring and maintenance of their AI systems. This will allow producers to disclose the information required by users on the one hand, while reducing their liability risk on the other hand.

The AILD

The AILD, on the other hand, introduces a fault-based liability regime, which requires proof of the fault of the liable person, the damage and the causal link between the two. While the draft AI act seeks to reduce risks to safety and fundamental rights, it cannot prevent damage from occurring (see the first blogpost in this series). The liability regime of the AILD aims at compensating any type of damage caused by AI systems.

To this end, the AILD establishes two mechanisms to facilitate obtaining compensation for damage resulting from (the lack of) output of an AI system.

1. Rebuttable presumption of causality: National courts shall presume that the causal link between the fault of the AI system provider and the (failure to produce) output by the AI system is established when the following three conditions are met:

  • The injured party proves non-compliance with an EU or national duty of care (i.e. fault) by the AI system provider. For providers of high-risk AI systems, this entails inter alia compliance with the AI act provisions relating to training data, human oversight, transparency, etc. With respect to non-high-risk AI systems, national courts will only apply the presumption of causality if it is excessively difficult for the injured party to demonstrate the causal link.
  • It is reasonably likely that the non-compliance has influenced the (failure to produce) output of the AI system. By way of illustration, the mere failure to register with a certain authority would not be regarded as “reasonably likely” to have influenced the (failure to produce) output.
  • The injured party proves that the (failure to produce) output caused the damage.

 

This system alleviates the burden of proof for the injured party, but does not reverse the burden of proof onto the AI system provider. The AI system provider can still rebut the presumption of causality, for example by demonstrating that his fault could not have caused the damage.

2. Disclosure of evidence: National courts can order the disclosure of evidence about high-risk AI systems suspected of having caused damage, which allows injured parties to identify the liable party. Thus, if the injured party presents the court with sufficient evidence to support the plausibility of his claim and has made reasonable efforts to obtain the evidence from the AI system provider (who is subject to the obligations under the AI act), this provider can be ordered to share various types of information, such as specific documentation and logging information. Given these potential requests for information, it is important for AI system providers to properly document the operation and logging of their system, for instance as sketched below.
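Purely by way of illustration, such documentation could take the form of a structured, append-only audit trail that records each decision of the AI system together with a timestamp and the model version used. The minimal Python sketch below is our own hypothetical example, not a format prescribed by the AILD or the AI act; all names in it (log_inference, audit_log.jsonl, model_version) are illustrative assumptions.

```python
# Minimal sketch of an append-only audit log for an AI system's decisions.
# All names (AUDIT_LOG_PATH, log_inference, model_version) are hypothetical
# illustrations; neither the AILD nor the AI act prescribes this format.
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG_PATH = "audit_log.jsonl"  # one JSON record per line, append-only

def log_inference(model_version: str, model_input: str, model_output: str) -> None:
    """Append one timestamped record per inference to the audit log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Store a hash of the raw input so the log can corroborate what was
        # processed without keeping personal data or trade secrets in plain text.
        "input_sha256": hashlib.sha256(model_input.encode("utf-8")).hexdigest(),
        "output": model_output,
    }
    with open(AUDIT_LOG_PATH, "a", encoding="utf-8") as log_file:
        log_file.write(json.dumps(record) + "\n")

# Example: record a single (hypothetical) decision of a credit-scoring system.
log_inference("credit-scoring-v2.1", "applicant #1042 payload", "score=0.73; declined")
```

Hashing the raw input rather than storing it keeps personal data and trade secrets out of the log itself, which sits well with the requirement, discussed below, that disclosure remain necessary and proportionate.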

Nevertheless, national courts should limit the disclosure of evidence to what is necessary and proportionate to support the claim of the injured party. The legitimate interests of all parties should be taken into account, as should the protection of trade secrets and confidential information.

If disclosure is ordered by the court and the defendant does not provide the requested evidence, the court can presume that the evidence would prove non-compliance with a duty of care. The defendant can, however, again rebut this presumption.

Moreover, the AILD contains a review clause, meaning that the AILD will be re-examined after five years. It will then be assessed whether there is a need for additional no-fault liability rules, along with mandatory insurance for the operation of particular AI systems.

In the meantime, the AILD proposal has already sparked quite some controversy, especially on the side of producers. Several actors have expressed their views on the applicability of the AILD and its coherence with other legislation[7], as well as on the chilling effect the AILD could have on innovation.[8]

Conclusion

Overall, it can be concluded that both the Revised PLD and the AILD proposals maintain a balance between the rights and obligations of users and providers (of AI systems). The burden of proof on the shoulders of the injured party is lightened, without being fully transferred to the (AI system) provider. Both proposals can therefore be considered a move in the right direction towards clear liability rules in a world where both innovation and redress for damage caused by innovation have their place.

 

Olivier Van Raemdonck, Partner

Ward Verwaeren, Senior Associate

Aida Kaloci, Associate

 


[1] Proposal for a Directive of the European Parliament and of the Council on liability for defective products, 28 September 2022, COM/2022/495 final.

[2] Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence, 28 September 2022, COM/2022/496 final.

[3] The Business Software Alliance (BSA) has suggested aligning the liability period for software updates with the expected lifespan of a product or five years, whichever is shorter.

[4] Orgalim (representing Europe’s technology industries) asked that products only qualify as defective on account of cybersecurity vulnerabilities if the product does not comply with obligatory cybersecurity requirements.

[5] BSA has also suggested clarifying the notion of ‘medically recognized harm to psychological health’, for example by making clear what injured parties should prove concretely (e.g. a diagnosis by a healthcare provider).

[6] DigitalEurope (representing companies in the electronics and telecommunications sector) has proposed ensuring the protection of trade secrets when disclosure of evidence is requested.

[7] The Future of Life Institute (a non-profit organization working on reducing risks from transformative technologies) argued that it should be clear which types of damages can be claimed and which operators can be held liable.

[8] The Computer & Communications Industry Association (CCIA), the Developers Alliance and the App Association expressed their fear that the AILD will result in an increased number of liability claims, thereby discouraging innovative initiatives.
