In 2021 the European Commission proposed a new legal framework for AI that aims to address the risks generated by specific uses of AI, either by prohibiting specific uses, or by requiring strict risk mitigation measures. The Artificial Intelligence Act is the centerpiece of that framework. Do note that the text of this act is still being negotiated!
At the same time, the European Commission faced a related, but different, challenge: to make sure that persons harmed by applications of AI enjoy the same level of legal protection as persons harmed by other technologies and products. These persons may be direct users of the AI, but they may also be any other stakeholder harmed by the AI.
In October 2020, the European Parliament adopted an own-initiative resolution, based on Article 225 TFEU, requesting the European Commission to propose legislation to solve the problem of civil liability for AI. On 28 September 2022, the European Commission delivered on the European Parliament’s request with the Proposal for an Artificial Intelligence Liability Directive.
The Artificial Intelligence Liability Directive proposal lays down uniform rules for certain aspects of non-contractual civil liability for damages caused with the involvement of AI. It does so by modernizing the existing, familiar legal framework for civil liability involving technologies and products that is based on the Product Liability Directive. It’s just 4-5 pages of substance prefaced with plenty of explanation and justification.
What changes for organizations using AI? Keep in mind that, generally speaking, two key questions in determining civil liability for damage caused by technology are the following:
- When can an application of the technology be said to have caused the damage?
- Is that damage the responsibility of the provider of the technology or the user of that technology?
The Artificial Intelligence Liability Directive does not address in detail the distribution of liability between provider and user. The proposed Artificial Intelligence Act already details their specific duties, and in doing so sets the standards for determining whether an organization is at fault. What those duties are depends on the risk category of the AI system. If the system is classified as high risk under those rules, there are many possible grounds for finding that the provider or user caused the damage; if it is classified as low risk, there are few such grounds.
The organization as provider of a potentially high-risk AI technology does well to think of itself as a manufacturer of pharmaceuticals: research the impact of the technology thoroughly and spare no expense to write a good (proverbial) package leaflet with detailed instructions and warnings for the user. And avoid grand claims about the technology’s properties, as these may come back to bite you.
The organization as user does well to think of itself as a pharmacist: educate yourself, see to it that the instructions on the package leaflet are followed to the letter, continuously monitor how the AI is working to the best of your ability, and inform stakeholders of problems you discover as soon as possible.
It is of course sensible to do this regardless of whether the provider and user are separate economic entities! AI developed in-house may cause damage, after all. And when it does, the buck will stop at somebody’s desk: the provider’s or the user’s. It is good to think about this distribution of responsibilities before (sh)it happens.
Procurers in the organization of course do well to inform themselves thoroughly about what a good package leaflet should include. More generally, they should be aware that 1) AI risk can also be brought into the organization indirectly, by sourcing output data created by AI technology from third parties, and 2) further in-house development of the AI product using the organization’s own data makes the organization both provider and user of the resulting AI application!
The Artificial Intelligence Liability Directive does create clarity about responsibility for damage. The innovation it adds to the Artificial Intelligence Act is the way it distributes the burden of proof between the claimant, the (allegedly) injured person, and the defendant, who is either the provider or the user of the AI technology. The claimant who suffered damage may of course be the user of the AI or any other person affected by it!
Firstly, courts may presume a causal link between a demonstrated fault of the defendant and the output produced by the AI system (or the failure of the AI system to produce an output), if the claimant can demonstrate that the (absence of) output gave rise to damage.
The burden of proving that no causal link exists between the fault and the damage is – by default at least – on the defendant. The claimant does not have to prove that. That is clearly a novelty in product liability.
Secondly, within certain limits, courts can order the defendant – in its role as provider or user – to disclose evidence supporting a (potential) claim for damages. If the defendant fails to comply with such an order, the court will presume that the defendant was at fault. This is a novelty as well.
By putting the burden of proof squarely with the provider or user by default, the Artificial Intelligence Liability Directive reinforces the importance of AI transparency and AI explainability. In addition, it stresses the importance of well-functioning AI Governance, Risk Management, and Compliance practices aligned with the AI Act, certainly if the AI is classified as high risk.
Detailed development documentation, formalized decision processes about impact and risk, and audit trails do not exist just to satisfy C-level management, auditors, supervisory authorities, or paying customers who want assurance: stakeholders who feel they were harmed by AI may obtain them directly as evidence through the courts! That is something to keep in mind when such documents are written.
Finally, do take note of Article 6 of the Artificial Intelligence Liability Directive proposal, which quietly amends Annex I to Directive (EU) 2020/1828 (representative actions for the protection of collective interests of consumers) to include itself. This is about the possibility of collective claims!
Since the Artificial Intelligence Liability Directive is a directive, not a regulation like the AI Act, it has to be transposed into member state law within two years, resulting in some inevitable variation in how it will work in practice across member states. In Dutch law, product liability is regulated in the Civil Code (Art. 6:185-6:190 BW). Dutch law has turned out to be quite attractive for collective claims relating to privacy infringements based on the GDPR, and it is reasonable to expect the same for collective claims relating to AI. That means that even very limited damage caused by AI may be actionable – if the group that suffered that limited damage is big enough!
Collective actions: High-profile examples of collective actions based on the GDPR in Dutch law include the cases against TikTok, Oracle, and Salesforce. Note that high-profile collective actions against automated decision-making processes, such as those involving SyRI or Deliveroo’s Frank algorithm, were based on other legal grounds.