On 1 August 2024, the AI Act – the world's first comprehensive legal regulation of artificial intelligence systems and models – came into force. The purpose of this legislation is to ensure that AI is developed and used safely and in line with European values. It does not, however, address questions of liability. It is also important to note that the AI Act is not the only piece of legislation dealing with AI that may affect its development in the EU. Other existing or planned instruments include the General Data Protection Regulation (EU) 2016/679; the Product Liability Directive, which allows persons harmed by software (including AI software) to seek compensation from its manufacturer; the General Product Safety Regulation (EU) 2023/988; and intellectual property laws under the national laws of EU Member States.
According to a 2020 study conducted for the European Commission, liability is one of the top three barriers to the use of AI by companies. Current national liability laws in Member States are not designed to address claims for damages caused by AI-based products and services. Victims must prove the unlawful act or omission of the person responsible for the damage. The features of AI can make it difficult and costly for victims to identify the liable party and to prove that the conditions for claiming damages are met.
On the other hand, where such proceedings do arise, national courts may favour the injured party and adjust the application of the applicable law on an ad hoc basis to reach a fair outcome. Unfortunately, this approach creates legal uncertainty that can be detrimental to companies, which cannot accurately assess their liability risks or protect themselves against them. The problem is exacerbated where a company operates across borders.
To address these issues, the EU decided to draft the Artificial Intelligence Liability Directive (AILD).
The aim of the AILD is to establish unified EU-wide requirements for certain aspects of non-contractual civil liability for damage caused by AI systems. Given the difficulty of demonstrating a causal link between the harm suffered and the AI system, and of identifying the responsible party, the AILD introduces two solutions designed to remove these barriers, or at least to lower them enough for claims to succeed.
Under the AILD, a potential claimant may ask the court to order the disclosure of relevant evidence about specific AI systems suspected of having caused harm. The request must be supported by facts and evidence sufficient to establish the plausibility of the contemplated claim. Courts will, however, limit disclosure to the documents necessary to assess a claim for damages, and order only such measures to preserve evidence as are needed for the analysis of the case. Where a document or piece of information constitutes a trade secret that the court deems confidential, the court may, on request or of its own motion, take specific measures to preserve its confidentiality.
It may be difficult for the injured party to demonstrate a causal link between the damage and a wrongful act or omission connected with the AI system. The AILD therefore introduces a rebuttable presumption: for the purposes of a damage claim, national courts are to presume a causal link between the defendant's fault and the output produced by the AI system, or its failure to produce an output.
The EU, through the AILD, aims to facilitate victims' ability to pursue a claim related to AI usage, and to provide stability and predictability for companies developing and using AI. It remains to be seen whether the AILD will be adopted in its currently discussed form.
authors: Daria Rutecka, Piotr Podsiedlik