AI and liability: The EU's framework for artificial intelligence

With the European Parliament having recently agreed on the final text of the AI Act, this article focuses on the EU's proposed "AI strategy package", which could become the global standard by default.

This article was first published by MARQUES on their Class 46 blog on 10 and 11 July 2023, and is republished with their kind permission. D Young & Co author, Gabriele Engels, is Chair of the MARQUES Cyberspace Team.

Background

For the past two years, the EU has been working hard on establishing a new, unified approach to regulating artificial intelligence. As well as bringing existing legal regimes up to date for the twenty-first century, this ambitious project includes introducing brand new legislation aimed specifically at artificial intelligence. Whilst the former goal is to be achieved by reforming the Product Liability Directive (COM (2022) 495 final), the focus of the latter objective is the creation of an AI Act (COM (2021) 206 final) and an AI Liability Directive (COM (2022) 496 final).

The AI Act will impose requirements on manufacturers and operators of AI aimed at preventing the violation of rights. Its approach is universally applicable, based solely on risk and separate from any notion of fault. Liability in cases in which AI is involved and gets something wrong is not the subject matter of the AI Act; these issues are addressed in the AI Liability Directive. Additionally, a revision of the Product Liability Directive would expand existing liability regimes to cover intangible products, such as AI systems.

AI Act

Following the unveiling of the commission's proposal in April 2021, the draft was extensively discussed in both the council and parliament. After the council adopted its position in December 2022, the European Parliament approved the latest iteration of the draft regulation on 14 June 2023. With trilogue negotiations between the three bodies as the next stage to agree on a mutual final draft, the EU moves another step closer to passing an EU Regulation on AI.

The initial commission proposal of the AI Act encompassed a broad definition of AI, characterising it as a software-based technology that generates outputs through interactions with its surroundings. The draft regulation establishes four distinct risk categories of AI: unacceptable, high, low, and minimal risk.

AIs categorised as low or minimal risk, such as chatbots and spam filters, are not obligated to fulfil any specific requirements other than transparency obligations.

Conversely, systems posing an unacceptable risk are outright prohibited. Systems are often categorised as posing an unacceptable risk where fundamental rights are significantly impacted (for example, facial recognition programs deployed in the context of law enforcement which utilise real-time biometric data).

The focus of the regulation is on high-risk AI systems, which are subjected to stringent requirements. Examples include security components embedded within other products, such as drones.

In its common position, the council expanded on several key points of the draft regulation. For instance, it narrowed the definition of AI to only include systems developed through machine learning, as well as logic- and knowledge-based approaches.

Additionally, requirements for high-risk AIs were clarified. Further, an additional layer was added to the classification to ensure that systems which nominally fall under the high-risk classification but in practice pose only a minimal risk are not subject to the same arduous requirements. New provisions to enhance transparency and facilitate user complaints were also introduced.

The text ultimately adopted by the European Parliament on 14 June envisages further amendments. A key addition is the focus on specific rules for generative AI, such as ChatGPT. Such systems would have to comply with additional transparency obligations, including the requirement to disclose that content was generated by AI, the prevention of the generation of illegal content through design precautions, and the duty to publish summaries of copyrighted data used for training purposes.

The list of prohibited unacceptable AI systems is also expanded to include inter alia predictive policing systems, emotion recognition systems in a variety of scenarios (including law enforcement and in the workplace), and the indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases.

Additionally, the high-risk category is to be expanded to encompass AI systems which harm people's health, safety, fundamental rights or the environment, as well as AI which influences political campaigns or is used in recommender systems by very large social media platforms within the meaning of the Digital Services Act, that is, platforms with more than 45 million monthly users in the EU.

A compromise must now be reached between the three drafts to produce a final text which can then be voted into law – possibly by the end of 2023 or the beginning of 2024, with a two-year transitional period to follow.

UK approach to regulating AI

This rules-based and horizontal approach stands in stark contrast to the one adopted by the UK. The UK has decided to take a different route entirely in a post-Brexit world, opting for a light-handed approach which aims to avoid the stifling of innovation under the burden of an entirely new regulatory regime.

For this reason, the White Paper on "a pro-innovation approach to AI regulation", presented to parliament by the Secretary of State for Science, Innovation and Technology in March 2023, does not propose creating new laws or empowering a new regulator. Instead, existing regulators are to be entrusted with responsibility for establishing sector-based approaches tailored to the way AI impacts their individual sectors.

The danger of multiple, highly divergent or even unintentionally overlapping regimes is to be mitigated by implementing overarching core principles relating inter alia to transparency, security, safety and fairness.

In essence, this revised White Paper does not differ significantly from the Policy Paper released in July 2022 which was met with support from industry players. The revised paper additionally identifies central support functions to ensure a level of regulatory coherence between sectors.

Proposal for AI Liability Directive

The rise of AI has led to issues surrounding causation and proof and the realisation that existing liability regimes may not be equipped to deal with such uncertainties. To tackle these challenges, the commission published a proposal for a new AI Liability Directive in September 2022.

Its objective is to establish clarity regarding liability for damages resulting from AI-enabled products and services. It seeks to enable users to receive compensation from technology providers for harm suffered while using AI systems. Such harm includes damage to life, property, health, or privacy due to the fault or negligence of AI software developers, providers, users, or manufacturers.

For the sake of consistency, the draft directive incorporates several essential concepts outlined in the draft AI Act, including the terms "AI system", "high-risk AI system", "provider" and "user".

The proposal encompasses two key measures:

  • a rebuttable presumption of causality that establishes a link between the failure of an AI system and the resulting damage, and
  • access to information regarding specific high-risk AI systems.

Most current liability schemes require the claimant to prove a causal link between an act or omission of the other party and the damage suffered. However, the opacity of autonomous systems and artificial neural networks usually renders the individual AI user incapable of proving such causality.

Introducing a presumption of causality would help resolve this “black box” issue. In cases where claimants can demonstrate non-compliance of the AI system with the AI Act or other regulatory requirements or if a defendant fails to disclose required evidence, a presumption will arise that the defendant breached its duty and that the damage suffered was caused by this breach.

The defendant will then have the opportunity to rebut the presumption, for example, by proving that the fault could not have led to the specific damage.

It should be noted that this presumption of causality does not amount to a reversal of the burden of proof. Instead, the affected user must still prove the non-compliance of an AI system, that actual damage was suffered due to the output of said AI and that it is reasonably likely that the defendant's negligent conduct influenced this output.

The second measure establishes a new obligation on the companies behind high-risk AI systems which have an impact on safety or fundamental rights to disclose technical documentation, testing procedures, and compliance information. This is intended to facilitate the identification of the party accountable for specific damage.

The EU Council and European Parliament must now consider and adopt the draft text. Should the proposal be adopted, tech companies should brace for a rise in claims being brought against them. The introduction of rebuttable presumptions and disclosure obligations will make it considerably easier for injured parties to obtain compensation.

Proposal for revised Product Liability Directive

A proposal for a revised Product Liability Directive was also introduced by the Commission in September 2022. This proposal complements the EU’s AI strategy by updating the Product Liability Directive of 1985 and making it fit for the digital age.

Whereas the current version of the directive only applies to tangible products, the amendments envisioned by the draft would expand its applicability to cover intangible products, including software and AI systems. By accounting for cyber vulnerabilities and digital services necessary for the functionality of products as well as software and AI system updates, established liability rules are adapted for new technologies.

As under the AI Liability Directive, consumers will be granted access to information which defendants would not previously have had to disclose. This will facilitate the enforcement of their claims, making it easier to discharge the burden of proof and increasing the chances of a successful compensation claim in complex cases.

The next step is for the European Council and Parliament to consider and adopt their positions on the draft legislation.

Conclusion and outlook

With the adoption of the AI Act expected before the European Parliament elections in 2024, the coming months and years will see significant changes to the legislative landscape currently applicable to artificial intelligence. As the EU strives to become a front-runner in regulating AI and to set an example for the rest of the world, companies must watch developments closely and implement changes accordingly – not least because of the massive administrative fines that can be imposed for violations, which even surpass those possible under the GDPR.

This already includes providing for contractual arrangements addressing risk allocation and the distribution of liability between all parties involved in and influencing the AI system.

In the absence of particular regulations, the nature of the AI and its exact functioning should be specified in an agreement: in particular, whether it is fully independent or semi-independent. For semi-independent systems, it should also be indicated whether there is a substantial degree of human control "behind" the machine, and how significant this control is.

In view of the EU's risk-based approach, it is good to see that the EU is also seeking to implement substantive measures and processes (presumption rules, access to information, and special regulations for certain types of high-risk AI) that help to protect users and, in particular, injured parties. Such an approach is necessary to combat liability issues and contributes to closing liability gaps for possible trade mark and copyright infringements (and beyond) committed through AI systems.
