BRUSSELS – Today, ITI issued the following statement on the adoption by the European Parliament plenary of Member of the European Parliament (MEP) Axel Voss’s report on civil liability rules for Artificial Intelligence (AI).

“While we look forward to reviewing the full report, we welcome its focus on ensuring a harmonized liability framework for AI across the EU that can enable innovation and protect consumers without fundamentally overhauling the existing, well-functioning and well-established liability regime in Europe,” said Guido Lobrano, ITI Vice President of Policy and Director General for Europe.

ITI also appreciates MEP Voss’s suggested definition of high-risk AI as an autonomous operation involving a significant potential to cause harm to one or more persons, in a manner that is random and goes beyond what can reasonably be expected.

“In particular, EU policymakers will need to develop a definition of high-risk AI applications that takes into account use case, complexity of the AI system, probability of worst-case occurrence, irreversibility, scope of harm in worst case scenario, and sector,” added Lobrano. “A specific, revised liability regime should only apply to products and services falling in this high-risk category.”

In previous comments on the European Commission’s White Paper on Artificial Intelligence and in responses to the Inception Impact Assessment (IIA) on Requirements for Artificial Intelligence (AI), ITI highlighted the importance of taking a context-specific and risk-based approach to AI policy to develop a balanced framework that addresses any unintended risks that AI may pose while still promoting technological innovation. A fundamental overhaul of liability rules would create legislative overlap and could ultimately slow down Europe’s AI industry.

With regard to the report’s suggestion to revise the Product Liability Directive (PLD), ITI stresses that strict liability frameworks, like the one set up by the PLD, remove any consideration of intent or negligence. This becomes especially critical if AI technology is purposefully misused, for example by bad actors for illegitimate surveillance or for consciously discriminating in hiring processes.

“An expansion of the PLD’s scope creating strict liability for all AI-based technologies would disproportionately spread liability throughout the development and supply chain, potentially making other actors, like developers, responsible for uses beyond their control,” explained Lobrano. “While AI systems can be well developed, bad actors might abuse these tools for their own agendas, and if liability rules are not carefully designed, they could unjustly expose developers, even when the cause of harm was not the AI system, but its misuse.”

The European Commission is expected to publish its legislative proposal for AI regulation in Q1 2021. The European Parliament and the Council of the European Union will then be able to amend the proposal and must ultimately agree on a joint position for the law to pass.
