BRUSSELS – In comments submitted today to the European Commission’s Inception Impact Assessment (IIA) on Requirements for Artificial Intelligence (AI), global tech trade association ITI highlighted the importance of taking a context-specific, risk-based approach to AI policy in order to develop a balanced framework, one that addresses any unintended risks AI may pose while still promoting technological innovation.
“As the premier advocate for the global technology industry, ITI and its members share the firm belief that building trust in this era of digital transformation is essential. At the same time, it is important to promote innovation to ensure Europe’s global competitiveness and security. We welcome the Commission’s goal to foster the development and uptake of safe and lawful AI that respects fundamental rights and ensures inclusive societal outcomes while preserving an enabling environment for innovation,” said Guido Lobrano, ITI Vice President of Policy and Director General for Europe.
In its comments, ITI urges the Commission to take into account the differing risk factors and use cases of AI applications when developing AI-related policies. A mix of the various policy options proposed in the IIA, as detailed in Option 4, would thus represent the most balanced approach. In contrast, any “one-size-fits-all” horizontal solution would fail to account for the diversity of the technology and would ultimately result in overregulation and the stifling of innovation.
ITI notes that industry can resolve many issues related to the technical aspects, management, and governance of AI technology, and can frame concepts and recommended practices for developing trustworthy AI applications that account for privacy, cybersecurity, safety, reliability, and interoperability through voluntary, global, and industry-led standardisation processes.
Additionally, new legislation should be considered only where legislative gaps are clearly identified. ITI welcomes the consideration of risk as the key factor in defining the scope of potential legislation, as proposed by Sub-options 3a and 3b of the IIA. ITI also urges the Commission to carefully consider the definition of high-risk AI applications, taking into account the use case, the complexity of the AI system, the probability of a worst-case occurrence, its irreversibility, the scope of harm in a worst-case scenario, and the sector involved.