
AI and Product Safety Standards Under the EU AI Act

March 8, 2024

By: Hadrien Pouget & Ranj Zuhdi (Carnegie Endowment for International Peace)

Following the recent agreement on the European Union’s groundbreaking artificial intelligence legislation, the focus now shifts to the challenging path of implementation. Industry-led organizations will play a crucial role in establishing standards to operationalize the AI Act, guiding companies in evaluating and addressing risks associated with their AI products. However, compared with standards in other industries, AI standards remain in their infancy and are incomplete. This shortfall threatens to inflate compliance costs and produce inconsistent enforcement, undermining the EU’s goal of fostering innovation through the legal certainty the act is meant to provide. Developing AI standards to the same level of quality as those in established sectors poses significant hurdles.

The AI Act primarily centers on safety requirements that companies must meet before introducing an AI product to the EU market. As in other EU regulations, these requirements are broadly outlined, leaving considerable room for standards to provide detailed guidelines. Robust standards would clarify expectations and strike a balance between safeguarding EU citizens from flawed AI systems and keeping compliance manageable for businesses. The EU must learn from past experience, such as with the General Data Protection Regulation (GDPR), where legal ambiguities and high compliance costs disproportionately affected small and medium-sized enterprises.

To achieve this balance, standard setters must navigate the diverse and evolving landscape of AI technologies, applications, and potential risks covered by the act. However, establishing precise standards for a technology as novel and intricate as AI, while addressing concerns related to discrimination, privacy infringement, and other non-physical harms, is challenging in practice. This article examines standards in well-established sectors to outline the desired characteristics of AI standards—content, structure, level of detail—underscoring current deficiencies and the complexities of bridging the gap. We dissect the issue into three components, providing recommendations for each: risk assessment standards, risk mitigation standards, and the distinct challenges posed by general-purpose AI (GPAI) systems like OpenAI’s GPT-4.

BACKGROUND: THE AI ACT AND STANDARDS

The AI Act sets out requirements for AI systems intended for deployment in “high-risk” scenarios, such as medical devices or the education sector. These requirements, referred to as “essential requirements” in EU terminology, cover all aspects of the product life cycle, including documentation, data governance, human oversight, accuracy, and robustness, with the goal of safeguarding “health, safety, and fundamental rights.” Purposefully broad, the rules permit a range of interpretations, placing the onus on providers to assess the level of risk and implement appropriate mitigation measures. For instance, the act mandates, without further detail, that AI systems be designed for “effective oversight by natural persons” and that data exhibit “appropriate statistical properties.”…
