By: Hadrien Pouget & Ranj Zuhdi (Carnegie Endowment for International Peace)
Following the recent agreement on the European Union’s groundbreaking artificial intelligence legislation, the focus now shifts to the challenging path of implementation. Industry-led organizations will play a crucial role in establishing standards to operationalize the AI Act, guiding companies in evaluating and addressing risks associated with their AI products. However, compared with those of other industries, AI standards remain immature and incomplete. This shortfall threatens to inflate compliance costs and produce inconsistent enforcement, undermining the EU’s goal of fostering innovation through the legal certainty the act is meant to provide. Developing AI standards to the level of quality found in established sectors poses significant hurdles.
The AI Act primarily centers on safety requirements that companies must meet before introducing an AI product to the EU market. Similar to other EU regulations, these requirements are broadly outlined, leaving considerable room for standards to provide detailed guidelines. Robust standards would clarify expectations and strike a balance between safeguarding EU citizens from flawed AI systems and ensuring manageable compliance for businesses. The EU must learn from past experiences, such as with the General Data Protection Regulation (GDPR), where legal ambiguities and high compliance costs disproportionately affected small and medium enterprises.
To achieve this balance, standard setters must navigate the diverse and evolving landscape of AI technologies, applications, and potential risks covered by the act. However, establishing precise standards for a technology as novel and intricate as AI, while addressing concerns related to discrimination, privacy infringement, and other non-physical harms, is challenging in practice. This article examines standards in well-established sectors to outline the desired characteristics of AI standards—content, structure, level of detail—underscoring current deficiencies and the complexities of bridging the gap. We dissect the issue into three components, providing recommendations for each: risk assessment standards, risk mitigation standards, and the distinct challenges posed by general-purpose AI (GPAI) systems like OpenAI’s GPT-4.
BACKGROUND: THE AI ACT AND STANDARDS

The AI Act encompasses requirements for AI systems intended for deployment in “high-risk” scenarios, such as medical devices or the education sector. These requirements, referred to as “essential requirements” in EU terminology, encompass all aspects of the product life cycle, including documentation, data governance, human oversight, accuracy, and robustness, with the goal of safeguarding “health, safety, and fundamental rights.” Purposefully broad, the rules permit a range of interpretations, placing the onus on providers to assess the level of risk and implement appropriate measures for risk mitigation. For instance, the act mandates, without specific details, that AI systems be designed for “effective oversight by natural persons” and that data exhibit “appropriate statistical properties.”…