AI Regulation: How Could It Impact Everyday Life for Consumers and Businesses?

By Danni Yu & Benjamin Cedric Larsen

Different AI regulatory regimes are currently emerging across Europe, the United States, China, and elsewhere. But what do these new regulatory regimes mean for companies and their adoption of self-regulatory and compliance-based tools and practices? This article first outlines how and where AI regulations are emerging and why they, in some cases, appear to be on divergent paths. Second, it discusses what this means for businesses and their global operations. Third, it comments on a way forward amid the growing complexity of AI use and regulation, which sits between soft law practices and emerging hard law measures.

AI Governance Conceptualized

Two distinct but connected forms of AI governance are currently emerging. The first is soft law governance, which functions as self-regulation based on non-legislative policy instruments. It includes private sector firms issuing principles and guidelines, as well as internal audit and assessment frameworks for developing ethical AI. Actionable mechanisms in the private sector usually focus on concrete technical solutions, such as internal audits, standards, or explicit normative encoding. Soft law governance also encompasses multi-stakeholder organizations such as the Partnership on AI, international organizations such as the World Economic Forum, standard-setting bodies such as ISO/IEC, CEN/CENELEC, and NIST, and interest organizations such as the Association for Computing Machinery (ACM), among others. In this way, soft law governance and its associated mechanisms set the default for how AI technologies are governed.

Hard law measures, on the other hand, entail laws and legally binding regulations that define permitted or prohibited conduct. Regulatory approaches generally refer to legal compliance, the issuing of standards-related certificates, or the creation or adaptation of laws and regulations that target AI systems. Policymakers are currently contemplating several approaches to regulating AI, which can broadly be categorized as AI-specific regulations (the EU AI Act), data-related regulations (GDPR, CCPA, COPPA), existing laws and legislation (antitrust and anti-discrimination law), and domain- or sector-specific regulations (HIPAA and SR 11-7).

Emerging Regulatory Landscapes

According to the OECD AI Policy Observatory, the 69 countries and territories it tracks have already released more than 200 initiatives targeting AI governance and regulation. These initiatives address areas such as antitrust concerns, interoperability standards, risk mitigation (including consumer and social protection), the delivery of public services, and the protection of public values.

While many countries have implemented national AI strategies, not all countries and territories take the same approach to AI governance and regulation. Each approach is shaped by a country’s existing institutions, including its culture and value systems, as well as economic considerations, for example regarding innovation. Before turning to what this means for businesses and their international operations, a few examples of emerging AI regulations are highlighted below.

In many ways, the European Union has been a frontrunner in data and AI regulation. The EU’s AI Act (“AIA”), which is expected to gradually go into effect starting in 2024, establishes a horizontal set of rules for developing and using AI-driven products, services, and systems within the EU. The Act is modeled on a risk-based approach: AI systems that pose unacceptable risks are banned outright, high-risk systems are subject to conformity assessments, including independent audits and new forms of oversight and control, limited-risk systems face transparency obligations, and systems posing little or no risk remain unaffected by the Act. The EU has also proposed an AI Liability Directive, which targets the harmonization of national liability rules for AI.
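To make the Act’s tiered structure concrete, the sketch below models the four risk categories described above and the obligations attached to each as a simple lookup. This is a minimal illustration only: the tier names and obligation summaries are paraphrased from this article, not taken from the Act’s legal text, and classifying any real system would depend on the Act’s final provisions.

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk tiers of the EU AI Act's risk-based approach (paraphrased)."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


# Illustrative, paraphrased obligations per tier -- not legal guidance.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "Prohibited: the system may not be placed on the EU market.",
    RiskTier.HIGH: "Conformity assessment required, including independent audits and oversight.",
    RiskTier.LIMITED: "Transparency obligations, e.g. disclosing that users face an AI system.",
    RiskTier.MINIMAL: "No additional obligations under the AI Act.",
}


def obligations_for(tier: RiskTier) -> str:
    """Return the paraphrased compliance obligation for a given risk tier."""
    return OBLIGATIONS[tier]


if __name__ == "__main__":
    for tier in RiskTier:
        print(f"{tier.value:>12}: {obligations_for(tier)}")
```

The point of the tiering is that compliance effort scales with risk: a firm’s first question under the Act is not “what does my system do?” but “which tier does it fall into?”, since that single classification determines the entire obligation set.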

In the United Kingdom, the government released a proposal for regulating the use of AI technologies in June 2022. It focuses on a “light touch,” sectoral approach in which guidance, voluntary measures, and sandbox environments are encouraged as means of assessing and testing AI technologies before they are marketed. The proposal is meant to reflect a less centralized approach than the EU AI Act.

In Canada, the Directive on Automated Decision-Making came into effect in April 2019 to ensure that the government’s use of AI to make administrative decisions is compatible with core administrative values. Canada’s Artificial Intelligence and Data Act (“AIDA”), introduced in June 2022, would be the first law in the country to regulate the use of AI systems if approved. AIDA’s objective is to establish common requirements across Canada for the design, development, and deployment of artificial intelligence technologies, consistent with national values and international standards.

The United States’ approach to artificial intelligence is more fragmented and characterized by the idea that companies, in general, must remain in control of industrial development and governance-related criteria.

Read the complete story at Competition Policy International.