Prescription or Principle: A Framework for Designing AI Regulation

By Travis LeBlanc, Jonas Koponen, Anna Caro and Mari Dugas

The meteoric rise of AI in the public’s consciousness and the launch of several innovative services, including OpenAI’s ChatGPT and Stability AI’s Stable Diffusion, have brought with them a fierce debate over how, if at all, the technology should be regulated.

Several regulatory approaches have been broached in the debate, ranging from detailed and prescriptive “monolithic” statutes to principle-based non-statutory initiatives. Regulatory design choices affecting AI will have profound impacts beyond the societies and economies to which they apply directly. Any intervention should, therefore, be carefully considered and based on globally aligned principles, including the three proposed in this paper.

Existing Regulation

The discussion about AI regulation is often framed — incorrectly — as if the technology appears in a regulatory vacuum. It is also implied that once AI regulations have been introduced, that will be the end of the matter and AI will henceforth be “regulated.” This overlooks two key points. First, there is already a significant body of regulation that impacts the development and use of technology, including AI, such as workplace discrimination laws, competition law, and privacy and data protection laws. Second, given AI’s disruptive potential across a variety of sectors, it would be audacious to embark on a “once and for all” effort to regulate: the regulatory landscape will have to be adapted significantly as technologies continue to evolve. It is, therefore, key to understand and define what additional regulation is needed to supplement the existing body of laws, standards, and principles, and how that body can be future-proofed to respond to new regulatory challenges as new AI use cases and technologies emerge.

Today, new regulatory initiatives are relatively advanced in Europe. Although different approaches are being pursued, and many options exist between the two polarities described above, we consider by way of example the two leading initiatives emerging within the European Union (EU) and United Kingdom (U.K.).

In the EU, the European Commission put forward its proposal for an AI Act in 2021, in response to which the European Parliament presented its proposed amendments in June 2023. Inter-institutional negotiations are underway, and the final AI Act is expected to take effect by 2025. A prescriptive statutory instrument is envisaged to facilitate the development and uptake of human-centric and trustworthy AI in a safe manner while protecting fundamental rights and democracy from the technology’s harmful effects.

The EU’s regulatory approach contrasts with that being proposed in the U.K., which aims to introduce a non-statutory framework based on five overarching principles: safety, security, and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. These principles are to be implemented by industry and subject-matter regulators in the U.K., within their respective areas of competence.

Read the full article at Competition Policy International.