Why Regulating AI Is Like Regulating Air or Water, Says UPenn Professor

The nations shepherding modern technical innovations to market have so far prioritized innovation over regulation.

None more so than the United States, particularly as it relates to the advent of generative artificial intelligence (AI), which proponents believe is poised to usher humanity into a new era of information-driven productivity and efficiency.

“Trying to regulate AI is a little bit like trying to regulate air or water,” Professor Cary Coglianese, the Edward B. Shils Professor of Law and professor of political science at the University of Pennsylvania Law School and founding director of the Penn Program on Regulation, told PYMNTS as part of the “TechReg Talks” series presented by AI-ID.

Fortunately, air and water are already regulated in most major market economies. But just as air and water have distinct characteristics that require tailored approaches for effective oversight, so does AI.

Coglianese explained that regulating AI will be a multifaceted activity that varies depending on the type of algorithm and its uses.

“It’s not one static thing. Regulators — and I do mean that plural, we are going to need multiple regulators — they have to be agile, they have to be flexible, and they have to be vigilant,” he said, adding that “a single piece of legislation” won’t fix the problems associated with AI.

Complex Issues Require Sophisticated Solutions

Over the past year, various prominent actors involved in the development of AI technology have addressed legislators and regulators worldwide, calling for AI to be regulated.

This approach, in which the public and private sectors work hand in hand to understand and address the technology’s promise and pitfalls, is crucial.

“It’s going to be an ongoing continuous process of interaction between government and the private sector to make sure that the public gets all of the benefits that can come from this technological innovation but also is protected from the harms,” Coglianese explained.

That’s because one of the key challenges in regulating AI is its rapidly evolving nature. As the technology advances, its potential to revolutionize industries and reshape society only grows, and that potential is being recognized globally.

“I don’t think that we can expect any one single institution to have the kind of knowledge and capacity to address the varied problems,” Coglianese said.

He emphasized that what is most important is to think about where AI is being used and the nature of the potential risks and harms associated with that use.

“We could think about AI in self-driving automobiles, we can think about AI in medical devices, we can think about AI in precision medicine of other kinds, AI in social media, AI in marketing, AI now in generative, large language models. The nature of AI’s uses vary widely, and many of those uses fall into categories that, first of all, already have regulators. … There’s no question that the National Highway Traffic Safety Administration is going to be a better regulator of autonomous automobile technology than some kind of new startup AI regulator would be,” he added.

For the time being, AI models and machines are operating and scaling largely free of dedicated regulation or policy guardrails, at least until Aug. 15, when China’s interim rules governing generative AI go into effect.

The Importance of Existing Regulatory and Institutional Capacity

One of the more pressing challenges confronting the government as it relates to AI regulation is simply understanding how the technology operates and gaining the kind of knowledge necessary to productively oversee it.

That’s why Coglianese suggested the creation of an institutional “center of excellence” to share knowledge and develop best practices for AI auditing and impact assessments, in part through what are commonly known as regulatory sandboxes.

“Regulatory sandboxes provide an opportunity for the government to really provide vigilant oversight and focus on a new technology, on a new AI tool, and understand better what it’s doing, what it’s not doing, how it’s operating, what some of its potential harms or side effects might be,” Coglianese said.

“If there was an equivalent of a seat belt that we could require be installed with every AI tool, great. But there isn’t a one-size-fits-all action that can be applied. … It’s going to be an all-hands-on-deck kind of approach that we need to take, and that’s why vigilance is so important,” he explained. 

Regulation itself is a kind of algorithm, Coglianese added. It can tell people exactly what to do, what kind of action to take, or what kind of protective measures to adopt. It can command action, or the avoidance of action.
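Coglianese’s analogy can be made concrete with a minimal sketch. The toy rule below is entirely hypothetical — the use-case categories, the “safety audit” condition and the obligations are invented for illustration and do not correspond to any actual statute — but it shows how a regulation, like an algorithm, maps a situation to a commanded action, a prohibited action or no obligation at all.

```python
# A hypothetical illustration of the regulation-as-algorithm analogy.
# Nothing here reflects a real rule; it only shows the if-then structure
# that a written regulation and a program have in common.

from dataclasses import dataclass
from enum import Enum


class Obligation(Enum):
    REQUIRED = "must act"        # the rule commands action
    PROHIBITED = "must not act"  # the rule commands the avoidance of action
    PERMITTED = "no obligation"  # the rule is silent


@dataclass
class Deployment:
    """A simplified description of an AI deployment (illustrative only)."""
    use_case: str        # e.g., "self-driving", "marketing"
    safety_tested: bool  # whether a pre-deployment audit was completed


def seat_belt_rule(d: Deployment) -> Obligation:
    """A toy 'seat belt' rule: high-risk uses need a safety audit first."""
    high_risk = d.use_case in {"self-driving", "medical-device"}
    if high_risk and not d.safety_tested:
        return Obligation.PROHIBITED  # untested high-risk use is barred
    if high_risk:
        return Obligation.REQUIRED    # tested high-risk use must keep monitoring
    return Obligation.PERMITTED       # low-risk uses carry no duty here


if __name__ == "__main__":
    car = Deployment(use_case="self-driving", safety_tested=False)
    ads = Deployment(use_case="marketing", safety_tested=False)
    print(seat_belt_rule(car))  # Obligation.PROHIBITED
    print(seat_belt_rule(ads))  # Obligation.PERMITTED
```

The sketch also hints at Coglianese’s caveat: a single “seat belt” condition like this only works for one narrow use, which is why he argues no one-size-fits-all rule can cover AI’s varied applications.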

As the field of AI continues to expand and evolve, it is imperative that regulators and policymakers take a proactive and comprehensive approach to ensure the responsible development and deployment of AI technologies.

By implementing tools such as algorithmic impact assessments, algorithmic audits, and regulatory sandboxes, governments can effectively regulate AI — holding firms accountable for their responsible use of the technology and assessing potential harms while staying vigilant and adaptable to its advancements.