AI Policy Group Says Promising Self-Regulation Is Same Thing as No Regulation


Artificial intelligence (AI) has emerged as the foundational technological innovation of the present era. 

That is why finding the most effective approach to regulating its varied use cases is already proving to be one of the century's biggest, and most important, questions. 

Get it right, and governments can help usher in a new era of productivity and prosperity. Get it wrong, and the best-case scenario is a stunted innovation and business landscape. As for the worst-case scenario, well, let's just say that the phrases "human extinction event" and "end-of-the-world apocalypse" have both been used to describe what might happen were AI left free to run amok. 

Even the United Nations is worried about the technology. 

And while those fears may be overblown, with some observers even suggesting that the apocalyptic rhetoric is in service to public relations, that doesn't change the simple fact that, right now, China is the only country to have passed a policy framework meant to regulate AI. 

A group of tech sector nonprofits is circulating an AI policy proposal called "Zero Trust AI Governance" among lawmakers and industry groups, urging the government to use existing laws to oversee the industry, including by enforcing anti-discrimination and consumer protection laws.

The nonprofits include Accountable Tech, AI Now, and the Electronic Privacy Information Center (EPIC). 

Their main point of contention, outside of the ongoing lack of AI regulation, is that tech companies should not be relied upon to self-regulate their AI ambitions. 

The recommendation framework is titled “Zero Trust,” after all. 

Read more: How AI Regulation Could Shape Three Digital Empires

Voluntary Action Won’t Cut It, Say AI Groups

“Industry leaders have taken a range of voluntary steps to demonstrate a commitment to key ethical AI principles. But they’ve also slashed AI ethics teams, ignored internal alarms, abandoned transparency as the arms race has escalated, and sought to pass accountability off to downstream users and civil society,” the Zero Trust AI Governance policy states. 

“For too long, we’ve misplaced trust in Big Tech to self-regulate and mistaken technological advances for societal progress, turning a blind eye to the torrent of escalating harms. … Rather than relying on the good will of companies, tasking under-resourced enforcement agencies or afflicted users with proving and preventing harm, or relying on post-market auditing, companies should have to prove their AI offerings are not harmful,” it adds. 

As for how to do that? The policy report points to the pharmaceutical industry as a best-practice example of regulating an important industry: one whose products must undergo substantial research and development before they can receive FDA approval.

“Large-scale AI models and automated decision systems should similarly be subject to a strong set of pre-deployment requirements,” the group wrote.

The coalition’s strongly worded recommendations rest on three principles: enforce existing laws; create bright-line rules; and put the burden of proof on AI companies. 

The groups’ Zero Trust AI framework also seeks to redefine the limits of existing laws like Section 230 so that generative AI companies are held liable if their products produce false or dangerous information.

The framework comes in response to the voluntary commitments that senior representatives from Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI made to the Biden administration to help move toward safe, secure and transparent development of AI technology.

But self-regulation is equivalent to no regulation, the groups say. 

Read also: FTC Chair: Immediate AI Regulation Needed to Safely Develop Industry

Balancing Innovation and Regulation

PYMNTS has previously covered how a healthy, competitive market is one where the doors are open to innovation and development, not shut to progress.

The best way to ensure that is for industry leaders and policymakers to work together toward a common goal, absent any hint of industry capture. 

“It’s going to be an ongoing and continuous process of interaction between government and the private sector to make sure that the public gets all of the benefits that can come from this technological innovation but also is protected from the harms. … I don’t think that we can expect any one single institution to have the kind of knowledge and capacity to address the varied problems,” Cary Coglianese, the Edward B. Shils Professor of Law and professor of political science at the University of Pennsylvania Law School and founding director of the Penn Program on Regulation, told PYMNTS. 

Echoing that sentiment, Shaunt Sarkissian, founder and CEO at AI-ID, told PYMNTS that industry players should approach lawmakers with the attitude of, “We know this is new, we know it’s a little bit spooky, let’s work together on rules, laws, and regulations, and not just ask for forgiveness later, because that will help us grow as an industry.”

In previous discussions with PYMNTS, other industry insiders have compared the purpose of AI regulation in the West to both a car's airbags and brakes and the role of a restaurant health inspector.

One thing is certain — by fostering transparency, accountability, and robust stakeholder engagement, it is possible to mitigate the risks of regulatory capture and promote a regulatory environment that safeguards the responsible development and use of AI systems.