Google and OpenAI Disagree on Government Oversight of AI

Google and OpenAI, two of the biggest players in the burgeoning generative AI sector, hold markedly different views on how governments should oversee the world-changing technology.

According to widely published reports, Google is diverging from OpenAI and its partner Microsoft on how AI regulation should be structured. On Tuesday (June 13), The Washington Post reported that in a filing with the Commerce Department, Google asked for AI oversight to be shared among existing agencies, led by the National Institute of Standards and Technology (NIST).

Google and Alphabet President of Global Affairs Kent Walker told the Post, “We think that AI is going to affect so many different sectors, we need regulators who understand the unique nuances in each of those areas.”

OpenAI CEO Sam Altman has taken a different direction, saying during a U.S. Senate hearing in May, “We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models,” suggesting a more centralized and specialized approach.

In an OpenAI blog post published May 22, Altman and his co-authors wrote that advanced AI will require something akin to an International Atomic Energy Agency (IAEA), but for “superintelligence.”

The post reads, in part: “any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc.”

By contrast, Google’s response to the Commerce Department’s request for comment said, “At the national level, we support a hub-and-spoke approach — with a central agency like the National Institute of Standards and Technology (NIST) informing sectoral regulators overseeing AI implementation — rather than a ‘Department of AI.’”

“There is this question of should there be a new agency specifically for AI or not?” Helen Toner, a director at Georgetown’s Center for Security and Emerging Technology, told CNBC, adding, “Should you be handling this with existing regulatory authorities that work in specific sectors, or should there be something centralized for all kinds of AI?”

For now, the Biden administration remains in fact-finding mode. But with OpenAI calling for IAEA-style oversight of superintelligence, many observers anticipate robust regulatory responses worldwide.