
Major players in the burgeoning generative AI sector, Google and OpenAI, have markedly different views about regulatory oversight of the world-changing technology.
According to widely published reports, Google is diverging from OpenAI and its partner Microsoft on the structure of AI regulation. On Tuesday (June 13), The Washington Post reported that, in a filing with the Commerce Department, Google asked for AI oversight to be shared among existing agencies, led by the National Institute of Standards and Technology (NIST).
Google and Alphabet President of Global Affairs Kent Walker told the Post, “We think that AI is going to affect so many different sectors, we need regulators who understand the unique nuances in each of those areas.”
OpenAI CEO Sam Altman has taken a different direction, saying during a U.S. Senate hearing in May, “We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models,” suggesting a more centralized and specialized approach.
In an OpenAI blog post published May 22, Altman and co-authors wrote that generative AI requires something akin to an International Atomic Energy Agency (IAEA), but for “superintelligence.”
That post reads, in part: “any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc.”
By contrast, Google’s response to the Commerce Department’s request for comment said, “At the national level, we support a hub-and-spoke approach — with a central agency like the National Institute of Standards and Technology (NIST) informing sectoral regulators overseeing AI implementation — rather than a ‘Department of AI.’”
“There is this question of: Should there be a new agency specifically for AI or not?” Helen Toner, a director at Georgetown’s Center for Security and Emerging Technology, told CNBC. “Should you be handling this with existing regulatory authorities that work in specific sectors, or should there be something centralized for all kinds of AI?”
At this point, the Biden administration is in fact-finding mode. But with OpenAI calling for IAEA-style oversight for superintelligence, many anticipate robust regulatory responses worldwide.