Ex-Google CEO Wants World to Treat AI Like Climate Change

Many people have said many things about generative artificial intelligence (AI).

U.S. Securities and Exchange Commission (SEC) Chairman Gary Gensler said Sunday (Oct. 15) that if AI technology is left unchecked, it will lead to a financial crisis within a decade. Meanwhile, venture capitalist Yanev Suissa claimed Saturday (Oct. 14) that AI’s hype-cycle valuation bubble won’t last long, but the underlying technology will.

Former Google CEO Eric Schmidt, along with Inflection and DeepMind co-founder and AI pioneer Mustafa Suleyman, said Thursday (Oct. 19) that they want to establish an international panel to oversee AI, much as nations already do for climate change.

“AI is here,” the two tech leaders wrote in an op-ed in the Financial Times. “Now comes the hard part: learning how to manage and govern it.”

“Actionable suggestions are in short supply,” they added. “What’s more, national measures can only go so far given [AI’s] inherently global nature.”

Their call for an International Panel on AI Safety (IPAIS), an objective body to help shape protocols and norms, takes its inspiration from the Intergovernmental Panel on Climate Change (IPCC), specifically that panel’s supranational approach of providing policymakers with “regular assessments of the scientific basis of climate change, its impacts and future risks, and options for adaptation and mitigation.”

Per Thursday’s op-ed, an expert-led body rigorously focused on a science-led collection of data would help answer questions surrounding AI. What models are out there? What can they do? What are their technical specifications? Their risks? Where might they be in three years? What is being deployed where and by whom? What does the latest R&D say about the future?

Schmidt was executive chairman of Google when the tech giant acquired Suleyman’s DeepMind AI research laboratory in 2014.

The pitch for oversight from the two former colleagues comes ahead of the United Kingdom’s AI Safety Summit on Nov. 1 and 2.

Mariano-Florentino Cuéllar, president of the Carnegie Endowment for International Peace; Dario Amodei, co-founder and CEO of AI startup Anthropic; Jason Matheny, president and CEO of RAND Corporation; and other diplomats and business leaders all contributed to the oversight proposal.

Read also: There Are a Lot of Generative AI Acronyms — Here’s What They All Mean

Right Now, Confusion and Uncertainty Reign

While generative AI has been making waves in expert and academic circles since the introduction of GPT-2 in 2019, the technology’s revolutionary capabilities and downstream opportunities are only now becoming clear to enterprise players.

The emergence of consumer-facing generative AI tools in late 2022 and early 2023 radically shifted public conversation around the power and potential of AI — and left lawmakers around the world scrambling to educate themselves about the innovations.

Even the United Nations is worried about the technology, as are the Pope and the Pentagon.

“Trying to regulate AI is a little bit like trying to regulate air or water,” University of Pennsylvania Law Professor Cary Coglianese told PYMNTS earlier this month as part of the “TechReg Talks” series presented by AI-ID.

“It’s not one static thing,” he added. “Regulators — and I do mean that plural, we are going to need multiple regulators — they have to be agile, they have to be flexible, and they have to be vigilant.”

See also: How AI Regulation Could Shape Three Digital Empires

But not everyone is gung-ho about the necessity of overseeing the technology.

Meta Vice President and Chief AI Scientist Yann LeCun said Thursday that calls to regulate AI are premature and will only hinder competition.

“Regulating research and development in AI is incredibly counterproductive,” LeCun said. “They want regulatory capture under the guise of AI safety.”

PYMNTS explored the dangers of over-regulating AI last month, writing that “a healthy, competitive market is one where the doors are open to innovation and development, not shut to progress.”

But while there is no clear global leader or multilateral policy for AI oversight, it is not as though nations around the world are sitting still while one of the biggest computing transformations the world has ever known marches on beneath their noses.

Already, the European Union has moved first in drafting AI laws, and China was the first major economy to enact an actual framework for policing AI within its borders.

The United States, for its part, is hoping to strike a balance between supporting innovation and safeguarding against some of the technology’s more obvious perils.

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.