Former Google CEO Says Industry Must Develop AI ‘Guardrails’


Former Google head Eric Schmidt opposes a six-month pause of artificial intelligence (AI) experiments.

While the concerns about AI are legitimate — and may even be understated — a pause would only benefit China, Schmidt told The Australian Financial Review in an interview posted Thursday (April 6).

Schmidt was Google’s CEO and chairman from 2001 to 2011 and is now chair of the Special Competitive Studies Project, an organization devoted to strengthening America’s competitiveness in AI and other emerging technologies.

In the interview, Schmidt was responding to an open letter signed by other tech experts, including Elon Musk and Steve Wozniak, that called on AI labs to pause for six months the training of AI systems more powerful than OpenAI’s GPT-4.

Rather than pausing their experiments, tech leaders should immediately work on developing standards, Schmidt said in the interview.

“I’m not in favor of a six-month pause because it will simply benefit China,” Schmidt said. “What I am in favor of is getting everybody together to discuss what are the appropriate guardrails.”

If the industry doesn’t do so, the government will, and its response would be “clumsy,” Schmidt said.

“So, I’m in favor of letting the industry try to get its act together,” Schmidt said. “This is a case where you don’t rush in unless you understand what you’re doing.”

The interview was published about a week after the Center for AI and Digital Policy (CAIDP) — a nonprofit group whose president, Marc Rotenberg, was among the signers of the open letter — filed a complaint with the Federal Trade Commission (FTC) asking it to investigate OpenAI and put a halt to its development of large language models for commercial purposes.

At the same time, OpenAI’s generative AI products are giving a rising generation of developers across industries access to innovative AI capabilities.

President Joe Biden met with a council of science and technology advisers Tuesday (April 4) to discuss the risks, as well as the opportunities, that AI development may pose for both individual users and national security.