Musk and Gates Offer Senate Dueling Visions of AI


Elon Musk thinks AI could end humanity. Bill Gates thinks it might save us.

But as The Wall Street Journal (WSJ) reported Wednesday (Sept. 13), there’s one thing both billionaires — and a number of their tech compatriots — agree on when it comes to artificial intelligence (AI): the government needs to have a hand in regulating the technology.

Musk, Gates, Meta CEO Mark Zuckerberg, and the heads of Google, Microsoft and IBM, along with civil rights advocates and union leaders, were among the guests at a closed-door meeting on AI convened Wednesday by Senate Majority Leader Chuck Schumer.

According to published reports, Musk warned about the possible threat posed by AI, apparently sharing something he had told Chinese officials: “If you have exceptionally smart AI, the Communist Party will no longer be in charge of China.”

Gates, meanwhile, said the technology could help address world hunger, per the WSJ report. And in remarks released by Meta, Zuckerberg stressed the importance of open-source AI software, arguing that it’s “safer and more secure” and that it “democratizes access” to AI.

Schumer told reporters that at one point he asked the guests if they agreed that the government should play a part in AI regulations. Everyone in the room raised their hands.

“No one backed off in saying we need government involvement,” Schumer told the WSJ following the meeting. “They understood that there needed to be government responsibility because let’s say even these companies would be willing to install guardrails on themselves — they’ll have competitors who won’t.”

As noted here earlier this week, some nonprofits have expressed concerns that tech companies are playing too large a role in the debate around AI regulation.

“Their voices can’t be privileged over civil society,” said the Center for AI and Digital Policy, an independent research organization that examines national AI policies and practices.

The group also objected to the closed-door nature of Wednesday’s meeting, saying that “the work of Congress should be conducted in the open.”

One of the problems with regulating AI, as Dr. Johann Laux argued here last month, is that the people with the skills to properly govern the technology can find far more profitable work in the private sector than with public regulators.

“That skill gap, as well as the increasing reliance of lawmakers on industry insiders for expertise, sits at the heart of ongoing fears by observers and civil groups that U.S. regulators may craft legislation and technical standards for AI that cater to the industry’s own interests, a phenomenon known as regulatory capture,” PYMNTS wrote Tuesday.