The leaders of the G7 nations want to hold talks on reaching a “common vision and goal of trustworthy AI.”
That’s according to a bulletin issued by the seven countries — the U.S., U.K., Canada, France, Germany, Italy and Japan — as part of their talks in Japan. The plan, per a Saturday (May 20) CNET report, is to convene a summit on artificial intelligence (AI) later this year.
“These discussions could include topics such as governance, safeguard of intellectual property rights including copyrights, promotion of transparency, response to foreign information manipulation, including disinformation, and responsible utilization of these technologies,” the bulletin says.
According to the bulletin, the G7 leaders said they would collaborate with tech companies to create AI standards that promote “responsible innovation and implementation,” while also admitting government policy has sometimes trailed tech development.
The news is the latest sign that governments around the world are, as noted here last week, trying not just to understand AI but also to regulate it.
“The last time the United States passed meaningful regulation impacting the tech sector was in the late ’90s during Microsoft’s antitrust case,” PYMNTS wrote. “Now, the U.S. risks falling behind its global peers.”
European lawmakers voted earlier this month in favor of a draft form of regulations governing the use of AI, including restrictions on chatbots like ChatGPT along with a ban on the use of facial recognition in public and on predictive policing tools.
Last week, a Senate subcommittee heard from Sam Altman, CEO of ChatGPT parent OpenAI, who urged lawmakers to develop regulations for his company’s flagship technology.
During his testimony, Altman likened AI’s potential to that of the printing press, but said the technology needs guardrails to prevent potential harm.
“I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that,” Altman said in widely reported testimony. “We want to work with the government to prevent that from happening.”
Meanwhile, PYMNTS spoke last week with Amias Gerety, partner at QED Investors, who said tech companies should “avoid the temptation of going down the technological rabbit hole” when testifying about AI before Congress.
Rather, he recommended using the Turing Test — whether a computer can “fool” a person into thinking the computer is human — as a yardstick to help create policy.
“This is a place where I hope that technologists will be focused less on ‘Terminator’ scenarios and be focused more on what’s happening today and tomorrow,” said Gerety.