Does artificial intelligence (AI) threaten life on Earth? The head of OpenAI thinks it could.
He’s one of 350 people — many of them tech executives and scientists in the AI field — who signed a joint statement Tuesday (May 30) calling for greater AI regulation.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” read the statement by the Center for AI Safety.
The statement comes amid an era of excitement and unease about AI, prompted by the rising popularity of generative AI like OpenAI’s ChatGPT. Critics are calling for the technology to be regulated, while investors have increasingly been pumping money into the field.
A recent report argued that, at a high level, generative AI has the potential to create a new data layer, similar to the advent of HTTP, which ushered in the internet 30 years ago. And as with any new data layer or protocol, governance, rules and standards need to carry the day, Sarkissian told PYMNTS’ Karen Webster.
From a regulatory point of view, the challenge is to create an arena in which everyone can compete while establishing controls that keep the technology from becoming destructive.
Sarkissian posited a scenario in which governments ask: “How do we create a playing field in which everybody can compete, but we put a bit of a harness around it, such that they compete in a way that makes sense, in a way that it doesn’t destroy the world and in a way that we can actually see?”
And even if AI isn’t an extinction-level threat, there are numerous concerns about the technology being used to run phone scams or otherwise trick people into believing an AI-generated output is real, as noted in a separate PYMNTS report Tuesday.
“Moreover, the technology is likely to become only more sophisticated, opening the door to all kinds of AI-generated materials being passed off as something else, whether that be a person’s voice, a certified document or an image of an event that never happened,” PYMNTS wrote.