
Top AI Researchers Demand One-Third of Funding for AI Safety & Regulation

October 24, 2023

In a paper released on Tuesday, leading artificial intelligence researchers urged AI companies and governments to allocate at least one-third of their AI research and development budgets to ensuring the safety and ethical use of AI systems. The call to action, reported by Reuters, comes a week before the International AI Safety Summit in London and lays out a series of measures to address the potential risks associated with artificial intelligence.

The paper, authored by three Turing Award winners, a Nobel laureate, and more than a dozen renowned AI academics, not only underscores the importance of investing in AI safety but also urges governments to legally mandate that companies bear responsibility for any foreseeable and preventable harms caused by their advanced AI systems.

At present, there is a notable absence of comprehensive regulations focused on AI safety, and the European Union’s first set of AI legislation is still pending due to unresolved disagreements among lawmakers.


“Recent state-of-the-art AI models are too potent and impactful to be developed without democratic oversight,” noted Yoshua Bengio, one of the researchers often described as a “godfather of AI.” He stressed the urgency of increased investment in AI safety, emphasizing that AI technology is advancing faster than the precautions currently in place.

The list of authors includes prominent figures in the AI field such as Geoffrey Hinton, Andrew Yao, Daniel Kahneman, Dawn Song, and Yuval Noah Harari, highlighting the breadth of support for these safety measures within the AI community.

Concerns about the risks associated with AI have been mounting, especially since the introduction of powerful generative AI models such as those developed by OpenAI. Prominent figures, including Elon Musk, have previously raised alarms and even called for a temporary pause in the development of high-capacity AI systems to address potential threats and challenges.

Source: Reuters