Depending on who’s asked, artificial intelligence (AI) represents either the destruction or salvation of civilization.
That’s because the rapidly spreading technology could either displace today’s jobs or power an explosion in future productivity.
It is a bit of a coin flip, and the truth likely involves both scenarios.
This comes as the United Nations Security Council held its first high-level briefing on the topic on Tuesday (July 18) in New York City, discussing the threat AI could pose to international peace and stability.
AI’s potential impact transcends borders, and many observers, including U.N. Secretary-General António Guterres, believe a globally coordinated approach is needed both to rein in the technology’s potential perils and to support its potential good.
That’s why countries worldwide are starting to ask the same questions about how to tackle regulating the technology.
It may be one of the defining questions of our generation.
The 15-member U.N. Security Council was briefed by Jack Clark, co-founder of the AI startup Anthropic, which bills itself as a safer alternative to other AI models; Professor Zeng Yi, co-director of the China-UK Research Center for AI Ethics and Governance; and Guterres himself.
China, notably, already has rules of its own: firms worldwide will need to comply with an interim set of guidelines from the country’s top internet watchdog, the Cyberspace Administration of China (CAC), if they want to operate and do business there.
Accordingly, China’s U.N. ambassador, Zhang Jun, spent the session pushing back against creating a set of global laws, emphasizing that any international regulatory bodies established to govern AI must be flexible enough to allow countries to develop their own rules.
During the meeting, Guterres expressed his desire for a U.N.-chartered AI watchdog that would operate much like the existing U.N. agencies overseeing climate, nuclear energy and aviation matters for the international body’s 193 member nations.
“I welcome calls from some Member States for the creation of a new United Nations entity to support collective efforts to govern this extraordinary technology, inspired by such models as the International Atomic Energy Agency, the International Civil Aviation Organization or the Intergovernmental Panel on Climate Change,” Guterres said.
“A new U.N. entity would gather expertise and put it at the disposal of the international community. And it could support collaboration on the research and development of AI tools to accelerate sustainable development,” he added.
Several Security Council members expressed concern that the new technology could prove a major threat to world peace in the wrong hands, while also striking a hopeful note about its potential for positive impact in key sectors like healthcare.
“The interaction between AI and nuclear weapons, biotechnology, neurotechnology, and robotics is deeply alarming,” Guterres said in his prepared remarks, emphasizing that “generative AI has enormous potential for good and evil at scale.”
“We welcome this discussion to understand how the Council can find the right balance between maximizing AI’s benefits while mitigating its risks,” said Ambassador Jeffrey DeLaurentis, Acting Deputy Representative from the U.S. to the U.N.
“This Council already has experience addressing dual-use capabilities and integrating transformative technologies into our efforts to maintain international peace and security,” DeLaurentis added.
For its part, Russia’s delegation questioned whether the Security Council, given the scope of its mandate, should even be discussing the misuse and impact of AI.
Guterres called for the U.N. to draft a legally binding agreement banning the use of AI in automated weapons of war by 2026.
But while a majority of the assembled council diplomats expressed support for a U.N.-backed body to oversee AI, the prospect of broader commercial regulation of the technology on a global scale appears to remain distant.
Observers believe that collaboration between industry and regulators is crucial for the AI industry’s growth.
Shaunt Sarkissian, founder and CEO at AI-ID, told PYMNTS last month that industry players should approach lawmakers with the attitude of, “We know this is new, we know it’s a little bit spooky, let’s work together on rules, laws, and regulations, and not just ask for forgiveness later, because that will help us grow as an industry.”
He offered the analogy of a health inspector and a restaurant: the inspector is responsible for ensuring that the restaurant meets certain criteria around cleanliness and process compliance, but it is not the inspector’s role to tell the chef what recipe to use in the (clean) kitchen.