AI Firms Agree to ‘Kill Switch’ Policy, Raising Concerns and Questions

At last week’s AI summit in Seoul, artificial intelligence companies from around the world reached a landmark agreement to implement a “kill switch” policy, potentially halting the development of their most advanced AI models if certain risk thresholds are exceeded. The decision has sparked a heated debate about the future of AI and its implications for commerce, with experts questioning the practicality, effectiveness, and potential consequences of such a policy on innovation, competition and the global economy.

Proponents see the proposed kill switch, which would be triggered if an AI model poses significant risks, as a necessary safeguard against the potential dangers of unchecked AI development. They argue that it is a responsible step toward ensuring the safe and ethical development of AI technologies that could revolutionize industries from healthcare to finance to transportation.

Skepticism Surrounds ‘Kill Switch’ Terminology and Practicality

However, skeptics have raised concerns about the term “kill switch” and its implications. “The term ‘kill switch’ is odd here because it sounds like the organizations agreed to stop research and development on certain models if they cross lines associated with risks to humanity. This isn’t a kill switch, and it’s just a soft pact to abide by some ethical standards in model development,” Camden Swita, head of AI and ML Innovation at AI firm New Relic, told PYMNTS. “Tech companies have made these kinds of agreements before (related to AI and other things like social media), so this feels like nothing new.”

The practicality of the proposed kill switch has also been called into question. "In theory, the way this kill switch would work is that all AI companies would need to be explicit on how they define risk and how their models measure against that. On top of that, they would have to provide auditable reports of their compliance and when they did or didn't use this kill switch," Vaclav Vincalek, virtual CTO and founder, told PYMNTS. "Even with government regulations and legal weight behind the agreed-upon 'kill switch,' I can see companies continuing to push the thresholds if their AI systems approach that 'risky' line."

Concerns Over Effectiveness and Oversight

The policy's effectiveness has likewise drawn doubt. "As effective as any other agreement without enforcement or strong regulatory policies behind it. And only as effective as any single stakeholder allows it to be. In other words, if Company X agrees to the 'kill switch' policy but doesn't abide by the agreement in practice, then the kill switch is not effective," Swita said.

Doubts have also been raised about the ability of governments to maintain adequate oversight over AI research projects. “Even if governments pass strong regulations intent on controlling AI model development, it’s unlikely that government organizations will be able to act quickly enough or with enough expertise to maintain adequate oversight over every frontier AI research project,” said Swita.

Adnan Masood, chief AI architect at UST, told PYMNTS there are significant limitations and challenges around relying solely on a “kill switch.” “Defining the criteria for when to trigger it is complex and subjective,” Masood said. “What constitutes an unacceptable risk, and who decides?”

Mehdi Esmail, co-founder and chief product officer at ValidMind, highlighted the challenges these companies face in self-regulation. "We've seen article after article recently that highlights these companies' struggles with self-regulation," he told PYMNTS. "So in that regard, this is a step in the right direction; however, that same inability to self-regulate could be the ultimate downfall of any such 'kill switch,' preventing it from working the way it's intended."

When asked whether an artificial general intelligence (AGI) could circumvent the kill switch, Swita shifted the focus to human responsibility. "In general, I am far more concerned about what humans will do to humanity and the world. What are we willing to do to keep AI research in check despite the interests of shareholders and individual governments vying for dominance? What are we willing to give up?" he asked. "Will shareholders in major corporations conducting AI research be willing to sacrifice profits to keep AI safe? Will the USA, China and Russia be willing to lose some perceived strategic advantage to keep models safe?"

As the AI industry grapples with the demands of responsible development, striking the right balance between innovation and safety will be a critical challenge for companies and society alike. While a step in the right direction, the proposed "kill switch" agreement has raised more questions than answers about the practicality, effectiveness and potential consequences of such a policy for the global competitive landscape and the pace of AI innovation. As the debate continues, it is clear that more specific, technically grounded solutions, regulations and international coordination will be essential to address the risks AI presents while preserving the technology's potential to transform industries and drive economic growth.
