Artificial intelligence is evolving quickly, and global regulation is falling behind.
The technology is moving so fast that every day policymakers delay action, they fall further behind than the day before.
While the European Union moved first to draft laws and China was quickest to enact a framework around AI, there is no clear global leader or multilateral policy for a technology that observers and insiders alike consider one of the biggest computing transformations the world has ever known.
“We’ve been calling for regulation, but only of the most powerful systems,” OpenAI CEO Sam Altman said Monday.
“Models that are, like, 10,000 times the power of GPT-4, models that are, like, as smart as human civilization, whatever, those probably deserve some regulation,” he added.
“The starting gun has been fired on a globally competitive race in which individual companies as well as countries will strive to push the boundaries as far and fast as possible,” U.K. Deputy Prime Minister Oliver Dowden said. “… We cannot afford to become trapped in debates about whether AI is a tool for good or a tool for ill; it will be a tool for both. We must prepare for both and insure against the latter … In the past, leaders have responded to scientific and technological developments with retrospective regulation. But in this instance, the necessary guardrails, regulation and governance must be developed in a parallel process with the technological progress.”
So, what does under-regulation mean in a context where certain historical forms of AI, such as predictive forecasting and machine learning (ML), have been integrated into much of daily life for years?
The powerful versions of AI that tech CEOs and national leaders are sounding the alarm about may not exist today, at least outside of research and corporate laboratories.
During a Senate subcommittee hearing Sept. 12, Woodrow Hartzog, a professor of law at Boston University focusing on privacy and technology law, said that the U.S. government should “flat out ban extremely dangerous or risky uses of AI, including biometric tracking, predictive policing, social scores… Facial recognition and biometric recognition tech should be prohibited outright. Also emotion recognition. We need bright line measures against these rather than procedural protections.”
“If you can’t make it safe, it shouldn’t exist,” Hartzog added.
Professor Cary Coglianese, founding director of the UPenn Program on Regulation, told PYMNTS Aug. 4: “AI regulation is going to be an ongoing continuous process of interaction between government and the private sector to make sure that the public gets all of the benefits that can come from this technological innovation but also is protected from the harms.”
PYMNTS has previously covered how a healthy, competitive market is one where the doors are open to innovation and development, not shut to progress. Within this context, the “under-regulation” that AI insiders have expressed reservations about relates more to the technology’s deployment than to guardrails around its development.
As the U.K.’s Dowden noted in his speech to the UN, “every single challenge discussed at this year’s General Assembly — and more — could be improved or even solved by AI.”
“If you make it difficult for models to be trained in the EU versus the U.S., well, where will the technology gravitate? Where it can grow the best,” Shaunt Sarkissian, founder and CEO at AI-ID, said in an interview with PYMNTS published in June. “Just like water, it will flow to wherever is most easily accessible.”
As Microsoft Vice Chair and President Brad Smith told U.S. lawmakers Sept. 12, “those countries that succeed in rapidly adopting and using AI responsibly are the ones most likely to reap the greatest benefit.”
Smith went on to note that while the printing press was invented in Germany in the 1400s, it was the Dutch and the English who first truly embraced printing and books and reaped the immediate economic benefits of the revolutionary technology.