
US Tackles AI Frontier Models While UK Spearheads Supranational Safety

Why Companies Must Take AI Implications Seriously

As artificial intelligence (AI) advances, fears are growing that it could end humanity in a “catastrophic” way.

However, that’s only if humanity doesn’t end AI first.

The White House’s “Executive Order on the Safe, Secure and Trustworthy Development and Use of Artificial Intelligence,” announced Monday (Oct. 30), mandates that any future AI foundation model larger than GPT-4 be subject to red-team testing and that details of its inner architecture be handed over to government agencies.

OpenAI’s GPT-4 model is rumored to have required roughly 20 trillion trillion operations to build. Per the executive order, any future AI model trained using “10²⁶ integer or floating-point operations” (100 trillion trillion operations) or greater will be subject to new rules and must be disclosed to the National Institute of Standards and Technology (NIST).
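To make the scale of that threshold concrete, here is a minimal sketch of the order's reporting cutoff, assuming the rumored ~2 × 10²⁵ figure for GPT-4 (the function name and figures are illustrative, not from the order's text):

```python
# The executive order's reporting threshold: 10^26 integer or
# floating-point operations used in training.
THRESHOLD_OPS = 1e26

def must_report(training_ops: float) -> bool:
    """Illustrative check: does a model's training compute meet the threshold?"""
    return training_ops >= THRESHOLD_OPS

# GPT-4's rumored training compute: ~20 trillion trillion (2 x 10^25) operations,
# five times smaller than the 10^26 (100 trillion trillion) cutoff.
print(must_report(2e25))   # a GPT-4-scale model falls below the threshold
print(must_report(1.2e26)) # a model trained with more than 10^26 ops is covered
```

In other words, a model at GPT-4's rumored scale would sit a factor of five below the line; only substantially larger training runs would trigger the new disclosure rules.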

Monday’s executive order came as the United States looked to place itself at the center of AI governance in advance of the United Kingdom’s own multinational AI Safety Summit held Wednesday (Nov. 1) and Thursday (Nov. 2) of this week.

“From AI-enabled cyberattacks at a scale beyond anything we have seen before to AI-formulated bio-weapons that could endanger the lives of millions, these threats are often referred to as the ‘existential threats of AI’ because, of course, they could endanger the very existence of humanity,” said U.S. Vice President Kamala Harris during a speech at the U.S. embassy in London. “These threats, without question, are profound, and they demand global action.”

“Only governments, not companies, can keep people safe from AI’s dangers,” Rishi Sunak, the U.K.’s Prime Minister, said.

For his part, President Joe Biden called his executive order on AI “the most significant action any government, anywhere in the world, has ever taken on AI safety, security and trust.”

On Wednesday, China, the U.S., and the European Union all agreed to work together to chart a safe way forward on AI, signing the “Bletchley Declaration” along with over 25 nations.

Read also: US Redlines EU’s AI Act as Nations Strive to Balance Regulation With Innovation

Balancing AI Safety, Security and Trust With Innovation

Given the speed at which AI technology’s capabilities are evolving, the present moment is one of growing urgency for businesses, governments and both international and intranational institutions to understand and support the benefits of AI while working to mitigate its risks.

Still, many of the provisions of Biden’s executive order don’t do anything directly themselves. Rather, the order instructs other government officials and agencies to draft reports and kick off rulemaking processes, meaning that 2024 is shaping up to be one full of research, reports and recommendations around AI.

For his part, Senate Majority Leader Chuck Schumer reportedly believes action on AI needs to come from Congress, not the White House. However, given the lack of action so far despite the many words of urgency, it is difficult for observers to believe that Congress will pass any meaningful federal legislation before the start of the 2025 session.

The White House’s order on AI runs for over 100 pages and spans a vast array of AI-centric objectives, from opening up visa requirements for skilled AI workers to addressing algorithmic biases and discrimination.

And while invoking emergency powers to impose new regulations on the most advanced foundation models may appear, at face value, to risk stifling future AI innovation, it is important to note that the organizations most likely to train and build models more powerful than GPT-4 in the near term have already voluntarily committed to performing the required tests before commercially deploying them.

See also: US Eyes AI Regulations that Tempers Rules and Innovation

Still, the precedent the presidential order sets around model testing and red-team reporting requirements could grow increasingly significant over the longer term. Advances in AI training compute have already outpaced Moore’s Law’s doubling every two years — meaning that while 10²⁶ compute operations represents a massively high threshold today, it may not be such a lofty ceiling five or so years from now.
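A back-of-the-envelope sketch shows why the ceiling may not hold for long. Assuming a frontier run today uses roughly 2 × 10²⁵ operations (the GPT-4 rumor above) and available training compute doubles every 12 months — both figures are illustrative assumptions, not reported facts:

```python
import math

def years_until(threshold: float, current: float, doubling_months: float) -> float:
    """Years for compute to grow from `current` to `threshold` at a
    fixed doubling period (illustrative exponential-growth model)."""
    doublings = math.log2(threshold / current)  # how many doublings are needed
    return doublings * doubling_months / 12

# From ~2e25 ops to the 1e26 threshold is a factor of 5 (about 2.3 doublings).
print(round(years_until(1e26, 2e25, 12), 1))  # ~2.3 years at a 12-month doubling
```

Under those assumptions, the threshold is only a couple of doublings away; faster (or slower) real-world compute growth would shift the timeline accordingly.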

Additionally, future algorithmic improvements could soon see foundation models outperform GPT-4 with less training. So, while the White House order makes a strong declaration for the present moment, further refinement in step with technical advances will be necessary for AI’s safe and secure development.

Elsewhere in the order is a new framework regulating foreign actors’ use of U.S. cloud services and infrastructure to train their own uber-powerful models.

“If you make it difficult for models to be trained in the EU versus the U.S., well, where will the technology gravitate?” Shaunt Sarkissian, founder and CEO at AI-ID, told PYMNTS. “Where it can grow the best — just like water, it will flow to wherever is most easily accessible.”

That’s why the opportunity for the U.S. is to lead the way by supporting innovation while being smart and clear-eyed about the risks of AI technology.

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.