
US Eyes AI Regulations That Balance Rules and Innovation


The world’s moving on artificial intelligence (AI), and it’s because AI already moved the world.

But the story is nothing new: governments are nearly always caught playing catch-up when it comes to overseeing novel technologies.

Of the world’s largest market economies, the European Union (EU) moved first in drafting AI laws, while China was the first to enact a regulatory framework around the technology.

Now, the U.S. is starting to shape its own approach with a series of congressional hearings, both public and closed-door, featuring in-depth testimony from AI executives and industry experts.

That’s because one of the more pressing challenges confronting the world’s governments is simply socializing a basic understanding of how the technology operates among lawmakers so they are able to productively oversee it without stifling growth.

“If you go too fast, you can ruin things,” U.S. Senate Majority Leader Chuck Schumer reportedly told journalists after a closed-door meeting Wednesday (Sep. 13) where he brought together nearly two dozen tech executives and AI experts, including Meta Platforms CEO Mark Zuckerberg and Tesla and X CEO Elon Musk.

The European Union went “too fast,” he added.

Read also: Calls to Pause AI Development Miss the Point of Innovation

Disruptive Technologies Can Be Used for Many Purposes 

The EU and the U.S. represent two key legs of the emerging three-legged stool of global AI regulation.

Ensuring that the approaches of the EU and the U.S. are aligned will facilitate bilateral trade, improve regulatory oversight, and enable broader transatlantic cooperation.

“Europe has led on managing the risks of the digital world… I believe Europe, together with partners, should lead the way on a new global framework for AI… we should bring all of this work together towards minimum global standards for safe and ethical use of AI,” said European Commission President Ursula von der Leyen in her 2023 State of the Union Address Wednesday (Sep. 13).

As approaches to controlling for AI’s risks become codified into law, they will, in turn, become key elements of international relations and the interoperability of innovation.

In what’s known as the “Brussels Effect,” multinational companies frequently standardize their global operations so that they adhere to EU regulations no matter where the business takes place.

The danger with AI is the same one Schumer pointed out: the EU’s regulatory regime has historically been hostile to innovation, and by moving “too fast” with its AI Act, the 27-nation bloc risks stunting its tech sector’s growth.

“If you make it difficult for models to be trained in the EU versus the U.S., well, where will the technology gravitate? Where it can grow the best — just like water, it will flow to wherever is most easily accessible,” Shaunt Sarkissian, founder and CEO at AI-ID, told PYMNTS. “It’s an interesting Rorschach to figure out.”

That’s why the U.S. can’t afford to follow the same path. The opportunity here is for America to lead the way by supporting innovation while being smart and clear-eyed about the risks of AI technology.

Read more: How AI Regulation Could Shape Three Digital Empires

Capturing Innovation While Avoiding Regulatory Capture

There is a narrowing window of opportunity to guide AI technology responsibly around the world, but the U.S. is better positioned than Brussels or Beijing to do so.

For one, most, if not all, of the innovations in AI have been happening in the U.S.

PYMNTS has previously covered how a healthy, competitive market is one where the doors are open to innovation and development, not shut to progress.

And the danger of leading too strongly with regulation of the novel technology is that it could push AI development to other parts of the globe, where its benefits would be captured by other interests.

Speaking to Congress on Tuesday (Sep. 12), NVIDIA’s chief scientist and senior vice president of research, William Dally, said that “the genie is already out of the bottle” with AI, and stressed to lawmakers that “AI models are portable; they can go on a USB drive and can be trained at a data center anywhere in the world … We can regulate deployment and use but cannot regulate creation. If we do not create AIs here, people will create them elsewhere. We want AI models to stay in the U.S., not where the regulatory climate might drive them.”

“Uncontrollable general AI is science fiction. At the core, AIs are based on models created by humans. We can responsibly create powerful and innovative AI tools,” Dally added.
