
US Signs International AI Safety Agreement


Can artificial intelligence (AI) companies create systems that are "secure by design"?

That’s the hope of more than a dozen countries — the United States among them — which have signed onto what’s been called the first detailed international agreement on how to protect AI from rogue actors, Reuters reported Monday (Nov. 27).

The 18 countries agreed that businesses that design and use AI should do so in a way that protects customers and the public from misuse, the report said.

Although the pledge is non-binding, U.S. Cybersecurity and Infrastructure Security Agency Director Jen Easterly told Reuters it is important that so many countries have signed on to a document that puts safety ahead of other concerns regarding AI.

“This is the first time that we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs,” Easterly said, per the report.

She added that the guidelines show “an agreement that the most important thing that needs to be done at the design phase is security.”

The news comes one week after a trio of European countries — Germany, France and Italy — came to an agreement on how AI should be regulated.

The countries issued a joint paper saying their governments are in favor of voluntary yet binding commitments on AI providers in the European Union.

Meanwhile, AI-ID CEO and founder Shaunt Sarkissian told PYMNTS last month that he believes the tack the U.S. is taking with the White House's executive order on AI is "the best bet on the board right now," praising its focus on critical standards and safety guidelines, as well as its commitment to identifying AI-generated content.

He also argued that the EU’s AI Act, which unlike the executive order is an actual piece of legislation, is “a little myopic.”

“It’s focused on content and on consumer protections, which are not bad things to be focused on, but [the AI Act] still is really blurred by GDPR [General Data Protection Regulation],” he said. “It’s taken the lessons of GDPR and said, we’re going to limit informing the models.”

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.