The Commerce Department’s Bureau of Industry and Security (BIS) aims to require the world’s leading artificial intelligence (AI) developers and cloud providers to provide detailed reporting to the federal government.
The BIS released a Notice of Proposed Rulemaking Monday (Sept. 9), saying the new mandatory reporting requirements are intended to ensure that AI is safe and reliable, can withstand cyberattacks and has limited risk of misuse by foreign adversaries or non-state actors, according to a press release.
“As AI is progressing rapidly, it holds both tremendous promise and risk,” Secretary of Commerce Gina M. Raimondo said in the release. “This proposed rule would help us keep pace with new developments in AI technology to bolster our national defense and safeguard our national security.”
The reporting mandated by the proposed rule would encompass developmental activities, cybersecurity measures and outcomes from red-teaming efforts, according to the release.
The red-teaming efforts would involve testing for the ability to assist in cyberattacks; the ability to lower the barriers to entry to developing chemical, biological, radiological or nuclear weapons; and other dangerous capabilities, per the release.
The BIS has long conducted defense industrial base surveys that inform the government about emerging risks in important industries, Under Secretary of Commerce for Industry and Security Alan F. Estevez said in the release.
“This proposed reporting requirement would help us understand the capabilities and security of our most advanced AI systems,” Estevez said.
The Biden Administration issued an executive order aimed at safe AI development in October 2023, noting at the time that more action was required and that the White House would work with Congress in hopes of crafting bipartisan AI legislation.
Biden’s requirements for AI companies included a rule saying that the developers “of the most powerful AI systems” share their safety test results and other key information with the federal government; that AI firms must come up with “standards, tools and tests” to make sure their systems are secure and trustworthy; and that the companies guard against the threat of “using AI to engineer dangerous biological materials” by establishing strong standards for biological synthesis screening.
Agentic artificial intelligence (AI) promises to improve operational efficiencies and the customer experience offered by enterprises.
The advanced technology is finding applications in loan underwriting and fraud detection, and now it’s moving across borders.
TerraPay Co-Founder and Chief Operating Officer Ram Sundaram told PYMNTS, as part of the “What’s Next in Payments” series exploring AI’s use in banking and by FinTechs, that automated decision making and streamlined processes will continue to transform global money movement, especially as faster payments gain ground in cross-border transactions. That’s the inexorable trend, but as Sundaram put it, there’s still room, and a necessity, for some human interaction in the mix.
In terms of global fund flows, TerraPay’s single connection ties more than 3.7 billion mobile wallets together across 200 sending and 144 receiving countries, touching 7.5 billion bank accounts. As one might imagine, coordinating and enabling the transactions is complex.
“Obviously, in the best-case scenario, everything goes smoothly, but when things are not going smoothly, that’s when the customer queries come in,” Sundaram said.
It’s no easy task to find out straight away where a transaction is, as analysts and representatives at the company must look at logs and query partner systems.
“A lot of that work is done manually,” said Sundaram, who added that the agents “know the corridors and the markets that they are working in, but it still takes some time.”
TerraPay is using AI models with machine learning to bolster customer support and automate tasks as financial institutions (TerraPay’s client base) send payments in real time, and those payments are processed into local markets’ beneficiary banks.
“We still don’t trust [AI models] to let them respond to the customer straight away, but we can do the analysis, and then that gets reviewed by an agent who decides if [information] is accurate or not and then sends it off,” Sundaram said.
The same principles are guiding AI models and company practices to improve technical and security operations, analyzing and categorizing anomalous transactions and automating integrations with partner firms.
“Compliance is an issue where there is a lot of review needed of the alerts, and we are using [AI models] to speed up those processes,” Sundaram said.
Asked by PYMNTS about how agentic AI can be harnessed, he said: “In financial services, you can’t take chances on technology like this, which has the freedom to go wrong. You have to be careful about making sure that it’s 100% reliable before we can let things run entirely by automation.”
Agentic AI also remains pricey. For example, OpenAI is charging $20,000 a month for its specialized agents. However, Sundaram said the technology will become commoditized quickly, which will lower prices, and some open-source offerings are already capable.
“There’s a fire hose of news about breakthroughs and new ideas and new ways of doing things that are coming out on a daily basis,” he said.
Data underpins it all, and Sundaram told PYMNTS that no matter what the application, the information fed into the models must be clean. Most organizations have a range of data sitting in different intra-company silos, and those silos need to come down.
In addition, the data must be structured so that it is accessible and can be synthesized by the models. Many firms subscribe to more than 1,000 software-as-a-service (SaaS) resources that are not accurately tracked or monitored.
“Every database is separated, each one sitting somewhere else,” he said.
The days of stitching together those separate SaaS offerings to run an enterprise are ending, he said, and we’re headed to a future when data is collected in one place.
AI models and agentic AI “are extensions of what we’ve always valued at TerraPay, which means building the most efficient infrastructure possible in order to make sure that transactions are processed safely, quickly and affordably,” Sundaram told PYMNTS. “We see AI and [AI models] as powerful tools that help us scale all this very quickly while making sure we build more and more efficiency into the system.”