Artificial intelligence is moving from pilot projects to production systems across banking and payments. That shift creates a basic problem regulators and compliance teams know well. When everyone uses the same tool but speaks a different language about it, oversight gets sloppy. One team calls a model "machine learning"; another calls it "AI"; a third calls it "automation." Yet the risks are real and familiar. Bias, opaque decision-making, data leakage, fraud, and consumer harm do not get easier to manage just because the technology is new.
That is the backdrop for a fresh set of Treasury guidelines aimed at making AI use in finance easier to govern and harder to misuse. In a new post from Financial Regulation News, the U.S. Department of the Treasury said it issued two resources designed to guide AI use in the financial sector and "support more widespread adoption." The two documents are titled Artificial Intelligence Lexicon and Financial Services AI Risk Management Framework, respectively.
The point of the lexicon is straightforward. Treasury is trying to get financial institutions, regulators, and technology providers to use common definitions when they talk about AI capabilities and AI risk. Treasury notes that as institutions rely more on AI, “inconsistent terminology and uneven risk management practices” have created challenges for governance and oversight. In plain terms, if people cannot agree on what they are describing, they cannot reliably manage it.
Treasury’s second resource is about controls, not vocabulary. The Financial Services AI Risk Management Framework (FS AI RMF) adapts the federal government’s broader NIST AI Risk Management Framework to the operational and regulatory realities of financial services, including consumer protection.
The post says the framework offers practical tools to help institutions evaluate AI use cases, manage risk across the AI lifecycle, and build accountability, transparency, and resilience into decisions about deploying AI. It is also meant to scale, so a community bank is not forced into the same process as a multinational institution.
Treasury framed the move as a way to accelerate adoption without sacrificing safety. Paras Malik, Treasury’s chief AI officer, put it in unusually direct terms. “Clear terminology and pragmatic risk management are essential to accelerating AI adoption in financial services,” Malik said. “These resources are designed to help institutions move faster with AI by reducing uncertainty and supporting consistent, scalable implementation.”
The post also signals that Treasury is positioning these guides as an implementation layer for broader White House AI policy. It says the publications were developed through the Financial and Banking Information Infrastructure Committee and the Financial Services Sector Coordinating Council’s AI Executive Oversight Group, translating national AI priorities into tools that financial institutions, regulators, and technology providers can use. Derek Theurer, performing the duties of Deputy Secretary of the Treasury, made that link explicit, arguing that the President’s AI Action Plan requires “practical resources” rather than “aspirational statements.”
What happens next will look less like a one-time release and more like a continuing coordination effort. Treasury says it will keep working with federal and state regulators, industry leaders, and other stakeholders to advance the President’s AI Action Plan. That matters because in U.S. financial regulation, guidance only goes so far if agencies interpret it differently or institutions implement it unevenly.
Expect follow-on activity in three areas. First, more alignment pressure. If Treasury’s lexicon becomes a reference point, it could shape how examinations describe AI systems and how banks document them for supervisors. Second, more expectations around lifecycle controls, especially around testing, monitoring, and accountability. The framework is designed to be used across the AI lifecycle, which is where many failures happen. Third, more cross-sector engagement, because Treasury is explicitly working through industry and coordinating bodies rather than acting alone.
In other words, Treasury is trying to make AI in finance easier to talk about and easier to govern at the same time. That combination is not flashy. It is also how regulation often works when the goal is adoption with guardrails.