Treasury Sets New AI Guardrails for Banks and FinTechs

Regulators are moving from talk to testing when it comes to artificial intelligence in financial services. Across Europe, the United Kingdom and the United States, authorities are shifting from high-level AI principles to operational proof.

Banks and FinTechs that use models to approve loans, set prices, detect fraud or personalize offers are increasingly being asked to show their work. The tone is not prohibitionist. It is pragmatic and, in many respects, optimistic about AI’s role in modern finance.

The change is subtle but consequential. It marks the difference between publishing responsible AI guidelines and documenting how a model behaves in production. For financial institutions, that shift is transforming governance from a policy document into a competitive capability.

In the U.S., the Treasury Department recently released two new resources to guide AI use in the financial sector: a Financial Services AI Risk Management Framework and an accompanying report on responsible innovation.

These guidelines emphasize life cycle risk management, board oversight, documentation standards and alignment with existing safety and soundness rules. AI can be deployed in underwriting and fraud detection, but institutions must evidence how risks are identified, measured and controlled.

From Principles to Proof

In the European Union, the EU AI Act classifies credit scoring and similar financial uses as “high risk.” That designation triggers specific obligations around documentation, transparency, human oversight and continuous monitoring. While certain provisions phase in through 2026 and 2027, firms operating in Europe are already building internal model inventories, control frameworks and audit trails.
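As a rough illustration, a single entry in such an inventory might look like the sketch below. The `ModelInventoryEntry` class and its field names are our own illustration of what firms are building, not terminology mandated by the Act.

```python
from dataclasses import dataclass, field
from datetime import date, datetime, timezone


@dataclass
class ModelInventoryEntry:
    """One record in an internal model inventory (fields are illustrative)."""
    model_id: str
    use_case: str                       # e.g. "retail credit scoring"
    owner: str                          # accountable business owner
    high_risk: bool                     # EU AI Act high-risk classification
    training_data_sources: list[str]
    last_validated: date
    known_limitations: list[str] = field(default_factory=list)
    audit_trail: list[str] = field(default_factory=list)

    def log(self, event: str) -> None:
        """Append a timestamped event to the model's audit trail."""
        stamp = datetime.now(timezone.utc).isoformat()
        self.audit_trail.append(f"{stamp} {event}")


entry = ModelInventoryEntry(
    model_id="credit-score-v3",
    use_case="retail credit scoring",
    owner="head-of-credit-risk",
    high_risk=True,
    training_data_sources=["bureau_feed_2024", "internal_repayment_history"],
    last_validated=date(2025, 11, 1),
)
entry.log("annual validation completed; documentation refreshed")
```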

Under the act, institutions must maintain technical documentation describing training data sources, feature engineering, validation testing and known limitations. They must implement monitoring systems to detect drift and unintended bias. Adverse credit decisions must be explainable in clear language. Black-box deployment in regulated lending environments is increasingly untenable.
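The act does not prescribe a specific drift metric, but the Population Stability Index (PSI) is one widely used measure in credit scoring. A minimal sketch, assuming a baseline score distribution captured at validation time and a live distribution from production:

```python
import numpy as np


def population_stability_index(expected: np.ndarray, actual: np.ndarray,
                               bins: int = 10) -> float:
    """Population Stability Index, a common drift measure in credit scoring.

    Compares the score distribution at validation time ("expected") with
    the live production distribution ("actual"). A common rule of thumb:
    PSI < 0.1 is stable, 0.1-0.25 warrants investigation, and > 0.25
    suggests material drift.
    """
    edges = np.histogram_bin_edges(expected, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf    # catch out-of-range live scores
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid log(0) in sparse bins.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))


rng = np.random.default_rng(0)
baseline = rng.normal(620, 50, 10_000)   # scores at validation time
live = rng.normal(600, 55, 10_000)       # scores observed in production
print(f"PSI = {population_stability_index(baseline, live):.3f}")
```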

In the U.K., scrutiny is also intensifying. The Financial Conduct Authority recently launched the “Mills Review” to examine how AI could shape the future of financial services and to foster informed debate about its regulatory implications. While the FCA has not introduced a standalone AI rulebook, it has made clear that existing conduct, consumer protection and operational resilience standards apply to AI-driven systems. Through supervisory engagement and regulatory sandboxes, the FCA is effectively testing how firms evidence fairness, accountability and control.

Many institutions are using the National Institute of Standards and Technology AI Risk Management Framework as a structural reference. NIST’s model, built around governing, mapping, measuring and managing AI risks across the life cycle, is increasingly cited in board discussions and audit committee reviews. It offers a common language that aligns internal AI programs with regulatory expectations on both sides of the Atlantic.
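Those four functions translate naturally into an internal checklist. The review questions below are our own illustrations of the kind of evidence each function might demand, not NIST language:

```python
# The four NIST AI RMF functions, each paired with the kind of review
# question an audit committee might track (questions are illustrative).
NIST_AI_RMF_CHECKLIST = {
    "Govern": "Is there a board-approved AI policy with a named accountable owner?",
    "Map": "Is every production model inventoried with its intended use and risk tier?",
    "Measure": "Are drift, bias and performance metrics tracked against thresholds?",
    "Manage": "Is there a tested procedure to disable or roll back a model?",
}

for function, question in NIST_AI_RMF_CHECKLIST.items():
    print(f"{function}: {question}")
```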

Agentic Commerce Raises the Bar

The oversight push arrives just as AI moves from analytics to action.

When an autonomous system makes a real-time financial decision, regulators expect the same explainability and fairness analysis required of traditional underwriting engines. That creates operational demands. Can transaction-level decisions be reconstructed after the fact? How quickly can a model be disabled if it behaves unexpectedly?
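In practice, both questions reduce to engineering controls: every decision logged with enough context to replay it, and a switch that stops a model instantly. A minimal sketch, with in-memory structures standing in for a real database and feature-flag service:

```python
import hashlib
import json
from datetime import datetime, timezone

# In production these would live in a database and a feature-flag service;
# module-level state keeps the sketch self-contained.
DECISION_LOG: list[dict] = []
DISABLED_MODELS: set[str] = set()


def kill_switch(model_id: str) -> None:
    """Immediately stop a model from making new decisions."""
    DISABLED_MODELS.add(model_id)


def score_transaction(model_id: str, model_version: str,
                      features: dict, score_fn) -> float:
    """Score one transaction, recording everything needed to replay it."""
    if model_id in DISABLED_MODELS:
        raise RuntimeError(f"{model_id} is disabled pending review")
    decision = score_fn(features)
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "features": features,      # exact inputs, for after-the-fact replay
        "decision": decision,
    }
    # Tamper-evident fingerprint of the full record.
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    DECISION_LOG.append(record)
    return decision


# Toy scoring function standing in for a real fraud model.
score = score_transaction(
    "fraud-model", "2.4.1",
    {"amount": 184.20, "merchant_risk": 0.12},
    lambda f: 0.9 if f["amount"] > 1000 else 0.1,
)
kill_switch("fraud-model")   # the model now refuses any further decisions
```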

For CFOs, this is no longer solely a compliance question. It is an operational resilience issue. Recent PYMNTS data shows finance leaders are proceeding carefully but deliberately. About 45% of CFOs report using AI today to monitor working capital and cash flows, reflecting comfort in areas where rules are clear and performance can be measured. Another 52% say they would allow AI to recommend liquidity and payment timing adjustments, provided human decision-makers retain final authority.

Meanwhile, 62% indicate they would permit AI systems to automatically monitor and adapt to new regulations, signaling growing confidence in AI for compliance-oriented tasks. While fewer CFOs currently trust AI with complex cross-system execution, the trajectory is clear: finance teams are scaling adoption in direct proportion to governance, control and explainability.