AI Regulatory Reform Gets Closer Ties to Banking Innovation

October 30, 2025

The Bank Policy Institute (BPI) has called on the Trump administration to modernize and harmonize federal oversight of AI in the financial sector, warning that outdated supervisory frameworks are stifling innovation and slowing the deployment of critical fraud- and risk-management technologies.

In comments submitted in response to the Office of Science and Technology Policy (OSTP) Request for Information on AI regulatory reform, BPI said AI adoption in the financial sector is at a critical inflection point.

“Although banks have been using traditional AI and machine learning tools for decades, the emergence of generative AI, including large language models… has opened the door to transformational benefits for financial institutions,” the trade organization wrote.

It warned, however, that the current regulatory climate “often discourages, rather than incentivizes, responsible AI innovation,” as supervisors and examiners sometimes apply outdated compliance expectations that fail to account for AI’s evolving risk profile. This, the group said, has created “hesitation, if not risk-aversion,” among institutions seeking to modernize their operations.

BPI also stressed that financial institutions are facing increasingly sophisticated attacks from bad actors using AI tools, and urged regulators to consider “the risks of not innovating.” In particular, it said, banks need flexibility to deploy AI-powered defenses rapidly to combat synthetic identity fraud and deepfake-based scams, threats already documented by federal agencies and the Government Accountability Office.

The institute’s submission is among thousands filed by a wide range of stakeholders across sectors, but it stands out for its focus on how regulation affects financial institutions’ ability to adopt and deploy AI tools responsibly and effectively.

At the core of BPI’s recommendations is a call to narrow the scope of the federal banking regulators’ Model Risk Management Guidance, which the institute said is ill-suited for evaluating non-deterministic AI models. The current framework, dating to 2011, was designed for deterministic models used in capital and liquidity calculations. Applying it to generative or adaptive AI systems, BPI argued, has produced delays of up to nine months in approving new or updated tools.

Read more: With Congress Still MIA on AI, State Legislators Expand Their Efforts at Regulation

BPI urged regulators to clarify that not every AI use case should be subject to model risk requirements, pointing to generative productivity assistants, natural language search and cybersecurity monitoring as examples. It also called for procedural requirements to be based on risk, with lower-risk models exempted from overly burdensome independent validation processes.

Beyond risk-based AI supervision, BPI’s recommendations included mandatory examiner training on AI technologies and cross-agency coordination to ensure consistent regulatory treatment and reduce duplicative oversight.

BPI also urged OSTP to look to international examples, such as the U.K. Financial Conduct Authority’s “technology-neutral, outcomes-focused” approach, as models for balanced innovation oversight.

BPI’s submission directly aligns with the first pillar of the White House AI Action Plan, released in July, which seeks to “remove red tape and onerous regulation” that hinders private-sector AI development.

“Maintaining U.S. AI dominance requires that banks not only have the ability to adopt AI without excessive restrictions, but also the freedom and incentives to drive AI innovation,” the institute said.