The debate over artificial intelligence policy has entered a new phase, as Washington moves toward a national framework that seeks to unify a fragmented regulatory landscape and establish clearer expectations for how the technology is governed across industries, including banking.
For banks, the timing is not incidental. The industry has spent years embedding AI into core operations. PYMNTS Intelligence data from 2024 shows that nearly three-quarters of finance leaders reported their departments were using AI, with applications spanning fraud detection, risk management and automation. These are the operational systems that influence how accounts are opened, how transactions are approved and how risk is priced.
The White House-directed push toward a national framework signals that policymakers are unlikely to regulate AI as a distinct category. Instead, AI is being positioned as a capability absorbed into existing financial rules rather than governed through a standalone regime. That approach carries consequences for banks.
AI Within Existing Regulatory Boundaries
Under that approach, AI inherits the rules that already govern the activities it touches. A fraud model that declines a transaction is subject to the same expectations as any other payment decision. An onboarding model that flags a customer is bound by the same requirements that govern identity verification and fair access.
If a model contributes to an erroneous denial, a missed fraud event or a discriminatory outcome, the responsibility rests with the institution that deployed it. The technology becomes inseparable from the financial action it enables.
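To make that concrete, consider a minimal Python sketch of what it means for a model's output to be treated as a financial decision rather than a raw technical score. The field names, model version tag and threshold below are hypothetical assumptions for illustration, not any institution's actual schema.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json
import uuid

# Hypothetical sketch: every automated action is wrapped in a record the
# institution owns and can later reconstruct and defend. All names,
# thresholds and the version tag are illustrative assumptions.

@dataclass
class FraudDecision:
    decision_id: str
    transaction_id: str
    model_version: str
    score: float
    action: str              # "approve" or "decline"
    reasons: list[str]       # human-readable reason codes for examiners
    decided_at: str

def decide(transaction_id: str, score: float, threshold: float = 0.85) -> FraudDecision:
    """Turn a model score into an auditable financial decision."""
    action = "decline" if score >= threshold else "approve"
    reasons = ["score_above_threshold"] if action == "decline" else []
    return FraudDecision(
        decision_id=str(uuid.uuid4()),
        transaction_id=transaction_id,
        model_version="fraud-model-v3",   # illustrative version tag
        score=score,
        action=action,
        reasons=reasons,
        decided_at=datetime.now(timezone.utc).isoformat(),
    )

record = decide("txn-1001", score=0.91)
print(json.dumps(asdict(record), indent=2))  # would be persisted to an audit log
```

The design point is that the decision record, not the model, is the unit regulators and auditors would examine, which is why it carries the context needed to explain the outcome after the fact.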
The 2025 State of Fraud and Financial Crime report illustrates how deeply AI is already embedded in that decision layer. Financial institutions are shifting toward intelligence-driven fraud defenses, combining machine learning and behavioral analytics to manage increasingly complex threats. At the same time, 68% of institutions have increased fraud detection spending, reflecting the central role these systems now play in operational risk management.
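As a rough illustration of that intelligence-driven approach, the sketch below blends a machine-learning score with simple behavioral signals. The weights, signal names and cutoff are illustrative assumptions, not any vendor's method.

```python
# Hypothetical sketch of blending an ML fraud score with behavioral
# analytics; all weights, signal names and thresholds are assumptions.

def behavioral_risk(signals: dict) -> float:
    """Translate simple behavioral signals into a 0-1 risk contribution."""
    risk = 0.0
    if signals.get("new_device"):
        risk += 0.3
    if signals.get("txn_velocity_per_hour", 0) > 10:
        risk += 0.3
    if signals.get("geo_mismatch"):
        risk += 0.4
    return min(risk, 1.0)

def combined_score(model_score: float, signals: dict, w_model: float = 0.6) -> float:
    """Weighted blend of a machine-learning score and behavioral analytics."""
    return w_model * model_score + (1 - w_model) * behavioral_risk(signals)

score = combined_score(0.55, {"new_device": True, "txn_velocity_per_hour": 14})
print(f"combined risk: {score:.2f}", "-> review" if score >= 0.5 else "-> allow")
```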
Fraud, Identity and the Weight of Decisions
The same report underscores why regulators are unlikely to treat AI outputs as abstract or experimental. Unauthorized-party fraud now accounts for 71% of incidents and losses, driven by credential theft and account takeovers. These are precisely the areas where AI is deployed to make real-time judgments about identity, authorization and intent.
In that context, AI-driven decisions are indistinguishable from the financial actions they trigger. When a model approves a payment or allows account access, it is participating in a regulated activity. When it fails, the consequences extend beyond losses to include reputational damage and erosion of customer trust, which half of institutions report experiencing.
From Outputs to Accountability
For banks, AI-driven identity checks, fraud decisions and payment approvals will not be assessed as technological outputs. They will be judged as financial decisions subject to established consumer protection, anti-fraud and compliance frameworks.
This distinction alters how institutions must approach model development and deployment. Performance metrics such as speed and accuracy remain important, but models must also be explainable, auditable and consistent with regulatory expectations that were designed for human decision-making but now apply to automated systems.
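One hedged sketch of what "explainable and auditable" can mean in practice: a simple linear scorer whose per-feature contributions double as reason codes attached to each automated decision. The feature names and weights are hypothetical, and real models would be more complex, but the principle of pairing every score with its drivers is the same.

```python
# Hypothetical sketch: per-feature contributions serve as reason codes so
# each automated decision can be explained to examiners and auditors.
# Feature names and weights are illustrative, not a production model.

WEIGHTS = {"amount_zscore": 0.9, "new_payee": 0.6, "odd_hour": 0.3}

def score_with_reasons(features: dict) -> tuple[float, list[str]]:
    contributions = {name: WEIGHTS.get(name, 0.0) * value
                     for name, value in features.items()}
    total = sum(contributions.values())
    # Reason codes: the features that drove the score, largest first.
    reasons = [name for name, c in sorted(contributions.items(),
                                          key=lambda kv: kv[1],
                                          reverse=True) if c > 0]
    return total, reasons

total, reasons = score_with_reasons({"amount_zscore": 2.1, "new_payee": 1, "odd_hour": 0})
print(f"score={total:.2f}", "reasons:", reasons)
# e.g. score=2.49 reasons: ['amount_zscore', 'new_payee']
```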
The shift also exposes gaps between adoption and readiness. While AI usage is widespread, the PYMNTS Intelligence 2024 data points to persistent concerns around consumer trust, cybersecurity exposure and regulatory uncertainty. Those concerns now take on added weight as policy begins to formalize expectations.
The Read Across for Banks
The next phase of competition will hinge on which institutions can demonstrate that their models produce outcomes that withstand examination by regulators, auditors and, if necessary, courts.
This is a different kind of arms race. It places a premium on governance and model transparency. It also requires tighter integration between risk, compliance and technology teams, as decisions once made in isolation are now subject to broader institutional accountability.