Capitol Hill Confronts AI’s Growing Grip on Financial Services

Highlights

Lawmakers zeroed in on how AI is already influencing underwriting, fraud analytics, surveillance systems, and other functions central to financial services.

Industry witnesses from markets, cloud infrastructure, cybersecurity and housing platforms agreed that AI can strengthen financial systems, but warned of risks tied to data quality and scoring models.

The panel signaled broad support for clearer federal guardrails as AI becomes embedded in everyday financial workflows.

Artificial intelligence (AI) has moved rapidly into every corner of the economy.


    Financial services has become a flash point in the debate over whether AI will strengthen or destabilize markets, as a hearing before the House Financial Services Committee on Wednesday (Dec. 10) illustrated.

    While market operators, cloud providers, cybersecurity leaders, consumer advocates and housing-platform executives differed on policy, they shared a broad consensus: AI is already transforming risk scoring, fraud detection and financial decisioning, and those shifts require clear guardrails to ensure safety, fairness and trust.

    AI’s Expanding Role

    At the hearing, titled “From Principles to Policy: Enabling 21st Century AI Innovation in Financial Services,” Nasdaq President Tal Cohen described an industry changing at the speed of technology, noting that the firm sees AI as central to advancing liquidity, transparency and market integrity. In his words, “We need to deploy AI to make our markets better.” Cohen detailed that his firm uses it to detect market manipulation, strengthen surveillance, and enhance fraud identification in vast data environments.

    That view of AI as both practical and necessary echoed across the hearing. Google Cloud executive Jeanette Manfra, who heads risk and compliance, highlighted the scale of potential efficiency gains in banking and the need for clarity on how to manage the risks that accompany those gains. She told lawmakers that AI’s productivity improvements “across the banking sector are forecast to be substantial” with potential annual gains reaching hundreds of billions of dollars.

    Yet Manfra cautioned that firms need consistent regulatory expectations around documentation, testing, and oversight of AI models. She argued that existing frameworks can be adapted but may require clarification to address “the new types of risks introduced by these advanced AI systems.”


    Banking Risk, Scoring Models and Governance

    A central theme of the hearing was the role of AI in risk scoring, underwriting and assessments that directly affect consumers. Zillow Vice President of Product Nicholas Stevens described how earlier generations of machine learning reshaped the housing search and how generative AI is now being built into consumer-facing products in real time.

    He pointed to rising expectations for speed, transparency and fairness as consumers use AI tools to make decisions about affordability and home selection. He emphasized that national standards could help ensure consistency across platforms. According to his testimony, “If AI regulation is hyper-fragmented, the same digital assistant could have different capabilities in different places — what it can display, what it must ask or confirm before acting, or whether it can complete a task.”


    Consumer advocates pushed harder on the risks embedded in these systems. Public Citizen’s J.B. Branch stressed that the consequences of algorithmic decisioning are already visible and often harmful, telling the committee that “AI systems are already producing real-world damage at scale” with discrimination, consumer manipulation and fraud. Branch warned that states have been the primary actors responding to bias and automated harm, and argued that federal preemption attempts would undermine protections as AI adoption accelerates.

    Branch framed fairness not as a political preference but as a statutory requirement, noting that “algorithmic fairness simply means that automated systems should not produce discriminatory outcomes” and that civil rights protections “are neither novel nor partisan” when applied to AI. His testimony underscored the risk that opaque scoring algorithms can perpetuate or worsen inequities in credit and housing access.

    Cybersecurity, Fraud, Market Integrity

    Another major theme was the interplay between AI and cybersecurity. Palo Alto Networks’ Chief Security Intelligence Officer Wendi Whitmore warned that the attack surface facing financial institutions is expanding rapidly. She testified that “attacks have become faster, more automated, and harder to detect” and that financial institutions sit “at the center of the digital economy” where adversaries exploit APIs, cloud environments and third-party integrations.

    Whitmore described a cybersecurity landscape where generative and agentic AI shorten attack cycles significantly. Her testimony cited research showing ransomware campaigns compressed from days to roughly 25 minutes, and she emphasized that “AI-driven security operations” are now essential to counter threats that outpace human response times.

    Nasdaq’s Cohen stressed similar stakes on the market-operations side. He said financial crime is a “multitrillion-dollar problem” and that AI models must be trained on large and unbiased datasets to identify money-laundering patterns effectively.

    When U.S. Rep. Joyce Beatty, D-Ohio, asked the panelists about the use of AI in combating money laundering and other illicit activities, Cohen said that consortium data helps protect small and mid-sized banks, adding, “We are also looking at nontraditional rails, like crypto rails, to consider how payments and transactions may be subject to fraud as well.”

    Balancing Innovation, Accountability

    Across the hearing, witnesses diverged on the question of federal regulation, particularly around preemption and the level of detail Congress should prescribe. But several themes emerged as shared priorities.

    In his opening remarks, Committee Chairman French Hill, R-Ark., said that “Just as Congress navigated the uncertainties of the internet in the ’90s, we must view AI as an opportunity, not a threat, applying lessons of that era to today’s technological landscape.”

    He added, “Identifying gaps and obstacles in our regulatory frameworks will help Congress create an AI landscape where innovation can flourish without unnecessary barriers.”

    Cohen urged lawmakers to “leverage existing regulations wherever possible” and focus on risk-based oversight rather than technology-specific mandates. Manfra similarly argued that regulatory clarity must help firms manage AI safely without stifling beneficial applications.

    Stevens called for a national framework to create “strong, predictable protections” that enable scaling AI tools consistently and responsibly across housing and mortgage markets.

    Branch, however, urged Congress to reject sweeping federal preemption efforts that would “strip protections” enacted by states.