Congress Warns of AI-Driven Cyber Threats Ahead of Major US Events

Hackers only need to get it right once to wreak havoc.

    This week, a congressional committee hearing, “Oversight of the Department of Homeland Security: CISA, TSA, S&T,” underscored that openings for attackers are multiplying as fraudsters increasingly go after the biggest targets they can find.

    Lawmakers used the hearing to stress how cyber intrusions, autonomous drones and artificial intelligence are no longer discrete risks managed in silos. These risk factors are converging into a single operational reality capable of disrupting the transportation networks, energy corridors and digital systems that underpin the United States economy.

    “Rapid advances in emerging technologies, including AI, are further accelerating the scale, speed and sophistication of these cyber operations,” House Committee on Homeland Security Chairman Andrew R. Garbarino of New York said Wednesday (Jan. 21).

    “[S]everal major, high-profile events are set to unfold across the U.S., including the 2026 FIFA World Cup, America 250, and the 2028 LA Olympics,” he added. “These events will only increase the volume and complexity of threats.”

    Garbarino’s full set of remarks emphasized that modern threats are more automated, more scalable and more difficult to attribute than those of even a decade ago. The concern is not just about isolated breaches but about cascading failures across interconnected systems.

    For leaders in banking and payments, the relevance may be immediate. Financial services firms don’t operate airports or rail lines, but they are entwined with the same digital and physical infrastructure now under scrutiny. The hearing underscored a reality many security executives already feel: the line between critical infrastructure and private enterprise is increasingly blurred.

    Read also: Supply Chain Cyberattack Puts Enterprise Trade Secrets at Risk

    Why Financial Services Are Squarely in the Blast Radius

    Threats today have changed qualitatively, not just incrementally. Although banks and payment processors were not the direct subject of the hearing, they loomed large in its implications. Financial firms sit at the crossroads of digital identity, real-time data exchange and physical infrastructure dependency. They rely on telecommunications networks, cloud service providers and power grids that are often designated as critical infrastructure themselves.

    At the same time, financial institutions are among the prime targets for AI-enabled cybercrime. Generative models can produce tailored phishing campaigns, synthetic identities and deepfake audio that undermine traditional authentication controls. When combined with disruptions elsewhere in the economy, such as transportation outages, these attacks can become harder to detect and respond to in real time.

    One of the hearing’s more consequential themes was AI’s dual-use nature. DHS officials acknowledged that AI is indispensable for monitoring vast networks and detecting anomalies that human analysts would miss. At the same time, adversaries are using similar tools to probe defenses, automate exploits and generate deceptive content at scale.

    The PYMNTS Intelligence report “2025 State of Fraud and Financial Crime in the United States” found that 70% of institutions reported active use of behavioral analytics and 61% reported using machine learning in their defenses. But nearly 1 in 5 financial institutions, especially smaller and regional banks, still operate without these advanced technologies.

    The hearing implicitly challenged private firms to think of AI governance as a security issue, not just a compliance or ethics concern. In an environment where AI accelerates attack cycles, delays in oversight or controls can have immediate consequences.

    See also: Third-Party Risk and AI Gave Cyberattacks the Upper Hand in 2025

    Collaboration Moves From Optional to Essential

    A recurring theme in the testimony was the necessity of collaboration. Agencies like the Cybersecurity and Infrastructure Security Agency (CISA) have long promoted information sharing and joint exercises, but participation across sectors remains uneven. The hearing suggested that this gap is becoming untenable.

    For executives, this implies a cultural shift. Security can no longer be treated as an internal function optimized solely for competitive advantage. In a systemic crisis, collective defense and transparency may be the only way to contain damage.

    What can security-critical private firms take away from this moment?

    First, risk assessments must expand beyond traditional cyber scenarios to include physical-digital convergence. A bank’s threat model should account for disruptions to transportation, energy and communications that affect operations indirectly.

    Second, investment priorities may need recalibration. Spending on perimeter defenses is necessary but insufficient. Greater emphasis on resilience through redundant systems, manual fallbacks and crisis decision-making can align more closely with the risks highlighted in the hearing.

    Third, leadership engagement is crucial. The issues discussed on Capitol Hill are not purely technical. They touch governance, regulatory strategy and public trust, and they demand leaders who are fluent in how emerging technologies can reshape risk.

    In that sense, the hearing was less about transportation systems alone and more about the future of trust in complex, technology-driven economies.

    For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.