
Senators to Introduce Bipartisan Bill to Provide Federal Oversight of AI Risks

September 29, 2025

Just under the wire before a possible government shutdown, Sens. Josh Hawley (R-MO) and Richard Blumenthal (D-CT) were expected to introduce legislation Monday to establish a federal program to evaluate the risks posed by AI systems.


According to Axios, the Artificial Intelligence Risk Evaluation Act would house the program within the Department of Energy to “collect data on the likelihood of adverse AI incidents, such as loss-of-control scenarios and weaponization by adversaries,” per a memo prepared by the senators’ offices.

Under the legislation, developers of advanced AI systems would be required to submit information about their systems to the program office and would be barred from deploying their models unless and until they meet the criteria the program would establish.

The bill reflects a bipartisan appetite in Congress for some measure of federal oversight of AI technology development and the potential risks associated with it.

As such, it runs counter to the White House’s policy toward AI, which calls for the mostly unfettered development of AI technology with minimal regulation.

“While some proposals would take a hands-off approach to AI, this new bipartisan legislation…would guarantee that there is common-sense government oversight of the most advanced AI systems to better inform and protect the public,” a description of the bill shared with Axios said.

The Hawley-Blumenthal approach to federal AI oversight carries echoes of the European Union’s AI Act, which likewise takes a risk-based approach to setting regulatory thresholds for AI systems, with those posing the greatest systemic risk subject to the strictest rules.

The European Commission last week issued draft guidelines and a template for reporting serious AI incidents. According to the guidelines, “Incidents are generally defined by their actual or potential negative consequences, particularly (potential) harm to humans or critical systems, additionally considering sectorial specificities. An incident is a not planned/programmed deviation in the characteristics of performance.”

The Commission is seeking input and feedback from targeted stakeholders as part of a public consultation on the draft guidelines that runs through Nov. 7.

The Trump administration has been harshly critical of the European law, accusing the EU of unfairly imposing burdensome regulations predominantly on U.S. technology companies.

Hawley and Blumenthal both sit on the Senate Judiciary Committee and have worked together on several bills aimed at placing guardrails around AI technology. In July, they introduced a bipartisan measure to protect consumers’ data rights and bar technology companies from using copyrighted works without permission to train AI models.

“AI companies are robbing the American people blind while leaving artists, writers, and other creators with zero recourse,” Hawley said in a statement at the time. “It’s time for Congress to give the American worker their day in court to protect their personal data and creative works.”

As for their latest effort, “Congress must not allow our national security, civil liberties, and labor protections to take a back seat to AI,” Sen. Hawley said. “This bipartisan legislation would guarantee common-sense testing and oversight of the most advanced AI systems, so Congress and the American people can be better informed about potential risks.”

Added Sen. Blumenthal, “Our legislation would ensure that a federal entity is on the lookout, scrutinizing these AI models for threats to infrastructure, labor markets, and civil liberties.”