
Senate Bill Would Shield AI Developers From Civil Liability In Certain Uses of Their Tools

June 12, 2025

Senator Cynthia Lummis (R-WY) on Thursday introduced a bill to shield AI developers from certain types of civil liability lawsuits provided they publicly disclosed their model’s design specifications.


    Specifically, the Responsible Innovation and Safe Expertise (RISE) Act would clarify that when professionals such as physicians, attorneys, or financial advisors use AI tools in their practice, they retain a legal responsibility to exercise due diligence and verify the system’s outputs.

    “This legislation doesn’t create blanket immunity for AI – in fact, it requires AI developers to publicly disclose model specifications so professionals can make informed decisions about the AI tools they choose to utilize,” Lummis said in a press release put out by her office. “It also means that licensed professionals are ultimately responsible for the advice and decisions they make.”

    Liability for adverse outcomes where AI tools are used in a professional context has been a murky area, with different states adopting different standards. The RISE Act would create a single, federal standard.

    “AI is transforming professional industries including medicine, law, engineering, and finance, with these tools increasingly being utilized in critical decision-making processes that impact millions of Americans,” Lummis’ statement said. “Current liability rules create barriers to innovation by exposing AI developers to legal risk, even when their tools are used responsibly by trained, licensed professionals in their areas of expertise.”


    The bill is of a piece with, but separate from, a controversial provision in the One Big Beautiful budget bill passed by the House and now pending in the Senate to impose a 10-year moratorium on states enacting or enforcing laws regulating AI.

    The moratorium provision would bar states from placing “legal impediments” — including on the design, performance, liability, and documentation — on AI and “any computational process derived from machine learning, statistical modeling, data analytics, or artificial intelligence that issues a simplified output, including a score, classification, or recommendation, to materially influence or replace human decision making.”

    The Lummis bill differs in that it would preempt states’ efforts to legislate on liability for the use of such systems rather than on their design. Unlike the budget bill provision, Lummis’ bill also limits the relief it provides AI developers to the use of their products by licensed professionals and places conditions on that relief.

    “Developers may claim safe-harbor immunity only if they publicly release a model card and key design specifications so that physicians, attorneys, engineers, and other professionals can understand what the AI can and cannot do before relying on it for decisions,” her statement said.

    It also would not affect liability in other AI applications, such as self-driving vehicles.

    Lummis is a member of the Commerce Committee’s subcommittee on Consumer Protection, Technology, and Data Privacy, which would have jurisdiction to consider the bill.