As Congress Considers a Ban On State AI Regs, California and NY Forge Ahead

June 19, 2025

As Congress considers imposing a 10-year ban on states passing or enforcing regulations on AI developers as part of the Big Beautiful budget bill, California and New York this week each took steps toward placing guardrails around at least the largest, so-called frontier AI models.

    On Tuesday, California, home to some of the largest U.S. AI companies, released the final version of its Report on Frontier AI Policy prepared by a working group of academic experts appointed by Governor Gavin Newsom. Although the report does not endorse any particular piece of legislation, it lays out a blueprint for a comprehensive regulatory framework.

    In New York, also an AI hub, Assemblyman Alex Bores (D-Manhattan) introduced the Responsible AI Safety and Education Act, known as RAISE, the Albany Times Union reported. The bill would require developers of frontier models to put in place robust safety and security plans, among other measures.

    The California report is an outgrowth of an earlier effort by the state to regulate AI technology. In 2024, state lawmakers passed SB1047, which would have required developers of the biggest AI models to submit a safety and security plan to the attorney general, who could hold them liable if their systems caused harm or created an imminent danger to public safety. But Newsom vetoed the measure in September.

    “While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data,” Newsom said at the time. “I do not believe this is the best approach to protecting the public from real threats posed by the technology.”

    Instead, Newsom convened the working group, led by Stanford’s Fei-Fei Li, a leading figure in AI research and a critic of SB1047.

    Related: Senate Bill Would Shield AI Developers From Civil Liability In Certain Uses of Their Tools

    The report lays out core principles lawmakers should adopt in crafting any future AI regulation. They include a commitment to evidence-based policymaking; a focus on transparency and disclosure of risks; an adverse-event reporting system; third-party validation of risk self-assessments; and moving beyond simple computational thresholds to consider a model’s capabilities, downstream impacts and risk levels.

    The New York bill has already attracted opposition from major technology companies, including Armonk, N.Y.-based IBM and Meta, which argue that it would place unworkable compliance burdens on small developers. Assemblyman Bores counters that it would apply only to the very largest AI companies, those spending at least $100 million to train frontier models.

    Both the California and New York efforts could be rendered moot if Congress goes ahead with federal preemption. The proposal to place a 10-year moratorium on state laws, however, has divided Republicans on Capitol Hill.

    The House included it in its version of the budget bill. But even some Republicans who voted for the overall bill, such as Rep. Marjorie Taylor Greene (GA), now say they were unaware of the moratorium provision at the time and would vote against the bill if it comes back to the House from the Senate.

    The Senate, meanwhile, included the provision in its initial draft of the bill but changed how it would apply. Even so, some Senate Republicans have balked at its inclusion.

    A group of conservatives last week sent a letter to Senate leadership warning that Congress is still “actively investigating” AI and “does not fully understand the implications” of the technology.

    Separately, Sen. Josh Hawley (R-MO) has indicated he is concerned about the economic impact of AI and said he would consider introducing an amendment to strike the provision when the Senate marks up the bill, according to The Hill.

    “I’m only for AI if it’s good for the people,” he told reporters. “I think we’ve got to come up with a way to put people first.”

    Sen. Ron Johnson (R-WI), a leading GOP opponent of the budget bill, also expressed skepticism of the moratorium provision.

    “I personally don’t think we should be setting a federal standard right now and prohibiting the states from doing what we should be doing in a federated republic,” he told the Capitol Hill publication. “Let the states experiment.”