
OpenAI Wants California to Take the Lead in Setting a National Standard for AI Regulations

August 17, 2025

The Brussels Effect may soon start to be felt in U.S. state houses. Last Monday, OpenAI wrote to California Governor Gavin Newsom (D) urging the Golden State to “take the lead” in harmonizing state-level AI regulations with “emerging global standards,” as reflected in the European Union’s AI Act.


“In particular, we recommend policies that avoid duplication and inconsistencies between state requirements and those of similar democratic regimes governing frontier model safety,” the letter said. “Last week, we became the first US AI company to announce our intent to sign the EU AI Act Code of Practice (CoP), which already creates requirements similar to many being contemplated in California.”

The letter, signed by Chief Global Affairs Officer Christopher Lehane, proposed treating signatories to the Code of Practice as presumptively compliant with California’s AI rules.

“By integrating the provisions of the EU CoP and agreements with the US Center for AI Standards and Innovation (CAISI) into any compliance pathway, the state can protect residents, uphold democratic values, and promote innovation on a global scale,” Lehane wrote. “In order to make California a leader in global, national and state-level AI policy, we encourage the state to consider frontier model developers compliant with its state requirements when they sign onto a parallel regulatory framework like the CoP or enter into a safety-oriented agreement with a relevant US federal government agency [sic].”


In a blog post accompanying the letter, the company warned that the U.S. must set a clear national standard for AI safety, “or risk a patchwork of state rules—some subset of the 1,000 moving through state legislatures this year—that could slow innovation without improving safety.”

As has become common in discussions of AI policy in the U.S., the letter raised the specter of China’s rapid progress in developing AI technology as a threat to U.S. leadership. “Companies operating in the communist-led People’s Republic of China are unlikely to abide by US state laws, and in fact will benefit from regulations that burden their US competitors with inconsistent standards,” it said. “Imagine how hard it would have been during the Space Race had California’s aerospace and technology industries been encumbered by regulations that impeded rapid innovation and transition technology, instead of strengthening national security and economic competitiveness.”

OpenAI’s recommended approach marks a stark turnaround in strategy for the company. Like most large technology companies, OpenAI had long urged the federal government to set a single national regulatory framework for AI that would override state-level rules. With the collapse of the Republican-led effort to impose a moratorium on state AI regulations in the Big Beautiful budget bill, however, the company now seems to view state governments, led by California’s, as offering the more immediate path to regulatory harmonization.

OpenAI was also active in lobbying the European Commission in an effort to water down provisions in the AI Code of Practice. Having committed to implementing the Code’s recommendations, with all the operational adjustments and investment that will require, the company is now keen for the Code to become a model for other jurisdictions to follow.