
California Frontier AI Working Group Issues Report on Foundation Model Regulation

March 28, 2025


The Joint California Policy Working Group on AI Frontier Models (the “Working Group”) published a draft report on March 18, 2025, outlining recommendations for regulating foundation models. The goal is to provide a data-driven basis for AI policy in California, ensuring that these powerful technologies benefit society while addressing potential risks.

Governor Gavin Newsom (D) formed the Working Group in September 2024 after vetoing the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), introduced by State Senator Scott Wiener (D-San Francisco). The group builds on California’s existing AI policy framework, including its collaboration with Stanford University and the University of California, Berkeley, which was initiated under Newsom’s 2023 executive order on generative AI.

The report acknowledges that foundation model capabilities have advanced significantly since SB 1047’s veto and warns that California’s unique position to shape AI governance may not last indefinitely. It highlights three key areas for regulation: transparency, third-party risk assessments, and whistleblower protections.

Transparency Measures

The report asserts that transparency is a fundamental requirement for AI oversight and recommends prioritizing public-facing disclosures to enhance accountability. It proposes transparency standards covering five core areas: 1) Training data sources; 2) Developer safety protocols; 3) Security practices in model development; 4) Pre-deployment testing by developers and independent assessors; and 5) Potential downstream impacts, including disclosures from platforms that distribute foundation models.

Third-Party Risk Assessments

While transparency is crucial, the report argues that it is insufficient on its own to ensure accountability. It emphasizes independent, third-party risk assessments as a means of pushing developers to improve model safety. The report suggests that policymakers explore legal protections, such as safe harbor provisions, to support public-interest AI safety research. It also calls for mechanisms to relay discovered vulnerabilities efficiently to developers and affected stakeholders.

Whistleblower Protections

The report also addresses the need for safeguards for employees and contractors involved in foundation model development. It advises policymakers to adopt protections that extend beyond violations of existing law, covering instances where companies fail to adhere to their own AI safety policies…
