New California Law Requires AI Risk Assessments by End of 2027 

February 4, 2026

While companies raced to adopt artificial intelligence throughout 2025, they may have overlooked a critical problem. AI hasn’t just created new risks—it has exposed infrastructure weaknesses that many organizations have carried since the late 1990s, according to legal experts.

Law firm Lowenstein Sandler is now urging companies to act immediately on AI risk assessments, warning that boards and regulators expect visible progress on AI governance. The firm points to mounting pressures: engineers deploying models faster than legal teams can review them, vendor contracts that don’t address who owns training data, and regulators who are paying close attention.

Under California’s new mandatory risk framework, companies must include AI risk assessments as part of their enterprise risk evaluation by December 31, 2027. The recent executive order establishing a national AI policy framework signals that federal regulatory enforcement may intensify, even as states and the federal government clash over jurisdiction.

The firm recommends that organizations adopt the National Institute of Standards and Technology (NIST) AI Risk Management Framework as their foundation. The framework organizes AI governance into four functions: Govern, Map, Measure, and Manage. This sector-neutral framework is becoming the industry standard and provides practical implementation tools that help legal, risk, and engineering teams work together effectively.

“We want your team to avoid a scenario such as discovering your customer service AI was making eligibility decisions it wasn’t authorized to make,” the law firm warns in its guidance. “Legal thought they’d prohibited automated decisioning. Engineering thought the model was advisory-only. Product thought they’d disclosed it. Nobody had mapped who owned the output or who could stop the model.”

That nightmare scenario highlights why infrastructure mapping matters. Companies need to know who owns AI outputs when engineering trains models on customer data, product teams deploy them, and legal departments carry liability for the decisions they drive. Organizations must also identify who has the authority to pause or override systems when risks emerge.
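The firm’s guidance does not prescribe tooling, but a system of record for that mapping can be as simple as a structured registry. The sketch below is purely illustrative, with hypothetical field names; it captures the ownership and stop-authority questions that, in the quoted scenario, nobody had answered.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

@dataclass
class AISystemRecord:
    """One entry in a hypothetical AI system of record."""
    name: str                    # internal identifier for the model or tool
    purpose: str                 # what the system is authorized to do
    risk_tier: RiskTier          # drives review depth and testing cadence
    output_owner: str            # team accountable for decisions the model influences
    pause_authority: str         # who can stop or override the system
    automated_decisioning: bool  # advisory-only vs. making binding decisions
    disclosures: list[str] = field(default_factory=list)  # where use is disclosed

# The quoted failure mode is an ambiguous or missing record. Writing the record
# down forces the cross-team questions to be answered once, explicitly:
registry = [
    AISystemRecord(
        name="customer-service-assistant",
        purpose="Answer support questions; no eligibility decisions",
        risk_tier=RiskTier.HIGH,
        output_owner="Legal",
        pause_authority="Engineering on-call",
        automated_decisioning=False,  # the constraint Legal believed was enforced
        disclosures=["customer-facing chat banner"],
    ),
]
```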

Related: January 2026 Brings a New Phase of AI Rules Across the United States, Europe, and China

Lowenstein Sandler outlines a three-phase approach. In the first three months, companies should map AI usage, assign owners for AI risk and compliance, and create a system of record for all AI systems in use. The next three to nine months should focus on broadening testing protocols, revising contracts for AI-specific obligations, and implementing technical controls. The final phase, from nine to eighteen months and beyond, involves continuous monitoring through alerts, dashboards, and drift detection.
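The firm names drift detection as a monitoring goal without specifying a method. As one hedged sketch of what that final phase might look like in practice, a team could compare recent production scores against a deployment-time baseline with a two-sample statistical test; the use of scipy here is an assumption about tooling, not part of the guidance.

```python
import numpy as np
from scipy.stats import ks_2samp  # assumed tooling; any two-sample test works

def drift_alert(baseline: np.ndarray, recent: np.ndarray, alpha: float = 0.01) -> bool:
    """Flag drift when recent model scores diverge from a deployment baseline.

    A two-sample Kolmogorov-Smirnov test returns a small p-value when the two
    samples are unlikely to come from the same distribution.
    """
    _statistic, p_value = ks_2samp(baseline, recent)
    return p_value < alpha

# Example: validation scores captured at deployment vs. this week's production logs.
rng = np.random.default_rng(0)
baseline = rng.normal(0.50, 0.10, 10_000)  # score distribution at deployment
recent = rng.normal(0.60, 0.10, 1_000)     # scores this week, shifted upward
if drift_alert(baseline, recent):
    print("Drift detected: alert the system's designated risk owner.")
```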

Regulators understand that perfection isn’t immediately achievable; they expect to see visible progress and a credible improvement narrative. Companies should have three things in place now: an AI system inventory with risk tiers and ownership, incident response plans updated for AI-specific risks, and a charter for an AI governance committee.

Within the first year, organizations should develop AI policies and standards, create written protocols for testing and validation, and establish vendor diligence questionnaires. For systems affecting employment, housing, or vulnerable populations, those testing protocols may need to be implemented sooner rather than later.
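The guidance does not specify test methods. For employment-affecting systems in particular, one widely used screen (not named by the firm, offered here only as an example) is the EEOC’s “four-fifths” heuristic, which compares selection rates across groups and flags ratios below 0.8 for deeper review.

```python
def adverse_impact_ratio(selected_a: int, total_a: int,
                         selected_b: int, total_b: int) -> float:
    """Selection-rate ratio between two applicant groups.

    Under the EEOC's four-fifths rule of thumb, a ratio below 0.8 is a
    trigger for deeper review, not a legal conclusion on its own.
    """
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Example: a resume-screening model advances 30 of 200 applicants in one
# group and 45 of 200 in another.
ratio = adverse_impact_ratio(30, 200, 45, 200)
print(f"Adverse impact ratio: {ratio:.2f}")  # 0.67, below 0.8: flag for review
```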

The guidance emphasizes that operational documentation gives companies the knowledge they need to act when problems surface. Organizations building AI governance programs in 2026 should begin with infrastructure mapping and governance chartering to position themselves ahead of evolving requirements and ensure their AI tools remain reliable and compliant.

As AI continues moving quickly, the message from legal experts is clear: the time to build robust governance frameworks is now, not when regulators come knocking.