
California Is Cracking Down on Lawyers Who Let AI Do Their Homework  

February 12, 2026

Fake court cases. Made-up legal citations. A prosecutor who blamed it all on typos. California's highest court just drew a line in the sand, and lawyers who rely on AI without checking its work are now squarely in the crosshairs.


The California Supreme Court recently issued a unanimous ruling in Kjoller v. Superior Court of Nevada County that could reshape how attorneys across the state use artificial intelligence. The case involved a Nevada County prosecutor who submitted a legal brief citing eight court cases. Three of those cases were completely fabricated. Three more existed but had nothing to do with the points the prosecutor was making. Even a reference to the state constitution turned out to be irrelevant.

When the other side caught the errors and pushed for penalties, the prosecutor's explanations only made things worse. First, she said she had been "going too fast in her research." Then her office called the wholesale fabrications "scrivener's errors," a fancy legal term for clerical mistakes. The Supreme Court didn't buy it.


In its ruling, the Court directed a lower court to explain why the prosecutor should not face formal sanctions. It also pointed to a process that would allow a judge to launch a full investigation into whether the attorney had relied on AI-generated content without verifying it. In short, the Court opened the door to real accountability.

Just two weeks after the Kjoller decision, the California Senate passed SB 574, a bill that would require lawyers to take "reasonable steps" to verify anything produced by AI tools. It would also ban attorneys from feeding confidential client information into AI systems and stop arbitrators from handing decisions off to AI.

According to an analysis by law firm Jenner & Block, the implications go well beyond one bad brief. The firm noted that "a fabricated case is misconduct regardless of which platform generated it," adding that "the glossy marketing materials and brand recognition of premium vendors don't change that fundamental reality."

That point is worth lingering on. Many lawyers assume that paid AI tools from big-name legal research companies are safe to trust. The data suggest otherwise. As Jenner & Block points out, research cited in the Kjoller case found that AI products from LexisNexis and Thomson Reuters, two of the most respected names in legal research, produce false citations between 17% and 33% of the time. General-purpose tools like ChatGPT fare even worse, hallucinating on legal questions up to 88% of the time.

The Court's reasoning is also not limited to criminal law, even though Kjoller arose in that context. The message applies to every area of legal practice. Submitting AI-generated work to any court without verifying it could trigger sanctions, ethics investigations, and lasting damage to a lawyer's career.

What happens next could set the tone for AI regulation in courtrooms nationwide. If SB 574 becomes law, California would be among the first states to put formal AI verification requirements on the books for attorneys. Even if it stalls, the Kjoller ruling already establishes a new baseline. Courts are watching, and the standard of care is shifting in real time.

The era of blaming the machine is over.