ChatGPT in Court: Another Law Firm Caught in AI Hallucination Scandal, Sparking Regulatory Demands

May 27, 2025

A troubling trend in legal circles is once again in the spotlight after a major U.S. law firm apologized in federal court for relying on artificial intelligence-generated case citations that turned out to be fictitious.

Attorneys from Butler Snow, a Mississippi-founded firm with over 400 lawyers, acknowledged to U.S. District Judge Anna Manasco in Alabama that they had unknowingly submitted court filings containing false case citations produced by ChatGPT. The firm is representing former Alabama Department of Corrections Commissioner Jeff Dunn, who is being sued by an inmate alleging he was repeatedly assaulted while incarcerated. Dunn has denied any wrongdoing.

According to Reuters, partner Matthew Reeves admitted in a filing on Monday that he had failed in his professional duty by not verifying the citations. He expressed regret for what he called a “lapse in diligence and judgment.” While Judge Manasco has yet to decide whether sanctions will be imposed, the episode has amplified growing concerns about the unchecked use of AI tools in legal practice.

This incident is the latest in a string of high-profile legal missteps tied to the use of generative AI. Known as “hallucinations,” these AI-generated inaccuracies have emerged as a persistent issue in the legal field. Despite clear professional guidelines requiring attorneys to validate the accuracy of their submissions, artificial intelligence continues to complicate compliance.

Read more: “Hey ChatGPT, Please Write My Plea”: AI’s Arrival in Dutch Courts

As Reuters reports, while earlier cases mostly involved small firms or self-represented litigants, AI misuse is increasingly surfacing among larger firms and corporate defendants. Last week, a lawyer from global firm Latham & Watkins had to explain to a California judge why an expert report in a copyright case involving AI company Anthropic cited a nonexistent article, again the product of AI hallucination.

The ripple effect has been felt elsewhere too. In a separate case this month, K&L Gates and Ellis George faced sanctions totaling over $31,000 after a court-appointed special master found that both firms submitted inaccurate legal citations stemming from AI use. Representing former Los Angeles County District Attorney Jackie Lacey in a legal battle with State Farm, the firms were admonished for what the special master termed a “collective debacle.”

Retired judge Michael Wilner, who imposed the sanctions, wrote that he had been “affirmatively misled” by the filing. He explained that he had read the brief and been persuaded by its arguments and citations, only to discover that the referenced decisions did not exist, a moment he described as “scary,” per Reuters.

The recent spate of AI-related blunders underscores the urgent need for clearer standards and oversight regarding the use of artificial intelligence in legal work. As the legal profession grapples with how to incorporate AI responsibly, these incidents are fueling calls for stricter regulation and professional accountability to ensure that technological innovation does not undermine the justice system.

Source: Reuters