Anthropic Ordered to Respond After AI Allegedly Fabricates Citation in Legal Filing

May 14, 2025

A federal judge has instructed artificial intelligence company Anthropic to formally address claims that it included an AI-generated false citation in a court document submitted in a high-profile copyright infringement case. The order was issued Tuesday by U.S. Magistrate Judge Susan van Keulen during a hearing in San Jose, California, according to Reuters.

The dispute centers on a lawsuit brought by a coalition of music publishers, including Universal Music Group, Concord, and ABKCO, who allege that Anthropic improperly used copyrighted lyrics to train its AI chatbot, Claude. As part of its defense, Anthropic submitted expert testimony from a data scientist, Olivia Chen, which cited a non-existent academic article in support of the company's arguments about the frequency with which Claude reproduces song lyrics—a phenomenon it described as rare.

Per Reuters, the plaintiffs' attorney, Matt Oppenheim of Oppenheim + Zebrak, told the court that he had verified with both the academic journal American Statistician and one of the purported authors that the cited paper did not exist. He called the reference a "complete fabrication" but did not suggest Chen had acted intentionally. Instead, he said it was "likely" that Chen had relied on Claude itself to help generate the content and supporting sources for her analysis.

Anthropic has until Thursday to respond to the accusation. Judge van Keulen declined to allow immediate questioning of Chen but called the matter a "very serious and grave issue," drawing a distinction between a simple citation mistake and a false reference potentially generated by AI, according to Reuters.

Anthropic's attorney, Sy Damle of Latham & Watkins, argued that the plaintiffs were unfairly surprising the company with the allegation. He acknowledged the citation was incorrect but contended that it likely referred to a real article with an erroneous reference. The link provided in the court filing led to a different article in the same journal, with unrelated authors and a different title.

At the time of the hearing, representatives for Anthropic did not offer further comment, and Chen could not be reached. The case adds to a growing list of legal challenges facing AI developers over the use of copyrighted material, with courts increasingly scrutinizing the reliability of AI-assisted legal work. Several attorneys have recently faced criticism or sanctions for similar missteps involving "hallucinated" AI citations in court filings.

The case is Concord Music Group Inc. v. Anthropic PBC, U.S. District Court for the Northern District of California, No. 3:24-cv-03811.

Source: Reuters