Anthropic’s Legal Team Blames AI “Hallucination” for Citation Error in Copyright Lawsuit

May 18, 2025

A lawyer representing artificial intelligence firm Anthropic in a federal copyright case acknowledged this week that an erroneous citation in a legal filing stemmed from an AI-generated mistake, highlighting growing concerns over the reliability of generative AI tools in legal practice.

According to Reuters, Ivana Dukanovic of Latham & Watkins admitted in a court filing that the mistake originated when she used Anthropic’s own AI chatbot, Claude, to help generate a citation for an expert report. While the expert in question had relied on a legitimate article published in The American Statistician, Claude fabricated the article’s title and authors, leading to a misleading footnote.

“This was an embarrassing and unintentional mistake,” Dukanovic stated in the filing, per Reuters. She noted that although the AI-generated reference included the correct publication year and a valid URL, the fabricated details compromised the citation’s accuracy. Dukanovic clarified that the underlying research cited by Anthropic’s data scientist, Olivia Chen, was real and appropriately supported the company’s position in the ongoing dispute.

Read more: Anthropic Ordered to Respond After AI Allegedly Fabricates Citation in Legal Filing

The case, brought by music publishers Universal Music Group, Concord, and ABKCO, accuses Anthropic of improperly using copyrighted lyrics to train its AI models. The lawsuit is one of several prominent legal battles testing how copyright law applies to the training of artificial intelligence systems.

During a court hearing earlier in the week, the plaintiffs’ attorney, Matt Oppenheim of Oppenheim + Zebrak, suggested that Anthropic had relied on an AI-generated, and potentially fictitious, source to defend its position. U.S. Magistrate Judge Susan van Keulen expressed concern about the implications of such errors, calling the issue “very serious and grave” and emphasizing the distinction between an overlooked citation and one fabricated outright by AI, Reuters reported.

In her response, Dukanovic acknowledged that although the AI tool supplied the incorrect details, the legal team missed the error during its review. She said the law firm has since taken steps to improve its internal procedures, introducing “multiple levels of additional review to work to ensure that this does not occur again.”

The plaintiffs have declined to comment on the new developments, but the incident is the latest example of legal professionals running into trouble as they integrate AI into case preparation. Courts have previously sanctioned attorneys for submitting filings that included fictitious case law and other AI-fabricated content.

Source: Reuters