
AI in Litigation Series: An Update on AI Copyright Cases in 2026

March 31, 2026

By: Stephanie Schmidt, Marc B. Collier, Annmarie Giblin, Logan Woodward & Ethan Glenn (Norton Rose Fulbright)


In this piece, the authors examine how the rapid expansion of artificial intelligence is testing the foundations of copyright law, particularly around authorship, ownership, and infringement. They explain that U.S. copyright law remains rooted in the principle of human authorship: fully AI-generated works are generally not eligible for protection, while AI-assisted works may still qualify depending on the level of human contribution.

The authors highlight a growing body of litigation addressing whether training AI models on copyrighted materials constitutes fair use and whether AI-generated outputs infringe existing rights. Cases such as Thaler v. Perlmutter reaffirm the necessity of human authorship, while others, like Thomson Reuters v. Ross Intelligence and Bartz v. Anthropic, explore the boundaries of fair use, with courts reaching differing conclusions depending on the nature of the AI system, the use of the copyrighted content, and the impact on underlying markets.

    A key theme is the divergence in judicial approaches, particularly regarding fair use and the treatment of training data. Some courts have found AI training to be highly transformative and permissible, while others emphasize risks such as market harm or improper use of protected materials. The source of training data—whether lawfully obtained or pirated—also plays a critical role, with liability potentially arising from how data is acquired and stored, even if training itself is deemed lawful.

Finally, the authors note emerging trends that may shape the future of AI and copyright, including increased reliance on licensing agreements between content owners and AI developers, as well as ongoing disputes over liability for AI-generated outputs. With courts still grappling with these novel issues and rulings remaining inconsistent across cases, businesses are advised to closely monitor legal developments, implement compliance safeguards, and consider licensing strategies to mitigate risk in an evolving regulatory landscape.
