Anthropic Says Claude Code Leak Did Not Expose Customer Data


Anthropic has reportedly said no sensitive customer data was exposed in an accidental leak.


    “Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed,” the company said in a message to Seeking Alpha published Thursday (April 2). “This was a release packaging issue caused by human error, not a security breach. We’re rolling out measures to prevent this from happening again.”

    Another report on the incident from the Wall Street Journal (WSJ) said that as of Wednesday (April 1) morning, Anthropic had used a copyright takedown request to remove more than 8,000 copies and adaptations of the raw Claude Code instructions, known as source code, that developers had shared on the programming platform GitHub.

    A programmer then used separate AI tools to rewrite Claude Code’s functionality in other programming languages so the information stayed publicly accessible without triggering further takedowns. The reworked version has circulated widely on the platform.

    The WSJ also said that Anthropic later narrowed its takedown request to cover just 96 copies and adaptations, and that its initial ask had reached more GitHub accounts than planned.


    According to the Seeking Alpha report, the leak exposed commercially sensitive information, such as Anthropic’s proprietary techniques, tools and instructions that allow its AI models to work as coding agents.


    And as PYMNTS wrote earlier this week, the leak struck at Anthropic’s most commercially significant product at a critical moment.

    Claude Code’s run-rate revenue had exceeded $2.5 billion as of February, and its viral adoption among developers has been critical to the company’s momentum as it pursues a possible public offering. Claude Code’s growth helped Anthropic close a new funding round that valued the startup at $380 billion.

    “That success has already prompted OpenAI, Google and xAI to pour resources into competing offerings,” PYMNTS wrote. “The source code exposure now hands those rivals a detailed map of the design logic underlying a product they have been racing to replicate, removing the need to reverse-engineer capabilities that took Anthropic years to build.”

    This was the second such incident in less than a week involving Anthropic. A recent configuration error in the company’s content management system left nearly 3,000 unpublished documents in a publicly searchable data store, including a draft blog post describing “the most powerful AI model we’ve ever developed.”

    That forced the company to confirm the existence of that model, known as Claude Mythos, telling Fortune that it represents a step change and the most capable system the company has developed, with meaningful reasoning, coding and cybersecurity improvements.