Anthropic’s internal source code for Claude Code was accidentally leaked, the second such incident in a week. The exposure came down to a simple human error and set off a scramble to contain the fallout, according to a Tuesday (March 31) report from CNBC.
“This was a release packaging issue caused by human error, not a security breach,” an Anthropic spokesperson said in a statement to the outlet. “We’re rolling out measures to prevent this from happening again.”
According to Axios, security researcher Chaofan Shou flagged the exposure early Tuesday morning in a post on X; the leaked codebase comprised nearly 2,000 files and more than 512,000 lines of code. Following his post, the codebase was mirrored and dissected across GitHub.
By Wednesday (April 1), as reported by The Wall Street Journal (WSJ), Anthropic had used copyright takedown requests to force the removal of more than 8,000 copies and adaptations of the exposed material from GitHub. A programmer subsequently used separate AI tools to rewrite Claude Code’s functionality in other programming languages to keep the information publicly accessible without triggering further takedowns. That rewritten version has itself become widely circulated on the platform.
In an update, the WSJ reported that Anthropic later narrowed its takedown request to cover just 96 copies and adaptations, saying its initial ask had reached more GitHub accounts than intended.
According to CNBC, the Anthropic spokesperson confirmed that no sensitive customer data or credentials were involved.
The Commercial Stakes
The leak strikes at Anthropic’s most commercially significant product at a critical moment. Claude Code’s run-rate revenue had reached more than $2.5 billion as of February, and its viral adoption among developers has been central to the company’s momentum as it pursues a possible public offering.
Claude Code’s growth helped Anthropic close a new funding round, which valued the company at $380 billion.
That success has already prompted OpenAI, Google and xAI to pour resources into competing offerings. The source code exposure now hands those rivals a detailed map of the design logic underlying a product they have been racing to replicate, removing the need to reverse-engineer capabilities that took Anthropic years to build.
What the Leak Revealed
The disclosed material goes to the heart of what makes Claude Code commercially distinctive. As the WSJ reported, the leak exposed the proprietary techniques and instructions that direct Claude’s underlying AI models to function as a useful coding assistant.
As covered by The Hacker News, developers who examined the code found details of how Claude Code manages long-running tasks, handles complex multi-step work and connects its interface to code editing tools. According to Axios, the leaked material also surfaced a roadmap of capabilities that are fully built but not yet publicly available, including a mode that allows Claude Code to keep working in the background even when the user is idle.
As reported by VentureBeat, the leak hands competitors a clear guide for replicating a production-grade AI coding agent, including the memory management approach Anthropic spent significant engineering effort developing.
The Wall Street Journal noted additional details surfaced by developers: a memory process the code refers to internally as “dreaming”; instructions that appear to direct Claude Code, in certain contexts, to avoid identifying itself as an AI when publishing code to third-party platforms; and a Tamagotchi-style interactive feature called Buddy embedded in the codebase.
In a market where the underlying AI model is increasingly available to any well-funded competitor, how a company builds around that model, and what it plans to build next, has become the primary source of competitive advantage. The leak therefore represents a major setback for Anthropic.