Anthropic Accidentally Leaks Claude Code Source Code in Second Data Blunder Within a Week

Artificial intelligence startup Anthropic has confirmed that it accidentally exposed the internal source code of its popular coding assistant, Claude Code, in what the company described as human error rather than a security breach, marking its second major data blunder in under a week.

What Happened

The leak occurred on March 31, 2026, when Anthropic published Claude Code version 2.1.88 to the npm software registry with a 59.8-megabyte source map file accidentally attached. Source maps are files that let developer tools reconstruct the original, readable source from compressed, minified code. By inadvertently including this file in a public release, Anthropic effectively handed anyone who downloaded the package full visibility into the internal architecture of one of the AI industry’s most closely watched coding tools.
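
To see why a shipped source map is so revealing, consider the sketch below. It is a minimal, hypothetical example (the file name cli.js.map is illustrative, not the actual artifact Anthropic shipped) that reads a standard source-map v3 JSON file and writes out the original file paths along with, where the bundler embedded it, the complete original source stored in the sourcesContent field.

```ts
// recover-sources.ts — minimal sketch: extract original sources from a
// source-map v3 file. "cli.js.map" is an illustrative name, not the
// actual file from the incident.
import { readFileSync, writeFileSync, mkdirSync } from "node:fs";
import { dirname, join } from "node:path";

interface SourceMapV3 {
  version: number;
  sources: string[];                    // original file paths
  sourcesContent?: (string | null)[];   // full original source, if embedded
  mappings: string;                     // VLQ-encoded position mappings
}

const map: SourceMapV3 = JSON.parse(readFileSync("cli.js.map", "utf8"));

map.sources.forEach((src, i) => {
  const content = map.sourcesContent?.[i];
  if (content == null) return; // map carries only positions, not full source

  // Strip leading "../" segments; real-world paths (e.g. bundler URLs)
  // may need further sanitizing before writing to disk.
  const outPath = join("recovered", src.replace(/^(\.\.\/)+/, ""));
  mkdirSync(dirname(outPath), { recursive: true });
  writeFileSync(outPath, content);
  console.log(`recovered ${outPath} (${content.length} bytes)`);
});
```

When sourcesContent is populated, the map carries the original source verbatim, which is how a single 59.8-megabyte file can expose an entire codebase.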

The exposed data reportedly included Claude Code’s complete internal architecture, details of unreleased features, and internal model performance benchmarks — information that competitors and developers worldwide could now freely access.

Anthropic’s Response

An Anthropic spokesperson moved quickly to clarify the nature of the incident. “No sensitive customer data or credentials were involved or exposed,” the spokesperson said in a statement. “This was a release packaging issue caused by human error, not a security breach. We’re rolling out measures to prevent this from happening again.”

Despite the reassurance, the competitive exposure was already significant. A post on X linking to the leaked code accumulated more than 21 million views within hours, signaling the scale of public and industry interest.

A Second Blow in One Week

The timing compounds an already difficult stretch for the San Francisco-based company. Just days earlier, on March 26, internal documents describing Anthropic’s upcoming AI model — codenamed Mythos — were discovered in a publicly accessible data cache, providing the wider AI community with an early look at Anthropic’s next major model before any official announcement.

Together, the two incidents have raised questions about internal data handling practices at one of the AI industry’s most prominent players, which competes directly with OpenAI, Google DeepMind, and Meta AI.

Supply Chain Warning

Security researchers flagged an additional concern tied to the timing of the release. Users who installed or updated Claude Code via npm on March 31 between 00:21 and 03:29 UTC may have inadvertently pulled a version bundled with a trojanized copy of the Axios HTTP client — a widely used JavaScript library — as part of what appears to be a concurrent supply chain attack. Anthropic has not publicly commented on this angle, and the full scope remains under investigation.
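
For developers who installed in or near that window, one quick triage step is to check which axios build actually landed in the project. The sketch below is a minimal, hypothetical check, assuming a package-lock.json in npm's v2/v3 format (the "packages" map): it lists every resolved copy of axios along with the integrity hash recorded at install time, which can be compared against the hashes the npm registry publishes for known-good axios releases.

```ts
// check-axios.ts — minimal sketch: list every resolved copy of axios in an
// npm v2/v3 lockfile so its version and integrity hash can be compared
// against known-good values. Not an official remediation procedure.
import { readFileSync } from "node:fs";

interface LockEntry {
  version?: string;
  resolved?: string;   // registry tarball URL
  integrity?: string;  // SRI hash recorded at install time
}

const lock = JSON.parse(readFileSync("package-lock.json", "utf8")) as {
  packages?: Record<string, LockEntry>;
};

for (const [path, entry] of Object.entries(lock.packages ?? {})) {
  if (!path.endsWith("node_modules/axios")) continue; // catches nested copies too
  console.log(`${path}: axios@${entry.version}`);
  console.log(`  resolved:  ${entry.resolved}`);
  console.log(`  integrity: ${entry.integrity}`);
}
```

A hash that does not match the registry's record for that version is a strong indicator that a tampered tarball was installed and that the dependency tree should be rebuilt from a clean cache.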

What It Means

The unintentional exposure of Claude Code’s architecture is a reputational setback that underscores the risks of rapid software release cycles in the AI sector. Anthropic says it is implementing safeguards to prevent similar packaging errors in future releases.
