On March 31, 2026, Anthropic released version 2.1.88 of its Claude Code CLI tool via npm. Due to a packaging mistake, the release included a 59.8 MB .map source map file that exposed nearly 1,900 TypeScript files, totaling approximately 500,000 lines of internal source code. Security researcher Chaofan Shou quickly discovered the leak, and the code was mirrored across GitHub, where it became one of the fastest-forked repositories in history.(blockchain-council.org)
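Source maps exist to let debuggers map compiled JavaScript back to its original TypeScript, which is why shipping a .map file alongside a bundled CLI is effectively equivalent to publishing the source itself. One common safeguard is an explicit `files` allow-list in package.json, so npm packs only the intended artifacts. A minimal sketch, using a hypothetical package name and file layout:

```json
{
  "name": "example-cli",
  "version": "1.0.0",
  "bin": { "example-cli": "dist/cli.js" },
  "files": [
    "dist/**/*.js"
  ]
}
```

With an allow-list like this, a stray `dist/cli.js.map` left behind by the build would simply not be packed. Running `npm pack --dry-run` before publishing prints the exact file list (and total size) that would be shipped, which makes an unexpected 59.8 MB entry hard to miss.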

Anthropic confirmed the incident was not a security breach but the result of human error during release packaging. The exposed code did not include model weights, customer data, or credentials. However, it revealed internal architecture, feature flags for unreleased capabilities, agent orchestration logic, telemetry systems, and even hidden features such as “Buddy” (an AI pet), “KAIROS” (an always-on agent), “Dream mode,” and “Undercover mode.”(theguardian.com)

The leak has raised serious questions about Anthropic’s operational security, especially given its positioning as a safety‑first AI lab. Competitors now have a detailed blueprint of Claude Code’s internal design, and developers are poring over the code to understand its orchestration patterns and guardrail implementations.(theguardian.com)

Anthropic responded by issuing takedown notices, deprecating the affected npm package, and working with npm to remove it. The company emphasized that no sensitive user data was exposed and that measures are being implemented to prevent similar incidents in the future.(theguardian.com)

This marks the second leak in just over a year, following a prior incident in early 2025 in which internal files, including references to upcoming models like “Mythos” and “Capybara,” were inadvertently exposed. The recurrence underscores persistent weaknesses in Anthropic’s release processes.(theguardian.com)

For the AI industry, the incident is a stark reminder that even companies with strong safety reputations can suffer from basic operational oversights. As AI tools become more autonomous and agentic, securing the delivery pipeline is as critical as securing the models themselves.