Anthropic’s Repeated Data Leaks Raise Questions About Internal Security


Anthropic, the AI company that heavily promotes its safety-first approach to artificial intelligence development, has suffered two significant public data leaks within a single week. The incidents call into question the operational security of a firm that trades on its reputation as an industry leader in AI safety.

First Leak: Unannounced Model Details

Last Thursday, Fortune reported that Anthropic accidentally exposed around 3,000 internal files. These included a draft blog post detailing a new, unreleased AI model. The leak provided an early glimpse into the company’s future product pipeline.

Second Leak: Claude Code Source Code Exposed

On Tuesday, Anthropic released version 2.1.88 of its Claude Code software package, which inadvertently included nearly 2,000 source code files and over 512,000 lines of code. Researcher Chaofan Shou quickly identified the leak and posted about it on X. Anthropic dismissed the incident as a “release packaging issue caused by human error,” but the extent of the exposure is substantial.

Why Claude Code Matters

Claude Code is not a trivial product. It is a command-line tool that lets developers use Anthropic's AI to generate and edit code. Its growing popularity has prompted rivals such as OpenAI to shift their strategies: OpenAI reportedly halted its Sora video generation product after only six months, in part to refocus on competing with Claude Code in the developer market.

The leaked source code reveals the software framework built around the AI model itself, the core logic that defines how the tool functions. Developers have already begun dissecting the exposed architecture, describing it as a "production-grade developer experience."

Implications and Concerns

While the long-term impact of these leaks remains unclear, competitors could benefit from studying the exposed architecture. Given the rapid pace of AI development, the information may become outdated quickly, but the exposure still represents a significant oversight for a company that publicly prioritizes security.

Internally, the incidents have likely prompted immediate scrutiny of the engineering teams involved. Anthropic will need to address these operational failures to maintain trust with partners and investors, and the repeated nature of the leaks raises questions about whether the firm's public image aligns with its actual security practices.

These incidents serve as a harsh reminder that even companies with strong public commitments to safety can fall victim to basic operational errors. The real test will be how Anthropic responds to prevent future leaks and maintain its credibility.