
Anthropic Confirms Claude Code Source Code Leak; 1,900 Files Exposed on GitHub

Anthropic’s Claude Code source code leak exposes 1,900 TypeScript files on GitHub, raising competitive stakes in the AI landscape amid security concerns.

Anthropic confirmed on Tuesday that a misconfigured software package led to the leak of much of the source code of its prominent product, Claude Code. This incident follows a separate leak reported last week, where thousands of files were inadvertently made public.

The leak was uncovered by security researcher Chaofan Shou, who discovered that the official npm package for Claude Code contained a source map file that referenced the unobfuscated TypeScript source. Shou shared his findings on X, where they drew considerable attention within the tech community.

The problematic file pointed to a zip archive stored in Anthropic’s Cloudflare R2 storage bucket, which anyone could download. The archive reportedly included approximately 1,900 TypeScript files, amounting to over 512,000 lines of code. Among the contents were full libraries of slash commands and built-in tools.

Within hours of its discovery, the exposed code was uploaded to GitHub, where it was forked more than 41,500 times, according to reports from The Register. This rapid dissemination effectively ensured that the exposure of the code could not be easily reversed.

In response to the incident, an Anthropic spokesperson stated, “Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed. This was a release packaging issue caused by human error, not a security breach. We’re rolling out measures to prevent this from happening again.”

This leak comes just days after Fortune reported that Anthropic had inadvertently made thousands of files publicly accessible, including a draft blog post detailing an upcoming model known internally as “Mythos” or “Capybara,” which reportedly raised cybersecurity concerns.

Software engineer Gabriel Anhaia, who published an analysis of the leaked code, emphasized the importance of the incident for development teams. He noted that a source map file included in the npm package was intended for debugging, mapping minified code back to the original source. “Including one in a production npm publish effectively ships your entire codebase in readable form,” Anhaia wrote. He urged engineering teams to ensure that such files are excluded from their publish configurations, warning that a single misconfigured .npmignore or files field in package.json can expose sensitive information.
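To illustrate the kind of publish configuration Anhaia is describing, here is a minimal sketch for a hypothetical TypeScript CLI package; the file contents are illustrative assumptions, not Anthropic’s actual setup. An .npmignore file follows .gitignore syntax, and the files field in package.json acts as a whitelist; either mechanism can keep compiled output in the published package while keeping source maps and raw TypeScript out.

    # .npmignore -- illustrative example, not Anthropic's actual configuration
    # Keep source maps and raw TypeScript out of the published tarball
    *.map
    *.ts
    # Re-include type declaration files, which are generally safe to ship
    !*.d.ts

Running npm pack --dry-run before publishing prints the exact file list that would be uploaded, a straightforward way to catch a stray .map file before it ships.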

As experts examined the newly available source code, many expressed admiration for the quality of the work. Prominent tech blogger Robert Scoble remarked on social media, “Notice no one said the code is slop. In every painful moment, there are always gifts. The gift is that we all know now that Anthropic’s code is pretty damn good.”

However, the leak presents a significant advantage to Anthropic’s competitors, who can now gain insights into the workings of one of the company’s most successful products. In a rapidly evolving AI landscape, this exposure could give rival firms a clearer view of the features and capabilities that make Claude Code appealing to users.

The incident serves as a stark reminder of the vulnerabilities inherent in software development, particularly in the fast-paced world of artificial intelligence. With the stakes higher than ever, companies must exercise heightened vigilance in their development processes to prevent similar breaches. As Anthropic moves forward, it will need to reinforce its protocols to safeguard against future leaks while maintaining its competitive edge in the burgeoning AI market.


