Connect with us

Hi, what are you looking for?

AI Regulation

Claude Code Flaws Exposed: Safety Rules Bypass Enables Prompt Injection Attacks

Security flaws in Anthropic’s Claude Code expose a bypass for safety protocols, enabling unauthorized curl command execution through prompt injection attacks.

Security vulnerabilities in Claude Code, a coding agent by Anthropic, have been exposed following the leak of its source code. The discovery, made by Tel Aviv-based security firm Adversa, reveals that Claude Code may bypass its security protocols under certain conditions, specifically through a method known as prompt injection attacks. This flaw allows the AI model to ignore deny rules that are designed to block risky actions if it is presented with a sufficiently lengthy chain of subcommands.

Claude Code employs various mechanisms to control access to specific commands, such as curl, which enables network requests from the command line. This can present a significant security risk if wielded by an overly permissive AI model. For instance, to prevent Claude from executing curl commands, a user could modify the settings file ~/.claude/settings.json to include a deny rule.

However, the effectiveness of these deny rules is limited. According to a comment in the source file bashPermissions.ts, there is a hard limit of 50 on security subcommands, imposed by the variable MAX_SUBCOMMANDS_FOR_SECURITY_CHECK = 50. Beyond this threshold, the AI will default to asking the user for permission, a mechanism that, while intended to safeguard against unauthorized actions, fails to account for the complexities introduced by AI-generated commands. Adversa’s AI Red Team noted that this oversight undermines the original design, which assumed that human-authored commands would remain within safe parameters.

The Adversa team demonstrated a proof-of-concept attack by creating a bash command that included 50 no-op “true” subcommands followed by a curl command. Rather than blocking the curl execution, Claude prompted for authorization to proceed. This vulnerability is particularly concerning in environments where developers routinely grant blanket permissions or reflexively approve multiple actions during extended coding sessions. Such scenarios are analogous to Continuous Integration/Continuous Deployment (CI/CD) pipelines that run Claude Code in a non-interactive mode, increasing the risk of unauthorized actions being executed.

Interestingly, Anthropic has already developed a fix for this vulnerability, known as “tree-sitter.” Although this parser is functional internally, it has not yet been implemented in public builds. Adversa argues that this oversight represents a significant flaw in the security policy enforcement of Claude Code, with potential regulatory and compliance ramifications if not rectified. They suggest that a straightforward fix is available; a minor code adjustment to switch the “behavior” key from “ask” to “deny” within the bashPermissions.ts file would effectively address the vulnerability.

As of now, Anthropic has not publicly commented on the situation. The implications of this discovery extend beyond mere technical flaws, highlighting the ongoing challenges in AI safety and security as these systems become increasingly integrated into coding practices. With AI tools gaining traction in various sectors, the need for robust security measures remains paramount.

See also
Staff
Written By

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.

You May Also Like

AI Research

ASU researcher warns that overtrust in AI led to a U.S. military strike on an Iranian school, killing 170, predominantly children.

Top Stories

Hugging Face unveils TRL v1.0, a game-changing framework for LLM post-training that streamlines processes, enhancing model alignment with unprecedented efficiency.

AI Government

California Governor Gavin Newsom signs an executive order to regulate state AI usage, boosting ethical guidelines and vetting tools amid federal challenges to Anthropic's...

AI Cybersecurity

As 28 million AI-driven cyberattacks are projected for 2026, security leaders must pivot to proactive strategies to safeguard their organizations against evolving threats.

AI Tools

Machine learning revolutionizes QA engineering by automating test generation and predictive bug detection, enabling teams to accelerate release cycles and enhance software quality.

AI Education

SMMUSD launches comprehensive AI literacy training for staff and a high school pilot program with Google Gemini to enhance responsible AI use in education.

Top Stories

Malaysia targets 900 AI start-ups as it strengthens its governance framework, positioning itself as a regional digital hub amid global tech investments.

AI Technology

Jabil's shares surged 80%, driven by a 23% revenue increase to $8.3 billion, while raising fiscal 2026 guidance with a potential 53% stock upside.

© 2025 AIPressa · Part of Buzzora Media · All rights reserved. This website provides general news and educational content for informational purposes only. While we strive for accuracy, we do not guarantee the completeness or reliability of the information presented. The content should not be considered professional advice of any kind. Readers are encouraged to verify facts and consult appropriate experts when needed. We are not responsible for any loss or inconvenience resulting from the use of information on this site. Some images used on this website are generated with artificial intelligence and are illustrative in nature. They may not accurately represent the products, people, or events described in the articles.