Dec 06, 2025Ravie LakshmananAI Security / Vulnerability
Over 30 security vulnerabilities have been disclosed in various artificial intelligence (AI)-powered Integrated Development Environments (IDEs), posing significant threats through data exfiltration and remote code execution. Security researcher Ari Marzouk (MaccariTA) has collectively named these issues IDEsaster, affecting popular IDEs and extensions such as Cursor, Windsurf, Kiro.dev, GitHub Copilot, Zed.dev, Roo Code, Junie, and Cline. Among these, 24 vulnerabilities have been assigned CVE identifiers.
Marzouk expressed his surprise at the research findings, noting, “All AI IDEs (and coding assistants that integrate with them) effectively ignore the base software (IDE) in their threat model.” He emphasized that the integration of AI agents complicates the security landscape, as previously trusted features can now be weaponized for attacks.
The vulnerabilities exploit a combination of three vectors common to AI-driven IDEs: the ability to bypass a large language model’s (LLM) guardrails, execute actions autonomously through approved tool calls, and leverage legitimate IDE features to leak sensitive data or execute arbitrary commands. This multifaceted approach significantly deviates from traditional attack chains, which usually relied on modifying an AI agent’s configuration.
The core of the IDEsaster vulnerabilities lies in exploiting prompt injection techniques. These can be initiated through seemingly innocuous user-added context, such as pasted URLs or invisible characters. Attackers can also pollute the context by using Model Context Protocol (MCP) servers that have been compromised.
Specific attacks enabled by this exploit chain include reading sensitive files or executing commands via legitimate IDE features. Notable CVEs include CVE-2025-49150 affecting Cursor and CVE-2025-53097 concerning Roo Code, both allowing adversaries to exfiltrate data when an IDE makes a GET request to a malicious domain.
Other examples involve modifying IDE settings files to achieve code execution, such as CVE-2025-53773 impacting GitHub Copilot and CVE-2025-54130 targeting Cursor. These attacks leverage the auto-approval feature of AI agents, allowing malicious actors to write harmful workspace settings without user interaction.
Marzouk’s recommendations for developers include using AI IDEs only with trusted projects and files, connecting to reliable MCP servers, and manually scrutinizing context references for hidden instructions. He advocates for a security-first approach in the development of AI agents, emphasizing the need to apply the principle of least privilege, minimize potential injection vectors, and perform rigorous security testing.
The disclosure of these vulnerabilities aligns with the discovery of multiple issues in other AI coding tools. For instance, OpenAI Codex CLI has a command injection flaw (CVE-2025-61260) that could allow unauthorized command execution due to the program’s reliance on unverified commands from MCP server entries. Similarly, Google Antigravity has been found to possess vulnerabilities that may lead to credential harvesting and data exfiltration through indirect prompt injections.
As AI tools increasingly gain traction in enterprise environments, these findings underscore the evolving threat landscape. According to Aikido researcher Rein Daelman, any repository utilizing AI for tasks like issue triage or automated responses is exposed to risks such as prompt injection and command execution. Marzouk highlighted the importance of adopting a “Secure for AI” framework to address these emerging vulnerabilities, advocating for security measures that prioritize AI components from the outset.
This emerging paradigm underscores the necessity of developing AI tools that are secure by design, as they increasingly contribute to the attack surface of development environments. “Connecting AI agents to existing applications creates new emerging risks,” Marzouk warned, reiterating the critical need for vigilance in an era where AI technologies are rapidly evolving.
See also
AI Predictions for 2026: Custom Malware, Hallucination Management, and Cybersecurity Challenges
AI Coding Tools Like GitHub Copilot Expose 30+ Security Vulnerabilities, Researchers Warn



















































