Connect with us

Hi, what are you looking for?

AI Cybersecurity

Researchers Identify 30+ Vulnerabilities in AI IDEs, Exposing Data Theft Risks

Over 30 critical vulnerabilities, collectively dubbed “IDEsaster,” have been uncovered in AI IDEs like GitHub Copilot and Cursor, risking severe data theft and remote code execution.

Dec 06, 2025Ravie LakshmananAI Security / Vulnerability

Over 30 security vulnerabilities have been disclosed in various artificial intelligence (AI)-powered Integrated Development Environments (IDEs), posing significant threats through data exfiltration and remote code execution. Security researcher Ari Marzouk (MaccariTA) has collectively named these issues IDEsaster, affecting popular IDEs and extensions such as Cursor, Windsurf, Kiro.dev, GitHub Copilot, Zed.dev, Roo Code, Junie, and Cline. Among these, 24 vulnerabilities have been assigned CVE identifiers.

Marzouk expressed his surprise at the research findings, noting, “All AI IDEs (and coding assistants that integrate with them) effectively ignore the base software (IDE) in their threat model.” He emphasized that the integration of AI agents complicates the security landscape, as previously trusted features can now be weaponized for attacks.

The vulnerabilities exploit a combination of three vectors common to AI-driven IDEs: the ability to bypass a large language model’s (LLM) guardrails, execute actions autonomously through approved tool calls, and leverage legitimate IDE features to leak sensitive data or execute arbitrary commands. This multifaceted approach significantly deviates from traditional attack chains, which usually relied on modifying an AI agent’s configuration.

The core of the IDEsaster vulnerabilities lies in exploiting prompt injection techniques. These can be initiated through seemingly innocuous user-added context, such as pasted URLs or invisible characters. Attackers can also pollute the context by using Model Context Protocol (MCP) servers that have been compromised.

Specific attacks enabled by this exploit chain include reading sensitive files or executing commands via legitimate IDE features. Notable CVEs include CVE-2025-49150 affecting Cursor and CVE-2025-53097 concerning Roo Code, both allowing adversaries to exfiltrate data when an IDE makes a GET request to a malicious domain.

Other examples involve modifying IDE settings files to achieve code execution, such as CVE-2025-53773 impacting GitHub Copilot and CVE-2025-54130 targeting Cursor. These attacks leverage the auto-approval feature of AI agents, allowing malicious actors to write harmful workspace settings without user interaction.

Marzouk’s recommendations for developers include using AI IDEs only with trusted projects and files, connecting to reliable MCP servers, and manually scrutinizing context references for hidden instructions. He advocates for a security-first approach in the development of AI agents, emphasizing the need to apply the principle of least privilege, minimize potential injection vectors, and perform rigorous security testing.

The disclosure of these vulnerabilities aligns with the discovery of multiple issues in other AI coding tools. For instance, OpenAI Codex CLI has a command injection flaw (CVE-2025-61260) that could allow unauthorized command execution due to the program’s reliance on unverified commands from MCP server entries. Similarly, Google Antigravity has been found to possess vulnerabilities that may lead to credential harvesting and data exfiltration through indirect prompt injections.

As AI tools increasingly gain traction in enterprise environments, these findings underscore the evolving threat landscape. According to Aikido researcher Rein Daelman, any repository utilizing AI for tasks like issue triage or automated responses is exposed to risks such as prompt injection and command execution. Marzouk highlighted the importance of adopting a “Secure for AI” framework to address these emerging vulnerabilities, advocating for security measures that prioritize AI components from the outset.

This emerging paradigm underscores the necessity of developing AI tools that are secure by design, as they increasingly contribute to the attack surface of development environments. “Connecting AI agents to existing applications creates new emerging risks,” Marzouk warned, reiterating the critical need for vigilance in an era where AI technologies are rapidly evolving.

Rachel Torres
Written By

At AIPressa, my work focuses on exploring the paradox of AI in cybersecurity: it's both our best defense and our greatest threat. I've closely followed how AI systems detect vulnerabilities in milliseconds while attackers simultaneously use them to create increasingly sophisticated malware. My approach: explaining technical complexities in an accessible way without losing the urgency of the topic. When I'm not researching the latest AI-driven threats, I'm probably testing security tools or reading about the next attack vector keeping CISOs awake at night.

You May Also Like

Top Stories

Microsoft plans to terminate its partnership with OpenAI, investing in independent AI models set for release by 2026, amidst rising financial pressures on OpenAI.

Top Stories

Nvidia triples code output to 300% with AI-powered Cursor, now utilized by 30,000 engineers to streamline software development and enhance productivity.

Top Stories

Microsoft promotes four executives to accelerate its enterprise AI strategy amid a 15% stock decline, highlighting urgent shifts in leadership to enhance growth.

AI Tools

Generative AI accelerates software development by over 55%, prompting a pivotal shift as engineers evolve into AI orchestrators or builders in high-demand roles.

AI Tools

GitHub Copilot revolutionizes software development by enhancing collaboration and automating tasks, significantly reducing coding time and improving productivity.

Top Stories

EPAM Systems partners with Cursor to revolutionize AI-native engineering, leveraging tools for its 50,000 engineers to transform enterprise software development.

Top Stories

Anthropic blocks xAI's access to its Claude models amid allegations of misuse, highlighting rising tensions and competition within the AI startup ecosystem.

AI Technology

GitHub Copilot enhances AI-assisted coding with context engineering, enabling developers to implement custom instructions and reusable prompts for improved code quality and efficiency.

© 2025 AIPressa · Part of Buzzora Media · All rights reserved. This website provides general news and educational content for informational purposes only. While we strive for accuracy, we do not guarantee the completeness or reliability of the information presented. The content should not be considered professional advice of any kind. Readers are encouraged to verify facts and consult appropriate experts when needed. We are not responsible for any loss or inconvenience resulting from the use of information on this site. Some images used on this website are generated with artificial intelligence and are illustrative in nature. They may not accurately represent the products, people, or events described in the articles.