Cybersecurity researchers have discovered a serious vulnerability in an Amazon Web Services (AWS) tool that could enable attackers to exfiltrate sensitive company data. This finding, reported by Phantom Labs, the research division of identity security firm BeyondTrust, centers on the AWS Bedrock AgentCore Code Interpreter, a component of the AWS Bedrock platform designed for creating artificial intelligence applications.
The AgentCore Code Interpreter lets chatbots write and execute code for tasks like data analysis and calculations. To safeguard these systems, AWS provides a Sandbox mode intended to isolate the AI’s code from external communication, effectively creating a digital barrier. That isolation, however, is less complete than many companies might assume. According to lead researcher Kinnaird McQuade, while the Sandbox blocks most network traffic, it still permits DNS queries, specifically A and AAAA record lookups.
The researchers showed that a determined attacker can hide stolen data or covert commands inside these DNS requests. To demonstrate the risk, the team built a system that carried on a two-way conversation with the isolated AI, circumventing the isolation AWS had put in place and transmitting sensitive information out of the sandbox.
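The article does not publish the researchers' tooling, but the general DNS-tunneling technique it describes is well known: because DNS labels can carry arbitrary attacker-chosen text, data can be hex-encoded into hostnames under a domain the attacker controls, and each ordinary A/AAAA lookup delivers a chunk to the attacker's nameserver. The sketch below illustrates only the encoding step (no network calls); the domain name is a placeholder, not one from the research.

```python
import binascii

def encode_for_dns(secret: bytes, domain: str = "attacker.example") -> list[str]:
    """Split a secret into DNS-safe hostnames (illustrative only).

    DNS labels are limited to 63 characters, so the payload is hex-encoded
    and chunked. Resolving each hostname with an ordinary A/AAAA lookup is
    exactly the traffic the article says Sandbox mode still permits.
    """
    hex_payload = binascii.hexlify(secret).decode("ascii")
    chunks = [hex_payload[i:i + 60] for i in range(0, len(hex_payload), 60)]
    # Prefix each chunk with a sequence number so the receiver can reorder.
    return [f"{seq}-{chunk}.{domain}" for seq, chunk in enumerate(chunks)]

def decode_from_dns(queries: list[str], domain: str = "attacker.example") -> bytes:
    """Reassemble the payload as the attacker's DNS server would."""
    suffix = "." + domain
    labels = [q[:-len(suffix)] for q in queries if q.endswith(suffix)]
    labels.sort(key=lambda label: int(label.split("-", 1)[0]))
    hex_payload = "".join(label.split("-", 1)[1] for label in labels)
    return binascii.unhexlify(hex_payload)
```

The point of the sketch is that nothing here looks like an outbound connection: the "exfiltration" is just a series of name lookups, which is why perimeter rules that block TCP and UDP traffic but allow name resolution do not stop it.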
The situation evolved further when Phantom Labs disclosed that the vulnerability was first reported to AWS in September 2025. By November, AWS had rolled out a fix to address the issue, only to retract the update two weeks later due to technical complications. Eventually, in late December, AWS opted not to pursue another patch. Instead, the company chose to enhance its documentation to outline the risks associated with the existing Sandbox mode.
“We would like to thank researcher Kinnaird McQuade for their report, which prompted us to update our documentation to provide additional clarity regarding Sandbox Mode functionality,” an AWS spokesperson stated.
The flaw was assigned a high-severity score of 7.5 out of 10, and McQuade received a $100 gift card to the AWS Gear Shop as part of the responsible disclosure process. Security experts caution that attackers do not need direct access to a system to exploit the weakness. Chatbots can be manipulated through techniques like prompt injection, in which misleading instructions coerce the AI into executing harmful code. The Code Interpreter also depends on more than 270 third-party packages, so a single compromised library could serve as a backdoor into the system.
Even seemingly innocuous AI-generated code can be designed to extract data without raising immediate alarms. These tools commonly have extensive access to AWS resources, such as Amazon S3 storage and Secrets Manager, which store sensitive files and passwords. If an attacker successfully triggers the DNS leak, they can send out sensitive information undetected, which could potentially result in data breaches involving confidential customer details or even undermine a company’s infrastructure.
In light of these findings, AWS recommends switching to VPC mode for enhanced security and ensuring that AI tools operate with the minimum necessary permissions. The situation raises critical questions about the security frameworks surrounding AI technologies, especially within cloud environments.
Industry experts have weighed in on the implications of this vulnerability. Ram Varadarajan, CEO of Acalvio, indicated that the failure reflects a fundamental flaw in the sandbox’s design. “AWS Bedrock’s sandbox isolation failed at the most fundamental layer, DNS,” he said, suggesting that traditional perimeter controls are insufficient for AI environments. He advocates for a shift in strategy, proposing that organizations should implement deception artifacts and monitoring mechanisms within their execution environments.
Jason Soroko, Senior Fellow at Sectigo, emphasized the need for organizations to take proactive measures. “Organizations must understand that the ‘Sandbox’ network mode does not provide complete isolation,” he cautioned. Since AWS has opted to amend documentation rather than issue a new patch, he urges administrators to inventory all active AgentCore Code Interpreter instances and transition those handling critical data from Sandbox mode to VPC mode. He also stressed the importance of thorough audits of IAM roles to uphold the principle of least privilege.
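The least-privilege audit Soroko recommends typically starts by flagging IAM policy statements that grant wildcard actions or resources. The function below is a minimal sketch of that first pass, operating on a policy document as JSON; a real audit would also need to weigh `NotAction`, condition keys, and resource ARN patterns, which are ignored here for brevity.

```python
import json

def find_overbroad_statements(policy_json: str) -> list[dict]:
    """Flag Allow statements that use wildcard actions or resources.

    A deliberately simplified least-privilege check: it catches "*" and
    "service:*" actions and bare "*" resources, nothing more.
    """
    policy = json.loads(policy_json)
    statements = policy.get("Statement", [])
    if isinstance(statements, dict):  # a lone statement may appear unwrapped
        statements = [statements]
    flagged = []
    for stmt in statements:
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        if isinstance(actions, str):
            actions = [actions]
        if isinstance(resources, str):
            resources = [resources]
        if any(a == "*" or a.endswith(":*") for a in actions) or "*" in resources:
            flagged.append(stmt)
    return flagged
```

Run against the roles attached to Code Interpreter instances, a check like this surfaces the broad S3 and Secrets Manager grants that, per the article, turn a DNS leak into a full data breach.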
As the cybersecurity landscape continues to evolve, the findings concerning AWS’s Bedrock AgentCore Code Interpreter highlight the pressing need for robust security measures in AI applications. The incident serves as a reminder that while technology advances, so do the methods employed by malicious actors, necessitating continuous vigilance and adaptation in security protocols.