Security researchers at Sysdig have issued a stark warning about the rising threat of cloud attacks facilitated by large language models (LLMs). Their analysis shows that attackers can use AI to automate, accelerate, and obscure cloud intrusions, posing significant risks to organizations running on Amazon Web Services (AWS).
The findings stem from an incident on November 28, 2025, in which an attacker gained full administrative control of an AWS account in under ten minutes. Sysdig's Threat Research Team reconstructed the attack chain in detail and tied its findings to actionable detection and mitigation guidance for organizations looking to strengthen their cloud security.
The attack originated with credentials inadvertently left exposed in publicly accessible S3 buckets that held retrieval-augmented generation (RAG) data for AI models. The credentials belonged to an IAM user with sufficient Lambda permissions, which the attacker used to modify the code of an existing Lambda function so that it generated access keys for an admin user and returned them directly in the Lambda response.
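Sysdig has not published the exact payload, but the behavior described above maps to only a few lines of code. The Python (boto3) sketch below is a hypothetical reconstruction of what such a tampered handler could look like; the admin user name and the return format are illustrative assumptions, not details from the incident.

```python
import boto3

# Hypothetical reconstruction: a tampered Lambda handler that abuses the
# function's execution role to mint credentials for an existing admin user
# and leaks them in the function response. Names are illustrative only.
iam = boto3.client("iam")

def lambda_handler(event, context):
    # Requires iam:CreateAccessKey on the function's execution role
    resp = iam.create_access_key(UserName="admin-user")
    key = resp["AccessKey"]
    # Returning the secret in the response lets the caller harvest it directly
    return {
        "AccessKeyId": key["AccessKeyId"],
        "SecretAccessKey": key["SecretAccessKey"],
    }
```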
Sysdig’s researchers noted that the structure of the malicious code, with Serbian-language comments and elaborate error handling, strongly suggests it was generated by a language model. Notably, because the Lambda function already ran under a role with broad permissions, the attacker had no need for the usual multi-step IAM privilege-escalation techniques and reached administrative access far more quickly.
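Because the escalation hinged on an over-permissioned execution role, one practical control is to inventory Lambda functions whose roles carry administrator-level policies. The boto3 sketch below illustrates the idea under the assumption that broad access arrives via the AdministratorAccess managed policy; inline or custom wildcard policies would need a separate check, and this is an assumed audit approach rather than Sysdig's own tooling.

```python
import boto3

# Defensive sketch: list Lambda functions and flag execution roles that
# have an administrator-level managed policy attached.
lambda_client = boto3.client("lambda")
iam = boto3.client("iam")

for page in lambda_client.get_paginator("list_functions").paginate():
    for fn in page["Functions"]:
        role_name = fn["Role"].rsplit("/", 1)[-1]  # role name from the ARN
        attached = iam.list_attached_role_policies(RoleName=role_name)
        for policy in attached["AttachedPolicies"]:
            if policy["PolicyName"] == "AdministratorAccess":
                print(f"{fn['FunctionName']}: execution role {role_name} has admin access")
```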
Once inside, the attacker spread access across nineteen different AWS principals, leveraging existing IAM users and creating new access keys. Additionally, a new admin user was established to ensure continued access. Alarmingly, the attacker attempted to assume roles in external accounts, a tactic that researchers link to behaviors typical of AI-generated actions.
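A simple triage step that follows from this stage is to surface access keys created within a recent window; both the new keys on existing users and the newly created admin user would show up this way. The sketch below assumes the responder has read access to IAM, and the 24-hour window is an arbitrary choice rather than a threshold from the report.

```python
import boto3
from datetime import datetime, timedelta, timezone

# Assumed triage step: list IAM access keys created in the last 24 hours,
# since the attacker persisted by minting new keys for existing users.
iam = boto3.client("iam")
cutoff = datetime.now(timezone.utc) - timedelta(hours=24)

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        keys = iam.list_access_keys(UserName=user["UserName"])
        for key in keys["AccessKeyMetadata"]:
            if key["CreateDate"] > cutoff:
                print(user["UserName"], key["AccessKeyId"], key["CreateDate"])
```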
As the attack progressed, the focus shifted to Amazon Bedrock. The attacker first checked whether model invocation logging was enabled before invoking multiple AI models, matching a tactic Sysdig has previously described as LLMjacking, in which cloud-hosted models are abused for illicit gain. The attacker even uploaded a Terraform script designed to deploy a publicly exposed Lambda backdoor capable of generating Bedrock credentials.
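Verifying that model invocation logging is switched on is the same cheap control the attacker reportedly probed, and it is worth checking per region. The sketch below uses the Bedrock control-plane API via boto3; the exact response key name is an assumption and should be checked against the current SDK documentation.

```python
import boto3

# Assumed defensive check: confirm Bedrock model-invocation logging is
# configured in a given region before an attacker can invoke models unseen.
bedrock = boto3.client("bedrock", region_name="us-east-1")

config = bedrock.get_model_invocation_logging_configuration()
logging_config = config.get("loggingConfig")  # response key assumed
if not logging_config:
    print("Model invocation logging is NOT configured in this region")
else:
    print("Model invocation logging config:", logging_config)
```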
In a further escalation, the attacker sought to launch large GPU instances intended for machine learning tasks. This culminated in the deployment of a costly p4d instance that included a publicly accessible JupyterLab server as an alternative access point. The installation script referenced a nonexistent GitHub repository, further indicating the possible use of a language model to craft the attack.
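Launches of expensive GPU capacity are a strong signal in this kind of incident. The sketch below is a minimal hunting query over CloudTrail, assuming management-event history is available in the account; the one-day window and the p4d/p5 instance-type prefixes are illustrative choices, not thresholds from the report.

```python
import json
from datetime import datetime, timedelta, timezone

import boto3

# Assumed hunting query: pull recent RunInstances events from CloudTrail
# and flag launches of large GPU instance families such as p4d.
ct = boto3.client("cloudtrail")
start = datetime.now(timezone.utc) - timedelta(days=1)

events = ct.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "RunInstances"}],
    StartTime=start,
)
for event in events["Events"]:
    detail = json.loads(event["CloudTrailEvent"])
    params = detail.get("requestParameters") or {}
    instance_type = params.get("instanceType", "")
    if instance_type.startswith(("p4d", "p5")):
        print(event["EventTime"], event.get("Username"), instance_type)
```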
According to Sysdig, this incident illustrates a significant evolution in the threat landscape. Attackers are increasingly relying on language models to automate tasks that previously required extensive knowledge of a target environment. This shift underscores the necessity for organizations to remain vigilant, particularly regarding unusual model calls, large-scale resource enumeration, and the misuse of Lambda permissions.
The researchers concluded that while AI is a valuable ally for defenders, it has simultaneously become a potent weapon for attackers. As the capabilities of LLMs continue to advance, organizations must adapt their security strategies to counter these emerging threats. AI's dual role in cybersecurity marks a critical juncture in the ongoing effort to protect sensitive cloud environments.
















































