
Sysdig Reveals Attackers Use LLMs to Escalate AWS Access in Under 10 Minutes

Sysdig warns that attackers can gain full administrative AWS access in under 10 minutes by leveraging large language models to automate cloud assaults.

Security researchers at Sysdig have issued a stark warning about the rising threat of cloud-based attacks facilitated by large language models (LLMs). Their analysis reveals that attackers can leverage AI to automate, accelerate, and obscure cloud assaults, posing significant risks to organizations using Amazon Web Services (AWS).

The findings stem from an incident that occurred on November 28, 2025, where an attacker rapidly gained full administrative control of an AWS account in under ten minutes. Sysdig’s Threat Research Team meticulously reconstructed the attack chain, linking their insights to actionable detection and mitigation strategies for businesses aiming to strengthen their cloud security.

The assault originated from login credentials inadvertently left exposed in publicly accessible S3 buckets. The buckets held retrieval-augmented generation (RAG) data for AI models, and the leaked credentials belonged to an IAM user with sufficient Lambda permissions. The attacker exploited those privileges to alter the code of an existing Lambda function, then generated access keys for an admin user and relayed them directly in the Lambda response.
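The backdoor pattern described above can be sketched as a tampered Lambda handler that mints credentials and leaks them in the function's own response. This is a hypothetical reconstruction, not Sysdig's recovered code: the user name "backdoor-admin" is illustrative, and the IAM client is injectable so the sketch can be exercised without an AWS account.

```python
def handler(event, context, iam_client=None):
    # In a real Lambda environment the client would come from boto3;
    # it is injectable here so the sketch runs without AWS credentials.
    if iam_client is None:
        import boto3  # bundled in AWS Lambda's Python runtime
        iam_client = boto3.client("iam")
    # Mint a fresh long-lived access key for the (hypothetical) admin user.
    resp = iam_client.create_access_key(UserName="backdoor-admin")
    key = resp["AccessKey"]
    # Returning the secret in the response body means the attacker gets
    # working credentials simply by invoking the function -- no separate
    # exfiltration channel is needed.
    return {
        "AccessKeyId": key["AccessKeyId"],
        "SecretAccessKey": key["SecretAccessKey"],
    }
```

Because the function already runs under a privileged execution role, the `iam:CreateAccessKey` call succeeds without the attacker ever holding admin credentials directly.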

Sysdig’s researchers noted that the structure of the malicious code—marked by Serbian comment lines and elaborate error handling—strongly indicates the involvement of a language model. Notably, because the Lambda function operated under a role with extensive permissions, the attacker circumvented traditional privilege escalation methods associated with IAM roles, thereby obtaining administrative access more efficiently.

Once inside, the attacker spread access across nineteen different AWS principals, leveraging existing IAM users and creating new access keys. Additionally, a new admin user was established to ensure continued access. Alarmingly, the attacker attempted to assume roles in external accounts, a tactic that researchers link to behaviors typical of AI-generated actions.
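One way to surface this kind of credential fan-out is to watch CloudTrail for a single caller granting credentials to many distinct principals. The sketch below is illustrative, not a production detection: field names follow CloudTrail's `eventName` / `userIdentity.arn` / `requestParameters.userName` layout, and the threshold of five distinct targets is an assumption.

```python
from collections import defaultdict

# IAM actions that hand out or broaden credentials.
SUSPICIOUS = {"CreateAccessKey", "CreateUser", "AttachUserPolicy"}

def flag_credential_spread(events, threshold=5):
    """Return callers who performed credential-granting IAM actions
    against at least `threshold` distinct principals."""
    targets = defaultdict(set)
    for e in events:
        if e.get("eventName") not in SUSPICIOUS:
            continue
        actor = e.get("userIdentity", {}).get("arn", "unknown")
        target = (e.get("requestParameters") or {}).get("userName")
        if target:
            targets[actor].add(target)
    return [actor for actor, t in targets.items() if len(t) >= threshold]
```

Counting distinct *targets* rather than raw event volume keeps routine key rotation by one user from tripping the rule.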

As the attack progressed, the focus shifted to AWS’s Amazon Bedrock service. The attacker first assessed whether model logging was enabled before invoking multiple AI models. This aligns with tactics previously identified by Sysdig, referred to as LLMjacking, where cloud models are exploited for illicit gains. A Terraform script was even uploaded, designed to deploy a public Lambda backdoor capable of generating Bedrock credentials.
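The check-logging-then-invoke sequence is itself a detectable signature. The sketch below flags callers who probe Bedrock's invocation-logging configuration and then invoke models, using simplified CloudTrail-style event dicts; the exact pairing logic and event layout are assumptions made for illustration.

```python
def bedrock_recon_then_invoke(events):
    """Flag callers that first query Bedrock's logging configuration and
    subsequently invoke models -- the LLMjacking pattern described above.
    Events are simplified CloudTrail-style dicts processed in order."""
    checked = set()
    hits = []
    for e in events:
        actor = e.get("userIdentity", {}).get("arn")
        name = e.get("eventName")
        if name == "GetModelInvocationLoggingConfiguration":
            checked.add(actor)
        elif name == "InvokeModel" and actor in checked:
            hits.append((actor, e.get("requestParameters", {}).get("modelId")))
    return hits
```

Legitimate workloads rarely inspect the logging configuration immediately before invoking models, which is what makes the ordering, rather than either call alone, the useful signal.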

In a further escalation, the attacker sought to launch large GPU instances intended for machine learning tasks. This culminated in the deployment of a costly p4d instance that included a publicly accessible JupyterLab server as an alternative access point. The installation script referenced a nonexistent GitHub repository, further indicating the possible use of a language model to craft the attack.
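Launches of large GPU instance families are rare enough in most accounts to be worth flagging outright. The sketch below matches `RunInstances` calls against GPU family prefixes; note that CloudTrail's real `RunInstances` record nests `instanceType` more deeply than this simplified layout, and the prefix list is an assumption.

```python
# Common EC2 GPU instance family prefixes (illustrative, not exhaustive).
GPU_PREFIXES = ("p3", "p4", "p5", "g4", "g5", "g6")

def flag_gpu_launches(events):
    """Flag RunInstances calls requesting GPU instance families, which in
    this incident surfaced the costly p4d host running a public
    JupyterLab server."""
    hits = []
    for e in events:
        if e.get("eventName") != "RunInstances":
            continue
        itype = e.get("requestParameters", {}).get("instanceType", "")
        if itype.startswith(GPU_PREFIXES):
            hits.append((e.get("userIdentity", {}).get("arn"), itype))
    return hits
```

In accounts with legitimate ML workloads, the same rule can be scoped to unexpected regions or principals instead of firing on every GPU launch.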

According to Sysdig, this incident illustrates a significant evolution in the threat landscape. Attackers are increasingly relying on language models to automate tasks that previously required extensive knowledge of a target environment. This shift underscores the necessity for organizations to remain vigilant, particularly regarding unusual model calls, large-scale resource enumeration, and the misuse of Lambda permissions.
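Large-scale resource enumeration, one of the signals named above, can be approximated by counting read-only discovery calls per caller. This is a minimal sketch: the `List*`/`Describe*`/`Get*` prefix match is coarse, and the threshold is an illustrative placeholder rather than a recommended value, since a sensible cutoff depends on an account's normal baseline.

```python
from collections import Counter

def flag_enumeration_bursts(events, threshold=20):
    """Return callers whose volume of read-only discovery calls
    (List*/Describe*/Get*) meets or exceeds `threshold` -- one rough
    signal of automated, LLM-driven reconnaissance."""
    counts = Counter()
    for e in events:
        if e.get("eventName", "").startswith(("List", "Describe", "Get")):
            counts[e.get("userIdentity", {}).get("arn", "unknown")] += 1
    return sorted(a for a, n in counts.items() if n >= threshold)
```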

The researchers concluded that while AI serves as a valuable ally for defenders in cybersecurity, it has simultaneously become a potent weapon for attackers. As the capabilities of LLMs continue to advance, organizations must adapt their security strategies to counter these emerging threats. AI's dual role marks a critical juncture in the ongoing battle to protect sensitive cloud environments.

Written By AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.