
Sysdig Reveals Attackers Use LLMs to Escalate AWS Access in Under 10 Minutes

Sysdig warns that attackers can gain full administrative AWS access in under 10 minutes by leveraging large language models to automate cloud attacks.

Security researchers at Sysdig have issued a stark warning about the rising threat of cloud-based attacks facilitated by large language models (LLMs). Their analysis shows that attackers can use AI to automate, accelerate, and obscure cloud attacks, posing significant risks to organizations running on Amazon Web Services (AWS).

The findings stem from an incident that occurred on November 28, 2025, where an attacker rapidly gained full administrative control of an AWS account in under ten minutes. Sysdig’s Threat Research Team meticulously reconstructed the attack chain, linking their insights to actionable detection and mitigation strategies for businesses aiming to strengthen their cloud security.

The attack originated with login credentials that had been inadvertently left exposed in publicly accessible S3 buckets. The buckets contained retrieval-augmented generation (RAG) data for AI models, along with credentials for an IAM user holding broad Lambda permissions. The attacker exploited these privileges to alter the code of an existing Lambda function, then generated access keys for an admin user and exfiltrated them directly through the Lambda response.
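The escalation path described above hinges on an IAM user being able to both rewrite a function's code and invoke it under the function's privileged role. As an illustration, here is a minimal, hypothetical audit sketch that flags IAM policy documents granting that combination. The action names are real AWS IAM actions, but the audit logic and thresholds are assumptions, not Sysdig's tooling:

```python
# Hypothetical audit sketch: flag policy documents that grant the
# "modify Lambda code, then invoke it" combination described above.
# Action names are real AWS actions; the heuristic itself is illustrative.

RISKY_LAMBDA_ACTIONS = {
    "lambda:UpdateFunctionCode",  # lets a caller replace a function's code
    "lambda:InvokeFunction",      # lets the caller run it under its role
    "lambda:*",
    "*",
}

def allows_lambda_code_hijack(policy_doc: dict) -> bool:
    """Return True if Allow statements grant both code update and invoke."""
    granted = set()
    for stmt in policy_doc.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        granted.update(a for a in actions if a in RISKY_LAMBDA_ACTIONS)
    wildcard = "lambda:*" in granted or "*" in granted
    specific = {"lambda:UpdateFunctionCode", "lambda:InvokeFunction"} <= granted
    return wildcard or specific
```

A check like this could run over policies pulled via `iam:GetPolicyVersion`; pairing it with a review of each function's execution role closes the loop on the escalation path.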

Sysdig’s researchers noted that the structure of the malicious code—marked by Serbian comment lines and elaborate error handling—strongly indicates the involvement of a language model. Notably, because the Lambda function operated under a role with extensive permissions, the attacker circumvented traditional privilege escalation methods associated with IAM roles, thereby obtaining administrative access more efficiently.

Once inside, the attacker spread access across nineteen different AWS principals, leveraging existing IAM users and creating new access keys. Additionally, a new admin user was established to ensure continued access. Alarmingly, the attacker attempted to assume roles in external accounts, a tactic that researchers link to behaviors typical of AI-generated actions.
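A spread across many principals like this leaves a distinctive trail: a burst of `CreateAccessKey` events targeting distinct IAM users in a short window. The sketch below, built on simplified CloudTrail records, shows one hedged way such a burst could be detected; the field names follow the real CloudTrail schema, but the window and threshold are assumed tuning parameters:

```python
from datetime import datetime, timedelta

def access_key_burst(events, window_minutes=10, threshold=5):
    """Flag a window where CreateAccessKey hits many distinct IAM users.

    `events` are simplified CloudTrail records of the form
    {"eventName": ..., "eventTime": "...Z", "requestParameters": {"userName": ...}}.
    Field names follow the CloudTrail schema; thresholds are illustrative.
    """
    hits = sorted(
        (datetime.fromisoformat(e["eventTime"].replace("Z", "+00:00")),
         e["requestParameters"]["userName"])
        for e in events
        if e.get("eventName") == "CreateAccessKey"
    )
    window = timedelta(minutes=window_minutes)
    for i, (start, _) in enumerate(hits):
        users = {u for t, u in hits[i:] if t - start <= window}
        if len(users) >= threshold:
            return True
    return False
```

In practice the same sliding-window idea applies to other persistence signals, such as `CreateUser` or `AttachUserPolicy` events from an unfamiliar source IP.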

As the attack progressed, the focus shifted to AWS’s Amazon Bedrock service. The attacker first assessed whether model logging was enabled before invoking multiple AI models. This aligns with tactics previously identified by Sysdig, referred to as LLMjacking, where cloud models are exploited for illicit gains. A Terraform script was even uploaded, designed to deploy a public Lambda backdoor capable of generating Bedrock credentials.
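The logging check before model invocation is itself a detectable sequence: a principal calling `GetModelInvocationLoggingConfiguration` and then `InvokeModel` fits the recon-then-abuse pattern of LLMjacking. The event names below are real CloudTrail event names for Bedrock; the pairing heuristic is an assumed simplification, not Sysdig's published rule:

```python
def llmjacking_recon(events):
    """Return True if a principal checks Bedrock invocation logging and
    then invokes models — the recon-then-abuse sequence described above.

    `events` are simplified CloudTrail records, assumed ordered by time:
    {"eventName": ..., "userIdentity": {"arn": ...}}.
    """
    checked = set()  # principals that probed the logging configuration
    for e in events:
        arn = e.get("userIdentity", {}).get("arn")
        name = e.get("eventName")
        if name == "GetModelInvocationLoggingConfiguration":
            checked.add(arn)
        elif name == "InvokeModel" and arn in checked:
            return True
    return False
```

A legitimate administrator may also inspect logging settings, so a rule like this is best treated as a signal to correlate with other indicators rather than a standalone alert.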

In a further escalation, the attacker sought to launch large GPU instances intended for machine learning tasks. This culminated in the deployment of a costly p4d instance that included a publicly accessible JupyterLab server as an alternative access point. The installation script referenced a nonexistent GitHub repository, further indicating the possible use of a language model to craft the attack.
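Launches of large GPU instances such as the p4d are rare enough in most accounts to be worth flagging outright. As a minimal sketch, the function below inspects a CloudTrail `RunInstances` record for GPU instance families; the record layout follows the real CloudTrail schema, while the family list is an assumption to be tuned per environment:

```python
# Common EC2 GPU instance-type family prefixes (assumed list, adjust per org).
GPU_FAMILIES = ("p3", "p4", "p5", "g5", "g6")

def flags_gpu_launch(event):
    """Flag a CloudTrail RunInstances record requesting a GPU instance type."""
    if event.get("eventName") != "RunInstances":
        return False
    items = (event.get("requestParameters", {})
                  .get("instancesSet", {})
                  .get("items", []))
    # "p4d.24xlarge" -> family prefix "p4d", matched via startswith
    return any(item.get("instanceType", "").startswith(GPU_FAMILIES)
               for item in items)
```

Combining such a rule with an allow-list of expected instance types, or with service quotas that keep GPU capacity at zero by default, limits the blast radius of this class of abuse.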

According to Sysdig, this incident illustrates a significant evolution in the threat landscape. Attackers are increasingly relying on language models to automate tasks that previously required extensive knowledge of a target environment. This shift underscores the necessity for organizations to remain vigilant, particularly regarding unusual model calls, large-scale resource enumeration, and the misuse of Lambda permissions.

The researchers concluded that while AI serves as a valuable ally for defenders in cybersecurity, it has simultaneously become a potent weapon for attackers. As the capabilities of LLMs continue to advance, organizations must adapt their security strategies to combat these emerging threats effectively. The parallels between AI’s dual role in cybersecurity highlight a critical juncture in the ongoing battle to protect sensitive cloud environments.

Written By: AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.