
Cybercriminals to Leverage Agentic AI for Ransomware Attacks in 2026, Warns Trend Micro

Trend Micro warns that ransomware groups will increasingly deploy agentic AI in 2026, automating attacks and amplifying threats to cybersecurity.

Cybercriminals, particularly ransomware groups, are expected to make increasing use of agentic AI in their operations next year as they look to automate their attacks, according to a recent report by Trend Micro. The prediction follows Anthropic's claim that a state-sponsored team from China used agentic AI to orchestrate a cyberattack, a claim that has sparked debate among experts.

Ryan Flores, lead for data and technology research at Trend Micro, expressed concern over the rise of the technology, saying that state-sponsored groups are likely to innovate with agentic AI first, paving the way for its eventual adoption by cybercriminals. There is little evidence of such technology being used in attacks today, he noted, but once its effectiveness has been demonstrated, the potential for misuse is significant.

Flores emphasized that agentic AI is particularly appealing to cybercriminals, who often favor approaches that maximize rewards while minimizing effort. Unlike traditional generative AI, agentic AI can operate with a degree of autonomy, executing actions on behalf of an organization without needing human intervention. This advancement allows for rapid execution of tasks that would typically require manual oversight, such as onboarding new employees.

One practical application Flores highlighted involves automating processes in human resources. Instead of manually creating accounts and email addresses for new employees, an agentic AI system could handle the entire setup autonomously. However, this capability poses considerable risks: cybercriminals could replicate similar systems to scan for vulnerabilities, exploit them, and establish unauthorized access to target networks.

Flores elaborated on this potential misuse, asserting, “If you’re a cybercriminal and you design a system to target a company or website, you could direct the AI to scan for vulnerabilities and exploit them, gaining access to sensitive information.” He added that all the necessary tools to facilitate such attacks are already available, making it a matter of time before these technologies are employed by malicious actors.

David Sancho, a senior threat researcher at Trend Micro Europe, indicated that the transition to fully automated cyberattacks won’t happen overnight. Initial deployments are likely to involve agentic AI handling specific elements of attacks rather than the entire attack chain. He noted that this gradual integration will eventually transform the cybercriminal landscape.

In Trend Micro’s latest report, the shift toward agentic automation was described as a “major leap” for the cybercrime ecosystem. The report highlighted that the rise of AI-powered ransomware-as-a-service (RaaS) will allow even inexperienced operators to conduct complex attacks, leading to a proliferation of independent ransomware operations.

Sancho further forecasted that the emergence of sophisticated cybercriminals offering agentic services to others could create a new underground market for these capabilities, bringing agentic AI-driven attacks into the mainstream. Flores cautioned that the dynamic always favors the attacker, placing a burden on defenders to keep pace with evolving threats.

As organizations adapt to the presence of agentic AI, they must implement stringent control measures. Similar to how human users are granted limited privileges to minimize risks, AI agents should also be assigned restricted access rights to prevent unauthorized actions. Protecting these agents from being compromised is crucial, as attackers may exploit them to facilitate transactions, create accounts, or send sensitive communications.
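The least-privilege principle described above can be sketched in code. The following is a minimal, illustrative example of deny-by-default gating for an agent's tool calls; the agent roles and tool names are hypothetical, not drawn from any specific framework or from Trend Micro's report.

```python
# Deny-by-default allow-list: each agent role may call only the tools it is
# explicitly granted, mirroring how limited privileges are assigned to humans.
# Role and tool names below are hypothetical examples.

ALLOWED_TOOLS = {
    "hr-onboarding-agent": {"create_account", "create_mailbox"},
    "support-agent": {"read_ticket", "reply_ticket"},
}

def authorize(agent_role: str, tool: str) -> bool:
    """Return True only if the role's allow-list explicitly permits the tool."""
    return tool in ALLOWED_TOOLS.get(agent_role, set())

def invoke(agent_role: str, tool: str) -> str:
    """Execute a tool call, refusing anything not on the allow-list."""
    if not authorize(agent_role, tool):
        # Anything unlisted is refused outright rather than silently allowed.
        raise PermissionError(f"{agent_role} may not call {tool}")
    return f"executed {tool}"
```

A real deployment would also log denials and scope credentials per agent, but even this simple gate blocks a compromised agent from invoking tools outside its remit.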

Trend Micro’s report also noted that attackers need not directly exploit AI agents to cause harm. They can manipulate the surrounding infrastructure or introduce malicious code to hijack workflows without raising alarms. This manipulation could allow attackers to stealthily influence multi-agent systems and their resultant behaviors, creating further vulnerabilities.

Concerns regarding the integration of agentic AI into operating systems have also been raised. Research from Hudson Rock pointed to vulnerabilities in platforms such as Windows 11, specifically highlighting the new Copilot taskbar, which could serve as a centralized data hub subject to exploitation by infostealers. Such malware is commonly utilized by financially motivated attackers, including ransomware groups, to facilitate unauthorized access to victims’ networks.

Hudson Rock revealed that “agentic-aware stealer” attacks are already occurring, where attackers embed hidden instructions within seemingly innocuous documents. When users interact with these documents using AI systems like Copilot, the AI may inadvertently execute the attackers’ commands, exfiltrating sensitive data without detection.
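One partial mitigation for the hidden-instruction attacks described above is to screen documents for text that reads like commands aimed at an AI assistant before the assistant processes them. The sketch below is a simple heuristic filter; the patterns are illustrative assumptions, not a vetted detection ruleset from Hudson Rock or any vendor.

```python
import re

# Heuristic patterns that often appear in prompt-injection payloads hidden
# inside documents (e.g. as near-invisible text). Purely illustrative.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"you are now",
    r"exfiltrate|send .* to http",
]

def looks_like_injection(text: str) -> bool:
    """Flag text containing phrasing typical of hidden AI instructions."""
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in SUSPICIOUS_PATTERNS)
```

Pattern matching alone cannot catch every payload, which is why the report's broader point stands: agents also need restricted privileges so that an instruction that does slip through cannot do much.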

As these technologies evolve, the implications for cybersecurity are profound. The anticipated rise of agentic AI in cybercrime not only threatens individual organizations but also highlights the pressing need for enhanced security measures to protect against a rapidly expanding threat landscape.

Written By Rachel Torres

At AIPressa, my work focuses on exploring the paradox of AI in cybersecurity: it's both our best defense and our greatest threat. I've closely followed how AI systems detect vulnerabilities in milliseconds while attackers simultaneously use them to create increasingly sophisticated malware. My approach: explaining technical complexities in an accessible way without losing the urgency of the topic. When I'm not researching the latest AI-driven threats, I'm probably testing security tools or reading about the next attack vector keeping CISOs awake at night.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.