
Cybercriminals to Leverage Agentic AI for Ransomware Attacks in 2026, Warns Trend Micro

Trend Micro warns that ransomware groups will increasingly deploy agentic AI in 2026, automating attacks and amplifying threats to cybersecurity.

Cybercriminals, particularly ransomware groups, are expected to make increasing use of agentic AI in their operations next year as they look to automate their attack strategies, according to a recent report by Trend Micro. The prediction follows Anthropic's claim that a state-sponsored team from China used agentic AI to orchestrate a cyberattack, a claim that has sparked debate among experts.

Ryan Flores, who leads data and technology research at Trend Micro, expressed concern over the rise of the technology, saying that state-sponsored groups are likely to innovate with agentic AI first, paving the way for its eventual adoption by cybercriminals. He noted that there is currently little evidence of the technology being used in attacks, but that the potential for misuse is significant once its efficacy has been demonstrated.

Flores emphasized that agentic AI is particularly appealing to cybercriminals, who favor approaches that maximize rewards while minimizing effort. Unlike traditional generative AI, agentic AI can operate with a degree of autonomy, executing actions on behalf of an organization without human intervention. This allows for rapid execution of tasks that would typically require manual oversight, such as onboarding new employees.

One practical application Flores highlighted involves automating processes in human resources. Instead of manually creating accounts and email addresses for new employees, an agentic AI system could handle the entire setup autonomously. However, this capability poses considerable risks: cybercriminals could replicate similar systems to scan for vulnerabilities, exploit them, and establish unauthorized access to target networks.

Flores elaborated on this potential misuse, asserting, “If you’re a cybercriminal and you design a system to target a company or website, you could direct the AI to scan for vulnerabilities and exploit them, gaining access to sensitive information.” He added that all the necessary tools to facilitate such attacks are already available, making it a matter of time before these technologies are employed by malicious actors.

David Sancho, a senior threat researcher at Trend Micro Europe, indicated that the transition to fully automated cyberattacks won’t happen overnight. Initial deployments are likely to involve agentic AI handling specific elements of attacks rather than the entire attack chain. He noted that this gradual integration will eventually transform the cybercriminal landscape.

In Trend Micro’s latest report, the shift toward agentic automation was described as a “major leap” for the cybercrime ecosystem. The report highlighted that the rise of AI-powered ransomware-as-a-service (RaaS) will allow even inexperienced operators to conduct complex attacks, leading to a proliferation of independent ransomware operations.

Sancho further forecast that sophisticated cybercriminals offering agentic services to others could create a new underground market for these capabilities, bringing agentic AI-driven attacks into the mainstream. Flores cautioned that the dynamic always favors the attacker, placing a burden on defenders to keep pace with evolving threats.

As organizations adapt to the presence of agentic AI, they must implement stringent control measures. Similar to how human users are granted limited privileges to minimize risks, AI agents should also be assigned restricted access rights to prevent unauthorized actions. Protecting these agents from being compromised is crucial, as attackers may exploit them to facilitate transactions, create accounts, or send sensitive communications.
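The least-privilege control described above can be sketched in code. The following is a minimal illustration, not a real framework: the policy class, action names, and handler signatures are all hypothetical, chosen only to show an agent's tool calls being checked against an explicit allowlist before execution.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class AgentPolicy:
    """Allowlist of actions an agent may invoke, mirroring the
    limited privileges granted to human users."""
    agent_id: str
    allowed_actions: frozenset


def invoke(policy, action, handler, *args):
    """Execute a tool call only if the agent's policy explicitly allows it."""
    if action not in policy.allowed_actions:
        raise PermissionError(
            f"{policy.agent_id} is not permitted to perform '{action}'"
        )
    return handler(*args)


# A hypothetical HR onboarding agent may create accounts, nothing else.
hr_agent = AgentPolicy("hr-onboarding", frozenset({"create_account"}))

created = invoke(hr_agent, "create_account",
                 lambda name: f"account:{name}", "new.employee")

# A hijacked attempt to send mail is denied before it reaches any system.
try:
    invoke(hr_agent, "send_email", lambda to: "sent", "attacker@example.com")
except PermissionError as exc:
    denied = str(exc)
```

The point of the sketch is that the deny decision happens outside the agent itself, so even a compromised agent cannot expand its own privileges.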

Trend Micro’s report also noted that attackers need not directly exploit AI agents to cause harm. They can manipulate the surrounding infrastructure or introduce malicious code to hijack workflows without raising alarms. This manipulation could allow attackers to stealthily influence multi-agent systems and their resultant behaviors, creating further vulnerabilities.

Concerns regarding the integration of agentic AI into operating systems have also been raised. Research from Hudson Rock pointed to vulnerabilities in platforms such as Windows 11, specifically highlighting the new Copilot taskbar, which could serve as a centralized data hub subject to exploitation by infostealers. Such malware is commonly utilized by financially motivated attackers, including ransomware groups, to facilitate unauthorized access to victims’ networks.

Hudson Rock revealed that “agentic-aware stealer” attacks are already occurring, where attackers embed hidden instructions within seemingly innocuous documents. When users interact with these documents using AI systems like Copilot, the AI may inadvertently execute the attackers’ commands, exfiltrating sensitive data without detection.
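A defensive response to this class of attack is to screen documents before an AI assistant reads them. The sketch below is a simple heuristic filter, not a product: the character set and phrase patterns are illustrative assumptions, flagging invisible Unicode characters and common prompt-injection wording for quarantine rather than guaranteeing detection.

```python
import re

# Characters commonly used to hide text invisibly inside documents.
HIDDEN_CHARS = {"\u200b", "\u200c", "\u200d", "\u2060", "\ufeff", "\u202e"}

# Heuristic phrases associated with injection attempts (illustrative, not exhaustive).
SUSPECT_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    re.compile(r"exfiltrate|send .* to http", re.I),
]


def flag_document(text):
    """Return reasons a document should be quarantined before an
    AI assistant such as Copilot is allowed to process it."""
    reasons = []
    if any(ch in HIDDEN_CHARS for ch in text):
        reasons.append("invisible Unicode characters present")
    for pattern in SUSPECT_PATTERNS:
        if pattern.search(text):
            reasons.append(f"suspect phrase: {pattern.pattern}")
    return reasons
```

A clean report yields an empty list, while a document carrying a zero-width space and an "ignore previous instructions" payload is flagged on both counts; real deployments would pair such screening with stricter agent permissions rather than rely on pattern matching alone.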

As these technologies evolve, the implications for cybersecurity are profound. The anticipated rise of agentic AI in cybercrime not only threatens individual organizations but also highlights the pressing need for enhanced security measures to protect against a rapidly expanding threat landscape.

Written By
Rachel Torres

At AIPressa, my work focuses on exploring the paradox of AI in cybersecurity: it's both our best defense and our greatest threat. I've closely followed how AI systems detect vulnerabilities in milliseconds while attackers simultaneously use them to create increasingly sophisticated malware. My approach: explaining technical complexities in an accessible way without losing the urgency of the topic. When I'm not researching the latest AI-driven threats, I'm probably testing security tools or reading about the next attack vector keeping CISOs awake at night.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.