Cybercriminals, particularly ransomware groups, are expected to increasingly utilize agentic AI in their operations next year, as they look to automate their attack strategies, according to a recent report by Trend Micro. This prediction follows claims by Anthropic that a state-sponsored team from China utilized agentic AI to orchestrate a cyberattack, a claim that has sparked some debate among experts.
Ryan Flores, the lead for data and technology research at Trend Micro, expressed concern over the rise of this technology, stating that state-sponsored groups are likely to innovate with agentic AI first, paving the way for its eventual adoption by cybercriminals. He noted that at present, there is little evidence of such technology being employed in attacks, but its potential for misuse is significant once its efficacy has been demonstrated.
Flores emphasized that agentic AI is particularly appealing to cybercriminals, who often favor approaches that maximize rewards while minimizing effort. Unlike traditional generative AI, agentic AI can operate with a degree of autonomy, executing actions on behalf of an organization without needing human intervention. This advancement allows for rapid execution of tasks that would typically require manual oversight, such as onboarding new employees.
One practical application Flores highlighted involves automating processes in human resources. Instead of manually creating accounts and email addresses for new employees, an agentic AI system could handle the entire setup autonomously. However, this capability poses considerable risks: cybercriminals could replicate similar systems to scan for vulnerabilities, exploit them, and establish unauthorized access to target networks.
Flores elaborated on this potential misuse, asserting, “If you’re a cybercriminal and you design a system to target a company or website, you could direct the AI to scan for vulnerabilities and exploit them, gaining access to sensitive information.” He added that all the necessary tools to facilitate such attacks are already available, making it a matter of time before these technologies are employed by malicious actors.
David Sancho, a senior threat researcher at Trend Micro Europe, indicated that the transition to fully automated cyberattacks won’t happen overnight. Initial deployments are likely to involve agentic AI handling specific elements of attacks rather than the entire attack chain. He noted that this gradual integration will eventually transform the cybercriminal landscape.
In Trend Micro’s latest report, the shift toward agentic automation was described as a “major leap” for the cybercrime ecosystem. The report highlighted that the rise of AI-powered ransomware-as-a-service (RaaS) will allow even inexperienced operators to conduct complex attacks with minimal skills, leading to a proliferation of independent ransomware operations.
Sancho further forecasted that the emergence of sophisticated cybercriminals offering agentic services to others could create a new underground market for these capabilities, bringing agentic AI-driven attacks into the mainstream. Flores cautioned that the dynamic always favors the attacker, placing a burden on defenders to keep pace with evolving threats.
As organizations adapt to the presence of agentic AI, they must implement stringent control measures. Similar to how human users are granted limited privileges to minimize risks, AI agents should also be assigned restricted access rights to prevent unauthorized actions. Protecting these agents from being compromised is crucial, as attackers may exploit them to facilitate transactions, create accounts, or send sensitive communications.
Trend Micro’s report also noted that attackers need not directly exploit AI agents to cause harm. They can manipulate the surrounding infrastructure or introduce malicious code to hijack workflows without raising alarms. This manipulation could allow attackers to stealthily influence multi-agent systems and their resultant behaviors, creating further vulnerabilities.
Concerns regarding the integration of agentic AI into operating systems have also been raised. Research from Hudson Rock pointed to vulnerabilities in platforms such as Windows 11, specifically highlighting the new Copilot taskbar, which could serve as a centralized data hub subject to exploitation by infostealers. Such malware is commonly utilized by financially motivated attackers, including ransomware groups, to facilitate unauthorized access to victims’ networks.
Hudson Rock revealed that “agentic-aware stealer” attacks are already occurring, where attackers embed hidden instructions within seemingly innocuous documents. When users interact with these documents using AI systems like Copilot, the AI may inadvertently execute the attackers’ commands, exfiltrating sensitive data without detection.
As these technologies evolve, the implications for cybersecurity are profound. The anticipated rise of agentic AI in cybercrime not only threatens individual organizations but also highlights the pressing need for enhanced security measures to protect against a rapidly expanding threat landscape.
GPT-4-Powered MalTerminal Malware Threatens IAM Systems with Dynamic Attacks
Mexico Faces 108 Million Cyberattacks Annually, Kaspersky Reports 297,000 Daily Incidents
AI-Powered Firewalls: Justifying $4.88M Data Breach Costs with Predictive Threat Defense
Endpoint Detection and Response Market Expected to Hit $25.7 Billion by 2032 Amid Rising Cyber Threats
Google’s Antigravity AI Tool Hacked Within 24 Hours, Exposes Severe Vulnerability




















































