AI tools have become an integral component of the cybercriminal landscape, fundamentally shifting the dynamics of cyber attacks, according to Rik Ferguson, VP of security intelligence at Forescout. Speaking to media at the company’s Vedere Labs research hub in Eindhoven, Ferguson emphasized that cybercriminals are increasingly adopting AI to enhance their attacks, with many now utilizing mainstream commercial AI models instead of exclusively relying on underground options.
This shift follows recent Forescout research revealing substantial advances in AI's offensive cyber capabilities. A mid-2025 comparison of 50 AI models found that more than half (55%) failed to meet basic vulnerability detection standards. By contrast, a follow-up study released this month found that all models excelled in this area, signaling a significant leap in AI's effectiveness at both detecting and exploiting vulnerabilities.
Ferguson noted that the cybercriminal community’s approach to AI is moving toward the mainstream. Previously, hackers gravitated towards specialized underground large language models (LLMs) like WormGPT for their illicit activities. However, with the emergence of more capable commercial models, these underground options have largely been abandoned. “When it comes to the criminal community, the behavior there is changing around AI,” Ferguson explained, underscoring the transition to established models as preferred tools among attackers.
One such tool is Anthropic’s Claude model, which has gained traction among threat actors due to its accessibility and capabilities. Observations from underground forums indicated that Claude has become highly sought-after, while newer iterations of ChatGPT are losing appeal owing to their more stringent guardrails. Both Anthropic and OpenAI are aware of this misuse and are actively taking steps to counteract it. Last September, Anthropic issued warnings about its tools being weaponized for cyber attacks, while OpenAI reported in late 2024 that it had disrupted numerous operations utilizing its chatbot for malicious purposes.
The perception of AI among cybercriminals has also transformed. Ferguson pointed out that skepticism has faded, giving way to a more enthusiastic embrace of AI technologies. Underground forum threads that once derided AI are now filled with recommendations for its use, tutorials, and guidance on effective implementation. “AI is now recommended, and more experienced forum members are offering knowledge transfer,” he stated, highlighting how the technology has become a standard part of the attacker toolkit.
As AI technologies become integrated into cyber operations, they raise significant concerns for defenders. Ferguson noted that agentic AI increases both the speed and scale of attacks. For instance, the median time for initial access brokers to hand off compromised access within a network has fallen from over eight hours in 2022 to just 22 seconds today. This automation represents a new era of heightened risk, as AI agents operate continuously, unconstrained by the need for human intervention.
“AI is not constrained by the way we consider the world,” Ferguson cautioned, suggesting that traditional strategies for defending against attacks may soon become outdated. He pointed out that the rise of automated reconnaissance and lateral movement capabilities means that organizations could face perpetual threats. “If it becomes 24/7, 365, not only is that much more difficult to defend against, it’s actually much more difficult to attribute using those characteristics,” he explained.
Looking ahead, Ferguson indicated that both attackers and defenders will increasingly rely on AI agents against each other. Some organizations are already deploying AI to automate device isolation and quarantine practices, while attackers utilize bots for various operations. However, the disparity in usage policies presents challenges, as defenders operate within strict regulations while attackers do not. “It’s not an unbalanced equation; we are all using it,” Ferguson said, emphasizing the ethical and operational dilemmas facing cybersecurity professionals.
In light of this evolving landscape, the implications extend beyond immediate threats. The rapid integration of AI into both offensive and defensive strategies necessitates a reevaluation of how cybersecurity risks and responses are conceptualized. As AI continues to shape cyber operations, understanding its dual-edged nature will be crucial for stakeholders navigating this complex terrain.