
AI Becomes Standard Tool for Cybercriminals, Forescout Reveals Claude’s Rise

Forescout reveals that cybercriminals increasingly prefer mainstream AI tools like Anthropic’s Claude, with agentic AI cutting the median time to hand off compromised access from over eight hours to just 22 seconds.

AI tools have become an integral component of the cybercriminal landscape, fundamentally shifting the dynamics of cyber attacks, according to Rik Ferguson, VP of security intelligence at Forescout. Speaking to media at the company’s Vedere Labs research hub in Eindhoven, Ferguson emphasized that cybercriminals are increasingly adopting AI to enhance their attacks, with many now utilizing mainstream commercial AI models instead of exclusively relying on underground options.

This shift follows recent Forescout research showing substantial advances in AI’s offensive cyber capabilities. A mid-2025 comparison of 50 AI models found that over half (55%) failed to meet basic vulnerability-detection standards. In contrast, a follow-up study released this month found that all models excelled in this area, signaling a significant leap in AI’s effectiveness at both detecting and exploiting vulnerabilities.

Ferguson noted that the cybercriminal community’s approach to AI is moving toward the mainstream. Previously, hackers gravitated towards specialized underground large language models (LLMs) like WormGPT for their illicit activities. However, with the emergence of more capable commercial models, these underground options have largely been abandoned. “When it comes to the criminal community, the behavior there is changing around AI,” Ferguson explained, underscoring the transition to established models as preferred tools among attackers.

One such tool is Anthropic’s Claude model, which has gained traction among threat actors due to its accessibility and capabilities. Observations from underground forums indicated that Claude has become highly sought-after, while newer iterations of ChatGPT are losing appeal owing to their more stringent guardrails. Both Anthropic and OpenAI are aware of this misuse and are actively taking steps to counteract it. Last September, Anthropic issued warnings about its tools being weaponized for cyber attacks, while OpenAI reported in late 2024 that it had disrupted numerous operations utilizing its chatbot for malicious purposes.

The perception of AI among cybercriminals has also transformed. Ferguson pointed out that skepticism has faded, making way for a more enthusiastic embrace of AI technologies. Conversations in underground forums, which once derided AI, are now filled with recommendations for its use, tutorials, and guidance on effective implementation. “AI is now recommended, and more experienced forum members are offering knowledge transfer,” he stated, highlighting how the technology has become standard in the attacker toolkit.

As AI technologies become integrated into cyber operations, they raise significant concerns for defenders. Ferguson noted that agentic AI increases both the speed and the scale of attacks. For instance, the median time for initial access brokers to hand off compromised access within a network has fallen from over eight hours in 2022 to just 22 seconds today. This automation represents a new era of heightened risk, as AI agents operate continuously, free of human limitations such as fatigue and working hours.

“AI is not constrained by the way we consider the world,” Ferguson cautioned, suggesting that traditional strategies for defending against attacks may soon become outdated. He pointed out that the rise of automated reconnaissance and lateral movement capabilities means that organizations could face perpetual threats. “If it becomes 24/7, 365, not only is that much more difficult to defend against, it’s actually much more difficult to attribute using those characteristics,” he explained.

Looking ahead, Ferguson indicated that both attackers and defenders will increasingly rely on AI agents against each other. Some organizations are already deploying AI to automate device isolation and quarantine practices, while attackers utilize bots for various operations. However, the disparity in usage policies presents challenges, as defenders operate within strict regulations while attackers do not. “It’s not an unbalanced equation; we are all using it,” Ferguson said, emphasizing the ethical and operational dilemmas facing cybersecurity professionals.

In light of this evolving landscape, the implications extend beyond immediate threats. The rapid integration of AI into both offensive and defensive strategies necessitates a reevaluation of how cybersecurity risks and responses are conceptualized. As AI continues to shape cyber operations, understanding its dual-edged nature will be crucial for stakeholders navigating this complex terrain.

Written by Rachel Torres

At AIPressa, my work focuses on exploring the paradox of AI in cybersecurity: it's both our best defense and our greatest threat. I've closely followed how AI systems detect vulnerabilities in milliseconds while attackers simultaneously use them to create increasingly sophisticated malware. My approach: explaining technical complexities in an accessible way without losing the urgency of the topic. When I'm not researching the latest AI-driven threats, I'm probably testing security tools or reading about the next attack vector keeping CISOs awake at night.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.