The integration of artificial intelligence (AI) into cyberattacks against operational technology (OT) is reshaping the threat landscape, according to cybersecurity experts. While claims of fully autonomous AI-driven attacks may be overstated, AI is amplifying human-led efforts, speeding up reconnaissance, phishing, and exploit development. Work that once required specialized teams can now be executed in minutes, putting critical industries and infrastructure at greater risk.
Research from SANS indicates that AI is significantly increasing the speed and scale of phishing and exploit creation. Check Point's research on VoidLink shows AI's capability to assist in crafting advanced malware frameworks, producing complex code structures in days rather than weeks. Although fully autonomous weaponized AI has yet to dominate the field, the lower barriers to entry for high-complexity threats point to a shift toward more sophisticated cyberattacks.
Data from ecrime.ch indicate that ransomware incidents surged dramatically, with 7,819 cases posted on data leak sites in 2025. The United States was the most targeted, experiencing nearly 4,000 incidents, followed by Canada and several European countries. Major ransomware groups included Qilin, Akira, Cl0p, PLAY, and SAFEPAY. This escalation underlines the critical need for robust cybersecurity measures across industries.
The zero trust security model offers some defense against these evolving threats, employing microsegmentation and strict authentication to slow lateral movement and reduce exposure. However, many OT environments are hampered by legacy systems that prioritize safety over security, leaving gaps that AI-assisted attackers can exploit. Experts warn that accountability gaps arise when defenders cannot match the speed of attackers, necessitating a redefinition of defense strategies that emphasize adaptability and continuous learning.
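The core of the microsegmentation approach described above is a default-deny policy: traffic between network zones is blocked unless an explicit rule allows it, which is what slows an attacker's lateral movement. The following is a minimal illustrative sketch of that principle; the zone names, services, and rules are hypothetical and not drawn from any real deployment.

```python
# Hedged sketch of default-deny microsegmentation policy evaluation.
# Zone and service names below are hypothetical, for illustration only.
ALLOWED_FLOWS = {
    ("engineering-ws", "plc-zone"): {"modbus/502"},
    ("historian", "plc-zone"): {"opcua/4840"},
}

def is_allowed(src_zone: str, dst_zone: str, service: str) -> bool:
    """Default-deny: a flow passes only if it is explicitly allowlisted."""
    return service in ALLOWED_FLOWS.get((src_zone, dst_zone), set())

# Lateral movement from a compromised workstation to the historian is
# blocked because no rule permits it, while a legitimate flow passes:
print(is_allowed("engineering-ws", "historian", "smb/445"))  # False
print(is_allowed("historian", "plc-zone", "opcua/4840"))     # True
```

In practice this logic lives in firewalls or software-defined networking controllers rather than application code, but the design choice is the same: the absence of a rule means denial, so an AI-assisted attacker who compromises one zone still has to defeat each boundary explicitly.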
Understanding the Real Threat of AI in Cyberattacks
Fernando Guerrero Bautista, an OT security expert at Airbus Protect, noted that AI currently acts as a sophisticated force multiplier rather than an autonomous adversary. He highlighted its practical applications in reverse-engineering industrial protocols and generating targeted spear-phishing campaigns that mimic the language used by operators in the energy and manufacturing sectors.
Moreover, Paul Lukoskie, senior director of threat intelligence at Dragos, emphasized AI’s role in lowering entry barriers for less sophisticated attackers. AI’s ability to automate reconnaissance and optimize attack paths greatly enhances the efficacy of initial intrusion tactics. He cited examples from 2025 where adversaries employed AI tools such as Anthropic’s Claude Code to facilitate complex attack phases like credential theft and vulnerability scanning.
Eric Knapp, product manager at Nozomi Networks, stressed that AI’s influence spans the entire attack lifecycle, from reconnaissance to execution. He warned that attackers increasingly exploit human vulnerabilities, capitalizing on AI’s analytical capabilities to discover new weaknesses at an unprecedented scale.
Multiple industry experts raised concerns about AI's potential to cause subtle operational degradation rather than outright disruption. Steve Mustard, an independent consultant, noted that AI could manipulate operational parameters in ways that evade traditional control systems, inflicting economic harm over time without triggering immediate alarms.
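One standard countermeasure to the slow manipulation Mustard describes is change detection on process variables, for example a one-sided CUSUM detector that accumulates small deviations until they become statistically unmissable. The sketch below is illustrative only and not from the article; the target, slack, and threshold values are assumed for the example.

```python
# Hedged, illustrative sketch: a one-sided CUSUM detector flags slow upward
# drift in a process variable -- the kind of gradual manipulation that
# never crosses a static alarm limit on any single reading.
def cusum_drift(samples, target, slack, threshold):
    """Return the index where accumulated positive drift exceeds
    `threshold`, or None if no sustained drift is detected."""
    s = 0.0
    for i, x in enumerate(samples):
        # Accumulate deviation above target, discounting normal noise (slack).
        s = max(0.0, s + (x - target - slack))
        if s > threshold:
            return i
    return None

# A tiny 0.02-unit-per-sample bias trips the detector, even though every
# individual reading stays well inside a +/-1.0 alarm band around 50.0.
biased = [50.0 + 0.02 * k for k in range(40)]
steady = [50.0] * 40
```

The point of the example is that per-sample alarm limits never fire on the biased series, while the cumulative statistic does, which is why drift detection complements, rather than replaces, conventional alarming.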
Dennis Hackney, vice-chairperson of the ISA Global Cybersecurity Alliance, remarked that while AI has yet to dismantle OT environments entirely, its applications in data exfiltration and reconnaissance cannot be ignored. He pointed to alarming scenarios where AI could assist in exploiting vulnerabilities in critical infrastructure through automated attacks.
Despite the potential for enhanced defenses through zero trust principles, experts agree that many OT environments struggle to adopt this model due to legacy systems and the unique nature of industrial processes. Lukoskie emphasized that while segmentation and strict authentication can mitigate risks, they may also impact operational efficiency, complicating the implementation of zero trust.
Looking toward the future, industry leaders call for a shift in mindset regarding cybersecurity resilience in an AI-influenced landscape. Bautista advocated for “graceful degradation,” emphasizing the need for systems to maintain operational integrity even when digital layers are compromised. This reflects a growing consensus that traditional security measures alone are insufficient in the face of rapidly evolving AI-assisted threats.
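In control terms, graceful degradation often means a device keeps enforcing safe behavior locally even when it can no longer trust its supervisory layer. The fragment below is a simplified sketch of that idea; the setpoint values, limits, and function names are hypothetical, invented for illustration.

```python
# Hedged sketch of graceful degradation (all values hypothetical): when the
# supervisory link is compromised or lost, the controller ignores remote
# setpoints and holds a conservative, locally stored value. Even a trusted
# remote value is clamped to hard physical limits enforced locally.
SAFE_SETPOINT = 55.0   # hypothetical safe fallback operating value
LIMITS = (40.0, 70.0)  # hypothetical hard physical bounds

def effective_setpoint(remote_trusted: bool, remote_value: float) -> float:
    """Pick the setpoint the controller will actually act on."""
    if not remote_trusted:
        return SAFE_SETPOINT  # degrade gracefully: keep running, safely
    lo, hi = LIMITS
    # Never follow the remote layer beyond locally enforced limits.
    return min(max(remote_value, lo), hi)
```

The design choice Bautista's framing suggests is that the digital layer is treated as advisory: a compromise of the supervisory network degrades optimization, not safety.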
As organizations grapple with these challenges, collaboration and continuous investment in security technology become paramount. The increasing sophistication of AI-driven cyberattacks necessitates a holistic approach to cybersecurity that not only fortifies defenses but also adopts adaptive strategies for incident response and recovery. The evolving threat landscape indicates that meaningful resilience in OT must be redefined, prioritizing proactive measures over reactive ones.