Artificial intelligence (AI)-enabled cyber attacks surged by 47 percent globally in 2025, according to DeepStrike’s analysis titled “AI Cyber Attack Statistics 2025, Trends, Costs, Defense.” The report highlights that cyber criminals are leveraging AI throughout the attack lifecycle, using it to identify and exploit vulnerabilities more rapidly and to move laterally within compromised environments.
Businesses are increasingly evaluated against a ‘reasonable security’ standard, a fluid benchmark that now calls for integrating AI into security frameworks under human oversight. Effective employee training on AI-enabled threats, combined with defensive AI tools that are well configured and consistently monitored, has become essential to safeguarding digital assets.
In 2025, the top ten vulnerabilities exploited by cyber criminals either had publicly available exploit code or were actively used in attacks. Notably, approximately 60 percent of these vulnerabilities became exploitable within two weeks of their public disclosure, as detailed in IBM’s “X-Force 2025 Threat Intelligence Index.” Cyber criminals are employing AI tools to automate internet searches and accelerate the identification of these vulnerabilities, particularly in internet-facing applications and application programming interfaces (APIs).
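Defenders can counter this by monitoring the same disclosure feeds that attackers mine. The Python sketch below is purely illustrative, not a tool described in any of the cited reports: it pulls critical CVEs published in the last 14 days from the public NVD CVE API 2.0 so they can be checked against an asset inventory before the typical two-week weaponization window closes. The endpoint and parameter names reflect NVD’s documented interface, though the timestamp format should be verified against current documentation.

import datetime
import requests  # third-party; pip install requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def recent_critical_cves(days_back: int = 14) -> list[dict]:
    """Fetch CVEs published in the last `days_back` days with CRITICAL CVSS v3 severity."""
    now = datetime.datetime.now(datetime.timezone.utc)
    start = now - datetime.timedelta(days=days_back)
    params = {
        # NVD expects ISO-8601 extended timestamps for the publication window.
        "pubStartDate": start.strftime("%Y-%m-%dT%H:%M:%S.000"),
        "pubEndDate": now.strftime("%Y-%m-%dT%H:%M:%S.000"),
        "cvssV3Severity": "CRITICAL",
    }
    resp = requests.get(NVD_API, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json().get("vulnerabilities", [])

if __name__ == "__main__":
    for item in recent_critical_cves():
        cve = item["cve"]
        print(cve["id"], cve["descriptions"][0]["value"][:100])

In practice, such a job would also filter on the specific products an organization exposes to the internet (NVD supports product-scoped queries) rather than relying on severity alone.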
The landscape of cyber attacks is evolving rapidly. Data exfiltration, which took cyber criminals an average of nine days in 2021, has been reduced to just two days by 2024. Security firms have even demonstrated that a complete AI-driven ransomware attack chain—from initial breach to data exfiltration and encryption—can be conducted in as little as 25 minutes. Moreover, a proof of concept for AI-powered ransomware, dubbed ‘PromptLock,’ has been developed, showcasing the efficiency of fully autonomous attacks.
AI also facilitates sophisticated phishing techniques, including deepfake audio and video impersonations that appear to come from trusted sources. These advances in social engineering have made traditional indicators of deception, such as poor grammar, far less reliable, leaving employees more susceptible. In 2024, incidents of social engineering and fraud rose by 233 percent, and AI-driven deepfake attacks rose 53 percent year over year, according to Aon’s report on global risk management.
Reports of AI-generated deepfake attacks have surged. In one significant incident in 2024, criminals used deepfake video technology to impersonate senior executives during a video call, resulting in a fraudulent $25 million transaction. Additionally, suspected nation-state hackers utilized AI-generated identities to secure virtual employment, granting them access to corporate systems to steal sensitive data.
Microsoft’s “Cyber Signals” reported a 46 percent rise in AI-generated phishing content in 2025. These attacks often redirect users to credential-harvesting sites disguised as legitimate portals, and their sophistication enables them to bypass many conventional security filters, with success rates increasing by approximately 25 percent, as noted in DeepStrike’s findings.
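One recurring trait of credential-harvesting campaigns is a lookalike domain that differs from a legitimate portal by only a character or two. The sketch below is purely illustrative; the allowlist entries are placeholders, and production filters rely on far richer signals. It uses Python’s standard-library difflib to flag domains that closely resemble, but do not exactly match, trusted login portals.

import difflib

# Hypothetical allowlist of portals an organization's users actually log into.
LEGITIMATE_PORTALS = ["login.microsoftonline.com", "sso.example.com", "okta.example.com"]

def lookalike_score(domain: str) -> float:
    """Return the highest string similarity between `domain` and any trusted portal."""
    return max(
        difflib.SequenceMatcher(None, domain.lower(), portal).ratio()
        for portal in LEGITIMATE_PORTALS
    )

def is_suspicious(domain: str, threshold: float = 0.8) -> bool:
    """Flag domains that closely resemble, but do not exactly match, a trusted portal."""
    return domain.lower() not in LEGITIMATE_PORTALS and lookalike_score(domain) >= threshold

print(is_suspicious("login.rnicrosoftonline.com"))  # True: 'rn' mimics 'm'
print(is_suspicious("login.microsoftonline.com"))   # False: exact trusted match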
In response to the rise of AI-enhanced attacks, security professionals are adopting AI-driven defensive strategies. Approximately 51 percent of organizations now use AI in their security programs, saving an average of $1.8 million in breach costs compared with those lacking such capabilities, according to IBM. By deploying advanced endpoint detection and response and intrusion detection and prevention systems, organizations harness machine learning to establish activity baselines, flag anomalies, and block attacks in real time.
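As a concrete illustration of the baseline-and-anomaly approach, the snippet below trains scikit-learn’s IsolationForest on synthetic, stand-in session telemetry and flags sessions that deviate sharply from the learned baseline. It is a simplified sketch, not any vendor’s detection logic.

import numpy as np
from sklearn.ensemble import IsolationForest  # pip install scikit-learn

rng = np.random.default_rng(7)

# Stand-in telemetry: one row per session, with columns such as
# [logins per hour, MB transferred, distinct hosts contacted].
baseline = rng.normal(loc=[5, 50, 3], scale=[1, 10, 1], size=(1000, 3))

# Fit a baseline of "normal" activity; contamination is the expected anomaly rate.
model = IsolationForest(contamination=0.01, random_state=7).fit(baseline)

# Score new sessions: predict() returns -1 for outliers, 1 for inliers.
new_sessions = np.array([
    [5, 52, 3],     # in line with the baseline
    [40, 900, 60],  # burst of logins, data movement, and host contacts
])
print(model.predict(new_sessions))  # e.g., [ 1 -1 ]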
AI is also instrumental in expediting the detection of software vulnerabilities, shortening the interval between identification and patching. While these tools can significantly enhance detection and response capabilities, experts caution that AI systems may generate false positives and depend heavily on the quality of their training data. They tend to be more effective at managing incidents than at preventing them outright.
The dual role of AI in cybersecurity presents legal complexities for companies striving to comply with information security regulations and to build defensible security programs. Regulatory bodies emphasize asset inventories, access controls, logging and monitoring, vulnerability management, and well-tested incident response plans as essential components of a reasonable security framework.
As organizations encounter AI-fueled threats and adopt AI-centric defenses, their legal teams must collaborate with relevant stakeholders to understand these risks. With automated AI attacks continuing to evolve, integrating robust governance structures, skilled personnel, and realistic employee training will be key to strengthening organizational resilience.
Moving forward, the growing prevalence of AI in both offensive and defensive roles underscores the need for businesses not only to secure their operations but also to position themselves favorably in regulatory discussions. The ability of AI tools to support a legally defensible security posture is likely to become increasingly important as cyber threats continue to evolve.
See also
Anthropic’s Claims of AI-Driven Cyberattacks Raise Industry Skepticism
Anthropic Reports AI-Driven Cyberattack Linked to Chinese Espionage
Quantum Computing Threatens Current Cryptography, Experts Seek Solutions
Anthropic’s Claude AI Exploited in Significant Cyber-Espionage Operation
AI Poisoning Attacks Surge 40%: Businesses Face Growing Cybersecurity Risks