AI Cybersecurity

88% of Organizations Hit by AI-Powered Cyber Attacks, Legacy Email Security Fails

88% of organizations faced AI-driven cyber attacks last year, exposing a critical gap in security readiness as 60% lack confidence against deepfake threats.

In a stark warning for enterprise security, especially for firms that handle sensitive personal and financial information, a recent study indicates that 88% of organizations experienced at least one security incident that undermined trust in digital communications over the past year. The rise of AI-powered phishing attacks has catalyzed a troubling resurgence of threats that legacy security tools are ill-equipped to counter.

The research, conducted by Osterman Research and commissioned by IRONSCALES, surveyed 128 cybersecurity decision-makers and uncovered a perilous gap in preparedness: while 82% of respondents noted an increase in threat actors looking to exploit trusted communications, 60% lack confidence in their ability to effectively counter deepfake attacks.

Michael Sampson, Principal Analyst at Osterman Research, remarked, “The threat curve just got reset. Even ‘solved’ attack types like phishing and business email compromise have become immature again. BEC attacks from 2025 bear little resemblance to those from 2020—they’re now hyper-personalized, multi-channel, and can be launched autonomously at scale.” This growing sophistication compounds an already dire landscape, as organizations grapple with persistently high breach rates.

Despite these challenges, respondents believe that the maturity of AI-enhanced attacks is still developing. Specifically, 28% indicate that AI-generated phishing is just beginning to emerge, alongside 25% who feel the same about deepfake audio attacks, and another 28% who consider deepfake video attacks to be in nascent stages. In essence, organizations are facing alarming breach rates with threats that have not yet reached their full potential.

Traditional indicators that employees and security systems previously relied on—such as grammar errors, suspicious sender addresses, and generic language—have been rendered ineffective by AI. Cybercriminals can now craft meticulous attacks in any language, with personalization occurring at scale. These attacks are increasingly delivered via multiple channels, including email, phone, video, and collaboration platforms.
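To see why those cues no longer hold, consider a minimal, hypothetical sketch of the kind of rule-based scoring that legacy email filters lean on. The indicator lists, function names, and sample email below are illustrative assumptions only, not anything drawn from the IRONSCALES/Osterman report; the point is simply that a fluent, personalized AI-written lure trips none of the classic rules.

    # Hypothetical sketch of legacy, surface-level phishing heuristics.
    # All indicator lists and names here are illustrative assumptions,
    # not taken from the report or any specific product.

    import re

    GENERIC_GREETINGS = ("dear customer", "dear user", "dear sir/madam")
    COMMON_TYPOS = ("recieve", "acount", "verifcation", "paypa1")

    def legacy_phishing_score(sender: str, body: str) -> int:
        """Score an email on classic indicators: suspicious sender domains,
        generic greetings, and spelling mistakes. Higher = more suspicious."""
        score = 0
        text = body.lower()

        # 1. Suspicious sender: free-mail domain or look-alike spelling.
        if re.search(r"@(gmail|outlook|yahoo)\.", sender.lower()):
            score += 1
        if re.search(r"rnicrosoft|g00gle|arnazon", sender.lower()):
            score += 2

        # 2. Generic, impersonal greeting.
        if any(text.startswith(g) for g in GENERIC_GREETINGS):
            score += 1

        # 3. Obvious spelling errors.
        score += sum(1 for typo in COMMON_TYPOS if typo in text)

        return score

    # A fluent, personalized AI-written lure trips none of these rules:
    ai_lure = ("Hi Dana, following up on Thursday's vendor call - the revised "
               "invoice is attached. Could you confirm the wire details today?")
    print(legacy_phishing_score("accounts@trusted-vendor.com", ai_lure))  # -> 0

A well-crafted AI-generated message scores zero under checks like these, which is the gap the report describes: the signals such filters were built around simply are not present anymore.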

The research highlights a “perfect storm” of vulnerability for finance departments, which are viewed as the highest-priority target for threat actors. A significant 59% of organizations classify finance teams as “high” or “extreme” priority targets, while the same percentage expresses substantial concern about these teams’ readiness to defend against trust-based attacks. Audian Paxson, Principal Technical Strategist at IRONSCALES, emphasized, “Finance teams control the money, so they’re priority number one for attackers. But cybersecurity leaders report the lowest confidence in these teams’ ability to spot sophisticated BEC and impersonation scams. That gap is getting exploited daily.”

Moreover, over 33% of organizations reported that threat actors successfully impersonated trusted vendors to steal funds or information over the past year, with vendor impersonation attacks showing a significant increase—13% of respondents noted major year-over-year growth in such incidents.

Perhaps most alarmingly, nearly one in five security leaders stated that security awareness training has proven ineffective against AI-enhanced threats. Current training methods aimed at preparing employees to detect trust-exploiting attacks are failing many organizations, particularly when it comes to deepfake audio and video. Respondents rated their training’s effectiveness at just 38% for detecting deepfake audio attacks, 39% for deepfake video attacks, and 43% for AI-generated phishing.

“The legacy email protections are too blunt an instrument to recognize the subtle indicators of modern AI-powered attacks,” noted Sampson. “Organizations can no longer trust these legacy solutions to protect against threats that didn’t exist when they were designed.”

The growing crisis is prompting a reassessment of security strategies across organizations. The research found that 70% of organizations now consider detecting deepfake audio impersonation attacks “extremely important,” marking the highest priority increase among respondents. Additionally, 70% are willing to integrate best-in-class point solutions to address existing gaps, 68% are open to changing vendors entirely, and 70% are prepared to replace their entire security technology stack.

The cost of inaction is becoming increasingly clear. Fifty-five percent of security leaders believe that failing to defend against trust-exploiting attacks significantly heightens the likelihood of data breaches, with the fallout extending to reduced productivity, compromised customer communications, and broader operational disruption. As organizations navigate this evolving threat landscape, the imperative for robust, future-ready security measures has never been more urgent.

Written by Rachel Torres
