AI Cybersecurity

88% of Organizations Hit by AI-Powered Cyber Attacks, Legacy Email Security Fails

88% of organizations faced AI-driven cyber attacks last year, exposing a critical gap in security readiness as 60% lack confidence against deepfake threats.

In a stark warning for enterprise security, especially for firms that handle sensitive personal and financial information, a recent study indicates that 88% of organizations experienced at least one security incident that undermined trust in digital communications over the past year. The rise of AI-powered phishing attacks has catalyzed a troubling resurgence of threats that legacy security tools are ill-equipped to counter.

The research report, conducted by Osterman Research and commissioned by IRONSCALES, surveyed 128 cybersecurity decision-makers and uncovered a perilous gap in preparedness: while 82% of respondents noted an increase in threat actors looking to exploit trusted communications, 60% lack confidence in their ability to effectively counter deepfake attacks.

Michael Sampson, Principal Analyst at Osterman Research, remarked, “The threat curve just got reset. Even ‘solved’ attack types like phishing and business email compromise have become immature again. BEC attacks from 2025 bear little resemblance to those from 2020—they’re now hyper-personalized, multi-channel, and can be launched autonomously at scale.” This rising complexity in attacks further complicates an already dire landscape, as organizations grapple with high breach rates.

Despite these challenges, respondents believe that the maturity of AI-enhanced attacks is still developing. Specifically, 28% indicate that AI-generated phishing is just beginning to emerge, alongside 25% who feel the same about deepfake audio attacks, and another 28% who consider deepfake video attacks to be in nascent stages. In essence, organizations are facing alarming breach rates with threats that have not yet reached their full potential.

Traditional indicators that employees and security systems previously relied on—such as grammar errors, suspicious sender addresses, and generic language—have been rendered ineffective by AI. Cybercriminals can now craft meticulous attacks in any language, with personalization occurring at scale. These attacks are increasingly delivered via multiple channels, including email, phone, video, and collaboration platforms.
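To make the point concrete, here is a toy sketch of the kind of rule-based "legacy" heuristic the article describes. The rule names, token lists, and scoring are invented for illustration and do not come from any real product; a fluent, personalized AI-written lure simply trips none of the classic red flags.

```python
# Illustrative only: a toy "legacy" phishing heuristic of the kind the
# article says AI-generated attacks now defeat. All rules and thresholds
# here are hypothetical, chosen to mirror the signals named in the text.

GENERIC_GREETINGS = ("dear customer", "dear user", "dear sir/madam")
SUSPICIOUS_TOKENS = ("urgnet", "acount", "verifcation")  # crude typo list

def legacy_phishing_score(sender_domain: str, expected_domain: str, body: str) -> int:
    """Count classic red flags: sender mismatch, generic greeting, typos."""
    text = body.lower()
    score = 0
    if sender_domain != expected_domain:   # suspicious sender address
        score += 1
    if any(text.startswith(g) for g in GENERIC_GREETINGS):  # generic language
        score += 1
    if any(tok in text for tok in SUSPICIOUS_TOKENS):       # grammar/spelling errors
        score += 1
    return score  # 0 means "looks clean" to this filter

# A fluent, personalized lure trips none of the rules and scores 0:
lure = "Hi Dana, following up on the Q3 vendor invoice we discussed Tuesday."
print(legacy_phishing_score("acme-corp.com", "acme-corp.com", lure))

# An old-style clumsy lure is caught easily:
crude = "Dear customer, your acount needs urgnet verifcation."
print(legacy_phishing_score("acme-c0rp.com", "acme-corp.com", crude))
```

The asymmetry is the article's point: every signal the filter checks is one that generative AI removes at no cost to the attacker, while the malicious intent of the message is untouched.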

The research highlights a “perfect storm” of vulnerability for finance departments, which are viewed as the highest-priority target for threat actors. A significant 59% of organizations classify finance teams as “high” or “extreme” priority targets, while the same percentage expresses substantial concern about these teams’ readiness to defend against trust-based attacks. Audian Paxson, Principal Technical Strategist at IRONSCALES, emphasized, “Finance teams control the money, so they’re priority number one for attackers. But cybersecurity leaders report the lowest confidence in these teams’ ability to spot sophisticated BEC and impersonation scams. That gap is getting exploited daily.”

Moreover, more than a third of organizations reported that threat actors successfully impersonated trusted vendors to steal funds or information over the past year, and 13% of respondents noted major year-over-year growth in such incidents.

Perhaps most alarmingly, nearly one in five security leaders stated that security awareness training has proven ineffective against AI-enhanced threats. Current training methods aimed at preparing employees to detect trust-exploiting attacks are failing many organizations, particularly when it comes to deepfake audio and video. Only 38% of respondents rated their training as effective for detecting deepfake audio attacks, 39% for deepfake video attacks, and 43% for AI-generated phishing.

“The legacy email protections are too blunt an instrument to recognize the subtle indicators of modern AI-powered attacks,” noted Sampson. “Organizations can no longer trust these legacy solutions to protect against threats that didn’t exist when they were designed.”

The growing crisis is prompting a reassessment of security strategies across organizations. The research found that 70% of organizations now consider detecting deepfake audio impersonation attacks "extremely important," the largest jump in priority of any threat category. Additionally, 70% are willing to integrate best-in-class point solutions to address existing gaps, 68% are open to changing vendors entirely, and 70% are prepared to replace their entire security technology stack.

The cost of inaction is becoming increasingly clear. Fifty-five percent of security leaders believe that failing to defend against trust-exploiting attacks significantly heightens the likelihood of data breaches, with the fallout extending to reduced productivity, compromised customer communications, and broader operational disruption. As organizations navigate this evolving threat landscape, the imperative for robust, future-ready security measures has never been more urgent.

Written By Rachel Torres

At AIPressa, my work focuses on exploring the paradox of AI in cybersecurity: it's both our best defense and our greatest threat. I've closely followed how AI systems detect vulnerabilities in milliseconds while attackers simultaneously use them to create increasingly sophisticated malware. My approach: explaining technical complexities in an accessible way without losing the urgency of the topic. When I'm not researching the latest AI-driven threats, I'm probably testing security tools or reading about the next attack vector keeping CISOs awake at night.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.