AI-Driven Impersonation Attacks Hit 88% of Organizations, With Deepfakes the Top Concern

88% of organizations reported at least one AI-driven impersonation incident over the past year, with deepfakes the top concern for 19.5% of cybersecurity leaders, highlighting urgent vulnerabilities.

AI-driven impersonation attacks are on the rise, becoming increasingly sophisticated as technological advancements continue. A recent survey conducted by cybersecurity solutions provider Ironscales highlights a significant uptick in these threats, with a staggering 88% of organizations reporting at least one security incident related to AI-driven fakery over the past year.

Among the various roles, finance professionals emerged as the prime targets for such attacks, with 50% of respondents expressing high concern about their vulnerability. IT staff followed closely at 46.9%, and 38.3% of respondents voiced similar concern about HR employees. The data underscore a growing trend: the more personalized and targeted these attacks become, the harder they are to detect.

The survey indicates that impersonation tactics are evolving, with 39.1% of respondents noting a significant or moderate increase in AI-generated, highly personalized attacks aimed at employees. Other forms of impersonation are also on the rise: 23.6% of respondents reported an increase in vendor imitation and 32.7% in deepfake audio. Concurrently, 31.2% reported a rise in misleading social media posts about their companies, further complicating the detection of fraudulent activity.

As the sophistication of these attacks grows, the methods used have shifted dramatically. In the past, many phishing attempts featured poor grammar and obvious red flags, but modern techniques leverage AI to mimic trusted sources more convincingly. The Ironscales poll found that 60.9% of leaders believe it is becoming increasingly difficult to tell fact from fiction on social media, while 59.4% struggle to detect phishing emails. The challenge extends to verifying job applicants’ identities, with 57% expressing concern about the authenticity of applicants and 54.4% worried about fraudulent access to online meetings.

Deepfakes, in particular, have become a major concern. When leaders were asked what impersonation methods worried them most over the next year, deepfakes topped the list at 19.5%. This was followed by general impersonation at 18.8%, phishing at 13.3%, and other AI-related threats, illustrating a clear trend towards more complex and deceptive tactics.

Further analysis from Cybernews supports these findings, revealing that 179 out of 346 recorded “AI incidents” last year involved deepfake technology. In fraud cases specifically, deepfakes were responsible for 81% of incidents, highlighting the growing use of this technology in criminal activities.

The World Economic Forum also recently underscored this trend in its 2026 Global Cybersecurity Outlook, which surveyed 873 C-suite executives and cybersecurity leaders. The report found that 73% of respondents said they or someone in their network had experienced cyber-enabled fraud over the past year, primarily through phishing and related attacks. Payment fraud, identity theft, and employee-led fraud were among the other prevalent issues.

Leaders are increasingly aware of the risks posed by AI technologies. The survey showed that 87% of leaders perceived a heightened risk of AI-related vulnerabilities over the past year, with 77% noting an increase in cyber-enabled fraud and phishing threats. The perception of threats has also shifted: ransomware was seen as the chief concern in 2025, but cyber-enabled fraud and phishing emerged as the primary threats in 2026.

Chief Information Security Officers (CISOs) nonetheless continue to identify ransomware as a leading threat, and with good reason: ransomware attacks surged by 45% in 2025, with a record-breaking 1,004 incidents reported in December alone.

Despite these alarming trends, the report from the World Economic Forum indicated that many organizations are not standing idly by. There has been a noticeable increase in awareness and proactive measures against AI threats. The proportion of organizations with a security assessment process for AI tools rose from 37% in 2025 to 64% in the latest survey.

Furthermore, 77% of organizations are employing AI tools to bolster cybersecurity, with the majority using them for phishing and email threat detection. Other common applications include detecting and responding to intrusions and automating security operations.

“There are reasons for optimism,” the report concluded, emphasizing that organizations that integrate resilience into their leadership agendas and actively manage AI and supply chain risks are better positioned to navigate uncertainty. This shift towards intelligence-driven collaboration and regulatory harmonization may signal a more mature approach to collective defense against increasingly sophisticated cyber threats.

Written by Rachel Torres

At AIPressa, my work focuses on exploring the paradox of AI in cybersecurity: it's both our best defense and our greatest threat. I've closely followed how AI systems detect vulnerabilities in milliseconds while attackers simultaneously use them to create increasingly sophisticated malware. My approach: explaining technical complexities in an accessible way without losing the urgency of the topic. When I'm not researching the latest AI-driven threats, I'm probably testing security tools or reading about the next attack vector keeping CISOs awake at night.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.