AI-driven impersonation attacks are on the rise and growing more sophisticated as the underlying technology advances. A recent survey by cybersecurity solutions provider Ironscales highlights a significant uptick in these threats: a staggering 88% of organizations reported at least one security incident related to AI-driven fakery over the past year.
Among the various sectors, finance professionals emerged as the prime targets for such attacks, with 50% of respondents expressing high concern about their vulnerability. Concern for IT staff followed closely at 46.9%, while 38.3% flagged HR employees as at risk. The data underscore a growing trend: the more personalized and targeted these attacks become, the harder they are to detect.
The survey indicates that impersonation tactics are evolving, with 39.1% of respondents noting a significant or moderate increase in AI-generated, highly personalized attacks aimed at employees. Other forms of impersonation are also on the rise: 23.6% of respondents observed more vendor imitation and 32.7% more deepfake audio. Concurrently, 31.2% reported a rise in misleading social media posts about their companies, further complicating the detection of fraudulent activity.
As these attacks grow more sophisticated, the methods behind them have shifted dramatically. In the past, many phishing attempts featured poor grammar and obvious red flags, but modern techniques leverage AI to mimic trusted sources far more convincingly. The Ironscales poll found that 60.9% of leaders believe it is becoming increasingly difficult to tell fact from fiction on social media, while 59.4% struggle to detect phishing emails. The challenge extends to identity verification: 57% expressed concern about the authenticity of job applicants, and 54.4% worried about fraudulent access to online meetings.
Deepfakes, in particular, have become a major concern. When leaders were asked what impersonation methods worried them most over the next year, deepfakes topped the list at 19.5%. This was followed by general impersonation at 18.8%, phishing at 13.3%, and other AI-related threats, illustrating a clear trend towards more complex and deceptive tactics.
Further analysis from Cybernews supports these findings, revealing that 179 out of 346 recorded “AI incidents” last year involved deepfake technology. In fraud cases specifically, deepfakes were responsible for 81% of incidents, highlighting the growing use of this technology in criminal activities.
The World Economic Forum recently underscored this trend in its 2026 Global Cybersecurity Outlook, which surveyed 873 C-suite executives and cybersecurity leaders. The report found that 73% of respondents said they or someone in their network had experienced cyber-enabled fraud over the past year, primarily through phishing and related attacks. Payment fraud, identity theft, and employee-led fraud were among the other prevalent issues.
Leaders are increasingly aware of the risks posed by AI technologies. The survey showed that 87% of leaders perceived a heightened risk of AI-related vulnerabilities over the past year, with 77% noting an increase in cyber-enabled fraud and phishing threats. Notably, the threat landscape has shifted: ransomware was seen as the chief concern in 2025, while cyber-enabled fraud and phishing emerged as the primary threats in 2026.
Chief Information Security Officers (CISOs) nonetheless continue to identify ransomware as a leading threat, and their concern is justified: ransomware attacks surged by 45% in 2025, with a record-breaking 1,004 incidents reported in December alone.
Despite these alarming trends, the report from the World Economic Forum indicated that many organizations are not standing idly by. There has been a noticeable increase in awareness and proactive measures against AI threats. The proportion of organizations with a security assessment process for AI tools rose from 37% in 2025 to 64% in the latest survey.
Furthermore, 77% of organizations are employing AI tools to bolster cybersecurity, with the majority using them for phishing and email threat detection. Other common applications include detecting and responding to intrusions and automating security operations.
“There are reasons for optimism,” the report concluded, emphasizing that organizations that integrate resilience into their leadership agendas and actively manage AI and supply chain risks are better positioned to navigate uncertainty. This shift towards intelligence-driven collaboration and regulatory harmonization may signal a more mature approach to collective defense against increasingly sophisticated cyber threats.
See also
Anthropic’s Claims of AI-Driven Cyberattacks Raise Industry Skepticism
Anthropic Reports AI-Driven Cyberattack Linked to Chinese Espionage
Quantum Computing Threatens Current Cryptography, Experts Seek Solutions
Anthropic’s Claude AI exploited in significant cyber-espionage operation
AI Poisoning Attacks Surge 40%: Businesses Face Growing Cybersecurity Risks