Social engineering has solidified its position as the primary initial access vector for cyberattacks in 2025, bolstered by advancements in artificial intelligence (AI), according to a report from ThreatDown. Researchers caution that AI is poised to become a fundamental component of social engineering tactics throughout 2026, raising concerns about the evolving landscape of cyber threats.
“Deepfake voice, image, and video impersonation now requires minimal expertise and only a handful of reference images or seconds of audio,” the report notes, highlighting the accessibility of sophisticated tools that enable attackers to easily replicate individuals’ identities.
Criminals are leveraging these capabilities across a variety of malicious activities. These range from creating fabricated identities for financial fraud to impersonating IT or helpdesk personnel in order to trick employees into divulging passwords, resetting multi-factor authentication (MFA), or approving unauthorized remote access. Additionally, executives are being impersonated in increasingly convincing CEO fraud schemes. ThreatDown predicts that AI-driven social engineering operations will scale significantly throughout 2026, likely emerging as the predominant form of social engineering employed by attackers.
AI’s impact on phishing attacks has already been observed, with threat actors utilizing generative AI tools to craft realistic phishing emails devoid of typos, even when the attacker lacks proficiency in the target’s language. “Phishing campaigns used familiar brands and believable lures like secure document downloads,” the report states, indicating that attackers are becoming more adept at creating polished and convincingly personalized messages at scale.
Attackers have also adopted straightforward reconnaissance techniques, such as checking a target domain’s MX records, to serve victims counterfeit versions of legitimate login screens for platforms like Google or OneDrive, branded with the victims’ own domains. In certain instances, victims were redirected to their authentic inboxes after their credentials were harvested, a tactic designed to minimize suspicion.
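The MX-record check described above can be sketched in a few lines, and defenders can run the same lookup to understand what attackers see about their domain. This is a minimal illustration, not any tool named in the report: the `classify_mx_host` function is a hypothetical name, and the matching relies on the providers’ published default MX hostnames (`aspmx.l.google.com` for Google Workspace, `*.mail.protection.outlook.com` for Microsoft 365).

```python
def classify_mx_host(mx_host: str) -> str:
    """Map an MX hostname to the mail provider it implies.

    Whoever resolves a domain's MX records can infer which login page
    users of that domain expect to see: Google Workspace domains point
    at google.com mail exchangers, Microsoft 365 domains at
    mail.protection.outlook.com, and so on. Phishing kits use this to
    pick which counterfeit sign-in page to serve.
    """
    host = mx_host.lower().rstrip(".")  # normalize trailing dot from DNS
    if host.endswith("google.com") or host.endswith("googlemail.com"):
        return "Google Workspace"
    if host.endswith("mail.protection.outlook.com"):
        return "Microsoft 365"
    return "unknown"  # no match: a kit would fall back to a generic lure

# The providers' published default MX suffixes:
print(classify_mx_host("aspmx.l.google.com."))                      # Google Workspace
print(classify_mx_host("example-com.mail.protection.outlook.com"))  # Microsoft 365
```

A real lookup step could use a resolver library such as dnspython (`dns.resolver.resolve(domain, "MX")`); it is omitted here to keep the sketch self-contained and offline.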
To counteract these growing threats, AI-powered security awareness training is being promoted as an essential measure for organizations. Such training encourages employees to treat unexpected requests with healthy skepticism, helping them recognize evolving social engineering attacks. KnowBe4 has positioned itself as a leader in this domain, providing tools to strengthen security culture and lower human risk. The company claims that over 70,000 organizations worldwide have integrated its HRM+ platform to bolster their defenses against cyber threats.
The implications of these trends are significant for businesses, as the risks associated with AI-driven social engineering attacks continue to grow. As organizations increasingly rely on digital infrastructures, the importance of robust cybersecurity measures becomes paramount. Companies must adapt to this changing landscape, recognizing that the sophistication of threats will likely escalate alongside advancements in technology.
Looking ahead, it is clear that both the methods used by cybercriminals and the tools available for defense will continue to evolve. Businesses must remain vigilant and proactive in their cybersecurity strategies, ensuring that they are equipped to respond to the challenges presented by AI-enhanced social engineering attacks.