In early 2026, cyber security researchers at Google identified a troubling new tactic in the evolving landscape of cyber crime. Hackers began employing a combination of AI-powered tools to create traps that are increasingly difficult to defend against. Utilizing Google’s Gemini AI, these attackers developed new tools, conducted operational research, and enhanced reconnaissance efforts, paving the way for sophisticated scams involving AI deepfakes. In one notable case, a group linked to North Korea used an AI-generated deepfake of a prominent CEO to deceive a victim into compromising their computer security.
This emerging attack method is part of what some experts are calling the fifth wave of cyber crime, which has led to unprecedented levels of scams, cyber attacks, and financial losses. The weaponization of AI has transformed traditionally human skills—such as persuasion, mimicry, and coding—into highly effective tools that are accessible on demand and can be tailored for specific targets. Consequently, this surge in AI-driven tactics has made the internet more perilous than ever.
Social engineering attacks, such as phishing, have existed for decades, but the introduction of generative AI tools has enabled a new level of personalization. Attackers can now create hyper-realistic impersonation attempts, mimicking the voices and appearances of friends, family members, or colleagues with remarkable accuracy. These tactics manifest in various forms, including realistic email scams, synthetic voice calls, and deepfake personas appearing on video calls. “AI-powered social engineering is alarmingly effective,” said Brian Sibley, chief technology officer at IT consultancy firm Espria, in a recent interview. “Attackers can now mimic colleagues, suppliers, or executives with near-perfect accuracy. The only effective defence is to monitor behaviour continuously, spotting the subtle indicators that something just isn’t right.”
A report from cyber security firm Group-IB revealed that cyber criminals could obtain phishing kits on the dark web for as little as the price of a Netflix subscription. These “synthetic identity kits” include AI video actors, cloned voices, and even biometric datasets. “From the frontlines of cyber crime, AI is giving criminals unprecedented reach,” stated Group-IB CEO Dmitry Volkov. “AI is enabling criminals to scale scams with ease and create hyper-personalisation and social engineering to a new standard.”
One notable example of AI’s impact on social engineering is the so-called “pig butchering” scams. In these schemes, criminals spend weeks or even months building an emotional connection with their targets. This period, referred to as “fattening the pig,” establishes trust, making victims less sceptical when presented with fake investment opportunities. The fraudster then “slaughters” the pig by vanishing with the funds. Generative AI has transformed this tactic from a niche fraud method into a significant source of deception, with scammers initiating contact through messaging apps, social media platforms, or dating sites. They often use applications like ChatGPT to nurture these relationships. Additionally, technologies such as face-swapping or deepfakes are employed to convince victims they are engaging with a genuine love interest, allowing criminals to lure individuals regardless of language barriers or technical skills.
Cyber criminals have also discovered a novel way to leverage artificial intelligence for spreading malware, malicious software designed to steal data or damage computer systems. The latest iteration of this malware uses large language models (LLMs) like Google’s Gemini to mutate its code in real time as it spreads, rendering it nearly invisible to traditional antivirus solutions. In a threat intelligence report released in November, Google researchers characterized this development as a “new operational phase of AI abuse, involving tools that dynamically alter behaviour mid-execution.” The autonomous malware, referred to as Promptflux, employs a “Thinking Robot” function that enables it to rewrite its entire source code hourly to evade detection. While Promptflux is still likely in the research and development stage, this obfuscation technique signals how malicious operators are likely to enhance their campaigns with AI in the future.
As cyber criminals rapidly adopt AI tools into their strategies, those tasked with defending against these attacks find themselves lagging behind. Research from cyber security firm Vectra AI indicated that AI-driven scams surged by 1,200 percent in 2025, a trend expected to continue into 2026. By 2027, projected losses from AI-driven fraud could reach $40 billion, up from $16.6 billion in 2024. Former Interpol Director of Cybercrime Craig Jones warned that AI has significantly accelerated the speed, scale, and sophistication of cyber criminal operations. “AI has industrialised cyber crime,” he noted, emphasizing that this shift marks a new era where speed, volume, and advanced impersonation methods fundamentally alter how crime is perpetrated and how challenging it has become to prevent it.
See also
Anthropic’s Claims of AI-Driven Cyberattacks Raise Industry Skepticism
Anthropic Reports AI-Driven Cyberattack Linked to Chinese Espionage
Quantum Computing Threatens Current Cryptography, Experts Seek Solutions
Anthropic’s Claude AI Exploited in Significant Cyber-Espionage Operation
AI Poisoning Attacks Surge 40%: Businesses Face Growing Cybersecurity Risks