
AI Cybercrime Surge: 1,200% Increase in 2025 Signals $40B Losses by 2027

AI-driven scams surged 1,200% in 2025, prompting projections of $40 billion in losses by 2027 as cyber criminals exploit advanced techniques.

In early 2026, cyber security researchers at Google identified a troubling new tactic in the evolving landscape of cyber crime: hackers chaining AI-powered tools into attacks that are increasingly difficult to defend against. Using Google’s Gemini AI, these attackers developed new tooling, conducted operational research, and enhanced reconnaissance, paving the way for sophisticated scams built on AI deepfakes. In one notable case, a group linked to North Korea used an AI-generated deepfake of a prominent CEO to trick a victim into compromising their own computer security.

This emerging attack method is part of what some experts are calling the fifth wave of cyber crime, which has led to unprecedented levels of scams, cyber attacks, and financial losses. The weaponization of AI has transformed traditionally human skills—such as persuasion, mimicry, and coding—into highly effective tools that are accessible on demand and can be tailored for specific targets. Consequently, this surge in AI-driven tactics has made the internet more perilous than ever.

Social engineering attacks, such as phishing, have existed for decades, but the introduction of generative AI tools has enabled a new level of personalization. Attackers can now create hyper-realistic impersonation attempts, mimicking the voices and appearances of friends, family members, or colleagues with remarkable accuracy. These tactics manifest in various forms, including realistic email scams, synthetic voice calls, and deepfake personas appearing on video calls. “AI-powered social engineering is alarmingly effective,” said Brian Sibley, chief technology officer at IT consultancy firm Espria, in a recent interview. “Attackers can now mimic colleagues, suppliers, or executives with near-perfect accuracy. The only effective defence is to monitor behaviour continuously, spotting the subtle indicators that something just isn’t right.”

A report from cyber security firm Group-IB revealed that cyber criminals could obtain phishing kits on the dark web for as little as the price of a Netflix subscription. These “synthetic identity kits” include AI video actors, cloned voices, and even biometric datasets. “From the frontlines of cyber crime, AI is giving criminals unprecedented reach,” stated Group-IB CEO Dmitry Volkov. “AI is enabling criminals to scale scams with ease and create hyper-personalisation and social engineering to a new standard.”

One notable example of AI’s impact on social engineering is the so-called “pig butchering” scams. In these schemes, criminals spend weeks or even months building an emotional connection with their targets. This period, referred to as “fattening the pig,” establishes trust, making victims less skeptical when presented with fake investment opportunities. The fraudster then “slaughters” the pig by vanishing with the funds. Generative AI has transformed this tactic from a niche fraud method into a significant source of deception, with scammers initiating contact through messaging apps, social media platforms, or dating sites. They often use applications like ChatGPT to nurture these relationships. Additionally, technologies such as face-swapping or deepfakes are employed to convince victims they are engaging with a genuine love interest, allowing criminals to lure individuals regardless of language barriers or technical skills.

Cyber criminals have also discovered a novel way to leverage artificial intelligence for spreading malware, a type of malicious software designed to steal data or damage computer systems. The latest iteration of this malware utilizes large language models (LLMs) like Google’s Gemini to mutate its code in real-time as it spreads, rendering it nearly invisible to traditional antivirus solutions. In a threat intelligence report released in November, Google researchers characterized this development as a “new operational phase of AI abuse, involving tools that dynamically alter behaviour mid-execution.” The autonomous malware, referred to as Promptflux, employs a “Thinking Robot” function that enables it to rewrite its entire source code hourly to evade detection. While Promptflux is still likely in research and development stages, this obfuscation technique signals how malicious operators will likely enhance their campaigns with AI in the future.
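Detection of this kind of abuse often starts with coarse network telemetry. As a minimal sketch, assuming a hypothetical feed of (process, destination host) pairs, the snippet below flags unexpected processes calling generative-AI API endpoints; the host list and process allowlist are illustrative assumptions, not a vetted detection rule for Promptflux or any real malware family:

```python
# Hypothetical defensive sketch: flag processes contacting generative-AI
# API endpoints, a coarse signal for LLM-assisted malware like the
# Promptflux family described above. Hosts and allowlist are illustrative.

LLM_API_HOSTS = {
    "generativelanguage.googleapis.com",  # Gemini API
    "api.openai.com",
}

# Processes expected to talk to these APIs in this (assumed) environment.
EXPECTED_CLIENTS = {"chrome.exe", "python.exe"}

def suspicious_llm_calls(connections):
    """Return (process, host) pairs where an unexpected process calls an LLM API."""
    return [
        (proc, host)
        for proc, host in connections
        if host in LLM_API_HOSTS and proc not in EXPECTED_CLIENTS
    ]

events = [
    ("chrome.exe", "api.openai.com"),
    ("svch0st.exe", "generativelanguage.googleapis.com"),
]
print(suspicious_llm_calls(events))  # → [('svch0st.exe', 'generativelanguage.googleapis.com')]
```

In practice such a rule would feed an EDR or SIEM pipeline rather than a print statement, and self-mutating code like Promptflux is precisely why defenders lean on behavioural signals (who is calling what) instead of static signatures.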

As cyber criminals rapidly adopt AI tools into their strategies, those tasked with defending against these attacks find themselves lagging behind. Research from cyber security firm Vectra AI indicated that AI-driven scams surged by 1,200 percent in 2025, a trend expected to continue into 2026. By 2027, projected losses from AI-driven fraud could reach $40 billion, up from $16.6 billion in 2024. Former Interpol Director of Cybercrime Craig Jones warned that AI has significantly accelerated the speed, scale, and sophistication of cyber criminal operations. “AI has industrialised cyber crime,” he noted, emphasizing that this shift marks a new era where speed, volume, and advanced impersonation methods fundamentally alter how crime is perpetrated and how challenging it has become to prevent it.
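The projected climb from $16.6 billion in 2024 to $40 billion in 2027 implies annual growth of roughly a third, a figure derived here for illustration from the article's numbers:

```python
# Implied compound annual growth rate behind the $16.6B (2024) -> $40B (2027)
# loss projection cited above. The endpoint figures come from the article;
# the growth rate itself is derived here for illustration.

def implied_cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate taking `start` to `end` over `years` years."""
    return (end / start) ** (1 / years) - 1

rate = implied_cagr(16.6, 40.0, 3)
print(f"Implied annual growth: {rate:.1%}")  # roughly 34% per year
```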

Written By Rachel Torres



© 2025 AIPressa · Part of Buzzora Media · All rights reserved.