
AI Cybercrime Surge: 1,200% Increase in 2025 Signals $40B Losses by 2027

AI-driven scams surged 1,200% in 2025, prompting projections of $40 billion in losses by 2027 as cyber criminals exploit advanced techniques.

In early 2026, cyber security researchers at Google identified a troubling new tactic in the evolving landscape of cyber crime. Hackers began employing a combination of AI-powered tools to create traps that are increasingly difficult to defend against. Utilizing Google’s Gemini AI, these attackers developed new tools, conducted operational research, and enhanced reconnaissance efforts, paving the way for sophisticated scams involving AI deepfakes. In one notable case, a group linked to North Korea used an AI-generated deepfake of a prominent CEO to deceive a victim into compromising their computer security.

This emerging attack method is part of what some experts are calling the fifth wave of cyber crime, which has led to unprecedented levels of scams, cyber attacks, and financial losses. The weaponization of AI has transformed traditionally human skills—such as persuasion, mimicry, and coding—into highly effective tools that are accessible on demand and can be tailored for specific targets. Consequently, this surge in AI-driven tactics has made the internet more perilous than ever.

Social engineering attacks, such as phishing, have existed for decades, but the introduction of generative AI tools has enabled a new level of personalization. Attackers can now create hyper-realistic impersonation attempts, mimicking the voices and appearances of friends, family members, or colleagues with remarkable accuracy. These tactics manifest in various forms, including realistic email scams, synthetic voice calls, and deepfake personas appearing on video calls. “AI-powered social engineering is alarmingly effective,” said Brian Sibley, chief technology officer at IT consultancy firm Espria, in a recent interview. “Attackers can now mimic colleagues, suppliers, or executives with near-perfect accuracy. The only effective defence is to monitor behaviour continuously, spotting the subtle indicators that something just isn’t right.”
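The continuous behaviour monitoring Sibley describes can be illustrated with a toy statistical check. The sketch below is purely illustrative (not any vendor's actual product): it scores how far an observed value, such as a user's weekly count of payment requests, deviates from that user's own baseline, flagging the kind of "subtle indicator that something just isn't right" an impersonation attack might produce.

```python
from statistics import mean, stdev

def anomaly_score(baseline: list[float], observed: float) -> float:
    """Z-score of an observed value against a per-user baseline.

    `baseline` might be a user's typical number of wire-transfer
    requests per week; a high score flags behaviour worth review.
    """
    mu = mean(baseline)
    sigma = stdev(baseline)
    if sigma == 0:
        return 0.0 if observed == mu else float("inf")
    return abs(observed - mu) / sigma

def is_suspicious(baseline: list[float], observed: float,
                  threshold: float = 3.0) -> bool:
    """Flag observations more than `threshold` deviations from baseline."""
    return anomaly_score(baseline, observed) >= threshold

# A user who normally makes one or two payment requests a week
# suddenly makes nine after a convincing "executive" video call.
history = [1, 2, 1, 1, 2, 1, 2, 1]
print(is_suspicious(history, 9))  # the spike is flagged
```

Real deployments layer many such signals (login times, devices, communication patterns) rather than a single z-score, but the principle is the same: the impersonation may be near-perfect, while the behaviour it drives is not.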

A report from cyber security firm Group-IB revealed that cyber criminals could obtain phishing kits on the dark web for as little as the price of a Netflix subscription. These “synthetic identity kits” include AI video actors, cloned voices, and even biometric datasets. “From the frontlines of cyber crime, AI is giving criminals unprecedented reach,” stated Group-IB CEO Dmitry Volkov. “AI is enabling criminals to scale scams with ease and create hyper-personalisation and social engineering to a new standard.”

One notable example of AI’s impact on social engineering is the so-called “pig butchering” scam. In these schemes, criminals spend weeks or even months building an emotional connection with their targets. This period, referred to as “fattening the pig,” establishes trust, making victims less skeptical when presented with fake investment opportunities. The fraudster then “slaughters” the pig by vanishing with the funds. Generative AI has transformed this tactic from a niche fraud method into a major source of deception: scammers initiate contact through messaging apps, social media platforms, or dating sites, often using applications like ChatGPT to nurture the relationship. Face-swapping and deepfake technologies are then employed to convince victims they are engaging with a genuine love interest, letting criminals run convincing lures across language barriers and without advanced technical skills.

Cyber criminals have also discovered a novel way to leverage artificial intelligence for spreading malware, a type of malicious software designed to steal data or damage computer systems. The latest iteration of this malware utilizes large language models (LLMs) like Google’s Gemini to mutate its code in real-time as it spreads, rendering it nearly invisible to traditional antivirus solutions. In a threat intelligence report released in November, Google researchers characterized this development as a “new operational phase of AI abuse, involving tools that dynamically alter behaviour mid-execution.” The autonomous malware, referred to as Promptflux, employs a “Thinking Robot” function that enables it to rewrite its entire source code hourly to evade detection. While Promptflux is still likely in research and development stages, this obfuscation technique signals how malicious operators will likely enhance their campaigns with AI in the future.

As cyber criminals rapidly adopt AI tools into their strategies, those tasked with defending against these attacks find themselves lagging behind. Research from cyber security firm Vectra AI indicated that AI-driven scams surged by 1,200 percent in 2025, a trend expected to continue into 2026. By 2027, projected losses from AI-driven fraud could reach $40 billion, up from $16.6 billion in 2024. Former Interpol Director of Cybercrime Craig Jones warned that AI has significantly accelerated the speed, scale, and sophistication of cyber criminal operations. “AI has industrialised cyber crime,” he noted, emphasizing that this shift marks a new era where speed, volume, and advanced impersonation methods fundamentally alter how crime is perpetrated and how challenging it has become to prevent it.
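As a quick sanity check on the cited figures, growth from $16.6 billion in 2024 to a projected $40 billion in 2027 implies a compound annual growth rate of roughly 34 percent, which this short calculation confirms:

```python
def implied_cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate implied by two endpoint values."""
    return (end / start) ** (1 / years) - 1

# Figures from the article: $16.6B in losses in 2024, projected $40B by 2027.
rate = implied_cagr(16.6, 40.0, 2027 - 2024)
print(f"{rate:.1%}")  # roughly 34% per year
```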

Written by Rachel Torres
At AIPressa, my work focuses on exploring the paradox of AI in cybersecurity: it's both our best defense and our greatest threat. I've closely followed how AI systems detect vulnerabilities in milliseconds while attackers simultaneously use them to create increasingly sophisticated malware. My approach: explaining technical complexities in an accessible way without losing the urgency of the topic. When I'm not researching the latest AI-driven threats, I'm probably testing security tools or reading about the next attack vector keeping CISOs awake at night.

