AI Cybersecurity

State-Sponsored Hackers Use AI Tools Like Google’s Gemini to Enhance Cyberattacks

State-sponsored hackers from Iran, North Korea, China, and Russia are now leveraging Google’s Gemini AI to enhance cyberattacks, complicating enterprise defenses across sectors.

State-sponsored hackers are increasingly using advanced artificial intelligence tools, including Google’s Gemini, to enhance their cyberattack strategies. According to Google’s Threat Intelligence Group (GTIG), actors primarily from Iran, North Korea, China, and Russia have integrated AI into multiple stages of the attack lifecycle, including reconnaissance, social engineering, and malware development. The findings appear in GTIG’s quarterly report on the evolving cyber threat landscape, covering the fourth quarter of 2025.

For state-sponsored actors, the report notes, large language models have become core operational tools: “For government-backed threat actors, large language models have become essential tools for technical research, targeting, and the rapid generation of nuanced phishing lures,” GTIG researchers wrote.

Among the groups leveraging these tools is the Iranian hacking group APT42, which has reportedly used Gemini to sharpen its reconnaissance and targeted social engineering. The group created seemingly legitimate email addresses and conducted thorough research to build credible narratives for engaging its targets. By crafting believable personas and scenarios, and varying its natural language to sidestep traditional phishing detection, APT42 improved its chances of success.

Similarly, North Korean actor UNC2970, which specializes in targeting the defense sector and impersonating corporate recruiters, has used Gemini to profile high-value targets. The group’s reconnaissance efforts included gathering information about major cybersecurity and defense firms, mapping specific technical job roles, and even collecting salary data. GTIG remarked that this activity blurs the line between routine professional research and malicious intent, as the group assembles highly tailored phishing personas.

In addition to operational misuse, the report highlighted a rise in model extraction attempts, or “distillation attacks,” aimed at stealing intellectual property from AI models. One notable campaign directed more than 100,000 prompts at Gemini in an apparent effort to elicit and replicate its reasoning capabilities in non-English languages, broadening the potential reach of such attacks.
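The mechanics behind such a distillation attack mirror ordinary knowledge distillation from the machine learning literature: the attacker queries the target model at scale, records its outputs, and trains a smaller “student” model to imitate them. The toy sketch below illustrates that loop with a simple linear classifier standing in for the target; every name and number here is illustrative, not taken from the GTIG report.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- "Teacher": a black-box model the attacker can only query. ---
# A toy linear classifier stands in for a proprietary model; in a real
# extraction campaign these queries would go to a hosted API instead.
TEACHER_W = np.array([1.5, -2.0, 0.7])

def query_teacher(x: np.ndarray) -> int:
    """Return only the teacher's decision, as an API would."""
    return int(x @ TEACHER_W > 0)

# --- Step 1: harvest labeled outputs at scale (the "100,000 prompts"). ---
queries = rng.normal(size=(100_000, 3))
labels = np.array([query_teacher(x) for x in queries], dtype=float)

# --- Step 2: fit a "student" on the harvested pairs (distillation). ---
# Plain logistic regression via gradient descent; the student never sees
# the teacher's weights, only its input/output behavior.
w = np.zeros(3)
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(queries @ w)))
    w -= 0.1 * (queries.T @ (p - labels)) / len(labels)

# The student now agrees with the teacher on nearly all fresh inputs.
test = rng.normal(size=(1_000, 3))
agreement = np.mean([query_teacher(x) == int(x @ w > 0) for x in test])
print(f"student/teacher agreement: {agreement:.1%}")
```

Detection, as described below, hinges on spotting exactly this kind of high-volume, systematically structured querying.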

While GTIG has not observed direct attacks on its advanced models by persistent threat actors, it has identified numerous model extraction attempts by private-sector entities globally, as well as by researchers aiming to clone proprietary logic. Google says its systems detected these attacks in real time and that it has implemented defenses to protect the model’s internal reasoning processes.

Furthermore, the report highlighted the emergence of AI-integrated malware, identified as HONESTCUE, which uses Gemini’s API to generate its functionality. The malware employs a complex obfuscation technique to evade traditional network detection. Operating as a downloader and launcher framework, HONESTCUE sends prompts to Gemini’s API and receives C# source code in response; a second stage compiles and executes the returned payloads directly in memory, leaving no trace on disk.

Another threat identified by GTIG was COINBAIT, a phishing kit likely accelerated by AI code-generation tools. Built with the AI-powered platform Lovable AI, the kit masquerades as a major cryptocurrency exchange to harvest credentials.

A novel social engineering campaign, observed in December 2025, saw threat actors exploit the public sharing features of generative AI services, including Gemini and ChatGPT, to host deceptive content. The attackers embedded malicious command-line scripts within seemingly benign instructions for common computer tasks, in effect using the AI services themselves to distribute the ATOMIC malware targeting macOS systems.

In underground marketplaces, GTIG found persistent demand for AI-enabled tools and services. State-sponsored hackers and cybercriminals alike, however, often struggle to develop their own AI models and instead rely on commercial products accessed through stolen credentials. One kit, dubbed Xanthorox, was marketed as a custom AI for autonomous malware generation but was in fact built on several commercial AI products, including Gemini.

In response to these escalating threats, Google has taken proactive measures against identified malicious actors by disabling accounts and assets associated with harmful activities. The company has also enhanced its defensive intelligence, improving classifiers and models to mitigate future misuse. “We are committed to developing AI boldly and responsibly, which means taking proactive steps to disrupt malicious activities,” the report stated.

As the landscape of cyber threats continues to evolve, the findings serve as a critical reminder for enterprise security teams—particularly in the Asia-Pacific region, where active threats from Chinese and North Korean state-sponsored actors persist—to strengthen defenses against AI-augmented social engineering and reconnaissance operations. The intersection of AI and cybersecurity will likely remain a focal point as both defenders and attackers vie for the upper hand in this rapidly changing environment.

Written by Rachel Torres

At AIPressa, my work focuses on exploring the paradox of AI in cybersecurity: it's both our best defense and our greatest threat. I've closely followed how AI systems detect vulnerabilities in milliseconds while attackers simultaneously use them to create increasingly sophisticated malware. My approach: explaining technical complexities in an accessible way without losing the urgency of the topic. When I'm not researching the latest AI-driven threats, I'm probably testing security tools or reading about the next attack vector keeping CISOs awake at night.
