Google’s AI Threat Report Reveals New Malware Leveraging Generative Tools for Cyberattacks

Google’s latest threat report reveals that malware like HONESTCUE and PROMPTSTEAL now leverages AI to enhance stealth and automate data theft, prompting urgent security reevaluations.

Google Threat Intelligence Group, in collaboration with Google DeepMind, has issued a stark warning about the operationalization of artificial intelligence by cybercriminals. Their latest assessment underscores that AI is not necessarily creating new types of attacks but is significantly enhancing the efficiency, precision, and reach of existing methods. The development is alarming because it marks a shift not in what attacks look like, but in how quickly and effectively they are executed.

The report indicates that large language models are facilitating a faster transition from concept to execution in cyberattacks. During the reconnaissance phase, adversaries can leverage publicly available data to create detailed profiles of executives, vendors, and technical vulnerabilities, allowing them to strategize their attacks with remarkable clarity.

In the realm of social engineering, AI is proving instrumental in generating convincing narratives, emails, and identities specifically tailored to deceive individual targets. Concurrently, malware developers are employing generative tools to refine exploits, investigate evasion strategies, and rapidly produce new variants, which pose a considerable challenge to traditional cybersecurity measures.

Researchers are also tracking the emergence of malware that consults AI services in real time during attacks. Such threats are capable of dynamically generating commands, altering their behavior on the fly, and accelerating data exfiltration, thus circumventing established detection mechanisms.

The 2026 report identifies several specific malware families utilizing AI APIs during execution to enhance their stealth and automate data theft. Among these, HONESTCUE operates as a downloader and launcher, using the Gemini API to dynamically create C# code in memory, facilitating its execution while evading static analysis tools. Meanwhile, PROMPTFLUX, a VBScript dropper, rewrites its source code hourly, creating a recursive cycle of mutation that helps it evade signature-based detection methods.

Another notable example is PROMPTSTEAL, an infostealer that queries large language models to generate system commands on demand, thus replacing traditional hard-coded instructions that are easily flagged. The COINBAIT phishing kit employs AI to clone cryptocurrency exchange interfaces with high visual fidelity, thereby enhancing its efficacy in harvesting user credentials. Additionally, Xanthorox has emerged as a dark web service marketed as a custom “offensive AI,” essentially a rebranded wrapper for legitimate commercial language models designed to bypass safety filters.

Activity associated with groups like APT28 highlights how automation is not only enhancing individual attacks but is also being employed to support broader geopolitical objectives by enabling sharper targeting and scalable intelligence gathering. A rising concern among cybersecurity experts is the tactic known as model distillation, where attackers flood systems with structured prompts in an attempt to replicate proprietary reasoning, essentially copying intelligence rather than merely data.

Google advises organizations to treat AI agents as legitimate identities, enforce stringent privilege management, and implement defenses capable of responding at machine speed. In a landscape where every millisecond counts, relying solely on manual security measures is no longer viable.

While AI is undeniably driving innovation across various sectors, it is simultaneously redefining the cybersecurity threat landscape. The implications of these developments necessitate a reevaluation of existing security frameworks to ensure they can withstand the increasingly sophisticated tactics employed by cyber adversaries.

Written by Rachel Torres

At AIPressa, my work focuses on exploring the paradox of AI in cybersecurity: it's both our best defense and our greatest threat. I've closely followed how AI systems detect vulnerabilities in milliseconds while attackers simultaneously use them to create increasingly sophisticated malware. My approach: explaining technical complexities in an accessible way without losing the urgency of the topic. When I'm not researching the latest AI-driven threats, I'm probably testing security tools or reading about the next attack vector keeping CISOs awake at night.

