Google’s AI Threat Report Reveals New Malware Leveraging Generative Tools for Cyberattacks

Google’s latest threat report reveals that malware like HONESTCUE and PROMPTSTEAL now leverages AI to enhance stealth and automate data theft, prompting urgent security reevaluations.

Google Threat Intelligence Group, in collaboration with Google DeepMind, has issued a stark warning about the operationalization of artificial intelligence by cybercriminals. Their latest assessment underscores that AI is not necessarily creating new categories of attack but is significantly enhancing the efficiency, precision, and reach of existing methods: a shift in how threats are executed rather than in what they are.

The report indicates that large language models are compressing the timeline from concept to execution in cyberattacks. During the reconnaissance phase, adversaries can leverage publicly available data to build detailed profiles of executives, vendors, and technical vulnerabilities, allowing them to plan attacks with unusual precision.

In the realm of social engineering, AI is proving instrumental in generating convincing narratives, emails, and identities specifically tailored to deceive individual targets. Concurrently, malware developers are employing generative tools to refine exploits, investigate evasion strategies, and rapidly produce new variants, which pose a considerable challenge to traditional cybersecurity measures.

Researchers are also tracking the emergence of malware that consults AI services in real time during attacks. Such threats are capable of dynamically generating commands, altering their behavior on the fly, and accelerating data exfiltration, thus circumventing established detection mechanisms.

The 2026 report identifies several specific malware families utilizing AI APIs during execution to enhance their stealth and automate data theft. Among these, HONESTCUE operates as a downloader and launcher, using the Gemini API to dynamically create C# code in memory, facilitating its execution while evading static analysis tools. Meanwhile, PROMPTFLUX, a VBScript dropper, rewrites its source code hourly, creating a recursive cycle of mutation that helps it evade signature-based detection methods.

Another notable example is PROMPTSTEAL, an infostealer that queries large language models to generate system commands on demand, thus replacing traditional hard-coded instructions that are easily flagged. The COINBAIT phishing kit employs AI to clone cryptocurrency exchange interfaces with high visual fidelity, thereby enhancing its efficacy in harvesting user credentials. Additionally, Xanthorox has emerged as a dark web service marketed as a custom “offensive AI,” essentially a rebranded wrapper for legitimate commercial language models designed to bypass safety filters.

Activity associated with groups like APT28 highlights how automation is not only enhancing individual attacks but is also being employed to support broader geopolitical objectives by enabling sharper targeting and scalable intelligence gathering. A rising concern among cybersecurity experts is the tactic known as model distillation, where attackers flood systems with structured prompts in an attempt to replicate proprietary reasoning, essentially copying intelligence rather than merely data.

Google advises organizations to treat AI agents as first-class identities, enforce stringent privilege management, and deploy defenses capable of responding at machine speed. In a landscape where every millisecond counts, relying solely on manual security measures is no longer viable.
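To make the "agents as identities" recommendation concrete, here is a minimal sketch of a default-deny policy engine for AI agent identities. The agent IDs, scope names, and class structure are hypothetical illustrations, not Google's actual guidance or any real product's API; the point is only that each agent gets an explicit allow-list and that unknown agents or out-of-scope requests are refused automatically, without a human in the loop.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AgentPolicy:
    """Least-privilege allow-list for one AI agent identity (hypothetical schema)."""
    agent_id: str
    allowed_scopes: frozenset

@dataclass
class PolicyEngine:
    policies: dict = field(default_factory=dict)

    def register(self, policy: AgentPolicy) -> None:
        self.policies[policy.agent_id] = policy

    def authorize(self, agent_id: str, scope: str) -> bool:
        # Default-deny: unregistered agents and out-of-scope requests
        # are both refused, so a compromised or spoofed agent cannot
        # escalate beyond its declared permissions.
        policy = self.policies.get(agent_id)
        return policy is not None and scope in policy.allowed_scopes

engine = PolicyEngine()
engine.register(AgentPolicy("report-summarizer", frozenset({"docs:read"})))

print(engine.authorize("report-summarizer", "docs:read"))     # True: in scope
print(engine.authorize("report-summarizer", "secrets:read"))  # False: not granted
print(engine.authorize("unknown-agent", "docs:read"))         # False: unknown identity
```

Because the check is a constant-time lookup, it can run inline on every agent request, which is what "responding at machine speed" implies in practice.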

While AI is undeniably driving innovation across various sectors, it is simultaneously redefining the cybersecurity threat landscape. The implications of these developments necessitate a reevaluation of existing security frameworks to ensure they can withstand the increasingly sophisticated tactics employed by cyber adversaries.

Written By Rachel Torres

At AIPressa, my work focuses on exploring the paradox of AI in cybersecurity: it's both our best defense and our greatest threat. I've closely followed how AI systems detect vulnerabilities in milliseconds while attackers simultaneously use them to create increasingly sophisticated malware. My approach: explaining technical complexities in an accessible way without losing the urgency of the topic. When I'm not researching the latest AI-driven threats, I'm probably testing security tools or reading about the next attack vector keeping CISOs awake at night.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.