A report released today by the Google Threat Intelligence Group raises alarms about the evolving strategies of cybercriminals, who are increasingly integrating artificial intelligence into their attack methodologies. This development marks a shift from mere experimentation with AI to its direct application in operational workflows, underscoring a growing concern within the cybersecurity landscape.
The report highlights the misuse and targeting of Google’s own Gemini models, revealing that generative AI systems are being probed and incorporated into malicious tools. Researchers observed malware families making real-time application programming interface calls to the Gemini model during attacks, dynamically requesting generated source code for specific tasks rather than embedding all malicious functionality in the binary itself.
One notable example cited in the report is a malware variant known as HONESTCUE, which used prompts to retrieve C# code that was then executed as part of its attack chain. By moving complex logic outside the static malware binary, this technique can complicate detection methods that rely on traditional signatures or predefined behavioral indicators.
Additionally, the report details ongoing efforts to conduct model extraction—also referred to as distillation attacks. In these attacks, threat actors issue a high volume of structured queries to the model in an attempt to infer its internal logic, behavior, and response patterns. By carefully analyzing the outputs, they can approximate the capabilities of proprietary models, effectively training alternative systems without the steep costs associated with developing them from scratch.
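To make the distillation concept concrete, the sketch below shows, in a hedged and simplified form, what training on another model's outputs generally looks like: prompts are sent to a "teacher" model and the prompt/response pairs are saved as supervised training data for a smaller "student" model. The query_teacher function, the prompt set, and the JSONL output path are hypothetical placeholders, not anything taken from Google's report, and the actual fine-tuning step is omitted.

```python
# Minimal, hypothetical sketch of the distillation idea described above:
# collect a teacher model's responses to a set of prompts, then store the
# prompt/response pairs as supervised training data for a smaller student model.
# query_teacher() is a placeholder for whatever interface the teacher exposes.

import json
from typing import Callable


def build_distillation_dataset(prompts: list[str],
                               query_teacher: Callable[[str], str],
                               out_path: str) -> None:
    """Query the teacher once per prompt and write prompt/completion pairs as JSONL."""
    with open(out_path, "w", encoding="utf-8") as f:
        for prompt in prompts:
            response = query_teacher(prompt)  # one structured query per prompt
            record = {"prompt": prompt, "completion": response}
            f.write(json.dumps(record) + "\n")  # JSONL, a common fine-tuning format

# The resulting file would then feed an ordinary supervised fine-tuning run for
# the student model; that step is deliberately left out of this sketch.
```

The point of the sketch is only that the "extraction" itself is ordinary supervised learning on observed outputs; what distinguishes it operationally is scale, which is why Google describes looking for high-volume prompt activity, as noted below.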
Google states that it has identified and disrupted multiple campaigns involving high-volume prompt activity aimed at extracting knowledge from the Gemini model. The findings also reveal that both state-aligned and financially motivated groups are integrating AI tools into various phases of cyber operations, including reconnaissance, vulnerability research, script development, and phishing content generation. Generative AI models are noted for their ability to produce convincing lures, refine malicious code snippets, and expedite technical research targeting specific technologies.
Moreover, the report indicates that adversaries are exploring the use of agentic AI capabilities, which can execute multistep tasks with minimal human input. This raises concerns that future malware could incorporate more autonomous decision-making. However, there is currently no evidence of widespread deployment of agentic AI among cybercriminals, with Google emphasizing that most observed uses of AI enhance rather than replace human operators.
Despite the alarming findings, some experts are skeptical about the report’s implications. Dr. Ilia Kolochenko, chief executive at ImmuniWeb SA, expressed doubts regarding the advancements of generative AI in cybersecurity. He commented via email to SiliconANGLE that the report appears to be a poorly orchestrated public relations effort by Google, aimed at reviving interest in its AI technology amid dwindling investor enthusiasm for generative AI.
Kolochenko further stated that while advanced persistent threats may utilize generative AI in their cyberattacks, it does not imply that generative AI has reached a level of sophistication sufficient to independently create complex malware or execute the full cyber kill chain of an attack. He acknowledged that generative AI could accelerate and automate simpler processes—even for APT groups—but dismissed the sensationalized conclusions regarding the perceived omnipotence of generative AI in hacking.
He raised another concern regarding potential legal ramifications for Google, suggesting that the company may be setting itself up for liability by acknowledging that nation-state groups and cyber-terrorists are actively exploiting its AI technology for malicious purposes. Kolochenko argued that implementing guardrails and enhanced customer due diligence could have mitigated the reported abuses, leaving open the question of accountability for damages caused by these cyber-threat actors.
As the cybersecurity landscape continues to evolve, the integration of AI into malicious operations presents a complex challenge for organizations and security experts alike. The findings from Google’s report serve as a crucial reminder of the necessity for vigilance and proactive measures in the face of rapidly advancing technology.