

Google Warns Cybercriminals Are Integrating AI Into Live Attacks, Details Gemini Exploitation

Google’s Threat Intelligence Group reveals cybercriminals are exploiting its Gemini AI models for real-time malware development, complicating detection and raising security alarms.

A report released today by the Google Threat Intelligence Group raises alarms about the evolving strategies of cybercriminals, who are increasingly integrating artificial intelligence into their attack methodologies. This development marks a shift from mere experimentation with AI to its direct application in operational workflows, underscoring a growing concern within the cybersecurity landscape.

The report highlights the misuse and targeting of Google’s own Gemini models, revealing that generative AI systems are being probed and incorporated into malicious tools. Researchers observed malware families making real-time application programming interface (API) calls to the Gemini model during attacks, dynamically requesting generated source code to perform specific tasks rather than embedding all malicious functions within the code itself.

One notable example cited in the report is a malware variant known as HONESTCUE, which utilized prompts to retrieve C# code that was subsequently executed as part of its attack chain. This technique allows operators to move complex logic outside the static malware binary, potentially complicating detection methods that rely on traditional signatures or predefined behavioral indicators.
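
To make the pattern concrete, here is a minimal, deliberately non-functional Python sketch of the dynamic code-retrieval technique the report describes. Every name is a hypothetical stand-in (the report describes live Gemini API calls returning C# source); the point is simply that the task logic never exists in the binary at rest, only in memory at run time.

```python
# Illustrative sketch only; all names are hypothetical and the model
# call is stubbed out. It mirrors the pattern GTIG describes: the
# binary ships with a prompt, not with the task logic itself.

def fetch_generated_code(prompt: str) -> str:
    """Placeholder for a live API call to a hosted generative model."""
    raise NotImplementedError  # no real network call in this sketch

def execute_task(task: str) -> None:
    # Source code is requested at run time, so static scanners that
    # inspect the file on disk never see it; only behavioral monitoring
    # (process, memory, or network telemetry) can observe the logic.
    source = fetch_generated_code(f"Write code that performs: {task}")
    exec(compile(source, "<generated>", "exec"))
```

Because the retrieved source can differ with every request, even the in-memory payload may vary between infections, which is precisely what undermines signature-based detection.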

Additionally, the report details ongoing efforts to conduct model extraction—also referred to as distillation attacks. In these attacks, threat actors issue a high volume of structured queries to the model in an attempt to infer its internal logic, behavior, and response patterns. By carefully analyzing the outputs, they can approximate the capabilities of proprietary models, effectively training alternative systems without the steep costs associated with developing them from scratch.
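
In conventional machine-learning terms, this is ordinary knowledge distillation turned against a proprietary model. Below is a hedged sketch of the data-collection step only, with query_target() standing in for the victim model's API; the function and file format are assumptions made for illustration, not a documented interface.

```python
import json

def query_target(prompt: str) -> str:
    """Hypothetical stand-in for the targeted model's API."""
    raise NotImplementedError  # stubbed out in this sketch

def collect_training_pairs(prompts: list[str], out_path: str) -> None:
    # Systematically varied, high-volume prompts map out the target's
    # behavior; the recorded (prompt, response) pairs then serve as
    # supervised training data for a cheaper "student" model.
    with open(out_path, "w") as f:
        for prompt in prompts:
            pair = {"prompt": prompt, "response": query_target(prompt)}
            f.write(json.dumps(pair) + "\n")
```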

Google states that it has identified and disrupted multiple campaigns focused on high-volume prompt activity aimed at extracting knowledge from the Gemini model. The findings also reveal that both state-aligned and financially motivated groups are integrating AI tools into various phases of cyber operations, including reconnaissance, vulnerability research, script development, and phishing content generation. Generative AI models are noted for their ability to produce convincing lures, refine malicious code snippets, and expedite technical research targeting specific technologies.
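
Google does not disclose its detection logic, but the abuse signal it describes, high-volume and near-templated prompt activity from a single source, lends itself to simple heuristics. The sketch below is an assumption-laden illustration of that idea, not Google's method; the threshold values and the similarity measure are invented for the example.

```python
from difflib import SequenceMatcher

RATE_LIMIT = 500        # prompts per window; assumed threshold
SIMILARITY_FLOOR = 0.9  # average near-duplicate ratio; assumed

def flag_extraction_candidates(prompts_by_client: dict[str, list[str]]) -> list[str]:
    flagged = []
    for client, prompts in prompts_by_client.items():
        if len(prompts) < RATE_LIMIT:
            continue
        # Extraction campaigns tend to reuse one template with small
        # substitutions, so consecutive prompts are highly similar.
        ratios = [SequenceMatcher(None, a, b).ratio()
                  for a, b in zip(prompts, prompts[1:])]
        if ratios and sum(ratios) / len(ratios) >= SIMILARITY_FLOOR:
            flagged.append(client)
    return flagged
```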

Moreover, the report indicates that adversaries are exploring the use of agentic AI capabilities, which can execute multistep tasks with minimal human input. This raises concerns over the potential for future malware to incorporate more autonomous decision-making elements. However, there is currently no evidence of widespread deployment of agentic AI among cybercriminals, with Google emphasizing that most observed uses of AI remain as enhancements rather than replacements for human operators.
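
For readers unfamiliar with the term, "agentic" here means a model embedded in a plan-act loop rather than answering single prompts. A minimal, fully stubbed sketch of such a loop (all functions hypothetical) shows why it reduces the need for a human operator:

```python
def plan_next_step(goal: str, history: list[str]) -> str:
    """Placeholder for a model call that chooses the next action."""
    raise NotImplementedError

def run_tool(step: str) -> str:
    """Placeholder for executing the chosen action."""
    raise NotImplementedError

def agent_loop(goal: str, max_steps: int = 5) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):
        step = plan_next_step(goal, history)  # model decides, no human input
        if step == "DONE":
            break
        history.append(run_tool(step))        # result feeds the next decision
    return history
```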

Despite the alarming findings, some experts are skeptical about the report's implications. Dr. Ilia Kolochenko, chief executive at ImmuniWeb SA, expressed doubts about how far generative AI has actually advanced in offensive use, commenting via email to SiliconANGLE that the report appears to be a poorly orchestrated public relations effort by Google, aimed at revitalizing interest in its AI technology amid dwindling investor enthusiasm for generative AI.

Kolochenko further stated that while advanced persistent threats may utilize generative AI in their cyberattacks, it does not imply that generative AI has reached a level of sophistication sufficient to independently create complex malware or execute a full cyber kill chain. He acknowledged that generative AI could accelerate and automate simpler processes, even for APT groups, but dismissed the sensationalized conclusions regarding the perceived omnipotence of generative AI in hacking.

He raised another concern regarding potential legal ramifications for Google, suggesting that the company may be setting itself up for liability by acknowledging that nation-state groups and cyber-terrorists are actively exploiting its AI technology for malicious purposes. Kolochenko argued that implementing guardrails and enhanced customer due diligence could have mitigated the reported abuses, leaving open the question of accountability for damages caused by these cyber-threat actors.

As the cybersecurity landscape continues to evolve, the integration of AI into malicious operations presents a complex challenge for organizations and security experts alike. The findings from Google’s report serve as a crucial reminder of the necessity for vigilance and proactive measures in the face of rapidly advancing technology.

Written By Rachel Torres

At AIPressa, my work focuses on exploring the paradox of AI in cybersecurity: it's both our best defense and our greatest threat. I've closely followed how AI systems detect vulnerabilities in milliseconds while attackers simultaneously use them to create increasingly sophisticated malware. My approach: explaining technical complexities in an accessible way without losing the urgency of the topic. When I'm not researching the latest AI-driven threats, I'm probably testing security tools or reading about the next attack vector keeping CISOs awake at night.

