
AI Cybersecurity

Google Warns Cybercriminals Integrating AI into Live Attacks, Explores Gemini Exploitation

Google’s Threat Intelligence Group reveals cybercriminals are exploiting its Gemini AI models for real-time malware development, complicating detection and raising security alarms.

A report released today by the Google Threat Intelligence Group raises alarms about the evolving strategies of cybercriminals, who are increasingly integrating artificial intelligence into their attack methodologies. This development marks a shift from mere experimentation with AI to its direct application in operational workflows, underscoring a growing concern within the cybersecurity landscape.

The report highlights the misuse and targeting of Google's own Gemini models, revealing that generative AI systems are being probed and incorporated into malicious tools. Researchers observed malware families making real-time application programming interface (API) calls to the Gemini model during attacks, dynamically requesting generated source code to perform specific tasks rather than embedding all malicious functionality in the binary itself.

One notable example cited in the report is a malware variant known as HONESTCUE, which utilized prompts to retrieve C# code that was subsequently executed as part of its attack chain. This technique allows operators to move complex logic outside the static malware binary, potentially complicating detection methods that rely on traditional signatures or predefined behavioral indicators.
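To see why this pattern frustrates signature-based detection, consider a minimal, deliberately harmless sketch (not HONESTCUE itself, and not the real Gemini API): the logic that a scanner would flag never exists on disk, because the source text arrives from a mock "model" at runtime and is compiled only in memory. The `mock_model_response` function below is a hypothetical stand-in for a live API call.

```python
# Illustrative sketch only: why runtime code generation defeats static
# signatures. A stand-in "model" returns source text that never appears
# in the program file on disk.

def mock_model_response(prompt: str) -> str:
    # Hypothetical placeholder for a live generative-model API call.
    return "def task(x):\n    return x * 2\n"

def run_generated(prompt: str, arg: int) -> int:
    source = mock_model_response(prompt)   # logic arrives at runtime
    namespace: dict = {}
    exec(source, namespace)                # compiled only in memory
    return namespace["task"](arg)

print(run_generated("write a doubling function", 21))
```

Because the fetched code exists only as an in-memory string, a scanner inspecting the static file sees nothing but a generic request-and-execute loop, which is exactly the evasion the report describes.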

Additionally, the report details ongoing efforts to conduct model extraction—also referred to as distillation attacks. In these attacks, threat actors issue a high volume of structured queries to the model in an attempt to infer its internal logic, behavior, and response patterns. By carefully analyzing the outputs, they can approximate the capabilities of proprietary models, effectively training alternative systems without the steep costs associated with developing them from scratch.
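The query-and-fit loop behind such extraction attacks can be sketched in miniature. In this toy example (not drawn from the report), the "teacher" is a hidden linear rule that the attacker can only query as a black box; by collecting many input-output pairs and fitting a cheap "student" by least squares, the attacker recovers the teacher's behavior without ever seeing its internals. Real attacks target far richer models, but the structure is the same.

```python
# Toy sketch of model extraction ("distillation"): issue many queries
# to a black-box teacher, record its outputs, and fit a cheap student
# that approximates the mapping.

import random

def teacher(x: float) -> float:
    # Black box: the attacker sees outputs only, not these coefficients.
    return 3.0 * x + 1.0

# Step 1: high-volume structured queries against the black box.
queries = [random.uniform(-10, 10) for _ in range(1000)]
pairs = [(x, teacher(x)) for x in queries]

# Step 2: fit a student of the same form from observed pairs alone
# (ordinary least squares for a line).
n = len(pairs)
sx = sum(x for x, _ in pairs)
sy = sum(y for _, y in pairs)
sxx = sum(x * x for x, _ in pairs)
sxy = sum(x * y for x, y in pairs)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

print(round(slope, 2), round(intercept, 2))  # recovers roughly 3.0 and 1.0
```

The economics the report points to follow directly: each query is cheap for the attacker, while the teacher's capabilities were expensive to build, which is why Google monitors for the high-volume prompt patterns that such extraction requires.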

Google states that it has identified and disrupted multiple campaigns focused on high-volume prompt activity aimed at extracting knowledge from the Gemini model. The findings also reveal that both state-aligned and financially motivated groups are integrating AI tools into various phases of cyber operations, including reconnaissance, vulnerability research, script development, and phishing content generation. Generative AI models are noted for their ability to produce convincing lures, refine malicious code snippets, and expedite technical research targeting specific technologies.

Moreover, the report indicates that adversaries are exploring the use of agentic AI capabilities, which can execute multistep tasks with minimal human input. This raises concerns over the potential for future malware to incorporate more autonomous decision-making elements. However, there is currently no evidence of widespread deployment of agentic AI among cybercriminals, with Google emphasizing that most observed uses of AI remain as enhancements rather than replacements for human operators.

Despite the alarming findings, some experts are skeptical of the report's implications. Dr. Ilia Kolochenko, chief executive at ImmuniWeb SA, expressed doubts about the state of generative AI in offensive cybersecurity, commenting via email to SiliconANGLE that the report appears to be a poorly orchestrated public relations effort by Google, aimed at revitalizing interest in its AI technology amid dwindling investor enthusiasm for generative AI.

Kolochenko further stated that while advanced persistent threats may utilize generative AI in their cyberattacks, it does not imply that generative AI has reached a level of sophistication sufficient to independently create complex malware or execute the full cyber kill chain of an attack. He acknowledged that generative AI could accelerate and automate simpler processes—even for APT groups—but dismissed the sensationalized conclusions regarding the perceived omnipotence of generative AI in hacking.

He raised another concern regarding potential legal ramifications for Google, suggesting that the company may be setting itself up for liability by acknowledging that nation-state groups and cyber-terrorists are actively exploiting its AI technology for malicious purposes. Kolochenko argued that implementing guardrails and enhanced customer due diligence could have mitigated the reported abuses, leaving open the question of accountability for damages caused by these cyber-threat actors.

As the cybersecurity landscape continues to evolve, the integration of AI into malicious operations presents a complex challenge for organizations and security experts alike. The findings from Google’s report serve as a crucial reminder of the necessity for vigilance and proactive measures in the face of rapidly advancing technology.

Rachel Torres
Written By

At AIPressa, my work focuses on exploring the paradox of AI in cybersecurity: it's both our best defense and our greatest threat. I've closely followed how AI systems detect vulnerabilities in milliseconds while attackers simultaneously use them to create increasingly sophisticated malware. My approach: explaining technical complexities in an accessible way without losing the urgency of the topic. When I'm not researching the latest AI-driven threats, I'm probably testing security tools or reading about the next attack vector keeping CISOs awake at night.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.