Artificial intelligence (AI) has transformed industries by automating tasks, accelerating research, and enhancing communication. However, the same technology has been co-opted by cybercriminals to facilitate activities such as password theft and financial fraud. In a recent blog post, the Google Threat Intelligence Group (GTIG) shed light on how malicious actors are leveraging AI programs, including Google’s own Gemini, to launch cyberattacks aimed at stealing sensitive information or deceiving victims into divulging it. GTIG’s findings highlight a troubling trend where AI is being employed for intellectual property theft, surveillance, and the creation of advanced malware, prompting the group to identify various “threat actors” who have attempted to exploit Gemini for nefarious purposes.
One significant advantage of AI is its capability to rapidly scour the internet for information based on a defined prompt. GTIG noted that this feature enables hackers to quickly gather profiles on potential targets, providing insights into their industries, roles, and organizational positions. This streamlined reconnaissance allows for more efficient planning of attacks compared to traditional methods. For instance, hackers identified as “UNC6418” utilized Gemini to seek sensitive information about individuals within Ukraine’s defense sector as part of a phishing scheme.
Moreover, AI’s ability to generate convincing content has made phishing attempts increasingly sophisticated. Once hackers compile a list of potential victims, they can use AI tools to craft emails that closely mimic legitimate correspondence, overcoming traditional red flags such as poor grammar and awkward phrasing. GTIG cited the case of “UNC2970,” a threat actor with links to the North Korean government, who employed AI to pose as recruiters targeting cybersecurity professionals. One phishing kit uncovered by GTIG, known as COINBAIT, was designed to extract credentials from cryptocurrency investors, showcasing the potential for AI-driven scams.
In addition to crafting scams, hackers are also using AI to develop malware. GTIG reported that cybercriminals have discovered ways to exploit coding tools, allowing them to generate malicious software. By leveraging what GTIG calls “agentic AI capabilities,” hackers can automate complex tasks with minimal human intervention. For example, the threat actor “UNC795” attempted to use Gemini to produce an AI-integrated code auditing tool, suggesting an interest in more adaptable and autonomous malware development. Though many of these examples are still considered proofs of concept that have not resulted in significant attacks, they signal a shift toward novel malware capabilities.
One particularly alarming instance mentioned in the GTIG report is HONESTCUE, a malware sample designed as a backdoor trojan capable of employing sophisticated obfuscation techniques. Once activated, HONESTCUE could utilize Gemini to retrieve additional malicious code without leaving traces on a victim’s hard drive. While this specific malware has not yet been linked to any confirmed cyberattacks, its development by amateur coders raises concerns about what seasoned hackers might achieve with the same capabilities.
The implications of these findings are significant: they suggest growing sophistication and resourcefulness among cybercriminals who are increasingly adopting AI technologies for malicious purposes. As AI tools become more accessible, the cybersecurity landscape is evolving, requiring a reassessment of defense strategies. The use of AI in cybercrime not only exposes the vulnerabilities of existing systems but also underscores the need for ongoing vigilance and innovation in cybersecurity measures. Looking forward, the trends identified by GTIG could shape the future of both AI applications and cybersecurity, highlighting the double-edged nature of technological advancements.
See also
Anthropic’s Claims of AI-Driven Cyberattacks Raise Industry Skepticism
Anthropic Reports AI-Driven Cyberattack Linked to Chinese Espionage
Quantum Computing Threatens Current Cryptography, Experts Seek Solutions
Anthropic’s Claude AI exploited in significant cyber-espionage operation
AI Poisoning Attacks Surge 40%: Businesses Face Growing Cybersecurity Risks