Microsoft has uncovered a troubling trend in cybercrime: threat actors are increasingly leveraging artificial intelligence (AI) to enhance the effectiveness and reach of their operations. According to a recent Microsoft Threat Intelligence report, malicious actors are now employing generative AI tools for a variety of tasks, including reconnaissance, phishing, malware creation, and post-compromise activities. This shift marks a significant evolution in cybercriminal tactics, as AI lowers technical barriers and accelerates attacks across the cyberattack lifecycle.
The report details that AI is specifically being utilized to draft phishing emails, translate content, summarize stolen data, debug malware, and assist with scripting or infrastructure configuration. “Microsoft Threat Intelligence has observed that most malicious use of AI today centers on using language models for producing text, code, or media,” the report states. This allows attackers to produce more sophisticated phishing lures and tailor their approaches to deceive victims effectively.
Notably, threat groups such as North Korean actors identified by the code names Jasper Sleet and Coral Sleet have incorporated AI into their cyber operations, particularly in schemes targeting remote IT workers. These actors utilize AI tools to generate realistic identities, resumes, and communications, facilitating their entry into Western companies and ensuring continued access once hired. For example, Jasper Sleet has been known to prompt AI platforms to create culturally relevant name lists and email formats suitable for specific identity profiles.
Jasper Sleet’s operations extend further, as they employ generative AI to analyze job postings in the software development and IT fields. By extracting and summarizing the required skills from postings on professional platforms, the group can customize fake identities that align with the specific roles they intend to exploit.
The Microsoft report also outlines how AI is being harnessed to aid malware development and infrastructure creation. Threat actors are using AI coding tools to produce and refine malicious code, troubleshoot issues, or adapt malware to different programming languages. Some experimental malware displays signs of being AI-enabled, capable of dynamically generating scripts or altering its behavior based on runtime conditions.
Coral Sleet has also been observed using AI tools to rapidly create fake company websites, provision the necessary infrastructure, and test those deployments, all tasks that speed up the setup and improve the credibility of their attacks. When faced with AI safeguards designed to thwart misuse, threat actors employ jailbreaking techniques to manipulate large language models (LLMs) into producing harmful code or content.
While the report indicates that AI is mainly being used to assist human operators rather than to execute attacks autonomously, Microsoft has noted growing experimentation with "agentic AI." This emerging trend suggests a potential shift toward more sophisticated, self-directed attack methodologies in the future.
Given that many of these IT worker campaigns exploit legitimate access to systems, Microsoft recommends that organizations regard such schemes as insider risks. As these AI-powered attacks increasingly resemble traditional cyber threats, security professionals are advised to focus on detecting unusual credential usage, fortifying identity systems against phishing, and securing AI systems that could be targeted in forthcoming attacks.
Microsoft is not alone in these observations. Google has reported similar findings, noting that threat actors are misusing its Gemini AI across the various phases of cyberattacks. Amazon has corroborated these reports, detailing an incident in which multiple generative AI services were used in a campaign that breached over 600 FortiGate firewalls. As cybercriminals continue to innovate, the implications for cybersecurity grow increasingly significant, underscoring the need for vigilance and adaptation in the face of evolving threats.
See also
Anthropic’s Claims of AI-Driven Cyberattacks Raise Industry Skepticism
Anthropic Reports AI-Driven Cyberattack Linked to Chinese Espionage
Quantum Computing Threatens Current Cryptography, Experts Seek Solutions
Anthropic’s Claude AI exploited in significant cyber-espionage operation
AI Poisoning Attacks Surge 40%: Businesses Face Growing Cybersecurity Risks