Microsoft has issued a stark warning about cybercriminals' growing use of artificial intelligence (AI) across multiple phases of cyberattacks. The insight comes from a recent Microsoft Threat Intelligence report, which details how attackers are harnessing AI tools to streamline their operations, broaden the scope of their malicious campaigns, and lower the technical expertise needed to mount sophisticated attacks.
According to the report, generative AI use is pervasive among attackers, aiding in tasks such as reconnaissance, phishing, infrastructure development, malware creation, and post-compromise activity. Microsoft researchers noted that these advances let threat actors use large language models (LLMs) to produce convincing phishing emails, translate content into multiple languages, summarize stolen data, develop or debug malware code, and write scripts for configuring attack frameworks.
AI is currently acting as a significant "force multiplier," enabling attackers to move faster and more efficiently while humans retain oversight of targeting and decision-making. The report highlights specific threat groups that have integrated AI into their operations, including North Korean hacker collectives tracked as Jasper Sleet and Coral Sleet. These groups have been found to exploit AI in schemes where they impersonate legitimate employees to infiltrate Western companies.
Within these operations, AI aids in creating realistic identities, resumes, and communication messages designed to secure employment and sustain access within targeted organizations. For instance, attackers might instruct AI systems to generate culturally appropriate names or email formats that align with their fabricated personas.
In the realm of malware development, Microsoft researchers identified that cybercriminals are using AI coding tools to enhance their malicious code, troubleshoot programming issues, and convert malware components between different programming languages. Some preliminary experiments even suggest the emergence of AI-enabled malware capable of dynamically generating scripts or modifying its behavior during execution.
On the infrastructure side, the threat group Coral Sleet has been observed using AI to quickly generate counterfeit company websites, set up attack frameworks, and troubleshoot their deployments. When AI platforms attempt to curb such misuse, attackers often resort to "jailbreaking" techniques that trick AI models into generating harmful content.
Moreover, Microsoft has observed that some threat actors are beginning to experiment with agentic AI systems capable of performing tasks autonomously and adjusting their actions based on results. However, the company emphasizes that, at this stage, AI mainly serves to assist in decision-making rather than executing fully autonomous cyberattacks.
This troubling trend is not unique to Microsoft. Google has reported similar abuses of its Gemini AI across different stages of cyberattacks. In another instance, researchers from Amazon linked an AI-assisted campaign to a hacker responsible for compromising over 600 FortiGate firewalls in just five weeks.
In light of these developments, Microsoft advises organizations to approach AI-assisted attacks as scenarios involving insider risks, particularly since many of these threats exploit legitimate credentials or employee access. Recommendations from the company include monitoring unusual credential activities, fortifying identity systems against phishing attempts, and safeguarding AI systems that could be future attack targets. Cybersecurity experts point out that while AI is advancing productivity and innovation, it simultaneously becomes a formidable tool for cybercriminals, complicating modern cyber defense efforts.
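Microsoft's first recommendation, monitoring unusual credential activity, can be approximated at its simplest by baselining each account's authentication sources and flagging first-time deviations. The sketch below is illustrative only; the tuple-based log format and the `flag_unusual_logins` helper are hypothetical and do not reference any Microsoft tooling, which would rely on SIEM rules, risk scoring, and far richer signals:

```python
from collections import defaultdict

def flag_unusual_logins(events):
    """events: iterable of (user, source) tuples in chronological order.

    Returns the events whose source has never been seen for that user
    before (a user's very first login establishes the baseline and is
    not flagged).
    """
    seen = defaultdict(set)   # user -> set of sources observed so far
    unusual = []
    for user, source in events:
        if seen[user] and source not in seen[user]:
            unusual.append((user, source))
        seen[user].add(source)
    return unusual

# Example: alice suddenly authenticates from a source outside her baseline.
logins = [
    ("alice", "10.0.0.5"),
    ("alice", "10.0.0.5"),
    ("bob", "10.0.1.9"),
    ("alice", "203.0.113.44"),  # first sighting of a new source for alice
]
print(flag_unusual_logins(logins))  # [('alice', '203.0.113.44')]
```

A real detection pipeline would also weigh geolocation, time of day, and device fingerprints, but even a first-seen-source heuristic like this surfaces the kind of credential misuse the report describes.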
See also
Proofpoint Reveals 90% of Cyberattacks Involve Human Error in AI Era
OpenAI Launches Codex Security for Context-Aware Vulnerability Detection, Cutting Noise by 84%
OpenAI Launches Codex Security, AI Agent That Identifies and Fixes Code Vulnerabilities
IBM Report Reveals 44% Surge in AI-Driven Cyberattacks Targeting Vulnerable Systems
24.3% of Companies Paid Ransoms in 2025 as AI-Driven Cyberattacks Escalate