AI Cybersecurity

Microsoft Report Reveals AI’s Role in Accelerating Cyberattacks Across Industries

Microsoft warns that cybercriminals are increasingly using AI tools for sophisticated phishing, malware creation, and identity deception, enhancing attack efficacy across industries.

Microsoft has uncovered a troubling trend in cybercrime, with threat actors increasingly leveraging artificial intelligence (AI) to enhance the effectiveness and reach of their operations. According to a recent Microsoft Threat Intelligence report, malicious actors are now employing generative AI tools for a variety of tasks, including reconnaissance, phishing, malware creation, and post-compromise activities. This shift highlights a significant evolution in the tactics employed by cybercriminals, as AI serves to lower technical barriers and expedite the execution of attacks across the cyberattack lifecycle.

The report details that AI is specifically being utilized to draft phishing emails, translate content, summarize stolen data, debug malware, and assist with scripting or infrastructure configuration. “Microsoft Threat Intelligence has observed that most malicious use of AI today centers on using language models for producing text, code, or media,” the report states. This allows attackers to produce more sophisticated phishing lures and tailor their approaches to deceive victims effectively.

Notably, threat groups such as North Korean actors identified by the code names Jasper Sleet and Coral Sleet have incorporated AI into their cyber operations, particularly in schemes targeting remote IT workers. These actors utilize AI tools to generate realistic identities, resumes, and communications, facilitating their entry into Western companies and ensuring continued access once hired. For example, Jasper Sleet has been known to prompt AI platforms to create culturally relevant name lists and email formats suitable for specific identity profiles.

Jasper Sleet’s operations extend further, as they employ generative AI to analyze job postings in the software development and IT fields. By extracting and summarizing the required skills from postings on professional platforms, the group can customize fake identities that align with the specific roles they intend to exploit.

The Microsoft report also outlines how AI is being harnessed to aid malware development and infrastructure creation. Threat actors are using AI coding tools to produce and refine malicious code, troubleshoot issues, or adapt malware to different programming languages. Some experimental malware displays signs of being AI-enabled, capable of dynamically generating scripts or altering its behavior based on runtime conditions.

Coral Sleet has also been observed utilizing AI tools to rapidly create fake company websites, provision necessary infrastructure, and test their deployments—all critical tasks that enhance the effectiveness of their attacks. When faced with AI safeguards designed to thwart misuse, threat actors are employing jailbreaking techniques to manipulate large language models (LLMs) into producing harmful code or content.

While the report indicates that AI is mainly being used to assist human operators rather than to execute attacks autonomously, Microsoft has noted growing experimentation with “agentic AI.” This emerging trend suggests a potential shift toward more sophisticated, self-directed attack methodologies in the future.

Given that many of these IT worker campaigns exploit legitimate access to systems, Microsoft recommends that organizations regard such schemes as insider risks. As these AI-powered attacks increasingly resemble traditional cyber threats, security professionals are advised to focus on detecting unusual credential usage, fortifying identity systems against phishing, and securing AI systems that could be targeted in forthcoming attacks.

This trend is not limited to Microsoft. Google has reported similar observations, noting that threat actors are misusing its Gemini AI across the various phases of cyberattacks. Amazon has corroborated these findings, detailing an incident where multiple generative AI services were used in a campaign that breached over 600 FortiGate firewalls. As cybercriminals continue to innovate, the implications for cybersecurity measures become increasingly significant, underscoring the need for vigilance and adaptation in the face of evolving threats.

Written by Rachel Torres

At AIPressa, my work focuses on exploring the paradox of AI in cybersecurity: it's both our best defense and our greatest threat. I've closely followed how AI systems detect vulnerabilities in milliseconds while attackers simultaneously use them to create increasingly sophisticated malware. My approach: explaining technical complexities in an accessible way without losing the urgency of the topic. When I'm not researching the latest AI-driven threats, I'm probably testing security tools or reading about the next attack vector keeping CISOs awake at night.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.