AI Cybersecurity

Microsoft Report Reveals AI’s Role in Accelerating Cyberattacks Across Industries

Microsoft warns that cybercriminals are increasingly using AI tools for sophisticated phishing, malware creation, and identity deception, enhancing attack efficacy across industries.

Microsoft has uncovered a troubling trend in cybercrime, with threat actors increasingly leveraging artificial intelligence (AI) to enhance the effectiveness and reach of their operations. According to a recent Microsoft Threat Intelligence report, malicious actors are now employing generative AI tools for a variety of tasks, including reconnaissance, phishing, malware creation, and post-compromise activities. This shift highlights a significant evolution in the tactics employed by cybercriminals, as AI serves to lower technical barriers and expedite the execution of attacks across the cyberattack lifecycle.

The report details that AI is specifically being utilized to draft phishing emails, translate content, summarize stolen data, debug malware, and assist with scripting or infrastructure configuration. “Microsoft Threat Intelligence has observed that most malicious use of AI today centers on using language models for producing text, code, or media,” the report states. This allows attackers to produce more sophisticated phishing lures and tailor their approaches to deceive victims effectively.

Notably, threat groups such as North Korean actors identified by the code names Jasper Sleet and Coral Sleet have incorporated AI into their cyber operations, particularly in schemes targeting remote IT workers. These actors utilize AI tools to generate realistic identities, resumes, and communications, facilitating their entry into Western companies and ensuring continued access once hired. For example, Jasper Sleet has been known to prompt AI platforms to create culturally relevant name lists and email formats suitable for specific identity profiles.

Jasper Sleet’s operations extend further, as they employ generative AI to analyze job postings in the software development and IT fields. By extracting and summarizing the required skills from postings on professional platforms, the group can customize fake identities that align with the specific roles they intend to exploit.

The Microsoft report also outlines how AI is being harnessed to aid malware development and infrastructure creation. Threat actors are using AI coding tools to produce and refine malicious code, troubleshoot issues, or adapt malware to different programming languages. Some experimental malware displays signs of being AI-enabled, capable of dynamically generating scripts or altering its behavior based on runtime conditions.

Coral Sleet has also been observed utilizing AI tools to rapidly create fake company websites, provision necessary infrastructure, and test their deployments—all critical tasks that enhance the effectiveness of their attacks. When faced with AI safeguards designed to thwart misuse, threat actors are employing jailbreaking techniques to manipulate large language models (LLMs) into producing harmful code or content.

While the report indicates that AI currently supports attacker decision-making rather than executing attacks autonomously, Microsoft has noted growing experimentation with "agentic AI." This emerging trend suggests a potential shift toward more sophisticated, self-directed attack methodologies in the future.

Given that many of these IT worker campaigns exploit legitimate access to systems, Microsoft recommends that organizations regard such schemes as insider risks. As these AI-powered attacks increasingly resemble traditional cyber threats, security professionals are advised to focus on detecting unusual credential usage, fortifying identity systems against phishing, and securing AI systems that could be targeted in forthcoming attacks.

This trend is not limited to Microsoft. Google has reported similar observations, noting that threat actors are misusing its Gemini AI across the various phases of cyberattacks. Amazon has corroborated these findings, detailing an incident where multiple generative AI services were used in a campaign that breached over 600 FortiGate firewalls. As cybercriminals continue to innovate, the implications for cybersecurity measures become increasingly significant, underscoring the need for vigilance and adaptation in the face of evolving threats.

Written By Rachel Torres

At AIPressa, my work focuses on exploring the paradox of AI in cybersecurity: it's both our best defense and our greatest threat. I've closely followed how AI systems detect vulnerabilities in milliseconds while attackers simultaneously use them to create increasingly sophisticated malware. My approach: explaining technical complexities in an accessible way without losing the urgency of the topic. When I'm not researching the latest AI-driven threats, I'm probably testing security tools or reading about the next attack vector keeping CISOs awake at night.

