AI Cybersecurity

Microsoft Report Reveals AI’s Role in Accelerating Cyberattacks Across Industries

Microsoft warns that cybercriminals are increasingly using AI tools for sophisticated phishing, malware creation, and identity deception, enhancing attack efficacy across industries.

Microsoft has uncovered a troubling trend in cybercrime, with threat actors increasingly leveraging artificial intelligence (AI) to enhance the effectiveness and reach of their operations. According to a recent Microsoft Threat Intelligence report, malicious actors are now employing generative AI tools for a variety of tasks, including reconnaissance, phishing, malware creation, and post-compromise activities. This shift marks a significant evolution in cybercriminal tactics, as AI lowers technical barriers and speeds the execution of attacks across the entire cyberattack lifecycle.

The report details that AI is specifically being utilized to draft phishing emails, translate content, summarize stolen data, debug malware, and assist with scripting or infrastructure configuration. “Microsoft Threat Intelligence has observed that most malicious use of AI today centers on using language models for producing text, code, or media,” the report states. This allows attackers to produce more sophisticated phishing lures and tailor their approaches to deceive victims effectively.

Notably, threat groups such as North Korean actors identified by the code names Jasper Sleet and Coral Sleet have incorporated AI into their cyber operations, particularly in schemes targeting remote IT workers. These actors utilize AI tools to generate realistic identities, resumes, and communications, facilitating their entry into Western companies and ensuring continued access once hired. For example, Jasper Sleet has been known to prompt AI platforms to create culturally relevant name lists and email formats suitable for specific identity profiles.

Jasper Sleet’s operations extend further, as they employ generative AI to analyze job postings in the software development and IT fields. By extracting and summarizing the required skills from postings on professional platforms, the group can customize fake identities that align with the specific roles they intend to exploit.

The Microsoft report also outlines how AI is being harnessed to aid malware development and infrastructure creation. Threat actors are using AI coding tools to produce and refine malicious code, troubleshoot issues, or adapt malware to different programming languages. Some experimental malware displays signs of being AI-enabled, capable of dynamically generating scripts or altering its behavior based on runtime conditions.

Coral Sleet has also been observed utilizing AI tools to rapidly create fake company websites, provision necessary infrastructure, and test their deployments—all critical tasks that enhance the effectiveness of their attacks. When faced with AI safeguards designed to thwart misuse, threat actors are employing jailbreaking techniques to manipulate large language models (LLMs) into producing harmful code or content.

While the report indicates that AI is mainly being used to support decision-making rather than to execute attacks autonomously, Microsoft has noted growing experimentation with “agentic AI.” This trend suggests a potential shift toward more sophisticated, self-directed attack methodologies in the future.

Given that many of these IT worker campaigns exploit legitimate access to systems, Microsoft recommends that organizations regard such schemes as insider risks. As these AI-powered attacks increasingly resemble traditional cyber threats, security professionals are advised to focus on detecting unusual credential usage, fortifying identity systems against phishing, and securing AI systems that could be targeted in forthcoming attacks.

These observations are not unique to Microsoft. Google has reported similar findings, noting that threat actors are misusing its Gemini AI across various phases of cyberattacks. Amazon has corroborated this, detailing an incident in which multiple generative AI services were used in a campaign that breached over 600 FortiGate firewalls. As cybercriminals continue to innovate, the implications for cybersecurity become increasingly significant, underscoring the need for vigilance and adaptation in the face of evolving threats.

Written by Rachel Torres

At AIPressa, my work focuses on exploring the paradox of AI in cybersecurity: it's both our best defense and our greatest threat. I've closely followed how AI systems detect vulnerabilities in milliseconds while attackers simultaneously use them to create increasingly sophisticated malware. My approach: explaining technical complexities in an accessible way without losing the urgency of the topic. When I'm not researching the latest AI-driven threats, I'm probably testing security tools or reading about the next attack vector keeping CISOs awake at night.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved. This website provides general news and educational content for informational purposes only. While we strive for accuracy, we do not guarantee the completeness or reliability of the information presented. The content should not be considered professional advice of any kind. Readers are encouraged to verify facts and consult appropriate experts when needed. We are not responsible for any loss or inconvenience resulting from the use of information on this site. Some images used on this website are generated with artificial intelligence and are illustrative in nature. They may not accurately represent the products, people, or events described in the articles.