
AI Cybersecurity

Microsoft Report Reveals AI’s Role in Accelerating Cyberattacks Across Industries

Microsoft warns that cybercriminals are increasingly using AI tools for sophisticated phishing, malware creation, and identity deception, enhancing attack efficacy across industries.

Microsoft has uncovered a troubling trend in cybercrime, with threat actors increasingly leveraging artificial intelligence (AI) to enhance the effectiveness and reach of their operations. According to a recent Microsoft Threat Intelligence report, malicious actors are now employing generative AI tools for a variety of tasks, including reconnaissance, phishing, malware creation, and post-compromise activities. This shift highlights a significant evolution in the tactics employed by cybercriminals, as AI serves to lower technical barriers and expedite the execution of attacks across the cyberattack lifecycle.

The report details that AI is being used to draft phishing emails, translate content, summarize stolen data, debug malware, and assist with scripting and infrastructure configuration. “Microsoft Threat Intelligence has observed that most malicious use of AI today centers on using language models for producing text, code, or media,” the report states. This lets attackers produce more convincing phishing lures and tailor their approaches to individual victims.

Notably, threat groups such as North Korean actors identified by the code names Jasper Sleet and Coral Sleet have incorporated AI into their cyber operations, particularly in schemes targeting remote IT workers. These actors utilize AI tools to generate realistic identities, resumes, and communications, facilitating their entry into Western companies and ensuring continued access once hired. For example, Jasper Sleet has been known to prompt AI platforms to create culturally relevant name lists and email formats suitable for specific identity profiles.

Jasper Sleet’s operations extend further, as they employ generative AI to analyze job postings in the software development and IT fields. By extracting and summarizing the required skills from postings on professional platforms, the group can customize fake identities that align with the specific roles they intend to exploit.

The Microsoft report also outlines how AI is being harnessed to aid malware development and infrastructure creation. Threat actors are using AI coding tools to produce and refine malicious code, troubleshoot issues, or adapt malware to different programming languages. Some experimental malware displays signs of being AI-enabled, capable of dynamically generating scripts or altering its behavior based on runtime conditions.

Coral Sleet has also been observed utilizing AI tools to rapidly create fake company websites, provision necessary infrastructure, and test their deployments—all critical tasks that enhance the effectiveness of their attacks. When faced with AI safeguards designed to thwart misuse, threat actors are employing jailbreaking techniques to manipulate large language models (LLMs) into producing harmful code or content.

While the report indicates that AI currently supports attacker decision-making rather than executing attacks autonomously, Microsoft has noted growing experimentation with “agentic AI.” This emerging trend suggests a potential shift toward more sophisticated, self-directed attack methodologies in the future.

Given that many of these IT worker campaigns exploit legitimate access to systems, Microsoft recommends that organizations regard such schemes as insider risks. As these AI-powered attacks increasingly resemble traditional cyber threats, security professionals are advised to focus on detecting unusual credential usage, fortifying identity systems against phishing, and securing AI systems that could be targeted in forthcoming attacks.

This trend is not limited to Microsoft. Google has reported similar observations, noting that threat actors are misusing its Gemini AI across the various phases of cyberattacks. Amazon has corroborated these findings, detailing an incident where multiple generative AI services were used in a campaign that breached over 600 FortiGate firewalls. As cybercriminals continue to innovate, the implications for cybersecurity measures become increasingly significant, underscoring the need for vigilance and adaptation in the face of evolving threats.

Written by Rachel Torres



© 2025 AIPressa · Part of Buzzora Media · All rights reserved.