

Microsoft Report Reveals Cybercriminals Use AI for Phishing, Malware, and Other Attacks

Microsoft warns that cybercriminals are leveraging AI to enhance phishing and malware attacks, exploiting legitimate credentials and reducing technical barriers to entry.

Microsoft has issued a stark warning about the growing use of artificial intelligence (AI) by cybercriminals throughout various phases of cyberattacks. This insight comes from a recent Microsoft Threat Intelligence report, which details how hackers are harnessing AI tools to streamline their operations, broaden the scope of their malicious campaigns, and lower the technical expertise required to execute sophisticated attacks.

According to the report, the use of generative AI is pervasive among attackers, aiding in tasks such as reconnaissance, phishing, infrastructure development, malware creation, and activities following a compromise. Microsoft researchers noted that these advancements allow threat actors to employ large language models (LLMs) effectively to produce convincing phishing emails, translate content into multiple languages, summarize stolen data, develop or debug malware code, and construct scripts for configuring attack frameworks.

AI is currently functioning as a significant “force multiplier,” enabling attackers to act more swiftly and efficiently while keeping human oversight over targeting and decision-making processes. The report highlights specific threat groups that have integrated AI into their operations, including North Korean hacker collectives referred to as Jasper Sleet and Coral Sleet. These groups have been found to exploit AI in schemes where they impersonate legitimate employees to infiltrate Western companies.

Within these operations, AI aids in creating realistic identities, resumes, and communication messages designed to secure employment and sustain access within targeted organizations. For instance, attackers might instruct AI systems to generate culturally appropriate names or email formats that align with their fabricated personas.

In the realm of malware development, Microsoft researchers identified that cybercriminals are using AI coding tools to enhance their malicious code, troubleshoot programming issues, and convert malware components between different programming languages. Some preliminary experiments even suggest the emergence of AI-enabled malware capable of dynamically generating scripts or modifying its behavior during execution.

On the infrastructure front, the threat group Coral Sleet has been noted for using AI to swiftly generate counterfeit company websites, establish attack frameworks, and troubleshoot their deployments. When AI platforms attempt to block such malicious use, attackers often resort to “jailbreaking” techniques that trick AI models into generating harmful content.

Moreover, Microsoft has observed that some threat actors are beginning to experiment with agentic AI systems capable of performing tasks autonomously and adjusting their actions based on results. However, the company emphasizes that, at this stage, AI mainly serves to assist in decision-making rather than executing fully autonomous cyberattacks.

This troubling trend is not unique to Microsoft. Google has reported similar abuses of its Gemini AI across different stages of cyberattacks. In another instance, researchers from Amazon linked an AI-assisted campaign to a hacker responsible for compromising over 600 FortiGate firewalls in just five weeks.

In light of these developments, Microsoft advises organizations to approach AI-assisted attacks as scenarios involving insider risks, particularly since many of these threats exploit legitimate credentials or employee access. Recommendations from the company include monitoring unusual credential activities, fortifying identity systems against phishing attempts, and safeguarding AI systems that could be future attack targets. Cybersecurity experts point out that while AI is advancing productivity and innovation, it simultaneously becomes a formidable tool for cybercriminals, complicating modern cyber defense efforts.
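As a minimal illustration of the first recommendation, flagging sign-ins from locations a user has never authenticated from before is one simple form of monitoring for unusual credential activity. The sketch below is purely illustrative and not drawn from Microsoft's tooling; the event format and function name are assumptions for the example.

```python
from collections import defaultdict

# Illustrative sketch (not Microsoft's tooling): flag sign-ins from
# locations a user has never authenticated from before -- one simple
# form of the "unusual credential activity" monitoring recommended above.
def flag_new_location_signins(events):
    """events: list of (user, location) sign-in records in time order.
    Returns the records whose location is new for that user."""
    seen = defaultdict(set)
    flagged = []
    for user, location in events:
        # Only flag once the user has an established location history.
        if seen[user] and location not in seen[user]:
            flagged.append((user, location))
        seen[user].add(location)
    return flagged

events = [
    ("alice", "Seattle"), ("alice", "Seattle"),
    ("bob", "London"), ("alice", "Pyongyang"),
]
print(flag_new_location_signins(events))  # [('alice', 'Pyongyang')]
```

Real deployments would of course correlate many more signals (device, time of day, impossible travel), but the principle is the same: baseline normal credential use per identity, then alert on deviations.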

Written by Rachel Torres

At AIPressa, my work focuses on exploring the paradox of AI in cybersecurity: it's both our best defense and our greatest threat. I've closely followed how AI systems detect vulnerabilities in milliseconds while attackers simultaneously use them to create increasingly sophisticated malware. My approach: explaining technical complexities in an accessible way without losing the urgency of the topic. When I'm not researching the latest AI-driven threats, I'm probably testing security tools or reading about the next attack vector keeping CISOs awake at night.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.