
AI-Driven Cyberattacks Predicted to Surge in 2026, Warns Moody’s Report

Moody’s warns that AI-driven cyberattacks could surge in 2026 as adaptive malware and autonomous threats take hold, while nearly 90% of CISOs already rate AI-driven attacks a significant threat and are stepping up investment in AI-powered defenses.

As artificial intelligence (AI) continues to transform industries, cybersecurity remains a leading concern. Predictions for 2026 indicate that AI-driven cyber threats will escalate, potentially reshaping how organizations approach security. Experts have raised alarms over the growing sophistication of cyberattacks, particularly those that use AI to amplify their effectiveness and reach.

Paddy Harrington, an analyst at Forrester, predicts that a significant breach involving agentic AI will lead to repercussions such as employee dismissals, reflecting a broader concern about how adversaries are likely to use AI in their attacks. Marcus Sachs, senior vice president and chief engineer at the Center for Internet Security (CIS), warned that autonomous and agentic AI could become mainstream threats, enabling attackers to run fully automated phishing campaigns and exploit chains with minimal human oversight. John Grady, an analyst at Omdia, added that as AI capabilities advance, living-off-the-land attacks, which abuse legitimate tools already present in a network, will become increasingly common.

Such predictions are underscored by a recent report from Moody’s, which emphasizes the urgency of enhanced cybersecurity measures. The firm’s 2026 cyber outlook report cautions that as companies adopt AI technologies, they are at risk from adaptive malware and autonomous threats. The report highlights how AI has already facilitated more personalized phishing attempts and deepfake attacks, with further risks anticipated from model poisoning and faster, AI-assisted hacking. Despite the promise of AI-powered defenses, Moody’s warns that these systems may introduce unpredictable behaviors, necessitating robust governance frameworks.

In addition to predictive insights from Moody’s, a study by cybersecurity vendor Trellix revealed that nearly 90% of Chief Information Security Officers (CISOs) consider AI-driven attacks a significant threat. With healthcare systems particularly vulnerable—evidenced by the exposure of 275 million patient records in 2024—CIOs are compelled to increase investments in AI-powered cybersecurity tools. However, they face the challenge of balancing these investments against budgets for innovation in other areas.

The looming threats of AI-driven impersonation scams are also gaining attention. A report from identity vendor Nametag predicts a surge in such scams targeting enterprises, fueled by the growing accessibility of deepfake technology. Criminals are increasingly leveraging AI to mimic voices, images, and videos, leading to attacks such as hiring fraud and social engineering schemes. A notable case involved a $25 million scam against British firm Arup, underscoring the risks posed to IT, HR, and finance departments. Nametag warns that as agentic AI becomes more prevalent, organizations must rethink their identity verification processes to ensure that legitimate personnel are behind each action.

Furthermore, the National Institute of Standards and Technology (NIST) is taking proactive steps by inviting public feedback on managing security risks associated with AI agents. Through its Center for AI Standards and Innovation (CAISI), NIST aims to gather insights on best practices, methodologies, and case studies that can fortify the secure development and deployment of AI systems. The agency stresses concerns over inadequately secured AI agents that could expose critical infrastructure to cyber threats, jeopardizing public safety.

Looking ahead, the ongoing evolution of AI technology calls for a comprehensive reassessment of security protocols. As organizations grapple with the double-edged nature of AI, a driver of innovation as well as a tool for abuse, stakeholders across industries must remain vigilant. The year 2026 could prove pivotal in defining AI’s role in cybersecurity as both threats and defenses continue to evolve.

Written By Rachel Torres

At AIPressa, my work focuses on exploring the paradox of AI in cybersecurity: it's both our best defense and our greatest threat. I've closely followed how AI systems detect vulnerabilities in milliseconds while attackers simultaneously use them to create increasingly sophisticated malware. My approach: explaining technical complexities in an accessible way without losing the urgency of the topic. When I'm not researching the latest AI-driven threats, I'm probably testing security tools or reading about the next attack vector keeping CISOs awake at night.

