Cybercrime is poised to undergo a significant transformation by 2026, as advancements in artificial intelligence (AI) and automation drive a new wave of fully industrialized cyber operations, according to forecasts from global cybersecurity firms. Trend Micro, in its annual security predictions report, indicates that cybercrime is evolving beyond incremental changes, shifting instead to machine-driven campaigns that can operate with little or no direct human intervention.
The report highlights a growing trend where threat actors are utilizing AI agents to perform various tasks, including reconnaissance, vulnerability identification, system exploitation, and monetization of intrusions at unprecedented speeds. This shift is expected to substantially increase the scale, persistence, and complexity of cyberattacks, raising alarms for organizations that continue to rely on reactive security models.
Ryan Flores, who leads forward-looking threat research at Trend Micro, said 2026 could prove a pivotal year in which cybercrime shifts from operating like a service industry to functioning as an automated one. The challenge for defenders, he noted, will increasingly be keeping pace with entire machine-driven operations rather than merely identifying individual attacks.
One hallmark of this evolving threat landscape is attackers' use of generative AI to create polymorphic malware, which continuously rewrites its own code to evade detection, while deepfake technologies feature increasingly in fraud and social engineering operations. Attackers are also embedding malicious elements into legitimate development workflows, leveraging compromised open-source packages and poisoned AI models.
Trend Micro predicts that key areas such as hybrid cloud environments, software supply chains, and AI development platforms will be prime targets for cybercriminals in 2026. Issues such as overprivileged cloud identities, weak access controls, and poorly managed automation pipelines are seen as significant vulnerabilities that attackers may exploit.
The report also warns of heightened interest from state-linked actors in “harvest-now, decrypt-later” strategies. In these scenarios, encrypted data is stolen with the expectation that advances in quantum computing will eventually allow for decryption, prolonging the effectiveness of espionage campaigns.
Ransomware is similarly evolving, with groups now developing AI-powered ecosystems capable of not only identifying targets and exploiting vulnerabilities but also managing extortion negotiations through automated systems. Data theft will likely play a more prominent role in these operations, moving beyond mere encryption of files.
In a related commentary, Keeper Security emphasizes that organizations need to reassess their approach to identity, access, and trust as AI and automation become entrenched in operations. The company argues that AI serves both as a defensive tool and an offensive weapon, enhancing threat detection and response while simultaneously enabling attackers to accelerate phishing and malware activities.
Takanori Nishiyama, senior vice president for APAC and country manager at Keeper Security, highlighted that AI-driven threats are reducing the margin for error in security environments. He warned that weak access controls could allow attackers to manipulate AI systems through methods like prompt injection and unauthorized changes to data.
An identity-first security approach is becoming critical, especially as organizations deploy AI systems that interact with sensitive data. Keeper Security recommends implementing least-privilege access, continuous session monitoring, and clearly defined permissions to mitigate risk in AI-driven environments.
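As a rough illustration of what least-privilege enforcement around an AI agent could look like in practice, the Python sketch below gates each data access against an explicit allow-list of scopes and records every request for review. The `AgentSession` wrapper and the scope names are hypothetical, chosen only to make the recommendation concrete; they do not reflect Keeper Security's actual tooling.

```python
import logging
from dataclasses import dataclass, field

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-session")

@dataclass
class AgentSession:
    """Wraps an AI agent's access to data sources with explicit, minimal scopes."""
    agent_id: str
    allowed_scopes: frozenset[str]           # e.g. {"crm:read"} -- nothing broader
    audit_trail: list[str] = field(default_factory=list)

    def request(self, scope: str, resource: str) -> bool:
        """Permit the call only if the scope was granted up front; log either way."""
        permitted = scope in self.allowed_scopes
        entry = f"agent={self.agent_id} scope={scope} resource={resource} permitted={permitted}"
        self.audit_trail.append(entry)
        log.info(entry)
        return permitted

# Usage: the agent may read CRM records, but any write attempt is denied and still audited.
session = AgentSession(agent_id="support-bot-01", allowed_scopes=frozenset({"crm:read"}))
session.request("crm:read", "customer/42")    # permitted
session.request("crm:write", "customer/42")   # denied, but recorded for oversight
```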
An increasingly pressing concern is the rising prevalence of non-human identities, such as bots and service accounts, which now often outnumber human users in enterprise settings. Keeper Security notes that each non-human identity requires rigorous authentication, authorization, and ongoing oversight to prevent exploitation by attackers.
Implementing zero-trust principles for both human and non-human identities is becoming essential. Under a zero-trust framework, no identity is trusted by default, and all access requests are verified in real time. When combined with privileged access management, this strategy can significantly limit the potential damage from compromised accounts.
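A minimal sketch of the "verify every request in real time" idea follows, treating human and non-human identities identically. The signals checked here (token age, device posture, a just-in-time privilege grant from a PAM workflow) and the `verify_request` helper are illustrative assumptions; production zero-trust and PAM products implement far richer policy engines.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

MAX_TOKEN_AGE = timedelta(minutes=15)   # assumed short-lived credentials

@dataclass
class AccessRequest:
    identity: str             # human user or service account -- no default trust either way
    token_issued_at: datetime
    device_compliant: bool    # posture signal, e.g. from an endpoint agent (assumed)
    privilege_granted: bool   # just-in-time elevation from a PAM workflow (assumed)

def verify_request(req: AccessRequest) -> bool:
    """No identity is trusted by default: every request re-checks all signals."""
    token_fresh = datetime.now(timezone.utc) - req.token_issued_at < MAX_TOKEN_AGE
    return token_fresh and req.device_compliant and req.privilege_granted

# A stale token, an unhealthy device, or a missing PAM grant each denies access.
req = AccessRequest(
    identity="ci-runner-07",
    token_issued_at=datetime.now(timezone.utc) - timedelta(minutes=30),
    device_compliant=True,
    privilege_granted=True,
)
print(verify_request(req))  # False -- the token is older than the allowed window
```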
As automation accelerates, the importance of secure-by-design practices is growing. Incorporating security controls like multi-factor authentication and comprehensive logging during system development can minimize the need for reactive patching post-deployment. While AI can assist in this process through automated code analysis, it is crucial that these systems are safeguarded against bias and unauthorized manipulation.
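To make the secure-by-design point concrete, the sketch below shows one way such controls might be baked into an application's startup path rather than bolted on after deployment: the service refuses to start unless MFA enforcement and an audit log destination are configured. The `AppConfig` fields and fail-closed checks are illustrative assumptions, not a reference to any specific framework.

```python
import logging
from dataclasses import dataclass

@dataclass(frozen=True)
class AppConfig:
    require_mfa: bool        # every interactive login must complete a second factor
    audit_log_path: str      # comprehensive logging destination, set before launch

def build_app(config: AppConfig):
    """Fail closed at startup if mandatory controls are missing (secure by design)."""
    if not config.require_mfa:
        raise RuntimeError("Refusing to start: MFA enforcement is not enabled")
    if not config.audit_log_path:
        raise RuntimeError("Refusing to start: no audit log destination configured")

    logging.basicConfig(
        filename=config.audit_log_path,
        level=logging.INFO,
        format="%(asctime)s %(levelname)s %(message)s",
    )
    logging.info("application started with MFA enforced and auditing enabled")
    return {"config": config}   # placeholder for the real application object

# Misconfigured deployments are caught at startup, not discovered after a breach.
app = build_app(AppConfig(require_mfa=True, audit_log_path="audit.log"))
```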
As organizations face the dual challenges of AI advancements and quantum computing threats, Keeper Security urges early adoption of quantum-resistant encryption and cryptographic agility strategies, particularly for sensitive data with long retention periods. Additionally, regulatory frameworks in the Asia-Pacific region are tightening around data protection, AI governance, and cybersecurity accountability. Companies integrating compliance into their core security architecture are likely to adapt more effectively to evolving standards.
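Cryptographic agility is largely an architectural property: code should reference an algorithm by identifier rather than hard-coding one cipher, so a quantum-resistant scheme can be substituted later without rewriting callers. The sketch below shows that indirection with a placeholder backend; the registry names are invented, and the XOR stand-in is not real encryption, only a marker for where a vetted library implementation would plug in.

```python
from typing import Callable, Dict

# Registry of encryption backends keyed by algorithm identifier. Entries are
# placeholders; in practice they would wrap vetted library implementations,
# including a standardized post-quantum scheme once one is adopted.
_BACKENDS: Dict[str, Callable[[bytes, bytes], bytes]] = {}

def register(name: str):
    def wrap(fn: Callable[[bytes, bytes], bytes]):
        _BACKENDS[name] = fn
        return fn
    return wrap

@register("classical-demo")
def _classical(key: bytes, data: bytes) -> bytes:
    # Stand-in only (XOR is NOT encryption); real code would call an AEAD cipher.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def encrypt(algorithm: str, key: bytes, data: bytes) -> bytes:
    """Callers name an algorithm; swapping it is a configuration change, not a rewrite."""
    return _BACKENDS[algorithm](key, data)

ciphertext = encrypt("classical-demo", b"secret-key", b"long-retention record")
# Migrating later means registering, say, a "pqc-hybrid" backend and updating the
# algorithm identifier stored alongside the data -- no changes to calling code.
```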
Both Trend Micro and Keeper Security assert that cybersecurity can no longer lag behind digital transformation initiatives. Instead, security must be embedded as a fundamental component of operational infrastructure to support innovation rather than merely react to it. As 2026 approaches, the core challenge for organizations will not solely be the adoption of advanced technologies, but ensuring they are equipped with adequate controls to manage the accompanying risks.