In a landmark advisory, the Indian Computer Emergency Response Team (CERT-In) has warned that next-generation artificial intelligence systems, such as Mythos by Anthropic and GPT-5.5 by OpenAI, exhibit capabilities that could significantly expand the cyber threat landscape. This marks the government’s first formal response to mounting concerns about the dual-use nature of advanced AI technologies, which appear to be evolving beyond mere productivity enhancements into tools for offensive cyber operations.
CERT-In emphasized that these frontier AI systems represent a major leap in cyber capability, highlighting their ability to autonomously identify vulnerabilities in widely used software. Alarmingly, these models can also plan and execute complex, multi-stage cyberattacks, operating at a speed and scale that previously required coordinated teams of skilled human hackers. The advisory underscored the real and pressing risk of the “weaponisation of security vulnerabilities” by these AI models.
The advisory details several concerning capabilities associated with these advanced AI systems. These include the identification of zero-day vulnerabilities, rapid weaponisation of discovered flaws, reconnaissance of APIs and cloud infrastructure, and sophisticated methods of credential harvesting and social engineering. Additionally, the report warns of AI-generated phishing campaigns and the autonomous orchestration of multi-stage attacks that could pose severe risks to both institutions and individuals.
As a result, organisations now face a “heightened risk” environment characterized by low-cost, automated reconnaissance and exploitation cycles. This evolving threat landscape could lead to unauthorized access, service disruptions, data exfiltration, financial fraud, and cascading compromises across interconnected systems. In response, CERT-In has advised companies to adopt an “elevated alert” posture, increase the frequency and sophistication of threat detection, and deploy AI-enabled defensive tools to counter these emerging AI-driven attacks.
The agency also emphasized the importance of transitioning to “zero trust” security frameworks. It likewise recommended strengthening password protocols and giving security teams targeted training on how AI-augmented attackers operate. This shift underscores the growing recognition that individuals are no longer sidelined but are now considered “part of the frontline” in the battle against cybercrime.
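The advisory does not prescribe a particular implementation of zero trust, but the core idea, verifying identity, device posture, and least-privilege access on every request rather than trusting network location, can be illustrated with a minimal sketch. All names below (the `Request` fields, the scope table) are hypothetical and chosen purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Request:
    """Hypothetical per-request context; fields are illustrative only."""
    user: str
    token_valid: bool       # was the identity token verified just now?
    device_compliant: bool  # does the device meet posture policy?
    action: str             # what the request is trying to do

# Illustrative least-privilege grants: each user gets only the
# actions explicitly assigned to them.
ALLOWED_ACTIONS = {
    "alice": {"read"},
    "bob": {"read", "write"},
}

def authorize(req: Request) -> bool:
    """Zero-trust style check: every request is re-verified.
    Nothing is trusted because of network location or a prior session."""
    if not req.token_valid:       # identity must be proven on each request
        return False
    if not req.device_compliant:  # device posture is checked every time
        return False
    # Least privilege: the action must be explicitly granted.
    return req.action in ALLOWED_ACTIONS.get(req.user, set())
```

The point of the sketch is the shape of the check, not the specific fields: every request passes through the same full verification, and a missing grant fails closed.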
With personal devices, accounts, and digital identities increasingly vulnerable to AI-driven threats, CERT-In warned individuals about the rising risks of impersonation and deepfake attacks. These threats are facilitated by generative AI systems capable of convincingly mimicking trusted individuals and organizations. The advisory therefore urges individuals to practice heightened vigilance and adhere to basic cyber hygiene.
Among the recommended measures for individuals are keeping operating systems, browsers, and applications updated, avoiding the download of unknown apps or files, and using strong, unique passwords across accounts. Users are also urged to be cautious of unsolicited emails, messages, and links, scrutinizing content that appears AI-generated, especially if it mimics trusted sources. It is advised to treat any “too good to be true” offers with skepticism and to regularly back up important data.
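The “strong, unique passwords” recommendation is easy to act on programmatically. As a minimal sketch (not part of the advisory itself), Python’s standard-library `secrets` module provides a cryptographically secure random source suitable for generating a distinct password per account:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password from letters, digits, and punctuation
    using the cryptographically secure `secrets` module (not `random`,
    which is predictable and unsuitable for credentials)."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

In practice a password manager does this (and the storage) for you; the sketch only shows that “unique per account” costs one function call, removing any excuse to reuse credentials.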
While the advisory refrains from imposing specific restrictions, it reflects an institutional concern that the very capabilities driving AI innovation could simultaneously lower the barriers to sophisticated cybercrime. This dual-use nature of AI technologies presents a significant challenge for both organizations and individuals as they navigate an increasingly complex cybersecurity landscape.
The implications of this advisory extend beyond immediate cybersecurity measures. As governments, corporations, and individuals grapple with the rapidly evolving capabilities of AI, a broader conversation is necessary regarding the ethical and security frameworks that will govern these technologies. The rise of AI has the potential to alter not only the technological landscape but also the very nature of threats that society faces, emphasizing the critical need for comprehensive strategies to safeguard against the vulnerabilities introduced by such powerful tools.
See also
Anthropic’s Claims of AI-Driven Cyberattacks Raise Industry Skepticism
Anthropic Reports AI-Driven Cyberattack Linked to Chinese Espionage
Quantum Computing Threatens Current Cryptography, Experts Seek Solutions
Anthropic’s Claude AI exploited in significant cyber-espionage operation
AI Poisoning Attacks Surge 40%: Businesses Face Growing Cybersecurity Risks