
AI Cybersecurity

CERT-In Warns: Mythos and GPT-5.5 Elevate Cyber Threats with AI-Driven Attacks

CERT-In warns that AI systems like Anthropic’s Mythos and OpenAI’s GPT-5.5 could automate cyberattacks, raising organizational risks significantly.

In a landmark advisory, the Indian Computer Emergency Response Team (CERT-In) has warned that next-generation artificial intelligence systems, such as Mythos by Anthropic and GPT-5.5 by OpenAI, exhibit capabilities that could significantly expand the cyber threat landscape. This marks the government’s first formal response to mounting concerns about the dual-use nature of advanced AI technologies, which appear to be evolving beyond mere productivity enhancements into tools for offensive cyber operations.

CERT-In emphasized that these frontier AI systems represent a major leap in cyber capability maturity, highlighting their ability to autonomously identify vulnerabilities in widely used software. Alarmingly, these models can also plan and execute complex, multi-stage cyberattacks, operating at a speed and scale that previously necessitated coordinated efforts from teams of skilled human hackers. The advisory underscored the real and pressing risk of the “weaponisation of security vulnerabilities” by these AI models.

The advisory details several concerning capabilities associated with these advanced AI systems. These include the identification of zero-day vulnerabilities, rapid weaponisation of discovered flaws, reconnaissance of APIs and cloud infrastructure, and sophisticated methods of credential harvesting and social engineering. Additionally, the report warns of AI-generated phishing campaigns and the autonomous orchestration of multi-stage attacks that could pose severe risks to both institutions and individuals.

As a result, organisations now face a “heightened risk” environment characterized by low-cost, automated reconnaissance and exploitation cycles. This evolving threat landscape could lead to unauthorized access, service disruptions, data exfiltration, financial fraud, and cascading compromises across interconnected systems. In response, CERT-In has advised companies to adopt an “elevated alert” posture, increase the frequency and sophistication of threat detection, and deploy AI-enabled defensive tools to counter these emerging AI-driven attacks.

The agency also emphasized the importance of transitioning to “zero trust” security frameworks. It further recommends strengthening password protocols and providing targeted training so that security teams understand how AI-augmented attackers operate. This shift underscores a growing recognition that individuals are no longer sidelined but are now “part of the frontline” in the battle against cybercrime.

With personal devices, accounts, and digital identities increasingly vulnerable to AI-driven threats, CERT-In warned individuals about the rising risks of impersonation and deepfake attacks. These threats are facilitated by generative AI systems capable of convincingly mimicking trusted individuals and organizations. The advisory therefore urges individuals to exercise heightened vigilance and adhere to basic cyber hygiene practices.

Among the recommended measures for individuals are keeping operating systems, browsers, and applications updated, avoiding downloads of unknown apps or files, and using strong, unique passwords across accounts. Users are also urged to be cautious of unsolicited emails, messages, and links, scrutinizing content that appears AI-generated, especially if it mimics trusted sources. The advisory further recommends treating any “too good to be true” offers with skepticism and regularly backing up important data.

While the advisory refrains from imposing specific restrictions, it reflects an institutional concern that the very capabilities driving AI innovation could simultaneously lower the barriers to sophisticated cybercrime. This dual-use nature of AI technologies presents a significant challenge for both organizations and individuals as they navigate an increasingly complex cybersecurity landscape.

The implications of this advisory extend beyond immediate cybersecurity measures. As governments, corporations, and individuals grapple with the rapidly evolving capabilities of AI, a broader conversation is necessary regarding the ethical and security frameworks that will govern these technologies. The rise of AI has the potential to alter not only the technological landscape but also the very nature of threats that society faces, emphasizing the critical need for comprehensive strategies to safeguard against the vulnerabilities introduced by such powerful tools.

Written By
Rachel Torres

At AIPressa, my work focuses on exploring the paradox of AI in cybersecurity: it's both our best defense and our greatest threat. I've closely followed how AI systems detect vulnerabilities in milliseconds while attackers simultaneously use them to create increasingly sophisticated malware. My approach: explaining technical complexities in an accessible way without losing the urgency of the topic. When I'm not researching the latest AI-driven threats, I'm probably testing security tools or reading about the next attack vector keeping CISOs awake at night.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved. This website provides general news and educational content for informational purposes only. While we strive for accuracy, we do not guarantee the completeness or reliability of the information presented. The content should not be considered professional advice of any kind. Readers are encouraged to verify facts and consult appropriate experts when needed. We are not responsible for any loss or inconvenience resulting from the use of information on this site. Some images used on this website are generated with artificial intelligence and are illustrative in nature. They may not accurately represent the products, people, or events described in the articles.