As artificial intelligence (AI) services like Claude become more accessible, the threshold for launching cyberattacks has plummeted, warns Anirban Mukherji, founder and CEO of miniOrange. In a recent interview with Firstpost, Mukherji emphasized the transformative impact of AI on the cybersecurity landscape, highlighting a shift from dramatic, high-profile attacks to more insidious forms of data manipulation.
Mukherji pointed out that, unlike in fictional narratives where hackers cause widespread chaos by disabling critical services, the current threats are often subtle and focused on altering data directly. Cybercriminals may manipulate bank balances, medical records, or legal documents rather than stealing them outright, which not only results in direct harm to individuals but also undermines trust in institutions.
As businesses and individuals increasingly adopt AI technologies without adequate security measures, the risk of inadvertently exposing sensitive information rises significantly. Mukherji articulated concerns about a growing “security debt” in developing economies like India, where the drive for digital growth often sidelines cybersecurity considerations, creating vulnerabilities that can be easily exploited by sophisticated attackers.
Looking ahead, Mukherji predicts that the next three to five years will see AI fundamentally reshaping the cyber-threat landscape, enabling faster and more localized attacks. “We are entering the era of ‘hyper-personalised’ cybercrime,” he stated, noting that AI can now craft context-aware phishing messages that resonate with cultural nuances and local dialects, making them particularly hard to detect.
He further stressed that the greatest threat to ordinary users today is not merely about devices being hacked; rather, it revolves around identity theft. In an interconnected world, digital identities—integrated with services like UPI, email, and banking—serve as the primary attack surface. Once a criminal gains access to someone’s credentials, they can exploit established trust without needing to breach physical devices.
To address these evolving threats, Mukherji advocates for a transformative approach to cybersecurity, particularly in developing regions. He argues that affordable cybersecurity must involve infrastructure-level protection rather than merely low-cost antivirus solutions. “Security needs to be embedded by design into the digital platforms we rely on,” he said, emphasizing that the responsibility for safety should not solely fall on users.
For businesses, the rapid adoption of AI often outpaces the establishment of robust security frameworks, with a particular risk emerging from “Shadow AI.” This term refers to employees using unapproved public AI tools to process sensitive information, exposing organizations to compliance gaps and potential data leaks. Sectors like banking and healthcare, which handle vast amounts of sensitive data, are particularly susceptible to such breaches.
Mukherji noted that many organizations mistakenly treat AI as a “magic wand” that enhances efficiency without acknowledging the accompanying risks. Each AI system introduces new vulnerabilities, making it crucial for companies to adopt a “zero trust” approach that rigorously verifies the safety of all users and devices within their networks.
Recent reports have highlighted the use of AI in state-sponsored espionage, underscoring the need for nations like India to prioritize cybersecurity as a core component of national defense. Mukherji argues for the development of sovereign AI systems that can secure sensitive data and infrastructure while advocating for continuous, AI-driven monitoring to address emerging cyber threats proactively.
As the lines between state-sponsored cyber operations and independent criminal activities blur, Mukherji emphasizes the importance of a unified cyber command in India that integrates government and private sector efforts. Critical infrastructure remains a high-value target, making it imperative to bolster defenses against potential coordinated attacks that could disrupt economic activity and erode public trust.
Mukherji warns that the barrier to entry for cyberattacks has effectively dropped to zero, allowing individuals with minimal technical skills to harness AI tools for malicious purposes. This shift necessitates an evolution in cybersecurity strategies, focusing on adaptive defenses that leverage AI for real-time threat detection and response.
The prospect of an international agreement on cyber-disarmament faces significant challenges, according to Mukherji. Unlike physical weapons, cyber weapons exist as code, complicating efforts to enforce any global treaty. Instead, he advocates for accountability, suggesting that nations harboring cyber attackers should face diplomatic and economic consequences.
Reflecting on his two decades in the cybersecurity field, Mukherji concluded that the current landscape favors attackers, who can use AI to experiment with countless attack variations at speed. However, he remains optimistic that defenders will soon level the playing field by employing AI to enhance their own practices, making a zero trust posture critical for future resilience.
As society grapples with these complex cybersecurity issues, Mukherji cautions against hyperbolic fears of catastrophic cyber-attacks. Instead, he emphasizes the more insidious danger of data manipulation, which can erode trust in institutions and destabilize society. The need for robust cybersecurity measures has never been more urgent.