AI is increasingly fueling cyberattacks, leaving many organizational leaders unprepared for the evolving threat landscape. Hise Gibson highlights the inadequacy of traditional risk prevention strategies and offers a playbook for organizations to bolster their defenses against imminent breaches. The financial toll of an AI-enabled data breach averages $4.88 million, a figure that does not account for reputational damage or regulatory penalties. More concerning still, the most significant risk may lie with leaders who fail to anticipate these threats at all.
A scenario illustrates this danger: imagine a deepfake video of a former president declaring a national emergency circulating widely, misleading the public and roiling markets before it is recognized as fraudulent. Such possibilities underscore how AI is not merely accelerating attacks but rendering the security environment profoundly unpredictable and perilous.
This crisis is not hypothetical; it has already manifested. In 2022, a convincingly fabricated video of Ukrainian President Volodymyr Zelensky ordering troops to surrender gained traction online despite swift denunciations from Ukrainian officials. The barrier to entry has since collapsed: the technology capable of creating such videos now runs on standard laptops, sharply increasing the likelihood of mass deception and rapid escalation of threats.
The rise of AI-enabled cyberattacks is alarming. A 2026 IBM study reported a 44% increase in attacks targeting public-facing software and systems, many driven by AI-related vulnerabilities. These attacks can adapt and evolve autonomously, exploiting system weaknesses without human intervention. Meanwhile, Accenture's 2025 State of Cybersecurity Resilience report found that 77% of executives lack confidence in their organizations' ability to combat AI-driven threats. This gap between the speed of emerging threats and companies' readiness to address them represents a critical strategic vulnerability.
With the proliferation of AI technologies, leaders must reassess their frameworks for understanding business challenges. The traditional VUCA (volatile, uncertain, complex, ambiguous) paradigm no longer suffices in the context of AI and cybersecurity. Instead, we are navigating a BANI (brittle, anxious, nonlinear, incomprehensible) landscape. In this environment, the fallibility of seemingly robust systems becomes evident, as a single point of failure can lead to catastrophic outcomes in mere minutes.
Leaders are also grappling with anxiety driven by a surplus of choices and insufficient information regarding potential outcomes. This scenario often results in decision paralysis, impeding timely and effective responses. Furthermore, AI-enabled threats defy traditional risk models predicated on proportionality; minor lapses can trigger significant ramifications, complicating the decision-making processes.
The challenges that leaders face in managing AI technology are multifaceted. While efficiency gains are appealing, each advancement also introduces potential vulnerabilities. Leaders must navigate the delicate balance of harnessing AI’s capabilities while concurrently establishing responsible governance protocols to mitigate risks.
To thrive amid these complexities, organizations must adopt a proactive approach encapsulated in the ACTS framework: assume a breach is inevitable, cultivate AI fluency, tie AI investments to core operations, and strengthen governance. The first step is to assume a breach is inevitable. This mindset fosters a culture of preparedness, prompting organizations to implement zero-trust architectures, network segmentation, and routine crisis simulations. Notably, FedEx's response to the NotPetya malware attack demonstrated the effectiveness of such preparations: its leaders executed manual workarounds, minimizing damage.
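As a toy illustration of the zero-trust and segmentation principles mentioned above, the sketch below shows the core rule: no request is trusted based on its network location; every request must present a verified identity, a compliant device, and an explicit policy entry for the target segment. All names, roles, and policy entries here are invented for illustration, not drawn from any real deployment.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Request:
    user: str               # claimed identity
    device_compliant: bool  # device posture check passed
    mfa_verified: bool      # multi-factor authentication completed
    resource_segment: str   # network segment hosting the target resource

# Hypothetical role directory and segmentation policy (deny by default):
# only explicitly listed (role, segment) pairs are permitted.
ROLES = {"alice": "finance-user", "bob": "engineer"}
ALLOWED = {
    ("finance-user", "finance"),
    ("engineer", "build"),
}

def authorize(req: Request) -> bool:
    """Zero-trust check: never grant access on network location alone."""
    # Identity and device posture must both verify, for every request.
    if not (req.mfa_verified and req.device_compliant):
        return False
    role = ROLES.get(req.user)
    if role is None:
        return False  # unknown identity: deny by default
    # Segmentation: reaching a segment requires an explicit allow rule.
    return (role, req.resource_segment) in ALLOWED
```

The key design choice is the default-deny posture: even a request originating inside the finance segment is refused if MFA or device compliance fails, which is what distinguishes zero trust from perimeter-based models.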
In contrast, MGM Resorts International’s ransomware incident in September 2023 resulted from a lack of preparedness, leading to an estimated $100 million loss in revenue and extensive operational disruptions. This incident serves as a cautionary tale about the consequences of neglecting cybersecurity training and crisis rehearsals.
Another crucial component of the ACTS framework is to cultivate AI fluency across all leadership levels. Understanding the implications of AI is no longer the purview of IT departments alone; every leader must be equipped with knowledge about AI systems, their risks, and their potential benefits. Organizations should implement reverse mentoring programs to facilitate knowledge transfer from tech-savvy junior employees to senior leaders.
It is also essential to tie AI investments to core operations. Many companies initiate AI pilots that fail to scale or deliver long-term value. Each AI project should be anchored in clear return-on-investment frameworks that address specific business needs and generate measurable results.
Strengthening governance is equally important. Organizations should establish ethical guidelines and create AI governance councils with representatives from various departments, ensuring a comprehensive approach to assessing fairness and bias within AI systems. Transparency regarding accountability for AI-related issues is crucial for fostering trust and mitigating risks before breaches occur.
As organizations confront the realities of AI-enabled threats, leaders should prepare for their next board meeting by addressing four critical questions: Can the business operate for 48 hours without digital systems? Have top leaders completed comprehensive training in AI security? Is the AI deployment strategy focused on resilience? Can decision-makers navigate scenarios where data is unavailable? Organizations that cannot affirmatively answer these questions must act decisively to address vulnerabilities.
The leaders who regard AI security as a fundamental responsibility will be better positioned to withstand future attacks. In an era where AI is reshaping the cyber landscape, readiness is not just a precaution; it is imperative for survival.