Cybersecurity professionals are sounding the alarm over significant gaps in the governance of artificial intelligence (AI) as organizations increasingly adopt AI technologies across their operations. A recent survey of 1,253 cybersecurity experts reveals that while AI tools are now deployed in 73% of organizations, only 7% have implemented real-time governance to enforce security policies, highlighting a 66-point deficit that is growing as AI adoption continues to outpace security measures.
The report, which surveyed cybersecurity practitioners, architects, and technology leaders, underscores a troubling paradox: although 90% of respondents have increased their AI security budgets this year, 29% feel less secure than they did a year ago. Many cite pressures to adopt AI faster than security frameworks can adapt, alongside skill gaps and legacy tools that fail to address AI-specific threats.
Visibility into AI activity remains a critical concern, with 94% of organizations acknowledging deficiencies. A staggering 88% cannot differentiate between personal and corporate AI accounts, complicating data governance and making existing Data Loss Prevention (DLP) measures ineffective. The survey found that only 8% of organizations possess controls capable of evaluating content based on its semantic meaning, a necessary capability given how AI transforms and rephrases data.
As autonomous AI agents operate with significant write access (53% to collaboration tools and 40% to email), most organizations discover unauthorized actions only after the fact: 91% of respondents can identify agent actions only retrospectively. This underscores a broader shift in AI-driven risk from human misuse to machine autonomy without adequate oversight, and the lack of control has already had consequences, with 37% of organizations experiencing operational issues caused by agent actions in the past year.
Governance Challenges
Despite the widespread adoption of AI, governance has not kept pace. Twelve months ago, many organizations viewed AI governance as a future need, but the rapid rollout of AI tools has left security frameworks lagging behind. Today, 68% of organizations describe their AI governance as reactive or still developing. Adoption remains fragmented as well: in many cases, AI tools are deployed by multiple teams without any shared security policies, creating a chaotic governance landscape, and 48% of respondents predict that governance failures will precipitate the next major AI-related breach.
The inadequacy of existing governance frameworks raises critical questions about the effectiveness of security measures in the age of AI. Organizations are encouraged to identify their three highest-risk AI use cases, embed enforceable policies for each, and designate clear ownership to mitigate risks effectively. Closing the governance gap is essential: organizations that remain in a reactive posture leave themselves vulnerable to threats they cannot anticipate.
As AI adoption escalates, security professionals are urging organizations to modernize their approaches. This includes enhancing visibility into AI activity by tracking all data movements, enforcing policies in real time, and ensuring that data protection measures can adapt to the semantic transformations AI introduces when it rephrases or summarizes content. Detecting anomalous behavior in AI actions is also pivotal, as the report indicates that many organizations lack sufficient detection capabilities for agent-driven activities.
As AI continues to evolve, so too must the frameworks that govern its use. The ongoing mismatch between AI adoption rates and the maturity of governance frameworks poses a significant risk to organizational security. Cybersecurity experts assert that organizations must take proactive steps to bolster their defenses, integrating AI-specific controls and updating their security architectures to accommodate the rapid changes driven by this technology.
With a concerted effort to close these gaps, organizations can better navigate the complexities introduced by AI. As the landscape of cybersecurity shifts, ensuring adequate governance and control of AI systems will be crucial in mitigating risks and securing sensitive data in an increasingly automated world.
See also
Anthropic’s Claims of AI-Driven Cyberattacks Raise Industry Skepticism
Anthropic Reports AI-Driven Cyberattack Linked to Chinese Espionage
Quantum Computing Threatens Current Cryptography, Experts Seek Solutions
Anthropic’s Claude AI Exploited in Significant Cyber-Espionage Operation
AI Poisoning Attacks Surge 40%: Businesses Face Growing Cybersecurity Risks