CISO Survey Reveals 92% of Companies Lack AI Oversight, 75% Facing Shadow AI Risks

CISO survey reveals 92% of organizations lack AI oversight, with 75% exposed to unapproved “Shadow AI” tools accessing sensitive data.

Most large companies are grappling with a lack of visibility and control over the artificial intelligence systems operating within their networks, according to findings from the 2026 CISO AI Risk Report by Cybersecurity Insiders. The report, which surveyed 235 Chief Information Security Officers (CISOs), Chief Information Officers (CIOs), and senior security leaders across the United States and the United Kingdom, reveals a troubling trend: AI tools are frequently deployed without appropriate approval.

Specifically, the report indicates that 75% of organizations have discovered unapproved “Shadow AI” tools running within their systems, many of which have access to sensitive data. Alarmingly, 71% of CISOs acknowledged that AI systems have access to core business systems, yet only 16% effectively govern that access. This disconnect raises serious concerns about data security within organizations.

The survey highlights a significant visibility gap: 92% of organizations lack full oversight of their AI identities. Furthermore, 95% of respondents expressed doubt about their ability to detect malicious activity perpetrated by an AI agent, and only 5% felt confident in their capacity to contain a compromised AI system, pointing to a pervasive sense of vulnerability among security leaders.

Security leaders identified the rapid, decentralized adoption of AI tools, such as AI-assisted copilots, as a key challenge. These systems often operate autonomously, complicating efforts to track their activities with traditional security measures designed for human users. The report notes that 86% of leaders do not enforce access policies specifically tailored for AI, while just 25% use monitoring controls designed to oversee AI systems.

This lack of governance and oversight over AI presents new risks for organizations, particularly as AI technologies become increasingly integrated into business operations. The findings suggest that most companies are ill-prepared to manage the implications of AI deployments, especially when it comes to safeguarding sensitive information from unauthorized access.

Industry experts warn that without robust governance frameworks and monitoring systems in place, organizations may find themselves exposed to significant risks, including data breaches and operational disruptions. As AI systems evolve and become more autonomous, the need for comprehensive security measures becomes even more critical.

The report’s findings underscore the urgent need for organizations to reassess their AI governance strategies. Security leaders are encouraged to implement stricter access controls, develop monitoring capabilities specifically for AI systems, and ensure that all AI tools in use are formally approved. By doing so, organizations can better position themselves to mitigate risks associated with the burgeoning use of AI technologies.

Looking ahead, as organizations continue to navigate the complexities of AI integration, a proactive approach to governance and security will be essential. With the ever-increasing reliance on AI capabilities, addressing these gaps in oversight could not only enhance security but also enable organizations to harness the full potential of AI in a responsible manner.

Written by Rachel Torres

At AIPressa, my work focuses on exploring the paradox of AI in cybersecurity: it's both our best defense and our greatest threat. I've closely followed how AI systems detect vulnerabilities in milliseconds while attackers simultaneously use them to create increasingly sophisticated malware. My approach: explaining technical complexities in an accessible way without losing the urgency of the topic. When I'm not researching the latest AI-driven threats, I'm probably testing security tools or reading about the next attack vector keeping CISOs awake at night.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.