AI Cybersecurity

CISO Survey Reveals 92% of Companies Lack AI Oversight, 75% Facing Shadow AI Risks

CISO survey reveals 92% of organizations lack AI oversight, with 75% exposed to unapproved “Shadow AI” tools accessing sensitive data.

Most large companies are grappling with a lack of visibility and control over the artificial intelligence systems operating within their networks, according to findings from the 2026 CISO AI Risk Report by Cybersecurity Insiders. The report, based on a survey of 235 Chief Information Security Officers (CISOs), Chief Information Officers (CIOs), and other senior security leaders across the United States and the United Kingdom, reveals a troubling trend: AI tools are frequently deployed without appropriate approval.

Specifically, the report indicates that 75% of organizations have discovered unapproved “Shadow AI” tools running within their systems, many of which have access to sensitive data. Alarmingly, 71% of CISOs acknowledged that AI systems have access to core business systems, yet only 16% effectively govern that access. This disconnect raises serious concerns about data security within organizations.

The survey highlights a significant visibility gap, with 92% of organizations lacking full oversight of their AI identities. Furthermore, 95% of respondents expressed doubt about their ability to detect malicious activity perpetrated by an AI agent. Just 5% of participants felt confident in their capacity to contain a compromised AI system, pointing to a pervasive sense of vulnerability among security leaders.

Security leaders identified the rapid and decentralized adoption of AI tools, such as AI-assisted copilots, as a key challenge. These systems often operate autonomously, complicating efforts to track their activities using traditional security measures designed for human users. The report emphasizes that 86% of leaders do not enforce access policies specifically tailored for AI, while just 25% use monitoring controls designed to oversee AI systems.

This lack of governance and oversight over AI presents new risks for organizations, particularly as AI technologies become increasingly integrated into business operations. The findings suggest that most companies are ill-prepared to manage the implications of AI deployments, especially when it comes to safeguarding sensitive information from unauthorized access.

Industry experts warn that without robust governance frameworks and monitoring systems in place, organizations may find themselves exposed to significant risks, including data breaches and operational disruptions. As AI systems evolve and become more autonomous, the need for comprehensive security measures becomes even more critical.

The report’s findings underscore the urgent need for organizations to reassess their AI governance strategies. Security leaders are encouraged to implement stricter access controls, develop monitoring capabilities specifically for AI systems, and ensure that all AI tools in use are formally approved. By doing so, organizations can better position themselves to mitigate risks associated with the burgeoning use of AI technologies.
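As a purely illustrative sketch of what AI-specific access governance might look like in practice, the snippet below gates a hypothetical AI agent's access to sensitive data scopes and logs each decision for audit. The agent names, data classifications, and policy rules are assumptions made for illustration and are not drawn from the report.

```python
# Hypothetical sketch of AI-specific access governance: deny unapproved
# ("shadow") agents outright, require explicit grants for sensitive data,
# and log every decision so AI activity can be audited separately from
# human users. Names, scopes, and rules here are illustrative assumptions.
from dataclasses import dataclass

SENSITIVE_SCOPES = {"customer_pii", "financials", "source_code"}

@dataclass(frozen=True)
class AIAgent:
    name: str
    approved: bool                 # formally approved by security governance
    allowed_scopes: frozenset      # data scopes explicitly granted to the agent

def authorize(agent: AIAgent, requested_scope: str) -> bool:
    """Allow access only for approved agents; sensitive scopes need an explicit grant."""
    if not agent.approved:
        return False               # unapproved (shadow) AI tools are blocked
    if requested_scope in SENSITIVE_SCOPES:
        return requested_scope in agent.allowed_scopes
    return True                    # non-sensitive scopes are allowed for approved agents

def log_decision(agent: AIAgent, scope: str, allowed: bool) -> None:
    """Minimal audit trail for monitoring AI agent activity."""
    print(f"agent={agent.name} scope={scope} allowed={allowed}")

if __name__ == "__main__":
    copilot = AIAgent("sales-copilot", approved=True,
                      allowed_scopes=frozenset({"crm_notes"}))
    for scope in ("crm_notes", "customer_pii"):
        decision = authorize(copilot, scope)
        log_decision(copilot, scope, decision)
```

In a real deployment such checks would live in an identity provider or policy engine rather than application code, but the structure mirrors the recommendations above: explicit approval of each AI tool, scoped access to sensitive data, and per-agent audit logging.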

Looking ahead, as organizations continue to navigate the complexities of AI integration, a proactive approach to governance and security will be essential. With reliance on AI capabilities only increasing, closing these oversight gaps could not only strengthen security but also enable organizations to harness AI's full potential responsibly.

Written by Rachel Torres

At AIPressa, my work focuses on exploring the paradox of AI in cybersecurity: it's both our best defense and our greatest threat. I've closely followed how AI systems detect vulnerabilities in milliseconds while attackers simultaneously use them to create increasingly sophisticated malware. My approach: explaining technical complexities in an accessible way without losing the urgency of the topic. When I'm not researching the latest AI-driven threats, I'm probably testing security tools or reading about the next attack vector keeping CISOs awake at night.
