Nearly 70% of organizations are actively tracking AI regulations and preparing to comply, according to Sprinto’s latest CISO Pulse Check report. However, the report reveals that over 30% of these organizations have already faced a significant AI-related security incident in the past year, underscoring the growing risks associated with the rapid adoption of artificial intelligence technologies.
As AI adoption accelerates, these risks are becoming increasingly concrete. Security leaders identified shadow AI usage and sensitive data leaks through public AI tools as the most pressing threats. Despite this, only 21% of organizations have implemented controls to prevent confidential data from being shared with external AI platforms, a gap that calls the effectiveness of current governance measures into question.
While awareness of AI-related risks is on the rise, many organizations report feeling less prepared to manage these threats compared to traditional cybersecurity issues. Approximately 30% indicate a lack of readiness to handle AI risks, which encompass incidents like shadow AI usage, data leakage, model inversion, API abuse, unauthorized access, and data poisoning. These risks are not abstract concerns; they are operational realities that often outpace the implementation of internal controls.
Many companies find themselves taking weeks or even months to enact policies regarding AI usage, with 39% reporting inconsistent enforcement of these rules. This inconsistency can lead to vulnerabilities, allowing potential security incidents to proliferate within organizations that are still adapting to the demands of a fast-evolving technological landscape.
The report indicates that governance systems for AI are still in development. Only a quarter of organizations claim to possess advanced AI governance maturity, while most are still in early or developing stages. Although policies do exist, enforcement and monitoring processes remain weak, leading to fragmented oversight that could exacerbate the risks associated with AI technologies.
Investment in AI risk mitigation is rising: roughly 69% of organizations expect to have budgets allocated for such initiatives by 2026, with more planning to follow. Key priorities for these investments include stronger technical controls, comprehensive AI risk assessments, and enhanced employee training to better equip staff against emerging threats.
However, the report highlights a significant mismatch between the rapid adoption of AI technologies and the slower pace at which governance systems are evolving. It suggests that organizations need to adopt more continuous and adaptive risk management strategies, rather than relying on static policies that may become obsolete quickly. “AI has moved faster than most organizations were prepared for… The companies that win in 2026 will be those building trust, control, and resilience alongside adoption,” said Raghuveer Kancherla, co-founder of Sprinto.
In conclusion, while organizations are increasingly aware of the risks associated with AI, the majority are not yet prepared to manage them effectively at scale. This reality makes governance a critical priority moving forward, as businesses navigate the complexities of integrating AI technologies while ensuring the security and integrity of their operations.
See also
OpenAI’s Rogue AI Safeguards: Decoding the 2025 Safety Revolution
US AI Developments in 2025 Set Stage for 2026 Compliance Challenges and Strategies
Trump Drafts Executive Order to Block State AI Regulations, Centralizing Authority Under Federal Control
California Court Rules AI Misuse Heightens Lawyer’s Responsibilities in Noland Case
Policymakers Urged to Establish Comprehensive Regulations for AI in Mental Health