AI Business

CISOs Must Address 61x Surge in AI Usage to Combat Emerging Cybersecurity Risks

CISOs face urgent challenges as AI usage skyrockets 61 times from 2023 to 2025, exposing organizations to unprecedented cybersecurity risks.

The rapid adoption of artificial intelligence (AI) tools in workplaces is reshaping the cybersecurity landscape and posing new challenges for organizations. According to the 2025 Cyberhaven AI Adoption Risk Report, workplace AI usage surged 61-fold between 2023 and 2025. As Chief Information Security Officers (CISOs) grapple with this transformation, they face the urgent task of updating their security protocols to mitigate emerging AI-related threats. Failure to adapt may leave organizations vulnerable to risks associated with sensitive data processing and ungoverned AI usage.

AI’s rise introduces a range of security risks that CISOs must prioritize as they plan for 2026. A notable concern is shadow AI: the use of unauthorized AI tools by employees. The 2025 State of Shadow AI report finds that 81% of employees use unapproved AI tools at work. This trend correlates with employees’ understanding of internal protocols, suggesting that as knowledge grows, so does the willingness to bypass company regulations. Furthermore, with over 40% of SaaS applications now AI-enabled, organizations may be unwittingly exposing themselves to significant risks, especially if the tools are unvetted or lack oversight regarding compliance with internal policies.

In addition to shadow AI, adversarial threats loom large. Cybercriminals increasingly utilize AI capabilities for malicious purposes, such as deepfake-based phishing attacks targeting executives and adversarial machine learning techniques aimed at undermining AI defenses. The potential for threat actors to weaponize large language models (LLMs) for malware creation and automated attacks has raised alarms in the cybersecurity community. Furthermore, model extraction attacks, where sensitive data is reverse-engineered from deployed AI models, heighten the stakes for organizations that fail to secure their AI frameworks.

As organizations integrate AI into their operations, risks associated with AI development and supply chains are also becoming apparent. The use of insecure open-source AI models can introduce vulnerabilities, while inadequate training hygiene may lead to biased or toxic datasets being used. Additionally, a lack of secure machine learning operations (MLOps) pipelines can facilitate tampering with training data. The reliance on third-party AI APIs further complicates matters, as these services often come with unclear service level agreements and data retention policies.
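One concrete defense against the supply-chain tampering described above is to pin every model artifact to a cryptographic digest at review time and refuse to load anything that does not match. The following is a minimal sketch, not a full MLOps control; the function names are illustrative, and it assumes artifacts are distributed as files whose SHA-256 digests were recorded when the model was vetted:

```python
import hashlib
import hmac

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, streaming in chunks
    so large model weights don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path: str, pinned_digest: str) -> bool:
    """Return True only if the artifact on disk matches the digest
    pinned when the model was approved (constant-time comparison)."""
    return hmac.compare_digest(sha256_of(path), pinned_digest.lower())
```

A pipeline would call `verify_artifact` before deserializing weights, failing closed on a mismatch; the same idea extends to training datasets and third-party dependencies.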

Legal, compliance, and ethical issues present additional hurdles. Violations of data privacy laws, such as GDPR and HIPAA, can occur through improper data usage, while the lack of model explainability and auditability complicates adherence to AI regulations. Organizations may find themselves liable for negligence if AI-assisted decisions cause harm, indicating the necessity of robust governance frameworks.

Despite these challenges, many CISOs misjudge the safety of third-party AI tools, assuming that vendor contracts will shield them from liability. Research indicates that 56% of organizations using such tools experienced sensitive data exposures, yet only 23% have integrated AI-specific evaluations into their risk assessments. Naveen Balakrishnan, managing director at TD Securities, reported that 70% of AI-driven cyberattacks entering his organization come from third-party vendors. This underscores the importance of rigorous evaluation processes for any AI tools being employed.

Moreover, the rise of shadow AI suggests that organizations have less control over employees’ AI usage than they might believe. With many employees utilizing AI-enabled applications beyond official channels, risk management must become a priority. Developing in-house models, although providing greater control, does not eliminate risks such as hallucinations, data leaks, or unintended discriminatory outputs.

To effectively mitigate these risks, CISOs should develop comprehensive AI governance and risk management programs aligned with industry best practices, such as the NIST AI Risk Management Framework and ISO 42001. Key elements of such programs include detecting shadow AI usage, incorporating AI reviews into third-party risk management, and establishing acceptable use policies. Training programs should also be deployed organization-wide to raise awareness about AI security risks.
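The first element above, detecting shadow AI usage, can start as simply as scanning egress or proxy logs for traffic to known AI services that are not on the approved-tools list. The sketch below is a hypothetical illustration only: the domain lists, the whitespace-separated log format, and the field order are all assumptions, not taken from the report:

```python
# Hypothetical sketch of shadow-AI detection from proxy logs.
# Domain lists and log format are illustrative assumptions.

AI_DOMAINS = {
    "chat.openai.com",
    "claude.ai",
    "gemini.google.com",
    "copilot.microsoft.com",
}
APPROVED = {"copilot.microsoft.com"}  # tools vetted by the security team

def flag_shadow_ai(log_lines):
    """Yield (user, domain) pairs for traffic to unapproved AI services.

    Each log line is assumed to be 'timestamp user domain',
    whitespace-separated; malformed lines are skipped.
    """
    for line in log_lines:
        parts = line.split()
        if len(parts) < 3:
            continue
        _, user, domain = parts[:3]
        if domain in AI_DOMAINS and domain not in APPROVED:
            yield user, domain
```

In practice this would feed a CASB or SIEM rule rather than a script, but the principle is the same: enumerate known AI endpoints, subtract the sanctioned set, and alert on the remainder.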

As businesses navigate the evolving cybersecurity landscape, the challenges presented by AI are likely to intensify. Organizations must take proactive steps to secure their AI usage while fostering an environment that enhances overall cybersecurity. Engaging with cybersecurity experts can provide invaluable insights and strategies to turn potential threats into opportunities for growth and resilience in a technology-driven world.

Written By
Marcus Chen

At AIPressa, my work focuses on analyzing how artificial intelligence is redefining business strategies and traditional business models. I've covered everything from AI adoption in Fortune 500 companies to disruptive startups that are changing the rules of the game. My approach: understanding the real impact of AI on profitability, operational efficiency, and competitive advantage, beyond corporate hype. When I'm not writing about digital transformation, I'm probably analyzing financial reports or studying AI implementation cases that truly moved the needle in business.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.