CISOs Must Address 61x Surge in AI Usage to Combat Emerging Cybersecurity Risks

CISOs face urgent challenges as workplace AI usage surges 61-fold from 2023 to 2025, exposing organizations to unprecedented cybersecurity risks.

The rapid adoption of artificial intelligence (AI) tools in workplaces is reshaping the cybersecurity landscape and posing new challenges for organizations. According to the 2025 Cyberhaven AI Adoption Risk Report, workplace AI usage grew 61-fold between 2023 and 2025. As Chief Information Security Officers (CISOs) grapple with this transformation, they face the urgent task of updating their security protocols to mitigate emerging AI-related threats. Failure to adapt may leave organizations vulnerable to risks associated with sensitive data processing and ungoverned AI usage.

AI’s rise introduces various security risks that CISOs must prioritize as they plan for 2026. A notable concern is the phenomenon of shadow AI, where unauthorized tools are utilized by employees. The 2025 State of Shadow AI report highlights that 81% of employees employ unapproved AI tools at work. This trend correlates with employees’ understanding of internal protocols, suggesting that as knowledge grows, so does the willingness to bypass company regulations. Furthermore, with over 40% of SaaS applications now AI-enabled, organizations may be unwittingly exposing themselves to significant risks, especially if the tools are unvetted or lack oversight regarding compliance with internal policies.
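In practice, detecting shadow AI often starts with matching outbound traffic against known AI-tool destinations. The sketch below illustrates the idea; the domain list, the approved-tool list, and the `timestamp user domain` log format are all illustrative assumptions, not details from the report.

```python
# Minimal sketch: flag "shadow AI" usage by matching outbound proxy-log
# entries against a list of known AI-tool domains. The domain lists and
# log format below are hypothetical examples, not vendor recommendations.

KNOWN_AI_DOMAINS = {
    "chat.openai.com",
    "claude.ai",
    "gemini.google.com",
    "perplexity.ai",
}

# Tools the organization has formally sanctioned (example entry).
APPROVED_AI_DOMAINS = {"copilot.cloud.microsoft"}

def flag_shadow_ai(log_lines):
    """Return (user, domain) pairs for AI traffic to unapproved tools.

    Each log line is assumed to look like 'timestamp user domain', e.g.
    '2025-06-01T09:14:02 alice chat.openai.com'.
    """
    hits = []
    for line in log_lines:
        parts = line.split()
        if len(parts) != 3:
            continue  # skip malformed entries
        _, user, domain = parts
        if domain in KNOWN_AI_DOMAINS and domain not in APPROVED_AI_DOMAINS:
            hits.append((user, domain))
    return hits

logs = [
    "2025-06-01T09:14:02 alice chat.openai.com",
    "2025-06-01T09:15:10 bob copilot.cloud.microsoft",
    "2025-06-01T09:16:44 carol claude.ai",
]
print(flag_shadow_ai(logs))  # [('alice', 'chat.openai.com'), ('carol', 'claude.ai')]
```

A real deployment would pull domain intelligence from a maintained feed rather than a static set, but the core allowlist-versus-observed comparison is the same.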

In addition to shadow AI, adversarial threats loom large. Cybercriminals increasingly utilize AI capabilities for malicious purposes, such as deepfake-based phishing attacks targeting executives and adversarial machine learning techniques aimed at undermining AI defenses. The potential for threat actors to weaponize large language models (LLMs) for malware creation and automated attacks has raised alarms in the cybersecurity community. Furthermore, model extraction attacks, where sensitive data is reverse-engineered from deployed AI models, heighten the stakes for organizations that fail to secure their AI frameworks.

As organizations integrate AI into their operations, risks associated with AI development and supply chains are also becoming apparent. The use of insecure open-source AI models can introduce vulnerabilities, while inadequate training hygiene may lead to biased or toxic datasets being used. Additionally, a lack of secure machine learning operations (MLOps) pipelines can facilitate tampering with training data. The reliance on third-party AI APIs further complicates matters, as these services often come with unclear service level agreements and data retention policies.

Legal, compliance, and ethical issues present additional hurdles. Violations of data privacy laws, such as GDPR and HIPAA, can occur through improper data usage, while the lack of model explainability and auditability complicates adherence to AI regulations. Organizations may find themselves liable for negligence if AI-assisted decisions cause harm, indicating the necessity of robust governance frameworks.

Despite these challenges, many CISOs misjudge the safety of third-party AI tools, assuming that vendor contracts will shield them from liability. Research indicates that 56% of organizations using such tools experienced sensitive data exposures, yet only 23% have integrated AI-specific evaluations into their risk assessments. Naveen Balakrishnan, managing director at TD Securities, reported that 70% of AI-driven cyberattacks entering his organization come from third-party vendors. This underscores the importance of rigorous evaluation processes for any AI tools being employed.

Moreover, the rise of shadow AI suggests that organizations have less control over employees’ AI usage than they might believe. With many employees utilizing AI-enabled applications beyond official channels, risk management must become a priority. Developing in-house models, although providing greater control, does not eliminate risks such as hallucinations, data leaks, or unintended discriminatory outputs.

To effectively mitigate these risks, CISOs should develop comprehensive AI governance and risk management programs aligned with industry best practices, such as the NIST AI Risk Management Framework and ISO 42001. Key elements of such programs include detecting shadow AI usage, incorporating AI reviews into third-party risk management, and establishing acceptable use policies. Training programs should also be deployed organization-wide to raise awareness about AI security risks.
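One way to operationalize AI-specific third-party reviews is a weighted checklist that gates vendor approval. The criteria, weights, and threshold below are illustrative assumptions for the sake of the sketch; they are not taken from the NIST AI RMF or ISO 42001 texts.

```python
# Minimal sketch of an AI-specific third-party review step: score a vendor
# against a small checklist before approval. Criteria, weights, and the
# approval threshold are hypothetical examples.

CHECKLIST = {
    "data_retention_policy_documented": 3,
    "no_training_on_customer_data": 3,
    "model_provenance_disclosed": 2,
    "security_certification": 2,   # e.g. SOC 2 or ISO 27001 attestation
    "incident_response_sla": 1,
}

def score_vendor(answers, approve_threshold=8):
    """answers maps criterion -> bool; returns (score, approved)."""
    score = sum(w for crit, w in CHECKLIST.items() if answers.get(crit))
    return score, score >= approve_threshold

answers = {
    "data_retention_policy_documented": True,
    "no_training_on_customer_data": True,
    "model_provenance_disclosed": False,
    "security_certification": True,
    "incident_response_sla": True,
}
print(score_vendor(answers))  # (9, True)
```

The point of the weighting is that data-handling criteria dominate: a vendor that trains on customer data or cannot document retention should fail regardless of its other answers.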

As businesses navigate the evolving cybersecurity landscape, the challenges presented by AI are likely to intensify. Organizations must take proactive steps to secure their AI usage while fostering an environment that enhances overall cybersecurity. Engaging with cybersecurity experts can provide invaluable insights and strategies to turn potential threats into opportunities for growth and resilience in a technology-driven world.

Written by Marcus Chen

At AIPressa, my work focuses on analyzing how artificial intelligence is redefining business strategies and traditional business models. I've covered everything from AI adoption in Fortune 500 companies to disruptive startups that are changing the rules of the game. My approach: understanding the real impact of AI on profitability, operational efficiency, and competitive advantage, beyond corporate hype. When I'm not writing about digital transformation, I'm probably analyzing financial reports or studying AI implementation cases that truly moved the needle in business.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.