The rapid adoption of artificial intelligence (AI) tools in the workplace is reshaping the cybersecurity landscape and posing new challenges for organizations. According to the 2025 Cyberhaven AI Adoption Risk Report, workplace AI usage surged 61-fold between 2023 and 2025. As Chief Information Security Officers (CISOs) grapple with this transformation, they face the urgent task of updating security protocols to mitigate emerging AI-related threats. Organizations that fail to adapt may be left exposed to the risks of sensitive data processing and ungoverned AI usage.
AI’s rise introduces a range of security risks that CISOs must prioritize as they plan for 2026. A notable concern is shadow AI, in which employees use AI tools that have not been authorized. The 2025 State of Shadow AI report found that 81% of employees use unapproved AI tools at work, and that the trend correlates with employees’ familiarity with internal protocols: as knowledge of company rules grows, so does the willingness to bypass them. With over 40% of SaaS applications now AI-enabled, organizations may be unwittingly exposing themselves to significant risk, especially when those tools are unvetted or escape oversight for compliance with internal policies.
In addition to shadow AI, adversarial threats loom large. Cybercriminals increasingly use AI for malicious purposes, from deepfake-based phishing attacks targeting executives to adversarial machine learning techniques aimed at undermining AI defenses. The potential for threat actors to weaponize large language models (LLMs) for malware creation and automated attacks has raised alarms in the cybersecurity community. Model extraction and inversion attacks, in which deployed models are cloned or their sensitive training data reverse-engineered, further raise the stakes for organizations that fail to secure their AI frameworks.
As organizations integrate AI into their operations, risks in AI development and supply chains are also becoming apparent. Insecure open-source AI models can introduce vulnerabilities, inadequate training hygiene can let biased or toxic datasets slip into production, and the absence of secure machine learning operations (MLOps) pipelines can allow training data to be tampered with. Reliance on third-party AI APIs complicates matters further, as these services often come with unclear service level agreements and data retention policies.
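One practical control against this kind of supply-chain tampering is to verify model artifacts before loading them. The sketch below assumes the publisher distributes a SHA-256 digest alongside the model file; the path and digest here are hypothetical placeholders, and in practice the expected digest would come from a signed release manifest.

```python
import hashlib
import sys
from pathlib import Path

# Hypothetical placeholders: a real deployment would fetch the expected
# digest from the publisher's signed release manifest, not hard-code it.
MODEL_PATH = Path("models/sentiment-classifier.bin")
EXPECTED_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so multi-gigabyte models fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if not MODEL_PATH.exists():
    sys.exit(f"model artifact not found: {MODEL_PATH}")

actual = sha256_of(MODEL_PATH)
if actual != EXPECTED_SHA256:
    # A mismatch means the bytes differ from what the publisher released:
    # refuse to hand the file to the serving or training pipeline.
    sys.exit(f"integrity check failed for {MODEL_PATH}: got {actual}")

print(f"{MODEL_PATH} verified against published digest")
```

The same check slots naturally into an MLOps pipeline as a gate between the artifact download step and any step that loads or fine-tunes the model.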
Legal, compliance, and ethical issues present additional hurdles. Improper data handling can violate privacy laws such as GDPR and HIPAA, while poor model explainability and auditability complicates adherence to emerging AI regulations. Organizations may also be liable for negligence if AI-assisted decisions cause harm, underscoring the need for robust governance frameworks.
Compounding these challenges, many CISOs misjudge the safety of third-party AI tools, assuming that vendor contracts will shield them from liability. Research indicates that 56% of organizations using such tools have experienced sensitive data exposures, yet only 23% have integrated AI-specific evaluations into their risk assessments. Naveen Balakrishnan, managing director at TD Securities, reported that 70% of AI-driven cyberattacks entering his organization arrive through third-party vendors, underscoring the importance of rigorous evaluation of any AI tool before it is adopted.
Moreover, the rise of shadow AI suggests that organizations have less control over employees’ AI usage than they believe. With many employees using AI-enabled applications outside official channels, managing that exposure must become a priority. Developing models in-house offers greater control but does not eliminate risks such as hallucinations, data leaks, or unintended discriminatory outputs.
To mitigate these risks, CISOs should build comprehensive AI governance and risk management programs aligned with industry best practices such as the NIST AI Risk Management Framework and ISO/IEC 42001. Key elements include detecting shadow AI usage, incorporating AI reviews into third-party risk management, and establishing acceptable use policies, backed by organization-wide training to raise awareness of AI security risks.
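As a minimal sketch of the shadow-AI detection element, the snippet below assumes egress proxy logs in a simple `user domain` format and uses a deliberately incomplete, illustrative list of AI-service domains; a real deployment would draw both from maintained threat-intelligence catalogs and the organization's own approved-tools list.

```python
from collections import Counter

# Illustrative, incomplete domain lists; both would be maintained
# externally in practice. The approved domain below is hypothetical.
KNOWN_AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}
APPROVED_AI_DOMAINS = {"copilot.company-approved.example"}

def flag_shadow_ai(log_lines):
    """Count per-user requests to AI domains not on the approved list.

    Assumes each log line is 'user domain', the format of the
    (hypothetical) egress proxy export used here.
    """
    hits = Counter()
    for line in log_lines:
        try:
            user, domain = line.split()
        except ValueError:
            continue  # skip malformed lines instead of failing the scan
        if domain in KNOWN_AI_DOMAINS and domain not in APPROVED_AI_DOMAINS:
            hits[(user, domain)] += 1
    return hits

sample = [
    "alice chat.openai.com",
    "alice chat.openai.com",
    "bob copilot.company-approved.example",
    "carol claude.ai",
]
for (user, domain), count in flag_shadow_ai(sample).most_common():
    print(f"{user} -> {domain}: {count} request(s)")
```

On the sample input this flags alice and carol but not bob, whose traffic goes to the approved tool; in production the output would feed a review queue rather than trigger automatic blocking.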
As businesses navigate the evolving cybersecurity landscape, the challenges AI presents are likely to intensify. Organizations must take proactive steps to secure their AI usage while building a culture that strengthens overall cybersecurity. Engaging with cybersecurity experts can provide valuable strategies for turning potential threats into opportunities for growth and resilience in a technology-driven world.
See also
Bank of America Warns of Wage Concerns Amid AI Spending Surge
OpenAI Restructures Amid Record Losses, Eyes 2030 Vision
Global Spending on AI Data Centers Surpasses Oil Investments in 2025
Rigetti CEO Signals Caution with $11 Million Stock Sale Amid Quantum Surge
Investors Must Adapt to New Multipolar World Dynamics