
AI Threat Intelligence Reveals 66% of AI Firms Face Security Risks From Exposed Data

Wiz Research finds 66% of AI firms face security risks due to exposed sensitive data, highlighting urgent vulnerabilities in cloud-native AI infrastructures.

In a rapidly evolving threat landscape, organizations are increasingly recognizing the need for dedicated AI threat intelligence to safeguard their AI systems. This practice focuses on tracking, understanding, and operationalizing intelligence about threats aimed specifically at AI technologies, including models, data pipelines, and the cloud infrastructure that supports them. Unlike traditional threat detection, which identifies suspicious activity as it happens, AI threat intelligence emphasizes understanding the patterns, techniques, and trends that characterize these threats over time.

The importance of dedicated AI threat intelligence arises from the unique characteristics of AI systems, which introduce new assets and trust assumptions that are not present in traditional applications. Components such as models, training data, and inference endpoints create distinct attack surfaces. They are often automated, rely on non-human identities, and handle sensitive data in ways that make conventional threat intelligence sources inadequate. As a result, organizations must develop a comprehensive understanding of how attacks on AI systems manifest.

Recent research from Wiz and Gatepoint Research surveyed 100 cloud architects, engineers, and security leaders to shed light on the current state of AI security. The findings indicate that security failures related to AI often stem from familiar cloud security vulnerabilities, rather than purely novel attack methods. This underscores the necessity for organizations to adapt their threat intelligence strategies to accommodate the unique risks associated with AI.

Understanding the AI Threat Landscape

AI systems significantly alter how risks manifest within cloud environments. High-value components such as training pipelines and inference endpoints become long-lived targets for attackers. Many AI-related failures result not from isolated exploits but from a combination of vulnerabilities, including exposed services and insecure data access. This interconnectedness necessitates a nuanced approach to threat intelligence, as it is not enough to simply identify individual risks without understanding how they relate to one another.

The gap in threat intelligence is particularly evident when organizations rely on generic indicators focused primarily on malware or phishing threats. As AI is deployed more widely, the limitations of traditional feeds become increasingly apparent. Dedicated AI threat intelligence aims to fill this gap by examining how AI systems can be exposed, abused, and targeted in cloud environments, thereby enabling security teams to develop a more coherent understanding of real-world attack vectors.

Wiz Research emphasizes that effective AI threat intelligence should reflect the actual behavior of attackers who operate in production environments. Their analysis of cloud-native AI infrastructure has identified significant issues, such as the frequent exposure of sensitive AI data and widespread misconfigurations. For instance, a recent examination of the Forbes AI 50 companies revealed that nearly two-thirds had a verified leak of sensitive secrets, typically related to AI services.

Other findings include vulnerabilities in AI runtimes and inference infrastructures, which often mirror traditional cloud security risks but carry amplified consequences due to shared infrastructure. A notable example is CVE-2025-23266, a critical container-escape vulnerability in the NVIDIA Container Toolkit that allowed workloads to gain unauthorized access to the host systems underpinning AI services. Such findings reinforce the need for AI threat intelligence to map vulnerabilities in AI infrastructure, focusing not just on models but on the supporting components as well.

AI systems also heavily depend on automation and non-human identities, leading to significant risks associated with leaked credentials. Wiz Research has documented numerous cases where exposed API tokens or service accounts permitted attackers to manipulate AI workflows, thereby gaining access to sensitive data and model functionalities. This trend highlights the urgency for organizations to adopt a proactive stance in managing non-human identities and their associated permissions.
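As an illustration of the exposure class described above, the following sketch scans a directory tree for strings that look like common API token formats. The patterns, names, and paths are hypothetical examples for illustration only, not Wiz's actual detection logic; real scanners rely on far larger, vendor-maintained pattern sets and entropy checks.

```python
import re
from pathlib import Path

# Hypothetical token patterns; illustrative only.
TOKEN_PATTERNS = {
    "hf_token": re.compile(r"\bhf_[A-Za-z0-9]{30,}\b"),     # Hugging Face-style
    "openai_key": re.compile(r"\bsk-[A-Za-z0-9_-]{20,}\b"), # OpenAI-style
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),  # AWS access key ID
}

def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_token) pairs found in a blob of text."""
    findings = []
    for name, pattern in TOKEN_PATTERNS.items():
        for match in pattern.findall(text):
            findings.append((name, match))
    return findings

def scan_tree(root: str) -> list[tuple[str, str, str]]:
    """Walk a directory and report (file, pattern_name, token) findings."""
    results = []
    for path in Path(root).rglob("*"):
        if path.is_file():
            try:
                text = path.read_text(errors="ignore")
            except OSError:
                continue
            for name, token in scan_text(text):
                results.append((str(path), name, token))
    return results
```

Running such a scan over configuration files, notebooks, and CI logs before they are published is one inexpensive way to catch the kind of leaked non-human credentials described above.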

As AI environments grow more complex, supply chain risks become increasingly important to address. Attackers can exploit third-party dependencies to propagate malicious changes, significantly increasing the risk of compromise. The s1ngularity supply chain attack demonstrated how a compromised npm token could be used to publish malicious versions of a widely used package, which in turn abused locally installed AI CLI tools to accelerate reconnaissance against sensitive systems.
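One common mitigation for this class of supply-chain risk is verifying that every dependency in a lockfile is pinned with an integrity hash before installation. The sketch below assumes npm's `package-lock.json` v3 layout; the lockfile contents and package names are fabricated for illustration.

```python
import json

# Fabricated lockfile fragment in npm package-lock v3 shape (illustrative data).
lockfile = json.loads("""
{
  "packages": {
    "node_modules/left-pad": {"version": "1.3.0", "integrity": "sha512-abc"},
    "node_modules/some-ai-cli": {"version": "2.0.1"}
  }
}
""")

def unpinned_packages(lock: dict) -> list[str]:
    """Return package paths that lack an integrity hash."""
    return [
        name
        for name, meta in lock.get("packages", {}).items()
        if name and "integrity" not in meta  # skip the root "" entry if present
    ]

print(unpinned_packages(lockfile))  # packages to investigate before install
```

A check like this in CI will not stop a fully poisoned upstream release, but it does block silent substitution of a dependency whose recorded hash no longer matches.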

In light of these vulnerabilities and emerging threats, organizations must re-evaluate their approaches to AI threat intelligence. The need for precise, actionable insights cannot be overstated, as security teams must move beyond theoretical risks to focus on practical, operational contexts. By correlating AI-specific risks with cloud resources and data paths, organizations can prioritize remediation efforts based on realistic attack scenarios.

Wiz has operationalized AI threat intelligence by creating a continuously updated Security Graph that maps cloud resources, identities, and permissions. This model incorporates insights from real-world attacker behavior, allowing security teams to focus on the conditions that enable threats rather than only monitoring for isolated indicators. Such an approach shifts the focus from passive awareness to actionable intelligence, crucial for effectively managing the unique risks associated with AI systems.
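The graph-based approach can be illustrated with a minimal sketch (this is not Wiz's actual Security Graph model): cloud resources and identities become nodes, exposure and permissions become edges, and an attack path is any route from an internet-exposed resource to a sensitive asset. All node names below are hypothetical.

```python
from collections import deque

# Toy environment graph: node -> reachable nodes (edges model exposure or permission).
graph = {
    "internet": {"inference-endpoint"},
    "inference-endpoint": {"service-account"},           # endpoint runs as this identity
    "service-account": {"model-bucket", "training-db"},  # identity's permissions
    "model-bucket": set(),
    "training-db": set(),
}

def attack_paths(graph, source, targets):
    """BFS from `source`, returning the first path found to each target node."""
    paths = {}
    queue = deque([[source]])
    seen = {source}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node in targets:
            paths[node] = path
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return paths

paths = attack_paths(graph, "internet", {"training-db", "model-bucket"})
for target, path in paths.items():
    print(" -> ".join(path))
```

The point of the graph view is exactly what the paragraph above describes: an exposed endpoint, an over-permissioned service account, and a sensitive bucket are each tolerable in isolation, but the path connecting them is what an attacker actually uses, so that path is what gets prioritized.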

As the integration of AI in organizational processes continues to expand, the necessity for robust and tailored threat intelligence becomes increasingly evident. By understanding how AI systems interact with their environments and the vulnerabilities they present, organizations can fortify their defenses against potential threats and ensure the secure deployment of their AI technologies.

Written By Rachel Torres

At AIPressa, my work focuses on exploring the paradox of AI in cybersecurity: it's both our best defense and our greatest threat. I've closely followed how AI systems detect vulnerabilities in milliseconds while attackers simultaneously use them to create increasingly sophisticated malware. My approach: explaining technical complexities in an accessible way without losing the urgency of the topic. When I'm not researching the latest AI-driven threats, I'm probably testing security tools or reading about the next attack vector keeping CISOs awake at night.

