As enterprises rapidly adopted artificial intelligence (AI) across their operations, a significant oversight came to light in 2025: an inability to track the data underpinning these systems. That gap has been linked to rising audit failures, security breaches, and mounting regulatory scrutiny.
A recent study by Bedrock Security reveals that a majority of IT and security leaders still lack adequate visibility into the datasets used for training and inference in AI systems. Bruno Kurtic, co-founder and CEO of Bedrock Security, emphasized that controls must be in place before AI models become operational. “You can’t govern retroactively,” he stated. “You need controls before the model runs, not after.”
The urgency stems from a flurry of AI initiatives that began in 2023 and accelerated into 2025. “Companies moved fast, with AI projects burgeoning across every business unit,” Kurtic explained. By mid-2025, however, many organizations faced a critical question: “What data is actually feeding these systems?” A surprising number had no clear answer: models were already in production, drawing data from both cloud and on-premises sources with no documented record of where it originated or how it moved.
This lack of clarity quickly turned theoretical risks into concrete challenges. For instance, a biotech company realized too late that confidential personal data had been included in a training dataset, resulting in permanent exposure. Kurtic warned that without proper governance, companies risk accountability failures when regulators inquire about data sources used in their AI systems.
Amid this evolving landscape, the concept of a Data Bill of Materials (DBOM) has emerged as a vital tool for organizations. Kurtic described the DBOM as akin to an ingredient label for AI models, detailing what data was used for training, how it was classified, and its processing methods. As businesses transition from experimental AI to production-level implementations, questions regarding data access—especially concerning personally identifiable information (PII)—are becoming increasingly urgent.
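To make the idea concrete, below is a minimal sketch of what a DBOM record could capture. The field names and structure are illustrative assumptions for this article, not a published standard or Bedrock Security’s format.

```python
from dataclasses import dataclass, field

@dataclass
class DbomEntry:
    """One 'ingredient' in a hypothetical Data Bill of Materials for an AI model."""
    dataset_name: str                # e.g. "support_tickets_2024"
    source_system: str               # where the data originated: warehouse, SaaS app, bucket
    classification: str              # sensitivity label, e.g. "public", "internal", "restricted-pii"
    contains_pii: bool               # whether PII was detected during classification
    collected_on: str                # ISO date the snapshot was taken
    processing_steps: list[str] = field(default_factory=list)  # e.g. ["dedup", "tokenize"]

@dataclass
class Dbom:
    """The full 'ingredient label' attached to one model version."""
    model_name: str
    model_version: str
    entries: list[DbomEntry] = field(default_factory=list)

    def pii_datasets(self) -> list[str]:
        """Answer the auditor's question: which training datasets contained PII?"""
        return [e.dataset_name for e in self.entries if e.contains_pii]
```

With a record like this attached to each model version, “what data fed this system?” becomes a lookup rather than an investigation.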
“Without a DBOM, these queries are challenging to address,” Kurtic noted, adding that regulatory pressure is driving the change. Companies are recognizing that they cannot govern what they cannot see. Pitfalls persist, however, with many organizations treating governance as a checkbox exercise rather than a comprehensive, ongoing process.
Another prevalent issue is an overreliance on various security tools that often lack the capability to contextualize data sensitivity effectively. Kurtic pointed out that while traditional security information and event management (SIEM) and data loss prevention (DLP) tools can generate alerts, they often fail to provide the necessary context, leading to alert fatigue among security teams.
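One illustration of the missing context: a hypothetical enrichment step could attach sensitivity labels from a data-classification catalog to raw alerts, so only high-risk events reach analysts. The catalog, asset paths, and alert fields below are assumptions made for the sketch, not any particular product’s API.

```python
# Illustrative sketch: attach sensitivity labels from an assumed classification
# catalog to raw DLP/SIEM alerts, so only high-risk events reach analysts.

SENSITIVITY_CATALOG = {
    "s3://corp-data/hr/payroll.csv": "restricted-pii",
    "s3://corp-data/marketing/blog_drafts/report.txt": "public",
}

def enrich_alert(alert: dict) -> dict:
    """Add a sensitivity label to an alert; unknown assets default to 'unknown'."""
    label = SENSITIVITY_CATALOG.get(alert["asset"], "unknown")
    return {**alert, "sensitivity": label}

def should_escalate(alert: dict) -> bool:
    """Only page a human for restricted or unclassified data."""
    return alert["sensitivity"] in ("restricted-pii", "unknown")

raw = {"asset": "s3://corp-data/marketing/blog_drafts/report.txt", "event": "external_share"}
print(should_escalate(enrich_alert(raw)))  # False: public draft, no analyst paged
```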
The ingestion of sensitive data into AI workflows also remains a significant blind spot. Increased speed in development can result in sensitive information slipping into production unnoticed, creating a scenario where “shadow AI” operates outside of sanctioned oversight.
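One way to narrow that blind spot is to screen records before they ever reach a training or retrieval pipeline. The sketch below assumes a crude regex-based scan purely for illustration; a real deployment would rely on a proper classification service.

```python
import re

# Crude illustrative patterns; a production classifier would be far more thorough.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def flag_pii(record: str) -> list[str]:
    """Return the kinds of PII detected in a text record, if any."""
    return [kind for kind, pattern in PII_PATTERNS.items() if pattern.search(record)]

def gate_for_training(records: list[str]) -> list[str]:
    """Keep only records with no detected PII; hold the rest back for review."""
    clean = []
    for record in records:
        hits = flag_pii(record)
        if hits:
            print(f"held back record containing {hits}")  # in practice: route to a review queue
        else:
            clean.append(record)
    return clean

print(gate_for_training(["Ticket resolved, no issues.", "Contact me at jane@example.com"]))
```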
As regulatory frameworks tighten, especially in the U.S., organizations are increasingly unprepared for scrutiny. The Securities and Exchange Commission (SEC) has raised its expectations, requiring companies to demonstrate not only that they use AI but also which data those systems draw on and how it influenced decision-making. Kurtic noted, “Most of today’s infrastructure isn’t built for that,” indicating a widening gap in compliance capabilities and a growing regulatory risk.
With AI agents functioning autonomously across various environments, the stakes have never been higher. Kurtic explained that while human operators work at a measured pace, AI agents can execute hundreds of queries per minute across multiple platforms without oversight. This raises concerns not only about the speed of operations but also about the nature of the data these agents generate, which may inadvertently include inaccuracies or “hallucinations.” If such outputs find their way into official reports or operational systems, the potential for harm escalates.
To navigate this complex landscape, Kurtic advocates for a shift in focus towards operational governance at the data layer. “Build systems that provide real-time visibility into where data lives, how it flows, and which agents access it,” he suggested. By establishing this foundational framework, organizations can better control their data pathways and align AI behaviors with existing policies, ultimately paving the way for responsible AI scaling.
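A minimal sketch of what that data-layer visibility could look like follows, assuming a simple in-process audit ledger and hypothetical agent and asset identifiers: every read by an AI agent is recorded with the asset, its classification, and a timestamp, so “which agents accessed which data?” can be answered on demand.

```python
import time
from collections import defaultdict

class DataAccessLedger:
    """Records every agent read so data flows can be reconstructed on demand."""

    def __init__(self) -> None:
        self._events: list[dict] = []

    def record(self, agent_id: str, asset: str, classification: str) -> None:
        """Log one access event: which agent touched which asset, and when."""
        self._events.append({
            "agent": agent_id,
            "asset": asset,
            "classification": classification,
            "ts": time.time(),
        })

    def accesses_by_agent(self) -> dict[str, list[str]]:
        """Summarize which assets each agent has read."""
        summary: dict[str, list[str]] = defaultdict(list)
        for event in self._events:
            summary[event["agent"]].append(event["asset"])
        return dict(summary)

ledger = DataAccessLedger()
ledger.record("report-agent-7", "warehouse.sales.q3", "internal")
ledger.record("report-agent-7", "crm.contacts", "restricted-pii")
print(ledger.accesses_by_agent())
```

In practice such a ledger would sit in the data platform rather than in application code, but the design point is the same: access events are captured as they happen, not reconstructed after the fact.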
As enterprises gear up for 2026, the imperative remains clear: effective governance must be embedded into the very fabric of AI operations. This proactive approach can facilitate accountability and ensure a more secure future for AI deployments across industries.