Data Bill of Materials Emerges as Essential for AI Governance Amid Regulatory Pressure

Enterprises face rising audit failures and regulatory scrutiny as 85% of IT leaders lack visibility into AI training data, prompting the urgent need for a Data Bill of Materials.

As enterprises rapidly adopted artificial intelligence (AI) across their operations, a significant oversight came to light in 2025: the inability to track the data underpinning these systems. That gap has been linked to rising audit failures, security breaches, and mounting regulatory scrutiny.

A recent study by Bedrock Security found that 85% of IT and security leaders lack adequate visibility into the datasets used for training and inference in AI systems. Bruno Kurtic, co-founder and CEO of Bedrock Security, emphasized the importance of establishing controls before AI models become operational. “You can’t govern retroactively,” he stated. “You need controls before the model runs, not after.”

The urgency stems from a flurry of AI initiatives that began in 2023 and accelerated into 2025. “Companies moved fast, with AI projects burgeoning across every business unit,” Kurtic explained. By mid-2025, however, many organizations faced a critical question: “What data is actually feeding these systems?” A striking number had no clear answer; their models were already in production, drawing data from both cloud and on-premises sources without any documented record of its origins or movements.

This lack of clarity quickly turned theoretical risks into concrete challenges. For instance, a biotech company realized too late that confidential personal data had been included in a training dataset, resulting in permanent exposure. Kurtic warned that without proper governance, companies risk accountability failures when regulators inquire about data sources used in their AI systems.

Amid this evolving landscape, the concept of a Data Bill of Materials (DBOM) has emerged as a vital tool for organizations. Kurtic described the DBOM as akin to an ingredient label for AI models, detailing what data was used for training, how it was classified, and its processing methods. As businesses transition from experimental AI to production-level implementations, questions regarding data access—especially concerning personally identifiable information (PII)—are becoming increasingly urgent.
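To make the concept concrete, a minimal DBOM entry might look something like the Python sketch below. The schema is a hypothetical illustration of the "ingredient label" idea, not a published standard or Bedrock Security's actual format; every field name here is an assumption.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class DBOMEntry:
    """One 'ingredient' in a hypothetical Data Bill of Materials:
    a dataset a model touched, how it was classified, and how it
    was processed. Field names are illustrative, not a standard."""
    dataset_id: str        # stable identifier for the dataset
    source: str            # origin system (cloud bucket, on-prem DB, ...)
    classification: str    # e.g. "public", "internal", "PII"
    used_for: str          # "training" or "inference"
    processing: list[str] = field(default_factory=list)  # transforms applied
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# A model's DBOM is simply the set of entries for every dataset it consumed.
dbom = [
    DBOMEntry(
        dataset_id="crm-exports-2024",
        source="s3://corp-data/crm/",
        classification="PII",
        used_for="training",
        processing=["deduplicated", "email addresses masked"],
    ),
]
print(json.dumps([asdict(e) for e in dbom], indent=2))
```

The point is less the format than the discipline: each entry is recorded at ingestion time, so provenance questions can be answered later without forensic reconstruction, which is exactly the "controls before the model runs" posture Kurtic describes.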

“Without a DBOM, these queries are challenging to address,” Kurtic noted, adding that regulatory pressures are driving this change. Companies are recognizing the necessity of governing what they cannot see. However, pitfalls in governance persist, with many treating it as a mere checkbox rather than a comprehensive, ongoing process.

Another prevalent issue is overreliance on security tools that cannot effectively contextualize data sensitivity. Kurtic pointed out that while traditional security information and event management (SIEM) and data loss prevention (DLP) tools can generate alerts, they often fail to provide the necessary context, leading to alert fatigue among security teams.

The ingestion of sensitive data into AI workflows also remains a significant blind spot. Increased speed in development can result in sensitive information slipping into production unnoticed, creating a scenario where “shadow AI” operates outside of sanctioned oversight.
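One way to narrow this blind spot is a gate that scans records for obviously sensitive patterns before they ever reach a training pipeline. The sketch below is a deliberately naive illustration, assuming regular-expression matching for email addresses and US Social Security numbers; a production system would use a far more thorough classification engine.

```python
import re

# Illustrative PII patterns only; real classifiers are much more thorough.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def pii_findings(record: str) -> list[str]:
    """Return the names of the PII patterns that match this record."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(record)]

def gate_batch(records: list[str]) -> list[str]:
    """Admit only records with no PII findings; flag the rest loudly so
    they can be reviewed instead of silently entering training data."""
    admitted = []
    for record in records:
        findings = pii_findings(record)
        if findings:
            print(f"REJECTED ({', '.join(findings)}): {record[:40]}...")
        else:
            admitted.append(record)
    return admitted

clean = gate_batch([
    "Order #1042 shipped on 2025-03-02",
    "Contact jane.doe@example.com about the refund",
])
```

A gate like this also produces a natural audit trail: every rejection is a documented decision rather than an invisible leak.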

As regulatory frameworks tighten, especially in the U.S., organizations are increasingly unprepared for scrutiny. The Securities and Exchange Commission (SEC) has raised its expectations, requiring companies to demonstrate not only their use of AI but also clarity on the data utilized and how it influenced decision-making. Kurtic noted, “Most of today’s infrastructure isn’t built for that,” indicating a widening gap in compliance capabilities and a growing regulatory risk.

With AI agents functioning autonomously across various environments, the stakes have never been higher. Kurtic explained that while human operators are constrained by their own pace, AI agents can execute hundreds of queries per minute across multiple platforms without oversight. This raises concerns not only about speed but also about the nature of the data these agents generate, which may inadvertently include inaccuracies or “hallucinations.” If such outputs find their way into official reports or operational systems, the potential for harm escalates.

To navigate this complex landscape, Kurtic advocates for a shift in focus towards operational governance at the data layer. “Build systems that provide real-time visibility into where data lives, how it flows, and which agents access it,” he suggested. By establishing this foundational framework, organizations can better control their data pathways and align AI behaviors with existing policies, ultimately paving the way for responsible AI scaling.
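As a sketch of what such data-layer governance could look like in practice: a single chokepoint that checks an agent's policy before granting dataset access and logs every attempt for auditors. The policy table and function names below are hypothetical illustrations, not a description of any vendor's product.

```python
from datetime import datetime, timezone

# Hypothetical per-agent policy: which data classifications each may read.
AGENT_POLICY = {
    "report-writer": {"public", "internal"},
    "hr-assistant": {"public", "internal", "PII"},
}

ACCESS_LOG: list[dict] = []

def access_dataset(agent: str, dataset_id: str, classification: str) -> bool:
    """Check the agent's policy before granting access, and record the
    attempt either way so auditors can reconstruct who touched what."""
    allowed = classification in AGENT_POLICY.get(agent, set())
    ACCESS_LOG.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "dataset": dataset_id,
        "classification": classification,
        "allowed": allowed,
    })
    return allowed

if access_dataset("report-writer", "crm-exports-2024", "PII"):
    print("access granted")
else:
    print("access denied and logged for audit")
```

Because every access attempt, allowed or denied, lands in one log, the same mechanism that enforces policy also provides the real-time visibility into agent behavior that Kurtic calls for.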

As enterprises gear up for 2026, the imperative remains clear: effective governance must be embedded into the very fabric of AI operations. This proactive approach can facilitate accountability and ensure a more secure future for AI deployments across industries.

Written By Rachel Torres

At AIPressa, my work focuses on exploring the paradox of AI in cybersecurity: it's both our best defense and our greatest threat. I've closely followed how AI systems detect vulnerabilities in milliseconds while attackers simultaneously use them to create increasingly sophisticated malware. My approach: explaining technical complexities in an accessible way without losing the urgency of the topic. When I'm not researching the latest AI-driven threats, I'm probably testing security tools or reading about the next attack vector keeping CISOs awake at night.

