Allegations against AI compliance startup Delve are raising urgent questions about how enterprises vet vendors in the race to adopt automation. As scrutiny grows, the controversy underscores a broader issue: many AI tools marketed as “enterprise-ready” may lack the safeguards, validation, and transparency buyers assume are in place.
On March 18, an anonymous Substack account named Deepdelver published an article alleging that Delve had misrepresented its security measures, fabricated evidence of tests and processes, and misled hundreds of clients into believing their companies were compliant. The company has since halted all product demonstrations while conducting internal investigations, leaving users scrambling to determine whether they are less compliant than they assumed.
Delve’s core proposition is that its agentic AI platform can replace much of the manual audit and compliance workload. Instead of human teams spending weeks gathering evidence, screenshots, logs, and policies, these agents plug into systems such as AWS, GitHub, and Slack to collect evidence automatically. According to the allegations, however, the agentic AI was a front for cheap certification mills, pre-populated templates, and evidence fabricated by humans.
Delve is not some two-bit operator either. It reportedly has over 1,000 clients in 50 countries, including AI startups Lovable, Bland, and Wispr Flow. In 2024, it was part of the Y Combinator Winter batch and was valued at $300 million by established investor Insight Partners in July 2025. Following the news, Insight Partners scrubbed a public post referencing its investment in the startup.
The controversy highlights that in this new age of AI, organizations need to be hyper-aware of the “move fast, break things” culture that swept social media, crypto, and now AI. A growing number of startups are selling AI software described as “compliance-ready” and “enterprise-grade” long before they have undergone the audits, certifications, or real-world testing those labels imply. For buyers, compliance is often the reason a product is purchased in the first place. When a vendor overstates its capabilities, companies are not only buying a faulty product but also outsourcing risk to a provider that may not be equipped to manage it.
Procurement teams need to return to deep technical validation rather than relying on sales decks and high-level assurances. The flaws in Delve were not difficult to identify; reportedly, a publicly accessible Google Sheet exposed the company’s mishandling of hundreds of client audit reports. For channel partners, this is an opportunity to remind clients of the value they provide throughout the product lifecycle, including in the procurement process.
For those tempted by the promise of skipping the queue on audit and compliance, there needs to be far greater oversight and transparency around the underlying processes and software. A polished demo is not enough. Buyers need visibility into what an agent has done, why it did so, and what safeguards are in place to prevent critical errors, along with the ability to shut systems down if they go rogue.
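Those requirements, an action log, a recorded rationale, and a working shutdown path, can be sketched in a few lines. This is a hypothetical illustration only; the class names and fields below are invented for the example and do not reflect any vendor's actual API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditEntry:
    action: str     # what the agent did
    rationale: str  # why it did it
    timestamp: str  # when it happened (UTC)

@dataclass
class AuditedAgent:
    """Illustrative wrapper: every action is logged and gated by a kill switch."""
    halted: bool = False
    log: list = field(default_factory=list)

    def act(self, action: str, rationale: str) -> bool:
        if self.halted:  # safeguard: refuse to act once shut down
            return False
        self.log.append(AuditEntry(
            action, rationale, datetime.now(timezone.utc).isoformat()))
        return True

    def shutdown(self) -> None:
        self.halted = True

agent = AuditedAgent()
agent.act("collect_aws_evidence", "quarterly SOC 2 evidence gathering")
agent.shutdown()
blocked = agent.act("upload_report", "attempt after shutdown")  # returns False
```

The point of the sketch is that auditability and the kill switch sit in the same layer the agent must pass through, so there is no code path where an action happens without a logged reason, which is precisely the visibility buyers should be demanding before trusting an agent with compliance evidence.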
Most importantly, compliance claims must be treated as high-risk assertions. Certifications need to be independently verified, as do regulatory coverage and GDPR compliance. These processes cannot be fully automated, and organizations should never rely solely on screenshots or correspondence as proof. The Delve scandal may push the industry toward a more disciplined approach to vendor vetting, with organizations running pilots, then limited exposure trials, followed by detailed audits before full deployment.
However, given the pace of AI development, it seems unlikely that Delve will be the last scandal of this kind, as the incentives for startups to overstate their capabilities continue to grow.