Meta has halted its collaboration with Mercor, a $10 billion AI data startup, following a significant supply chain attack that exposed sensitive information, including potential training methodologies for prominent large language models. The breach, executed via a compromised version of the open-source LiteLLM library, has triggered investigations at leading AI firms such as OpenAI and Anthropic, and has prompted a class action lawsuit on behalf of more than 40,000 individuals.
In a sophisticated operation last month, hackers not only harvested personal data but reportedly accessed critical blueprints on how some of today’s leading AI models are constructed. According to reports from Wired, the repercussions of this breach have sent shockwaves throughout the AI industry, which has invested billions to safeguard its proprietary methods.
Mercor, based in San Francisco and founded in 2023 by Brendan Foody, Adarsh Hiremath, and Surya Midha, specializes in curating bespoke training datasets for major AI organizations, including Meta, OpenAI, and Google. Following the cyberattack, the company announced an indefinite suspension of its work with Meta, a decision that reflects the gravity of the situation and the concern it has raised among the industry players who rely on Mercor’s services.
Mercor’s meteoric rise has been extraordinary, even by Silicon Valley standards. The company achieved a valuation of $10 billion after closing a $350 million Series C funding round in October 2025, making its founders the youngest self-made billionaires globally at just 22 years old. By September 2025, Mercor reported annualized revenues of $500 million, a significant leap from $100 million six months prior, thanks largely to its model of providing training data that AI labs depend on yet seldom publicly discuss. However, this very positioning has now exposed its vulnerabilities.
Technical Details
The breach originated upstream, in the CI/CD pipeline of LiteLLM, a widely used open-source Python library with roughly 97 million monthly downloads. The threat actor group TeamPCP gained access to that pipeline by first exploiting a supply chain vulnerability in another tool, Trivy. On March 27, 2026, TeamPCP published two malicious versions of LiteLLM directly to PyPI, the Python package index, where they remained available for approximately 40 minutes before being detected and removed.
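Because the malicious releases were live for only about 40 minutes, the immediate question for any affected team is whether one of them was installed. A minimal defensive sketch, assuming only the two version numbers cited above, might check the locally installed copy of litellm against the known-bad releases:

```python
# Hypothetical triage check: flag an installed litellm whose version matches
# one of the two releases reported as malicious. The version numbers come
# from public reporting on the incident; adapt the set for your environment.
from importlib import metadata

MALICIOUS_VERSIONS = {"1.82.7", "1.82.8"}

def check_litellm() -> str:
    """Return a one-line status string for the installed litellm, if any."""
    try:
        installed = metadata.version("litellm")
    except metadata.PackageNotFoundError:
        return "litellm is not installed in this environment"
    if installed in MALICIOUS_VERSIONS:
        return f"WARNING: litellm {installed} is a known-malicious release"
    return f"litellm {installed} is not on the known-bad list"

if __name__ == "__main__":
    print(check_litellm())
```

A version check like this is only a first pass; pinning dependencies with hashes (pip's `--require-hashes` mode) would have blocked the swapped packages outright.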
Both malicious versions carried sophisticated payloads: version 1.82.7 embedded base64-encoded malware that executed on import, while version 1.82.8 dropped a malicious path configuration (.pth) file, a mechanism Python processes at every interpreter startup, so the payload ran in every Python process on the machine. The payloads harvested sensitive information, including environment variables, API keys, and cloud credentials, and exfiltrated the data to a server at models.litellm[.]cloud.
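The .pth vector works because CPython's `site` module executes any line in a site-packages .pth file that begins with `import` when the interpreter starts. A defensive sketch under that assumption is to audit .pth files for executable lines so a human can review them; this flags the mechanism, not malware specifically:

```python
# Defensive audit sketch: list every line in site-packages .pth files that
# CPython's site module would execute at interpreter startup (lines starting
# with "import"). Findings need human review; most hits are legitimate
# (e.g. editable installs), but an unexpected entry warrants inspection.
import site
from pathlib import Path

def executable_pth_lines():
    findings = []
    directories = list(site.getsitepackages()) + [site.getusersitepackages()]
    for directory in directories:
        for pth in Path(directory).glob("*.pth"):
            text = pth.read_text(encoding="utf-8", errors="replace")
            for lineno, line in enumerate(text.splitlines(), 1):
                # site.py executes lines beginning with "import " or "import\t"
                if line.startswith(("import ", "import\t")):
                    findings.append((str(pth), lineno, line.strip()))
    return findings

if __name__ == "__main__":
    for path, lineno, line in executable_pth_lines():
        print(f"{path}:{lineno}: {line}")
```

Running this on a clean environment typically surfaces a handful of benign entries from packaging tools, which is why the output is meant for review rather than automated blocking.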
Mercor, which confirmed it was “one of thousands of companies” affected, found that the breach exposed around four terabytes of data. This cache reportedly includes 939 gigabytes of source code, a 211-gigabyte user database, and approximately three terabytes of video interview recordings and identity verification documents. Personal data belonging to over 40,000 current and former contractors and customers was likely compromised, including full names and Social Security numbers.
While the exposure of personal data is alarming, what particularly concerns Meta and other AI laboratories is the potential revelation of proprietary training methodologies. Because Mercor serves multiple AI companies simultaneously, the breach may have disclosed crucial details regarding data selection criteria and training strategies that represent significant competitive advantages. This has prompted firms such as OpenAI to investigate the incident, although OpenAI has not paused its ongoing projects with Mercor. Meanwhile, Anthropic has yet to comment publicly on its exposure, and Google is also assessing the situation.
The breach illustrates a structural risk facing the AI industry: when numerous competitors rely on the same third-party data supplier, a single incident can jeopardize the trade secrets of all involved.
In the aftermath of the incident, the threat group Lapsus$, known for high-profile attacks, has claimed responsibility for the Mercor breach and is reportedly auctioning the stolen data on dark web forums. Security researchers suspect that Lapsus$ is collaborating with TeamPCP, which has been linked to a broader campaign that has compromised over 1,000 enterprise SaaS environments, including a breach of the European Commission.
A class action lawsuit was filed against Mercor in the US District Court for the Northern District of California, alleging inadequate cybersecurity protections that left numerous individuals exposed to identity theft and fraud. The complaint emphasizes that the LiteLLM incident served as the entry point for the breach and criticizes Mercor’s reliance on a compromised open-source dependency without sufficient monitoring.
Meta’s silence on the incident has raised eyebrows, especially given its substantial investment in AI infrastructure. A recent $27 billion deal with Nebius Group and projected capital expenditures of up to $135 billion for the year highlight the strategic sensitivity of its AI training pipeline. The decision to pause a data vendor relationship signifies a calculated risk assessment where the potential loss of proprietary methodology outweighs the operational costs associated with halting ongoing projects.
The Mercor breach serves as a cautionary tale for the AI supply chain, revealing vulnerabilities in an interconnected framework reliant on third-party data suppliers and open-source tools. Security firms have long warned about the risks posed by open-source dependencies, and this incident underscores that the challenges may be even more acute for the AI industry. Moving forward, Mercor’s founders will face scrutiny over their ability to maintain momentum in a landscape now fraught with uncertainty regarding data security and proprietary methodologies.
See also
Colorado Enacts Landmark Law Protecting Defendants from Faulty Roadside Drug Tests
Microsoft Launches MAI-Image-2, Securing 3rd Place on ArenaAI Leaderboard
RWS Integrates Cohere’s LLMs into Language Weaver Pro, Enhancing Context-Aware Translation
Google Study Reveals AI Benchmarks Require Over 10 Raters for Reliable Evaluations