PROBABLYPWNED
Data Breaches · April 10, 2026 · 4 min read

Mercor Breach Exposes 4TB of AI Training Data After LiteLLM Attack

AI startup Mercor confirms breach via LiteLLM supply chain attack. Lapsus$ claims 4TB stolen including candidate data, source code, and API keys. Meta pauses contracts.

Sarah Mitchell

AI hiring startup Mercor has confirmed a data breach linked to a supply chain attack on the open-source LiteLLM project, with attackers claiming to have stolen 4TB of sensitive data, including candidate profiles, employer information, and source code. The company, valued at $10 billion, now faces multiple lawsuits and has reportedly lost major customers, including Meta.

The LiteLLM Supply Chain Attack

The breach traces back to late March 2026, when a threat group known as TeamPCP compromised credentials belonging to a LiteLLM maintainer. LiteLLM is an open-source library that lets applications call many different AI providers, including OpenAI and Anthropic, through a single interface, making it a common dependency in AI-enabled software.

On March 27, TeamPCP used the stolen credentials to publish two malicious versions of LiteLLM (1.82.7 and 1.82.8) directly to PyPI. The tainted packages were available for roughly 40 minutes before being identified and removed.

Forty minutes doesn't sound like much, but LiteLLM sees millions of downloads per day and is present in approximately 36% of cloud environments according to security researchers. Any organization that happened to install or update during that window potentially pulled malicious code into production systems.
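Teams worried they installed during that window can start triage with a simple version check. A minimal sketch in Python, assuming the two version numbers reported above are the only affected releases (verify against the project's own advisory before acting on the result):

```python
# Triage sketch: flag an environment whose installed LiteLLM version
# matches one of the releases reported as malicious. The version set
# comes from public reporting on this incident, not an official advisory.
from importlib import metadata

COMPROMISED_VERSIONS = {"1.82.7", "1.82.8"}

def is_compromised(version: str) -> bool:
    """True if the given version string matches a reported-bad release."""
    return version in COMPROMISED_VERSIONS

def installed_litellm_suspect() -> bool:
    """Check the litellm package installed in the current environment."""
    try:
        return is_compromised(metadata.version("litellm"))
    except metadata.PackageNotFoundError:
        return False  # litellm is not installed here
```

A check like this only covers the active environment; container images, CI caches, and lock files built during the exposure window need the same scrutiny.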

Mercor was among the victims. The company confirmed the breach on March 31 and stated it "moved promptly to contain and remediate the incident" while engaging third-party forensics experts.

What Was Stolen

The Lapsus$ extortion group listed Mercor on its leak site, claiming possession of 4TB of data allegedly including:

  • Candidate profiles and personally identifiable information
  • Employer data from companies using Mercor's hiring platform
  • Source code and internal documentation
  • API keys and secrets
  • Tailscale VPN usage data
  • Video interviews between AI systems and contractors

These claims haven't been independently verified, and Mercor hasn't confirmed the full scope of exposure. The connection between TeamPCP (which executed the supply chain attack) and Lapsus$ (which is attempting extortion) remains unclear, though researchers have suggested possible links.

For a broader understanding of how supply chain attacks propagate, the recent WordPress Smart Slider compromise demonstrates similar techniques in a different ecosystem.

Business Impact

The fallout has been severe. According to TechCrunch reporting, Meta has paused its contracts with Mercor indefinitely. Other major customers are reportedly reassessing their relationships.

At least five contractor lawsuits have been filed in the past week, with plaintiffs alleging Mercor failed to adequately protect their personal information. The company connects AI researchers and contractors with employers for training data labeling and other AI development tasks—work that requires sharing significant personal details.

The breach also raises questions about AI training data security more broadly. If attackers accessed video interviews and work samples, that material could potentially be used for unauthorized AI training or other purposes.

Why Open Source Supply Chains Matter

LiteLLM's compromise highlights ongoing risks in the AI/ML software supply chain. As organizations rapidly adopt AI tools, they often pull in dozens of open-source dependencies maintained by small teams or individual developers. The attack vector mirrors last month's AppsFlyer SDK hijacking that distributed crypto-stealing malware through a compromised registrar.

The pattern echoes other recent supply chain attacks. North Korean actors have targeted npm, PyPI, and other package ecosystems with similar techniques—compromising maintainer accounts or submitting malicious packages that look legitimate.

Security teams should treat AI tooling dependencies with the same scrutiny as any other code running in production. That means:

  1. Pin dependency versions rather than automatically pulling latest releases
  2. Use lock files to ensure consistent, auditable package installations
  3. Monitor for security advisories affecting AI/ML packages in your stack
  4. Implement package signing verification where supported
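The pinning and verification steps above can be sketched in a few lines. This is an illustrative check, not pip's actual implementation: it compares a downloaded artifact's SHA-256 digest against an explicit allowlist, which is the property `pip install --require-hashes` enforces from a hashed requirements file. The filename is hypothetical, and the digest shown is the SHA-256 of empty input, a placeholder for illustration.

```python
# Sketch of hash pinning: verify a downloaded package artifact against
# a known-good SHA-256 digest before allowing installation. Anything
# not explicitly pinned is rejected.
import hashlib
from pathlib import Path

PINNED_HASHES = {
    # artifact filename -> expected sha256
    # (hypothetical package; digest below is the sha256 of empty input)
    "example_pkg-1.0.0-py3-none-any.whl":
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def artifact_ok(path: Path) -> bool:
    """Return True only if the file's digest matches its pinned hash."""
    expected = PINNED_HASHES.get(path.name)
    if expected is None:
        return False  # refuse anything not explicitly pinned
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest == expected
```

In practice you would generate these pins with `pip hash` or a lock tool such as pip-tools, and fail the build whenever an artifact is missing from the pin list.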

Broader Lessons

The Mercor incident demonstrates how quickly supply chain compromises can cascade. A 40-minute window of malicious package availability was enough to breach a multi-billion-dollar company handling sensitive personal data.

Our data breach response guide covers immediate steps organizations should take when facing similar incidents.

For Mercor specifically, the combination of lawsuit exposure, customer defection, and reputational damage may prove more damaging than any direct technical costs. AI startups depend heavily on trust—enterprises won't share sensitive training data or employee information with vendors they don't trust to protect it.

Whether Mercor can rebuild that trust while managing legal liability and customer relationships remains an open question. The breach serves as a cautionary tale for the rapidly growing AI services sector, where security investments often lag behind growth.

Contractors and candidates who worked through Mercor's platform should monitor for potential misuse of their personal information and consider credit monitoring services given the scope of allegedly exposed data.
