Vulnerabilities · April 29, 2026 · 4 min read

LiteLLM SQL Injection Exploited 36 Hours After Disclosure

CVE-2026-42208 lets attackers steal API keys and forge admin sessions in LiteLLM without authentication. Exploitation began within 36 hours of public disclosure.

Marcus Chen

Attackers are actively exploiting a pre-authentication SQL injection flaw in LiteLLM, the open-source LLM gateway used to proxy requests to OpenAI, Anthropic, and other model providers. CVE-2026-42208 allows unauthenticated access to stored secrets, and exploitation began just 36 hours after the vulnerability was publicly disclosed.

Sysdig's threat research team detected the first attacks on April 26 at 16:17 UTC—roughly a day and a half after the GitHub advisory went live. The attackers weren't running generic scanners. They targeted three specific database tables known to contain production secrets, suggesting prior knowledge of LiteLLM's internal architecture.

TL;DR

  • What happened: Pre-auth SQL injection in LiteLLM proxy API key verification allows database access
  • Who's affected: LiteLLM versions >= 1.81.16 and < 1.83.7 (estimated 100,000+ instances; a version-check sketch follows this list)
  • Severity: Critical (CVSS 9.3) - No authentication required
  • Action required: Upgrade to version 1.83.7 immediately and rotate all stored credentials
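
If you're unsure which build you're running, a quick local check works from Python. This is a minimal sketch using importlib.metadata and the third-party packaging library, not an official LiteLLM tool:

    # Check whether the installed litellm falls in the affected range.
    from importlib.metadata import version
    from packaging.version import Version

    v = Version(version("litellm"))
    if Version("1.81.16") <= v < Version("1.83.7"):
        print(f"litellm {v} is in the affected range - upgrade to 1.83.7")
    else:
        print(f"litellm {v} is outside the affected range")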

How Does the Attack Work?

The vulnerability exists in LiteLLM's proxy API key verification step. When a request hits any LLM API route, the proxy attempts to validate the Bearer token by querying the LiteLLM_VerificationToken table. The problem: the Bearer value gets concatenated directly into the SQL query without parameterized binding.

A single quote in the Authorization header escapes the string literal, allowing arbitrary SQL to be appended. Because this happens before authentication is validated, any HTTP client that can reach the proxy port has full access to the database.
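
To make the failure mode concrete, here is a minimal sketch of the difference between the two query styles; the function, table, and column names are simplified stand-ins, not LiteLLM's actual code:

    # Vulnerable pattern: the attacker-controlled token is spliced into SQL.
    def verify_token_unsafe(cursor, bearer_token):
        query = f"SELECT * FROM tokens WHERE token = '{bearer_token}'"
        cursor.execute(query)  # a single quote in bearer_token escapes the literal
        return cursor.fetchone()

    # Safe pattern: parameterized binding keeps the token as pure data.
    def verify_token_safe(cursor, bearer_token):
        cursor.execute("SELECT * FROM tokens WHERE token = %s", (bearer_token,))
        return cursor.fetchone()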

In practice, attackers send requests like:

Authorization: Bearer ' UNION SELECT password_hash FROM users--

The response leaks whatever data the injected query returns. LiteLLM stores API keys for OpenAI, Anthropic, and other providers alongside OAuth tokens, database credentials, and environment configuration—everything an attacker needs to pivot deeper into an organization's infrastructure.

The Attackers Knew What to Look For

Sysdig's analysis reveals the attackers operated from two IP addresses within the same German autonomous system. They fired 29 UNION-based SQL injection payloads, targeting precisely three tables:

  1. LiteLLM_VerificationToken — Contains virtual API keys used to authenticate clients
  2. litellm_credentials — Stores provider API keys (OpenAI, Anthropic, etc.)
  3. litellm_config — Holds environment variables and master key configuration

The attackers already knew LiteLLM's Prisma ORM naming conventions, including the PascalCase quirk that generic SQL injection scanners routinely miss: Postgres folds unquoted identifiers to lowercase, so a mixed-case table like LiteLLM_VerificationToken is only reachable when its name is double-quoted. This wasn't opportunistic scanning—someone studied the codebase.
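
For illustration, a payload shaped for that schema has to double-quote the mixed-case table name. This specific string, including the column it selects, is a reconstruction, not one of the observed payloads:

Authorization: Bearer ' UNION SELECT token FROM "LiteLLM_VerificationToken"--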

Why This Matters

LiteLLM has become infrastructure for thousands of organizations running AI workloads. It's the translation layer between internal applications and LLM providers, which means it necessarily holds the keys to expensive API accounts. A compromised LiteLLM instance doesn't just expose chat logs—it hands attackers authenticated access to every model provider the organization uses.

The 36-hour exploitation window also demonstrates how quickly determined attackers can weaponize vulnerability disclosures. Security teams that rely on weekly patch cycles are conceding days of exposure on systems that hold their most sensitive credentials.

This follows a concerning pattern we've seen with other AI infrastructure vulnerabilities, where exploitation windows have shrunk from weeks to hours. Organizations running any AI-related infrastructure need to treat these disclosures with the same urgency as network edge vulnerabilities.

Recommended Mitigations

  1. Upgrade immediately — Install LiteLLM version 1.83.7 or later
  2. Rotate all credentials — Virtual API keys, master keys, and provider credentials should be considered compromised on any internet-exposed instance
  3. Apply the workaround if patching isn't possible — Set disable_error_logs: true under general_settings to block the vulnerable query path (a config sketch follows this list)
  4. Restrict network access — Limit public exposure of the proxy's endpoints until you can verify you're patched
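
For reference, the workaround from step 3 lives in the proxy's configuration file; a minimal sketch, assuming a standard config.yaml layout:

    # config.yaml - temporary workaround only; upgrading to 1.83.7 is the real fix
    general_settings:
      disable_error_logs: true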

Frequently Asked Questions

How do I know if my LiteLLM instance was exploited?

Check your access logs for unusual Bearer token values containing SQL syntax (single quotes, UNION statements, SELECT keywords). Any request with SQL fragments in the Authorization header warrants investigation.
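
A minimal scan over your request logs might look like the following; the log path and format are placeholders for whatever your deployment actually writes:

    # Flag log lines whose Authorization header carries SQL metacharacters.
    import re

    SQLI = re.compile(r"'|\bunion\b|\bselect\b|--", re.IGNORECASE)

    with open("/var/log/litellm/access.log") as logfile:  # hypothetical path
        for lineno, line in enumerate(logfile, 1):
            if "authorization" in line.lower() and SQLI.search(line):
                print(f"line {lineno}: possible injection attempt: {line.strip()}")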

Does this affect LiteLLM's hosted offering?

The vulnerability affects self-hosted deployments. Contact LiteLLM directly regarding their managed service status.

For organizations evaluating AI infrastructure security, this incident underscores why supply chain attacks against developer tools remain a top concern. The tools that sit between your applications and external services often accumulate the most sensitive credentials.
