LiteLLM SQL Injection Exploited 36 Hours After Disclosure
CVE-2026-42208 lets attackers steal API keys and forge admin sessions in LiteLLM without authentication. Exploitation began within 36 hours of public disclosure.
Attackers are actively exploiting a pre-authentication SQL injection flaw in LiteLLM, the open-source LLM gateway used to proxy requests to OpenAI, Anthropic, and other model providers. CVE-2026-42208 allows unauthenticated access to stored secrets, and exploitation began just 36 hours after the vulnerability was publicly disclosed.
Sysdig's threat research team detected the first attacks on April 26 at 16:17 UTC—roughly a day and a half after the GitHub advisory went live. The attackers weren't running generic scanners. They targeted three specific database tables known to contain production secrets, suggesting prior knowledge of LiteLLM's internal architecture.
TL;DR
- What happened: Pre-auth SQL injection in LiteLLM proxy API key verification allows database access
- Who's affected: LiteLLM versions >= 1.81.16 and < 1.83.7 (estimated 100,000+ instances)
- Severity: Critical (CVSS 9.3) - No authentication required
- Action required: Upgrade to version 1.83.7 immediately and rotate all stored credentials
How Does the Attack Work?
The vulnerability exists in LiteLLM's proxy API key verification step. When a request hits any LLM API route, the proxy attempts to validate the Bearer token by querying the LiteLLM_VerificationToken table. The problem: the Bearer value gets concatenated directly into the SQL query without parameterized binding.
A single quote in the Authorization header escapes the string literal, allowing arbitrary SQL to be appended. Because this happens before authentication is validated, any HTTP client that can reach the proxy port has full access to the database.
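The flaw class can be illustrated with a minimal sketch. This is hypothetical code, not LiteLLM's actual implementation (LiteLLM uses Prisma, and SQLite stands in for the real database here), but it shows exactly why string concatenation in the token lookup is exploitable and parameterized binding is not:

```python
import sqlite3

def verify_token_vulnerable(conn: sqlite3.Connection, token: str):
    # UNSAFE: the Bearer value is interpolated straight into the SQL
    # string, so a single quote in the header closes the literal and
    # everything after it executes as SQL.
    query = f"SELECT token FROM LiteLLM_VerificationToken WHERE token = '{token}'"
    return conn.execute(query).fetchall()

def verify_token_safe(conn: sqlite3.Connection, token: str):
    # SAFE: parameterized binding passes the token as data; the driver
    # never parses it as SQL, so the quote cannot escape the literal.
    query = "SELECT token FROM LiteLLM_VerificationToken WHERE token = ?"
    return conn.execute(query, (token,)).fetchall()
```

Fed the UNION payload shown below, the vulnerable version returns rows from an unrelated table, while the parameterized version simply matches nothing.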
In practice, attackers send requests like:
Authorization: Bearer ' UNION SELECT password_hash FROM users--
The response leaks whatever data the injected query returns. LiteLLM stores API keys for OpenAI, Anthropic, and other providers alongside OAuth tokens, database credentials, and environment configuration—everything an attacker needs to pivot deeper into an organization's infrastructure.
The Attackers Knew What to Look For
Sysdig's analysis reveals the attackers operated from two IP addresses within the same German autonomous system. They fired 29 UNION-based SQL injection payloads, targeting precisely three tables:
- LiteLLM_VerificationToken — Contains virtual API keys used to authenticate clients
- litellm_credentials — Stores provider API keys (OpenAI, Anthropic, etc.)
- litellm_config — Holds environment variables and master key configuration
The attackers already knew LiteLLM's Prisma ORM naming conventions, including the PascalCase quirk that generic SQL injection scanners routinely miss. This wasn't opportunistic scanning—someone studied the codebase.
Why This Matters
LiteLLM has become infrastructure for thousands of organizations running AI workloads. It's the translation layer between internal applications and LLM providers, which means it necessarily holds the keys to expensive API accounts. A compromised LiteLLM instance doesn't just expose chat logs—it hands attackers authenticated access to every model provider the organization uses.
The 36-hour exploitation window also demonstrates how quickly determined attackers can weaponize vulnerability disclosures. Security teams that rely on weekly patch cycles are gambling with assets like this.
This follows a concerning pattern we've seen with other AI infrastructure vulnerabilities, where exploitation windows have shrunk from weeks to hours. Organizations running any AI-related infrastructure need to treat these disclosures with the same urgency as network edge vulnerabilities.
Recommended Mitigations
- Upgrade immediately — Install LiteLLM version 1.83.7 or later
- Rotate all credentials — Virtual API keys, master keys, and provider credentials should be considered compromised on any internet-exposed instance
- Apply the workaround if patching isn't possible — Set disable_error_logs: true under general_settings to block the vulnerable query path
- Restrict webhook access — Limit publicly accessible endpoints until you can verify you're patched
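For the workaround, the setting belongs in the proxy's YAML config. A minimal sketch follows; the file name and placement are assumptions based on the advisory text, so verify the exact keys against the LiteLLM docs for your version:

```yaml
# config.yaml (proxy configuration; placement assumed from the advisory)
general_settings:
  disable_error_logs: true  # blocks the vulnerable query path per the advisory
```

Treat this as a stopgap only: it does not remove the injection flaw, and upgrading to 1.83.7 remains the real fix.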
Frequently Asked Questions
How do I know if my LiteLLM instance was exploited?
Check your access logs for unusual Bearer token values containing SQL syntax (single quotes, UNION statements, SELECT keywords). Any request with SQL fragments in the Authorization header warrants investigation.
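A first triage pass over logged Authorization headers can be scripted. A rough sketch follows; the header parsing is illustrative, and you will need to adapt it to however your proxy or load balancer actually records request headers:

```python
import re

# Metacharacters and keywords that should never appear in a legitimate
# virtual key: a quote, a SQL comment marker, or UNION/SELECT keywords.
SQLI_PATTERN = re.compile(r"'|--|\bUNION\b|\bSELECT\b", re.IGNORECASE)

def suspicious_bearer(authorization_header: str) -> bool:
    """Flag Authorization header values that look like SQL injection probes."""
    token = authorization_header.removeprefix("Bearer ").strip()
    return bool(SQLI_PATTERN.search(token))
```

Any hit is grounds for treating the instance as compromised and beginning credential rotation rather than waiting for further confirmation.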
Does this affect LiteLLM's hosted offering?
The vulnerability affects self-hosted deployments. Contact LiteLLM directly regarding their managed service status.
For organizations evaluating AI infrastructure security, this incident underscores why supply chain attacks against developer tools remain a top concern. The tools that sit between your applications and external services often accumulate the most sensitive credentials.