LangChain Flaws Expose Files, Secrets, and Databases
Three vulnerabilities in LangChain and LangGraph expose filesystems, environment secrets, and conversation histories. CVE-2026-34070 enables path traversal. Patches available now.
Security researchers have disclosed three vulnerabilities in LangChain and LangGraph, two of the most widely deployed AI development frameworks, that could allow attackers to read sensitive files, extract environment secrets, and access conversation histories from production systems.
The flaws affect frameworks with staggering adoption rates. LangChain, LangChain-Core, and LangGraph recorded more than 52 million, 23 million, and 9 million downloads respectively in the previous week alone, making these vulnerabilities a significant concern for organizations building AI agents and workflows.
What Are the Vulnerabilities?
The disclosure covers three distinct issues, each targeting a different attack surface:
CVE-2026-34070 (CVSS 7.5) is a path traversal vulnerability in LangChain's prompt-loading functionality. The vulnerable code path resides in langchain_core/prompts/loading.py, where arbitrary file paths can be passed without validation. An attacker exploiting this flaw could read Docker configurations, application source code, or any file accessible to the running process.
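The defense against this class of bug is to resolve any user-influenced path against a fixed base directory and reject anything that escapes it. The sketch below is illustrative, not LangChain's actual code: the function names and the prompt directory are assumptions, and it shows only the containment check that the vulnerable code path reportedly lacks.

```python
from pathlib import Path

def resolve_prompt_path(base_dir: str, user_path: str) -> Path:
    """Resolve user_path under base_dir; raise if it escapes the base."""
    base = Path(base_dir).resolve()
    target = (base / user_path).resolve()
    # After resolution, ../ sequences and absolute paths land outside
    # the base; is_relative_to (Python 3.9+) catches both.
    if not target.is_relative_to(base):
        raise ValueError(f"path escapes prompt directory: {user_path!r}")
    return target

def safe_load_prompt(base_dir: str, user_path: str) -> str:
    return resolve_prompt_path(base_dir, user_path).read_text()
```

Resolving before checking matters: a naive prefix check on the raw string would accept `prompts/../../etc/passwd`.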
CVE-2025-67644 (CVSS 7.3) affects LangGraph's SQLite checkpoint implementation. The flaw enables SQL injection through metadata filter keys, allowing attackers to manipulate queries and execute arbitrary SQL against the checkpoint database. This could expose conversation histories associated with sensitive workflows.
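Injection through filter keys is subtler than injection through values: keys become part of the SQL text (as column names or JSON paths), so they cannot be bound as parameters and must be validated instead. The following is a minimal sketch of that pattern, not LangGraph's actual checkpoint code; the table and column names are assumptions.

```python
import re
import sqlite3

# Keys are interpolated into the SQL text, so they cannot be bound as
# parameters; restrict them to identifier-like strings instead.
_SAFE_KEY = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def filter_checkpoints(conn: sqlite3.Connection, metadata_filter: dict):
    clauses, params = [], []
    for key, value in metadata_filter.items():
        if not _SAFE_KEY.match(key):
            raise ValueError(f"illegal metadata key: {key!r}")
        # Key is validated above; the value is always bound as a parameter.
        clauses.append(f"json_extract(metadata, '$.{key}') = ?")
        params.append(value)
    sql = "SELECT checkpoint_id FROM checkpoints"
    if clauses:
        sql += " WHERE " + " AND ".join(clauses)
    return conn.execute(sql, params).fetchall()
```

Without the key check, a filter key like `user') = '' OR 1=1 --` would rewrite the WHERE clause and dump every checkpoint row.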
CVE-2025-68664 (CVSS 9.3) is a deserialization vulnerability in which the dumps() and dumpd() functions fail to properly escape user-controlled dictionaries containing the reserved lc key. Successful exploitation could enable prompt injection attacks that siphon secrets from AI workflows.
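The underlying hazard is a reserved marker key colliding with untrusted data: if user-supplied dicts containing that key are serialized verbatim, the deserializer can mistake plain data for a framework object envelope. The sketch below illustrates the defensive escaping idea in the abstract; the `__escaped__` wrapper is a hypothetical scheme for this example and does not reproduce LangChain's actual serialization format beyond its documented use of the lc key.

```python
RESERVED_KEY = "lc"  # marker key used by LangChain's serialization format

def escape_untrusted(obj):
    """Recursively wrap dicts so a user-supplied 'lc' key cannot be
    mistaken for a serialization envelope when the data is reloaded."""
    if isinstance(obj, dict):
        escaped = {k: escape_untrusted(v) for k, v in obj.items()}
        if RESERVED_KEY in escaped:
            # Hypothetical tagging scheme: mark the dict as plain data.
            return {"__escaped__": escaped}
        return escaped
    if isinstance(obj, list):
        return [escape_untrusted(v) for v in obj]
    return obj
```

A matching unescape step on load would unwrap the tag, keeping user data and framework envelopes in disjoint namespaces.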
Who Is Affected?
Any organization running langchain-core below version 1.2.22 (path traversal), langgraph-checkpoint-sqlite below 3.0.1 (SQL injection), or langchain-core below 0.3.81 on the 0.3.x line (deserialization) is vulnerable. The path traversal flaw is particularly concerning for self-hosted deployments where the AI framework has filesystem access to sensitive resources.
The SQL injection issue affects teams using SQLite checkpointing for conversation persistence. Given that many LangGraph deployments use this storage backend during development and testing, credentials and API keys stored in conversation context could be at risk.
How to Protect Your Systems
Patches are available now:
- CVE-2026-34070: Update langchain-core to version 1.2.22 or later
- CVE-2025-68664: Update langchain-core to version 0.3.81 (0.3.x line) or 1.2.5 (1.x line)
- CVE-2025-67644: Update langgraph-checkpoint-sqlite to version 3.0.1
Organizations should audit their LangChain deployments for any custom prompt-loading logic that accepts user input. The NVD entry for CVE-2026-34070 provides additional technical context.
For teams running AI agents in production, consider implementing input validation at the application layer rather than relying solely on framework-level protections. Similar to the Excel Copilot data exfiltration vulnerability we covered last week, these flaws demonstrate how AI integration points create new attack surfaces that traditional security controls don't anticipate.
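One simple form of application-layer validation is to never accept raw paths from users at all: expose only named prompts and map them to fixed files server-side. This registry pattern is a sketch under assumed names (the prompt ids and file paths here are illustrative), not a LangChain API.

```python
# Map user-facing prompt names to fixed on-disk files so raw paths
# never cross the trust boundary. Names and paths are illustrative.
PROMPT_REGISTRY = {
    "summarize": "prompts/summarize.yaml",
    "classify": "prompts/classify.yaml",
}

def prompt_path_for(name: str) -> str:
    """Return the registered file for a prompt name, or fail closed."""
    try:
        return PROMPT_REGISTRY[name]
    except KeyError:
        raise ValueError(f"unknown prompt: {name!r}") from None
```

Because the lookup fails closed, traversal payloads are rejected as unknown names rather than being interpreted as paths.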
Why This Matters
These vulnerabilities highlight a growing pattern in AI framework security. As organizations race to deploy LangChain-based agents, the attack surface expands with each new integration. The path traversal flaw in particular mirrors classic web application vulnerabilities, suggesting that AI frameworks may be repeating security mistakes from earlier technology generations.
The SQL injection issue in LangGraph's checkpoint system is especially concerning for enterprises. Conversation checkpoints often contain extracted data, API responses, and intermediate reasoning steps that could reveal business logic or customer information.
Organizations exploring AI agent deployments should treat these frameworks with the same security scrutiny applied to web frameworks and database drivers. The n8n vulnerabilities disclosed earlier this month showed similar patterns where automation platforms introduced unexpected privilege escalation paths.
For a deeper understanding of how AI systems can be compromised, our guide on social engineering techniques explains how attackers combine technical exploits with manipulation tactics to achieve their objectives.
Related Articles
LangChain Serialization Flaw Lets Attackers Steal AI Agent Secrets
CVE-2025-68664 scores CVSS 9.3 and enables secret extraction and prompt injection in LangChain Core. Patch immediately if you're running AI agents.
Dec 27, 2025
Ubiquiti UniFi Flaw Scores CVSS 10—Patch Before Full Takeover
CVE-2026-22557 lets unauthenticated attackers traverse paths and hijack UniFi Network accounts. CVSS 10.0 severity demands immediate patching to 10.1.89.
Mar 27, 2026
Langflow RCE Exploited Within 20 Hours of Disclosure
CVE-2026-33017 (CVSS 9.3) lets attackers execute arbitrary Python code on Langflow AI pipelines without authentication. Exploitation began before any PoC existed.
Mar 21, 2026
Custom Fonts Let Attackers Hide Commands from AI Assistants
LayerX researchers found that custom font rendering can hide malicious prompts from ChatGPT, Claude, Gemini, and other AI assistants while displaying them to users.
Mar 18, 2026