Vulnerabilities · December 27, 2025 · 4 min read

LangChain Serialization Flaw Lets Attackers Steal AI Agent Secrets

CVE-2025-68664 scores CVSS 9.3 and enables secret extraction and prompt injection in LangChain Core. Patch immediately if you're running AI agents.

Marcus Chen

A critical vulnerability in LangChain Core allows attackers to extract secrets and inject malicious prompts through unsafe serialization handling. The flaw, tracked as CVE-2025-68664 and dubbed "LangGrinch" by the researcher who discovered it, affects how the popular AI framework processes untrusted input during data serialization.

TL;DR

  • What happened: Serialization injection flaw in LangChain Core's dumps() and dumpd() functions enables secret theft and prompt injection
  • Who's affected: Organizations running LangChain Core versions prior to the December 2025 patch
  • Severity: Critical (CVSS 9.3)
  • Action required: Update LangChain Core to the latest patched version immediately

How Does CVE-2025-68664 Work?

The vulnerability exists in LangChain Core's serialization functions, specifically dumps() and dumpd(), which convert Python objects into storable, JSON-style representations. When these functions process untrusted input, attackers can craft payloads that mimic LangChain's own serialization format; when that data is later deserialized, the framework treats it as instructions to instantiate objects, opening the door to secret extraction and prompt injection.
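
To make that boundary concrete, here is a minimal sketch using the dumpd(), dumps(), and loads() helpers from langchain_core.load. The round-trip shown is benign and illustrative only; no exploit payload is reproduced, and the exact dict shape is approximate.

```python
# Minimal sketch of LangChain Core's serialization boundary (illustrative only).
from langchain_core.load import dumpd, dumps, loads
from langchain_core.messages import HumanMessage

msg = HumanMessage(content="hello")

# dumpd() renders LangChain objects as plain dicts tagged with an "lc" marker key,
# roughly: {"lc": 1, "type": "constructor", "id": [...], "kwargs": {...}}
print(dumpd(msg))

# dumps() produces the JSON-string form; loads() reverses it and instantiates the
# object the payload describes. Attacker-controlled data that mimics this format
# and reaches these functions unescaped is what the vulnerability abuses.
restored = loads(dumps(msg))
print(type(restored).__name__)  # HumanMessage
```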

Security researcher Yarden Porat reported the vulnerability to LangChain maintainers on December 4, 2025. The attack vector allows bad actors to:

  1. Extract secrets stored in memory or configuration
  2. Inject malicious prompts that alter AI agent behavior
  3. Instantiate arbitrary objects during deserialization

LangChain.js has a related but separate vulnerability (CVE-2025-68665, CVSS 8.6) stemming from improper escaping of objects containing "lc" keys, the marker the framework uses to tag its own serialized objects.

Who Should Be Concerned?

Any organization deploying LangChain-based AI agents in production environments faces exposure. The risk compounds in architectures where:

  • AI agents process user-controlled input
  • LangChain serialization handles data from external sources
  • Secrets like API keys or credentials pass through the framework

Enterprise deployments integrating LangChain with customer-facing chatbots, automated workflows, or data processing pipelines should treat this as an emergency.

Why This Matters

LangChain has become foundational infrastructure for AI application development. The framework powers everything from simple chatbots to complex autonomous agents handling sensitive business logic. A serialization vulnerability at this layer can cascade across entire AI systems.

The timing compounds the concern. As organizations rush to deploy AI agents for competitive advantage, security review often lags behind feature development. CVE-2025-68664 demonstrates that LLM orchestration frameworks require the same security scrutiny as traditional application infrastructure.

Prompt injection attacks have already proven devastating in production AI systems. This vulnerability provides another avenue for such attacks, bypassing application-layer defenses by exploiting the framework itself.

How Can Organizations Protect Themselves?

  1. Patch immediately - Update LangChain Core to the latest version containing the fix
  2. Audit serialization boundaries - Review where your LangChain deployment handles external data
  3. Validate input sources - Never pass untrusted data directly to serialization functions (a minimal guard sketch follows this list)
  4. Monitor agent behavior - Watch for anomalous prompt patterns or unexpected secret access
  5. Implement defense in depth - Don't rely solely on framework security; add application-layer validation
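
The sketch below illustrates step 3. It is a stop-gap guard, assuming your application forwards user-supplied dicts toward LangChain serialization; it is not the official fix, which lives in the patched langchain-core release.

```python
# Illustrative stop-gap only: reject untrusted payloads that mimic LangChain's
# serialization format before they reach dumps()/dumpd(). Upgrading remains the fix.
from typing import Any

LC_MARKER = "lc"  # key LangChain uses to tag its own serialized objects


def contains_lc_marker(value: Any) -> bool:
    """Recursively check whether untrusted data looks like a serialized LangChain object."""
    if isinstance(value, dict):
        if LC_MARKER in value:
            return True
        return any(contains_lc_marker(v) for v in value.values())
    if isinstance(value, (list, tuple)):
        return any(contains_lc_marker(v) for v in value)
    return False


def validate_untrusted(data: Any) -> Any:
    # Called on external input before it is handed to LangChain serialization.
    if contains_lc_marker(data):
        raise ValueError("Untrusted input mimics LangChain serialization format; rejecting")
    return data
```

A guard like this buys time while you schedule the upgrade, but it should be treated as defense in depth rather than a replacement for the patched version.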

Organizations should also review their secret management practices. Credentials accessible to AI agents represent high-value targets. Consider whether your deployment truly requires secrets in agent-accessible memory.

Patch Availability

LangChain maintainers released fixes promptly after receiving the vulnerability report. The patched versions address the unsafe serialization handling in both the Python and JavaScript implementations.

Check your langchain-core package version and upgrade if you're running anything prior to the December 2025 security release. The JavaScript implementation, LangChain.js, requires a separate update for CVE-2025-68665.
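
If you prefer to check from Python rather than pip, the snippet below prints the installed langchain-core version so you can compare it against the advisory. It only reads package metadata and changes nothing.

```python
# Print the installed langchain-core version for comparison with the advisory.
from importlib.metadata import PackageNotFoundError, version

try:
    print("langchain-core", version("langchain-core"))
except PackageNotFoundError:
    print("langchain-core is not installed in this environment")
```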

Frequently Asked Questions

Is my organization affected by CVE-2025-68664?

If you're using LangChain Core in any capacity and haven't applied the December 2025 patches, you're potentially vulnerable. Check your installed version with pip show langchain-core and compare against the security advisory.

What should I do first?

Update LangChain Core immediately. If you can't patch right away, audit all entry points where external data reaches LangChain serialization functions and add input validation as a temporary mitigation.

Are there indicators of compromise I can check?

Look for unusual patterns in your AI agent logs: unexpected prompt modifications, attempts to access secrets outside normal operation, or errors related to object instantiation. Exploitation would likely leave traces in serialization-related error logs.
