PROBABLYPWNED
Vulnerabilities · March 21, 2026 · 4 min read

Langflow RCE Exploited Within 20 Hours of Disclosure

CVE-2026-33017 (CVSS 9.3) lets attackers execute arbitrary Python code on Langflow AI pipelines without authentication. Exploitation began before any PoC existed.

Marcus Chen

Attackers began exploiting a critical vulnerability in the Langflow AI orchestration platform within 20 hours of its public disclosure—without waiting for a proof-of-concept to appear. The speed of weaponization underscores how quickly threat actors can reverse-engineer vulnerabilities from advisory descriptions alone.

CVE-2026-33017, rated CVSS 9.3, combines missing authentication with unsandboxed code injection to give unauthenticated attackers full remote code execution on any exposed Langflow instance. A single HTTP request is all it takes.

How the Attack Works

The flaw exists in Langflow's public flow build endpoint: POST /api/v1/build_public_tmp/{flow_id}/flow. This endpoint is designed to let users build flows without authentication—a feature meant for demo environments that many production deployments never locked down.

When an attacker supplies a malicious data parameter, Langflow uses that attacker-controlled flow data instead of retrieving stored flow definitions from the database. The flow data can contain arbitrary Python code embedded in node definitions, which Langflow then passes directly to exec() with zero sandboxing.

No credentials. No user interaction. Just raw Python execution on the target server.
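The pattern is easiest to see in miniature. The sketch below is not Langflow's actual source; it's a minimal, hypothetical model of the bug class: a handler that prefers caller-supplied flow data over the stored definition, then passes node code straight to exec() with no sandboxing.

```python
# Minimal model of the vulnerable pattern (NOT Langflow's real code):
# attacker-controlled "data" shadows the stored flow, and embedded
# node code reaches exec() unsandboxed.

def build_flow(request_data: dict, stored_flows: dict, flow_id: str) -> dict:
    # The core mistake: if the request carries its own "data", the stored
    # (access-controlled) flow definition is never consulted.
    flow = request_data.get("data") or stored_flows.get(flow_id, {})
    results = {}
    for node in flow.get("nodes", []):
        namespace: dict = {}
        exec(node.get("code", ""), namespace)  # arbitrary Python runs here
        results[node.get("id")] = namespace.get("output")
    return results

# The flow_id never has to resolve to anything real:
payload = {"data": {"nodes": [{"id": "n1",
                               "code": "import os; output = os.getcwd()"}]}}
print(build_flow(payload, stored_flows={}, flow_id="whatever"))
```

In this model, the fix is the same as in the real advisory: require authentication on the endpoint and never execute flow definitions taken from the request body.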

Timeline of Exploitation

Sysdig researchers observed the first exploitation attempts within 20 hours of the March 17, 2026 advisory publication. Notably, no public PoC existed at that point—attackers built working exploits directly from the vulnerability description.

This mirrors a pattern we've seen increasingly with critical vulnerabilities in automation platforms. When a flaw offers unauthenticated RCE with minimal complexity, sophisticated threat actors don't wait for hand-holding.

What Attackers Are Targeting

Observed post-exploitation activity focuses on credential harvesting:

  • Environment variables and secrets
  • .env file contents
  • Configuration files containing database credentials
  • API keys for connected services

Sysdig identified malicious callbacks to 173.212.205[.]251:8443 in early exploitation attempts. Given Langflow's role as an AI workflow orchestration tool, compromised instances likely have access to API keys for LLM providers, vector databases, and enterprise data sources.
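If you want to sweep connection logs for the callback above, remember that published indicators are usually defanged. A small helper like the following (the log format shown is hypothetical; the IOC is the one from this advisory) can refang and match:

```python
import re

# IOC from the Sysdig observations, defanged as published.
IOC = "173.212.205[.]251:8443"

def refang(indicator: str) -> str:
    """Turn a defanged indicator like 1.2.3[.]4 back into 1.2.3.4."""
    return indicator.replace("[.]", ".").replace("hxxp", "http")

def matches_ioc(log_line: str, indicator: str = IOC) -> bool:
    host, _, port = refang(indicator).rpartition(":")
    # Match "host:port", "host port", or "host,port" in a log line.
    return bool(re.search(rf"{re.escape(host)}[:,\s]+{port}", log_line))

print(matches_ioc("2026-03-18 04:12:03 CONNECT 173.212.205.251:8443"))  # True
```

Matching on a single IP is a point-in-time check; attackers rotate infrastructure, so treat a miss as inconclusive.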

Who's Affected

All Langflow versions through 1.8.1 are vulnerable. The fix landed in development version 1.9.0.dev8, but many production deployments haven't caught up.
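A quick triage check against the vulnerable range can be scripted. This sketch compares plain x.y.z version strings against 1.8.1; suffixes like .dev8 are ignored by the comparison, so treat any pre-1.9.0 development build as vulnerable unless you can confirm it carries the fix.

```python
import re

def is_vulnerable(version: str) -> bool:
    # Take the first three numeric components ("1.8.1" -> (1, 8, 1)).
    # Everything through 1.8.1 is in the vulnerable range.
    nums = tuple(int(p) for p in re.findall(r"\d+", version)[:3])
    return nums <= (1, 8, 1)

for v in ["1.7.2", "1.8.1", "1.9.0"]:
    print(v, "vulnerable" if is_vulnerable(v) else "patched")
```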

Langflow's appeal as a visual tool for building AI pipelines means it's often deployed by teams without deep security expertise. The "public flow" feature that enables this attack was likely enabled by default or during initial experimentation—and never revisited.

Organizations running Langflow should assume they're being scanned. The window between disclosure and exploitation was too short for most patch cycles.

Mitigation Steps

  1. Update immediately to version 1.9.0 or later
  2. Audit environment variables on any publicly exposed instance—treat them as compromised
  3. Rotate all secrets: API keys, database passwords, and authentication tokens
  4. Monitor for outbound connections to unusual callback services
  5. Restrict network access using firewall rules or place Langflow behind a reverse proxy with authentication
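For step 2, a first pass over a compromised instance's environment can be as simple as flagging variable names that look like secrets so they go to the front of the rotation queue. The name patterns below are a heuristic, not an exhaustive list:

```python
import os
import re

# Heuristic: env var names that commonly hold credentials.
SECRET_NAME = re.compile(r"KEY|TOKEN|SECRET|PASSWORD|PASSWD|CREDENTIAL", re.I)

def likely_secrets(env: dict) -> list:
    """Return env var names that look credential-bearing, sorted."""
    return sorted(name for name in env if SECRET_NAME.search(name))

if __name__ == "__main__":
    for name in likely_secrets(dict(os.environ)):
        print(name)  # names only; never echo the values into logs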

For organizations that can't patch immediately, blocking access to the /api/v1/build_public_tmp/ endpoint provides temporary mitigation—though this breaks public flow functionality.
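What that block looks like depends on your proxy. Assuming nginx sits in front of the Langflow instance, a minimal sketch is:

```nginx
# Temporary mitigation: deny the public build endpoint until patched.
# Note this breaks legitimate public-flow functionality.
location ^~ /api/v1/build_public_tmp/ {
    return 403;
}
```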

Why This Matters

The 20-hour exploitation window demonstrates that advisory-to-weaponization timelines continue shrinking. Security teams can no longer rely on "patch within 30 days" policies for internet-facing services, especially when the vulnerability involves unauthenticated RCE.

AI infrastructure is becoming a high-value target. Platforms like Langflow often have privileged access to multiple backend services and may process sensitive business data. Compromising an AI orchestration layer can provide attackers with a persistence mechanism that's harder to detect than traditional malware—and access to whatever data flows through the pipeline.

This also highlights recurring issues with supply chain and infrastructure security in the AI tooling ecosystem. Many of these platforms prioritize rapid feature development over security hardening, leaving users exposed when vulnerabilities inevitably surface.

If you're running AI infrastructure exposed to the internet, this is a reminder to audit your attack surface. The attackers already did.
