PROBABLYPWNED
Vulnerabilities · April 4, 2026 · 3 min read

PraisonAI Sandbox Bypass Scores Perfect CVSS 10

CVE-2026-34938 lets attackers escape PraisonAI's three-layer Python sandbox to execute arbitrary OS commands. CVSS 10 — patch to version 1.5.90 immediately.

Marcus Chen

A maximum-severity vulnerability in PraisonAI, the popular multi-agent teams framework, allows attackers to break out of its Python sandbox and execute arbitrary commands on the host system. CVE-2026-34938 carries a CVSS score of 10.0 and was disclosed on April 3, 2026.

How the Bypass Works

PraisonAI's execute_code() function is designed to run user-supplied Python code within a three-layer sandbox environment. The sandbox relies on a _safe_getattr wrapper to restrict access to dangerous attributes and functions.

The flaw lies in how that wrapper handles string objects. An attacker can craft a custom str subclass that overrides the startswith() method. When this malicious subclass instance passes through _safe_getattr, the overridden method subverts the wrapper's logic entirely, disabling the protective measures.

From there, the attacker's Python code executes directly on the host with no sandbox constraints. This is a complete compromise of the execution boundary PraisonAI relies on for safety.
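The class of bug described above can be illustrated with a deliberately simplified sketch. This is not PraisonAI's actual `_safe_getattr` implementation; the guard function, class names, and blocked-prefix list below are all hypothetical, made up to show how a prefix denylist that trusts the attribute name's own `startswith()` method can be defeated by a `str` subclass.

```python
# Hypothetical, simplified illustration of the bypass pattern --
# NOT PraisonAI's real code.

BLOCKED_PREFIXES = ("_",)  # attributes a naive guard might deny

def naive_safe_getattr(obj, name):
    """A stand-in for a _safe_getattr-style wrapper that trusts
    the attribute name's own startswith() method."""
    if name.startswith(BLOCKED_PREFIXES):
        raise AttributeError(f"access to {name!r} is blocked")
    return getattr(obj, name)

class LyingStr(str):
    """A str subclass whose startswith() always reports False,
    defeating any prefix denylist that calls it."""
    def startswith(self, *args, **kwargs):
        return False

class Victim:
    _secret = "host-level access"

# The guard blocks a plain str naming a private attribute,
# but a LyingStr instance carrying the same characters walks
# straight through, because getattr() only requires a str
# subclass and never re-checks the prefix itself.
```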

The vulnerability pattern here echoes issues we've seen in other AI agent frameworks. Just last week, n8n disclosed a similar sandbox escape affecting its merge node functionality. AI orchestration tools that accept user input and execute code remain a high-risk category.

Who's Affected

Any deployment of PraisonAI before version 1.5.90 is vulnerable. The framework is used by organizations building multi-agent AI systems where autonomous agents collaborate on tasks. These deployments often run with elevated privileges to perform their work, making host-level command execution particularly dangerous.

If you're running PraisonAI in production—especially in environments where agents process external data—this vulnerability provides a direct path from malicious input to system compromise.

What Organizations Should Do

  1. Upgrade immediately to PraisonAI version 1.5.90 or later, which contains the fix
  2. Audit recent agent activity for anomalous command execution patterns
  3. Review network segmentation around AI agent infrastructure
  4. Implement monitoring for anomalous process spawning from Python worker processes that host agent workloads
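For step 1, a quick programmatic check can confirm whether an environment is already on the fixed release. This is a minimal sketch: it assumes the package is distributed under the name `praisonai` and uses plain dotted versions; adjust the distribution name to match your deployment.

```python
# Sketch: verify an installed PraisonAI is at or past the fixed
# release (1.5.90). Assumes the distribution name "praisonai".
from importlib import metadata

FIXED = (1, 5, 90)

def parse_version(v: str) -> tuple:
    """Parse a simple dotted version like '1.5.90' into an int tuple."""
    return tuple(int(part) for part in v.split(".")[:3])

def is_patched(installed: str) -> bool:
    """True if the installed version is >= the fixed release."""
    return parse_version(installed) >= FIXED

def check_praisonai() -> bool:
    try:
        return is_patched(metadata.version("praisonai"))
    except metadata.PackageNotFoundError:
        return True  # not installed, nothing to patch
```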

The speed from sandbox bypass to host compromise makes detection difficult. Prevention through patching is the only reliable mitigation.

Why This Matters

AI agent frameworks occupy a uniquely dangerous position in the security landscape. They're designed to execute code based on external inputs, often with the permissions needed to accomplish real work. When sandbox mechanisms fail, there's no second line of defense.

PraisonAI isn't alone in this risk category. Organizations deploying Vertex AI agents and other agentic systems should audit their own sandbox boundaries. The pattern of sandbox escape vulnerabilities in AI tools suggests the industry hasn't fully internalized how to build secure execution environments for autonomous systems.

For those evaluating AI agent deployments, this vulnerability illustrates why defense-in-depth remains critical even when vendors claim strong sandboxing. Assume the sandbox will eventually fail and architect accordingly.
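One concrete form that "assume the sandbox will fail" can take is moving untrusted code out of the orchestrator's own process entirely. The sketch below is illustrative, not a production isolation boundary: it runs code in a fresh interpreter with a hard timeout and no shell, so a Python-level sandbox escape is at least contained by a process boundary. Real deployments would layer containers, seccomp, or VMs on top.

```python
# Minimal defense-in-depth sketch: execute untrusted code in a
# separate OS process rather than trusting in-process checks alone.
import subprocess
import sys

def run_untrusted(code: str, timeout: float = 5.0) -> str:
    """Run code in a fresh isolated interpreter; return its stdout."""
    result = subprocess.run(
        [sys.executable, "-I", "-c", code],  # -I: isolated mode
        capture_output=True,
        text=True,
        timeout=timeout,  # hard wall-clock limit on the child
    )
    if result.returncode != 0:
        raise RuntimeError(result.stderr.strip())
    return result.stdout
```

Even if the child process is fully compromised, the timeout bounds how long it runs, and OS-level controls (user privileges, namespaces, network policy) decide what it can reach.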
