PROBABLYPWNED
Vulnerabilities · April 9, 2026 · 3 min read

Second PraisonAI Sandbox Escape in a Week Scores CVSS 9.9

CVE-2026-39888 bypasses PraisonAI's Python sandbox via exception frame traversal. Attackers chain __traceback__ attributes to reach exec(). Patch to 1.5.115.

Marcus Chen

PraisonAI disclosed another critical sandbox escape vulnerability on April 8—just four days after we covered CVE-2026-34938, a different bypass that also allowed arbitrary code execution. CVE-2026-39888 carries a CVSS score of 9.9 and affects all versions before 1.5.115.

Two critical sandbox escapes in one week suggest the framework's security boundary has fundamental design issues that attackers are actively probing.

How CVE-2026-39888 Works

The flaw lies in an incomplete AST-based blocklist within the execute_code() function in praisonaiagents.tools.python_tools. When running in sandbox_mode="sandbox", the function executes user-provided Python code in a subprocess—but the blocklist misses a crucial attack path.

An attacker can trigger a caught exception intentionally, then traverse through exception frame attributes that weren't blocked:

  • __traceback__ (exception traceback object)
  • tb_frame (frame object from traceback)
  • f_back (previous stack frame)
  • f_builtins (builtins dictionary from frame)

By chaining these attributes, the attacker reaches the real Python builtins dictionary of the subprocess wrapper frame. From there, it becomes trivial to retrieve exec and assign it to a variable name the blocklist doesn't cover. The result: arbitrary Python code execution with no sandbox constraints.
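The chain above can be sketched in plain Python to show the generic mechanism. This is an illustrative demo run outside any sandbox, not PraisonAI's actual wrapper code; the function and variable names are made up for the example:

```python
def escape_demo():
    """Illustrates the generic traceback-frame escape pattern."""
    try:
        raise RuntimeError("trigger")  # step 1: intentionally raise a caught exception
    except RuntimeError as e:
        frame = e.__traceback__.tb_frame  # step 2: traceback -> frame object
        while frame.f_back is not None:   # step 3: walk f_back toward the outermost frame
            frame = frame.f_back
        builtins_dict = frame.f_builtins  # step 4: real builtins dict of that frame
        run = builtins_dict["exec"]       # alias exec under a non-blocked name
        ns = {}
        run("result = 6 * 7", ns)         # arbitrary code, no sandbox constraints
        return ns["result"]
```

None of the attribute names involved (`__traceback__`, `tb_frame`, `f_back`, `f_builtins`) look like obvious dangerous builtins, which is exactly why an enumerate-the-bad-things blocklist misses them.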

This pattern—using exception handling mechanics to escape sandboxes—has appeared in Python security research for years. That it bypasses PraisonAI's protections indicates the blocklist approach wasn't comprehensive.

What's Different from CVE-2026-34938

Last week's vulnerability exploited a weakness in how the _safe_getattr wrapper handled string subclasses: an attacker could override startswith() to subvert the wrapper's checks entirely.
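The string-subclass trick is worth seeing concretely. The guard below is a simplified stand-in for a blocklist-style attribute check, not the real _safe_getattr implementation; it only demonstrates the class of bug:

```python
class Sneaky(str):
    """A str subclass that lies about its own prefix."""
    def startswith(self, prefix, *args):
        return False  # always claim "not a dunder name"

def naive_safe_getattr(obj, name):
    # Simplified stand-in for a blocklist-style guard (assumption,
    # not PraisonAI's actual code): block dunder attribute access.
    if name.startswith("__"):
        raise AttributeError("blocked")
    return getattr(obj, name)

# The guard trusts the attacker-controlled string's own startswith(),
# so the dunder lookup slips through:
cls = naive_safe_getattr((), Sneaky("__class__"))
```

Once `__class__` is reachable, standard introspection chains lead back to builtins, which is why guards should normalize attacker-supplied names (e.g. `str(name)`) before checking them.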

CVE-2026-39888 takes a different path through exception handling. The two vulnerabilities are independent—patching one doesn't address the other. Organizations that upgraded to 1.5.90 for last week's fix remain vulnerable to this new bypass until they upgrade again.

The rapid discovery of multiple escape routes suggests security researchers (or attackers) are systematically probing PraisonAI's sandbox implementation. More bypasses may follow.

Who's Affected

Any PraisonAI deployment before version 1.5.115 is vulnerable. The framework powers multi-agent AI systems where autonomous agents collaborate on tasks, often processing external data with elevated privileges.

The attack requires network access to submit code to the PraisonAI instance. In typical deployments, agents accept input through APIs or orchestration layers—meaning the attack surface extends to any data the agents process.

If you're running PraisonAI in production, this is the second time in four days you've needed to emergency-patch the same component. That pattern should inform architectural decisions.

Remediation Steps

  1. Upgrade immediately to PraisonAI version 1.5.115 or later
  2. Audit agent configurations to minimize what code execution is permitted
  3. Implement network segmentation between AI infrastructure and sensitive systems
  4. Monitor for anomalous subprocess activity from Python worker processes
  5. Review input validation for any external data flowing into agent workflows
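For step 1, a quick script can flag deployments still on a vulnerable release. The distribution name "praisonaiagents" is taken from the module path in the advisory and should be verified against your environment; the naive version parse assumes plain x.y.z strings:

```python
from importlib.metadata import PackageNotFoundError, version

FIXED = (1, 5, 115)  # first patched release per the advisory

def parse(v):
    # Naive parse, good enough for plain dotted-integer versions.
    return tuple(int(part) for part in v.split("."))

def is_patched(dist="praisonaiagents"):
    """True if the installed release is >= the fixed version."""
    try:
        return parse(version(dist)) >= FIXED
    except PackageNotFoundError:
        return True  # not installed, nothing to patch
```

For anything beyond simple version strings, prefer a real comparator such as `packaging.version.Version` over the naive tuple parse.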

The PraisonAI security advisory provides additional technical details and upgrade instructions.

The Broader AI Security Problem

AI orchestration frameworks that execute arbitrary code remain high-risk targets. We've seen similar sandbox escapes in n8n, Flowise AI, and other automation tools.

The fundamental tension: these platforms need code execution to be useful, but every execution boundary becomes an attack surface. Blocklist-based approaches—attempting to enumerate all dangerous operations—consistently fail against creative attackers who find paths the defenders didn't anticipate.

Until frameworks adopt more robust isolation mechanisms (containers, VMs, or capability-based security), expect this vulnerability class to persist. Organizations deploying AI agents should treat them as high-risk components requiring defense-in-depth, not trusted internal services.
