Claude Code Flaws Let Malicious Repos Steal API Keys, Run Code
Check Point found CVE-2025-59536 and CVE-2026-21852 in Anthropic's Claude Code. Opening a cloned repo could execute code and leak API credentials.
Security researchers at Check Point discovered critical vulnerabilities in Anthropic's Claude Code AI assistant that allowed attackers to achieve remote code execution and steal API credentials simply by tricking a developer into opening a malicious repository. Anthropic has since patched both flaws.
The vulnerabilities, tracked as CVE-2025-59536 (MCP user consent bypass) and CVE-2026-21852 (API key exfiltration), exploited configuration mechanisms in Claude Code's project file structure. An attacker could craft a repository that executes arbitrary commands before users have a chance to review security warnings.
How the Attacks Worked
Hooks-Based RCE
Claude Code's Hooks feature allows developers to define shell commands that run at specific points during the tool's lifecycle—like when a session starts. The feature is useful for automation, but Check Point found that malicious hook configurations in .claude/settings.json could execute commands silently during session initialization.
When a developer ran Claude Code in a directory containing an attacker-controlled configuration, the hooks fired before any trust dialog appeared. The researcher demonstrated launching Calculator on macOS as a proof of concept, but the same technique could establish a reverse shell or download additional payloads.
MCP Consent Bypass
Model Context Protocol (MCP) servers extend Claude Code's capabilities by connecting to external data sources and tools. Legitimate uses include database access, API integrations, and development tooling. But the .mcp.json configuration file could also be weaponized.
Check Point discovered that certain configuration flags—enableAllProjectMcpServers and enabledMcpjsonServers—bypassed user consent dialogs entirely. Commands specified in MCP server configurations executed immediately when Claude Code initialized, before users could read or respond to security prompts.
API Key Exfiltration
The most concerning finding involved environment variable manipulation. By setting ANTHROPIC_BASE_URL to an attacker-controlled endpoint in the project configuration, Claude Code would send API requests to the wrong server before showing any trust prompt. These requests included Authorization headers containing the user's plaintext API key.
Stolen API keys grant access to Anthropic's API under the victim's account. More significantly, Claude Code allows file uploads to workspaces that can be shared across teams. An attacker with a stolen key could potentially access files uploaded by other users in the same organization.
Attack Requirements
The attacks required minimal effort from the attacker's perspective:
- Create a repository with malicious
.claude/settings.jsonor.mcp.jsonfiles - Convince a victim to clone the repository (social engineering, typosquatting, etc.)
- Wait for the victim to open the directory in Claude Code
No special privileges needed. The configuration files look like legitimate project metadata, unlikely to raise suspicion during code review. This attack pattern mirrors what we've seen with other AI agent credential theft targeting the expanding AI tooling ecosystem.
What Anthropic Fixed
Check Point worked with Anthropic's security team to remediate all reported issues before public disclosure. The fixes include:
- Enhanced trust dialogs - Users now see explicit warnings about untrusted configurations before any execution
- Deferred MCP execution - MCP server commands no longer run until after user approval
- Delayed network operations - API requests are postponed until trust dialog confirmation, preventing credential interception during initialization
All patches were deployed prior to the February 25, 2026 disclosure.
Why This Matters for AI Development Tools
AI coding assistants operate with significant privileges on developer machines. They read source code, execute commands, access APIs, and interact with development infrastructure. This privilege level makes them attractive targets for supply chain attacks.
The Claude Code vulnerabilities demonstrate that configuration-as-code patterns—common in modern development—create implicit trust relationships that attackers can exploit. A developer who reviews every line of source code might still overlook a JSON configuration file.
Organizations adopting AI coding tools should consider:
- Repository vetting - Treat cloned repositories as untrusted, especially from unknown sources
- Configuration review - Audit
.claude/,.mcp.json, and similar tool-specific directories - Network monitoring - Watch for unexpected API calls from development machines
- Key rotation - Regularly rotate API keys and revoke those exposed to untrusted environments
The broader lesson applies beyond Claude Code. As AI assistants become standard development tools, their security model becomes part of the software supply chain. Organizations should evaluate AI tools with the same rigor applied to compilers, package managers, and other privileged development infrastructure.
Responsible Disclosure
Check Point coordinated disclosure with Anthropic, allowing time for patches before publishing technical details. This collaboration ensured users were protected before exploitation became possible. The timeline represents industry best practices for handling vulnerabilities in widely-deployed tools.
Developers who previously opened untrusted repositories with Claude Code should rotate their Anthropic API keys as a precaution.
Related Articles
OpenLIT GitHub Actions Flaw Exposes API Keys, Cloud Credentials
CVE-2026-27941 (CVSS 9.9) lets attackers execute code via pull requests to OpenLIT, stealing GITHUB_TOKEN and cloud secrets. Patch to 1.37.1 now.
Feb 26, 2026Microsoft Copilot Bug Exposed Confidential Emails for Weeks
Microsoft confirms Copilot bug bypassed DLP policies, reading confidential emails without authorization. European Parliament blocked Copilot over concerns.
Feb 25, 2026Microsoft Semantic Kernel RCE Flaw Scores Perfect 10.0 CVSS
CVE-2026-26030 in Microsoft's Semantic Kernel Python SDK enables unauthenticated RCE through InMemoryVectorStore. Upgrade to 1.39.4 immediately.
Feb 20, 2026Anthropic Accuses Chinese AI Labs of Industrial-Scale Model Theft
Anthropic alleges DeepSeek, Moonshot AI, and MiniMax used 24,000 fake accounts to extract Claude capabilities through 16 million distillation queries.
Feb 25, 2026