Moltbook Breach Exposes 1.5 Million AI Agent API Keys
Wiz researchers found Moltbook's Supabase database exposed without authentication, leaking 1.5M API tokens, private messages, and plaintext OpenAI keys.
A viral AI social network launched on January 28 and was breached three days later. Moltbook, the Reddit-style platform where autonomous AI agents interact and share content, exposed its entire production database to the internet—including 1.5 million API authentication tokens, 35,000 email addresses, and private conversations containing plaintext OpenAI credentials.
Wiz Research discovered that a hardcoded Supabase API key in client-side JavaScript granted unauthenticated access to approximately 4.75 million database records. Anyone who found this key could read every message, steal every token, and impersonate any AI agent on the platform.
What Was Exposed
The breach affected nearly every data type Moltbook collected:
1.5 million API authentication tokens — These claim tokens and verification codes authenticate AI agents to the platform. With stolen tokens, attackers could fully impersonate any agent, including high-karma accounts and well-known persona agents.
35,000 user email addresses — Plus an additional 29,631 emails from early access signups for developer products.
4,060 private DM conversations — Messages between agents, some containing third-party credentials. Wiz researchers found plaintext OpenAI API keys shared in these conversations.
Complete platform data — Posts, votes, karma scores, and every interaction logged by the system.
The Root Cause: Missing Row Level Security
Moltbook runs on Supabase, a hosted PostgreSQL platform. When properly configured, Supabase uses Row Level Security (RLS) policies to restrict what each API key can access. The public API key is safe to expose in client-side JavaScript because RLS prevents unauthorized access.
Moltbook shipped without RLS enabled.
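This class of misconfiguration is easy to probe for. A minimal sketch, assuming a hypothetical project ref, table name, and key (the URL shape follows Supabase's PostgREST-style REST interface; the network call itself is deliberately left commented out):

```python
import urllib.request

def supabase_rest_request(project_ref: str, table: str, api_key: str) -> urllib.request.Request:
    """Build an unauthenticated read against a Supabase table via its REST interface.

    With RLS enabled, the anon-role key returns only rows its policies allow
    (often none). With RLS disabled, the same request returns everything.
    """
    url = f"https://{project_ref}.supabase.co/rest/v1/{table}?select=*&limit=1"
    return urllib.request.Request(url, headers={
        "apikey": api_key,                     # the "publishable" key from the JS bundle
        "Authorization": f"Bearer {api_key}",  # same key; no user JWT involved
    })

# Hypothetical values for illustration -- not the real exposed credentials.
req = supabase_rest_request("exampleproject", "private_messages", "sb_publishable_example")

# Executing the request is left out here; against a project with RLS disabled,
# it would dump table rows to anyone holding the public key:
# with urllib.request.urlopen(req) as resp:
#     print(resp.status, resp.read())
```

The point of the sketch is that nothing beyond the key shipped in the frontend is needed: no login, no session, no user JWT.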
The exposed credentials:
- Supabase Project: ehxbxtjliybbloantpwq.supabase.co
- Exposed API Key: sb_publishable_4ZaiilhgPir-2ns8Hxg5Tw_JqZU_G6-
This granted full read and write access to the entire database. The researchers noted that "vibe-coded applications" built quickly without security review frequently expose credentials in frontend code. The write access made this particularly dangerous—attackers could have modified content, injected malicious prompts, or manipulated agent behavior at scale.
Discovery and Response
Security researcher Gal Nagli at Wiz discovered the misconfiguration. Another researcher, Jameson O'Reilly, independently found the same vulnerability, underscoring how obvious the exposure was to anyone looking.
The timeline shows rapid response once contacted:
- January 28, 2026 — Moltbook launches publicly
- January 31, 21:48 UTC — Wiz contacts Moltbook team
- January 31, 23:29 UTC — First fix applied
- February 1, 01:00 UTC — Final patches completed
Moltbook secured the database within hours of notification. Wiz confirmed they deleted all data accessed during research and fix verification.
The AI Agent Security Pattern
This breach adds to a troubling pattern around AI agent platforms. Just days ago, we covered infostealers harvesting OpenClaw configuration files, exfiltrating gateway tokens and cryptographic keys. Moltbook's breach demonstrates that cloud-hosted AI platforms face the same risks as locally deployed agents—and potentially worse, since a single database misconfiguration exposes every user simultaneously.
The stolen OpenAI API keys found in private messages highlight a secondary concern. AI agents share credentials with each other, sometimes in plaintext. When one platform gets breached, the blast radius extends to every integrated service those agents access. A similar pattern emerged in the ClawHub malicious skills campaign that distributed Atomic Stealer through fake cryptocurrency automation tools.
Implications for AI Platform Security
"Write access introduces far greater risk than data exposure alone," the Wiz researchers noted. An attacker with write access could have:
- Injected malicious prompts into agent memory files
- Posted content that manipulates other agents through prompt injection
- Modified agent credentials to redirect API calls through attacker infrastructure
- Deleted legitimate content to disrupt the platform
The incident arrived as AI agent adoption accelerates. Organizations deploying autonomous agents need to treat platform security with the same rigor applied to enterprise authentication systems. Credentials shared between agents should be encrypted, not stored in plaintext messages.
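That last point lends itself to automation on the storage path: a platform can redact or reject messages containing recognizable secrets before persisting them. A minimal sketch, assuming the well-known `sk-` prefix of OpenAI keys (the pattern and function names are illustrative, not any platform's actual implementation):

```python
import re

# OpenAI-style secret keys start with "sk-" followed by a long token body.
# The pattern is deliberately loose; real secret scanners combine many such
# patterns with entropy checks.
OPENAI_KEY_RE = re.compile(r"\bsk-[A-Za-z0-9_-]{20,}\b")

def redact_secrets(message: str) -> tuple[str, bool]:
    """Replace apparent API keys with a placeholder; report whether any were found."""
    redacted, count = OPENAI_KEY_RE.subn("[REDACTED_API_KEY]", message)
    return redacted, count > 0

clean, found = redact_secrets("here is my key sk-abc123def456ghi789jkl012 please use it")
```

Redaction at write time means a later database exposure leaks placeholders instead of live third-party credentials.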
Protecting AI Agent Deployments
For organizations building or using AI agent platforms:
- Enforce Row Level Security — Never deploy Supabase or similar platforms without RLS policies. Test authentication boundaries before launch.
- Audit client-side code — Search for hardcoded credentials in JavaScript bundles. API keys visible in browser developer tools are API keys visible to attackers.
- Encrypt inter-agent credentials — If agents need to share API keys, use encrypted channels. Plaintext credentials in messages become breach amplifiers.
- Monitor token usage — Watch for authentication tokens appearing in unexpected locations or being used from anomalous IP ranges.
- Assume breach exposure — If you used Moltbook before February 1, consider any credentials shared on the platform compromised. Rotate API keys, especially OpenAI tokens.
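The client-side audit step can start as a simple scan of build output. A minimal sketch, assuming the `sb_publishable_`/`sb_secret_` key prefixes and the `<ref>.supabase.co` project-URL pattern seen in this incident (file paths and thresholds are illustrative):

```python
import re
from pathlib import Path

# Credential-shaped strings as they appear in shipped JavaScript bundles.
PATTERNS = {
    "supabase_key": re.compile(r"\bsb_(?:publishable|secret)_[A-Za-z0-9_-]{10,}"),
    "supabase_project": re.compile(r"\b[a-z0-9]{15,}\.supabase\.co\b"),
}

def scan_text(name: str, text: str) -> list[tuple[str, str, str]]:
    """Return (file, kind, match) for every credential-shaped string in text."""
    hits = []
    for kind, pattern in PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((name, kind, match))
    return hits

def scan_bundle_dir(root: str) -> list[tuple[str, str, str]]:
    """Scan every .js file under a build output directory (e.g. dist/)."""
    hits = []
    for path in Path(root).rglob("*.js"):
        hits.extend(scan_text(str(path), path.read_text(errors="ignore")))
    return hits
```

Running a scan like this in CI, before every deploy, would have flagged Moltbook's key the moment it landed in a bundle.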
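Token monitoring, likewise, can begin with something as simple as tracking which networks each token has been seen from and flagging first use from a new range. A minimal sketch (the /24 bucketing and in-memory store are simplifying assumptions; production systems would use ASN or geo data and durable storage):

```python
from collections import defaultdict

class TokenMonitor:
    """Flag API tokens when they appear from a network range not seen before."""

    def __init__(self):
        self.seen: dict[str, set[str]] = defaultdict(set)

    @staticmethod
    def network_of(ip: str) -> str:
        # Crude /24 bucketing for IPv4 addresses.
        return ".".join(ip.split(".")[:3])

    def record(self, token: str, ip: str) -> bool:
        """Record a token use; return True if this network is new for the token.

        The first sighting of a token also returns True -- callers should
        treat an initial observation window as baseline, not as an alert.
        """
        net = self.network_of(ip)
        is_new = net not in self.seen[token]
        self.seen[token].add(net)
        return is_new

monitor = TokenMonitor()
monitor.record("agent-token-1", "203.0.113.5")   # baseline sighting
```

A stolen Moltbook token replayed from attacker infrastructure would surface here as a burst of new-network flags across many tokens at once.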
The speed of Moltbook's adoption—and breach—captures the current AI security moment. Platforms can go viral before anyone thinks to check whether the database is properly secured. For defenders tracking emerging threat intelligence around AI systems, the lesson is clear: velocity kills security, and AI platforms are moving very fast.