Chrome Extensions Stealing ChatGPT Chats Hit 900K Users
Two rogue browser extensions masquerading as AI tools exfiltrated complete conversation histories from ChatGPT and DeepSeek to attacker-controlled servers every 30 minutes.
Two Chrome extensions with a combined install base of more than 900,000 users were caught stealing complete conversation histories from ChatGPT and DeepSeek. The extensions transmitted the harvested data to attacker-controlled servers every 30 minutes, potentially exposing source code, business strategies, and personal information users had shared with the AI chatbots.
OX Security discovered the malicious extensions on December 29, 2025. Both have since been removed from the Chrome Web Store, but anyone who installed them before removal should assume their AI conversations were compromised.
The Rogue Extensions
The two extensions impersonated legitimate AI productivity tools:
| Extension Name | Users | Extension ID |
|---|---|---|
| Chat GPT for Chrome with GPT-5, Claude Sonnet & DeepSeek AI | 600,000 | fnmihdojmnkclgjpcoonokmkhjpjechg |
| AI Sidebar with Deepseek, ChatGPT, Claude, and more | 300,000 | inhcgfpbfdjbjogdfjbclgolkmhnooop |
Both extensions mimicked a legitimate tool called "Chat with all AI models (Gemini, Claude, DeepSeek...)" published by AITOPIA, which has approximately 1 million legitimate users. The naming similarity was deliberate—attackers counted on users confusing the fakes with the real thing.
How the Data Theft Worked
The extensions used deceptive permission requests to gain access to sensitive data. During installation, they asked users to consent to "anonymous, non-identifiable analytics data" collection. The actual behavior was anything but anonymous.
Once installed, the extensions:
- Extracted specific DOM elements from ChatGPT and DeepSeek web pages
- Captured complete conversation content—every prompt and response
- Collected all URLs from open Chrome tabs
- Harvested browsing activity and search queries
- Transmitted everything to command-and-control servers every 30 minutes
The 30-minute exfiltration interval meant users didn't need to be actively chatting for their data to be stolen. Historical conversations already visible in the browser were fair game.
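The batch-and-flush cadence behind this can be modeled with a short sketch. This is illustrative only: the real extensions are JavaScript content scripts, and the `ExfilBuffer` class, the injectable clock, and the item format here are invented for demonstration, not taken from the malware.

```python
import time

EXFIL_INTERVAL = 30 * 60  # seconds; matches the reported 30-minute cadence


class ExfilBuffer:
    """Models the batch-and-flush pattern OX Security described: data is
    collected continuously but only transmitted when the interval elapses,
    which is why victims didn't need to be actively chatting."""

    def __init__(self, now=time.monotonic):
        self.now = now            # injectable clock, so the behavior is testable
        self.items = []           # data gathered since the last flush
        self.sent = []            # batches "transmitted" (stands in for C2 POSTs)
        self.last_flush = now()

    def collect(self, item):
        self.items.append(item)
        if self.now() - self.last_flush >= EXFIL_INTERVAL:
            self.flush()

    def flush(self):
        # In the real extensions this step was an HTTP request to a C2 domain.
        self.sent.append(list(self.items))
        self.items.clear()
        self.last_flush = self.now()
```

The point of the model: anything already scraped from the page, including historical conversations rendered in the browser, rides along in the next scheduled batch.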
Command and Control Infrastructure
OX Security researcher Moshe Siman Tov Bustan identified four C2 domains receiving stolen data:
- chatsaigpt[.]com
- deepaichats[.]com
- chataigpt[.]pro
- chatgptsidebar[.]pro
The domain naming pattern mirrors the extension names, suggesting coordinated infrastructure planning. All four domains were registered behind privacy protection services, making attribution difficult.
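Security teams hunting for past exposure can sweep proxy or DNS logs for these indicators. A minimal sketch, assuming a simple whitespace-separated `timestamp host path` log format; the format and the `flag_c2_hits` helper are hypothetical, so adapt the parsing to whatever your logging pipeline actually emits:

```python
# Known C2 domains from OX Security's report (brackets removed for matching).
C2_DOMAINS = {
    "chatsaigpt.com",
    "deepaichats.com",
    "chataigpt.pro",
    "chatgptsidebar.pro",
}


def flag_c2_hits(log_lines):
    """Return log lines whose requested host matches a known C2 domain,
    including subdomains (e.g. api.deepaichats.com)."""
    hits = []
    for line in log_lines:
        parts = line.split()
        if len(parts) < 2:
            continue  # skip malformed lines
        host = parts[1].lower().rstrip(".")
        if any(host == d or host.endswith("." + d) for d in C2_DOMAINS):
            hits.append(line)
    return hits
```

Matching on the registered domain plus subdomains matters here, since C2 operators frequently rotate subdomains while keeping the parent domain.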
What Data Was Exposed
The breach potentially exposed anything users discussed with ChatGPT or DeepSeek:
- Source code and development queries shared by engineers seeking debugging help
- Personally identifiable information users included in prompts
- Confidential business data from strategy discussions
- Legal matters users asked AI assistants to analyze
- Session tokens and authentication data embedded in captured URLs
The URL harvesting from all browser tabs adds another layer of exposure. Users may have had internal corporate dashboards, admin panels, or sensitive applications open in other tabs while using ChatGPT. Those URLs—potentially including session parameters—were also transmitted to attackers.
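When triaging which harvested URLs warrant credential rotation, a quick filter on common credential-bearing query parameters is a useful first pass. A sketch using only the standard library; the `SENSITIVE_PARAMS` list is illustrative, not exhaustive, and real triage should also consider tokens embedded in URL paths and fragments:

```python
from urllib.parse import parse_qs, urlparse

# Parameter names that commonly carry session or API credentials (illustrative).
SENSITIVE_PARAMS = {"token", "session", "sid", "auth", "api_key", "access_token"}


def risky_params(url):
    """Return query parameter names in a captured URL that commonly carry
    credentials, to prioritize which exposed tabs need secret rotation."""
    query = parse_qs(urlparse(url).query)
    return sorted(k for k in query if k.lower() in SENSITIVE_PARAMS)
```

Any URL this flags that was open in a tab alongside ChatGPT or DeepSeek should be treated as if the embedded credential reached the attackers' servers.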
This Is Different from Earlier Campaigns
This operation is distinct from the DarkSpectre campaign we covered in January, which infected 8.8 million users across multiple extensions over seven years. It also differs from the Urban VPN extensions exposed in December, which harvested AI conversations from 8 million users.
The common thread across all these campaigns: browser extensions remain a persistent blind spot in enterprise security. The Chrome Web Store's review process, designed to catch obvious malware, struggles with extensions that exhibit malicious behavior only after installation or only against specific targets. The extension supply chain keeps getting worse—our enterprise browser extension guide covers the full scope of the problem.
Why Organizations Should Care
Employees increasingly use ChatGPT and similar tools for work tasks without formal IT oversight. When those conversations contain proprietary information—code, financial data, customer details—and flow through compromised extensions to unknown third parties, the organization has a data breach it might never detect.
The extensions targeted ChatGPT and DeepSeek specifically because users interact with AI assistants differently than with search engines. People share context, explain problems in detail, and paste code snippets. That conversational intimacy makes stolen AI chat histories more valuable than typical browsing data.
Recommended Actions
For individual users:
- Check if you installed either extension by navigating to chrome://extensions/
- Remove any suspicious AI-related extensions
- Change passwords for any accounts discussed in ChatGPT conversations
- Consider rotating API keys or secrets mentioned in prompts
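The extension check can also be scripted: Chrome stores each installed extension in a subdirectory named by its 32-character ID under the profile's `Extensions` folder. A sketch against the IDs from the table above; the profile path varies by OS and profile name, so `extensions_dir` is something you must supply:

```python
from pathlib import Path

# Extension IDs identified by OX Security.
MALICIOUS_IDS = {
    "fnmihdojmnkclgjpcoonokmkhjpjechg",
    "inhcgfpbfdjbjogdfjbclgolkmhnooop",
}


def installed_malicious(extensions_dir):
    """Return any malicious extension IDs present under a Chrome profile's
    Extensions directory (each installed extension is a subdirectory
    named by its ID)."""
    root = Path(extensions_dir)
    if not root.is_dir():
        return set()
    return {p.name for p in root.iterdir() if p.is_dir()} & MALICIOUS_IDS
```

On Linux the directory is typically `~/.config/google-chrome/Default/Extensions`; on macOS, `~/Library/Application Support/Google/Chrome/Default/Extensions`. A hit means the extension was installed, and its data should be assumed exfiltrated even after removal.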
For security teams:
- Audit browser extension installations across your fleet
- Implement extension allowlisting where feasible
- Train users on extension risks, particularly for productivity tools
- Monitor network traffic for connections to unknown domains from browser processes
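Once an inventory exists, the allowlisting step above reduces to set differences. A minimal sketch, assuming you have already exported a mapping of hostnames to installed extension IDs; the data shapes and the `audit_extensions` helper are hypothetical:

```python
def audit_extensions(fleet_inventory, allowlist):
    """fleet_inventory maps hostname -> set of installed extension IDs.
    Returns only the hosts running extensions outside the allowlist,
    mapped to the offending IDs."""
    return {
        host: ids - allowlist
        for host, ids in fleet_inventory.items()
        if ids - allowlist
    }
```

In managed Chrome environments the enforcement itself is done with enterprise policy (blocking installs by default and allowlisting approved IDs); a script like this is only for auditing what is already deployed.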
The Chrome Web Store has removed both extensions, but users who installed them before removal are still at risk. Uninstalling doesn't undo the data already transmitted to attacker infrastructure.