Malware · January 9, 2026 · 4 min read

Chrome Extensions Stealing ChatGPT Chats Hit 900K Users

Two rogue browser extensions masquerading as AI tools exfiltrated complete conversation histories from ChatGPT and DeepSeek to attacker-controlled servers every 30 minutes.

James Rivera

Two Chrome extensions with a combined install base of more than 900,000 users were caught stealing complete conversation histories from ChatGPT and DeepSeek. The extensions transmitted harvested data to attacker-controlled servers every 30 minutes, potentially exposing source code, business strategies, and personal information users had shared with AI chatbots.

OX Security discovered the malicious extensions on December 29, 2025. Both have since been removed from the Chrome Web Store, but anyone who installed them before removal should assume their AI conversations were compromised.

The Rogue Extensions

The two extensions impersonated legitimate AI productivity tools:

  Extension Name                                               | Users   | Extension ID
  Chat GPT for Chrome with GPT-5, Claude Sonnet & DeepSeek AI  | 600,000 | fnmihdojmnkclgjpcoonokmkhjpjechg
  AI Sidebar with Deepseek, ChatGPT, Claude, and more          | 300,000 | inhcgfpbfdjbjogdfjbclgolkmhnooop

Both extensions mimicked a legitimate tool called "Chat with all AI models (Gemini, Claude, DeepSeek...)" published by AITOPIA, which has approximately 1 million legitimate users. The naming similarity was deliberate—attackers counted on users confusing the fakes with the real thing.

How the Data Theft Worked

The extensions used deceptive permission requests to gain access to sensitive data. During installation, they asked users to consent to "anonymous, non-identifiable analytics data" collection. The actual behavior was anything but anonymous.

Once installed, the extensions:

  1. Extracted specific DOM elements from ChatGPT and DeepSeek web pages
  2. Captured complete conversation content—every prompt and response
  3. Collected all URLs from open Chrome tabs
  4. Harvested browsing activity and search queries
  5. Transmitted everything to command-and-control servers every 30 minutes

The 30-minute exfiltration interval meant users didn't need to be actively chatting for their data to be stolen. Historical conversations already visible in the browser were fair game.
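That fixed 30-minute timer is itself a detection opportunity: traffic driven by a timer has far more regular inter-request gaps than human browsing. As a minimal sketch (not from the OX Security report), a defender with per-destination request timestamps could flag low-variance intervals like this; the `max_jitter` threshold and the sample timestamps are illustrative assumptions:

```python
from statistics import mean, pstdev

def looks_like_beaconing(timestamps, min_events=4, max_jitter=0.05):
    """Flag traffic whose inter-request intervals are suspiciously regular.

    timestamps: sorted epoch seconds of requests to one destination.
    Returns True when the coefficient of variation of the gaps is tiny,
    as it would be for a fixed 30-minute exfiltration timer.
    """
    if len(timestamps) < min_events:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(gaps)
    if avg <= 0:
        return False
    return pstdev(gaps) / avg <= max_jitter

# A timer firing every ~1800 s (30 min) with slight drift:
beacon = [0, 1801, 3600, 5402, 7200]
# Ordinary browsing with irregular gaps:
normal = [0, 40, 900, 1000, 5000]
print(looks_like_beaconing(beacon))  # → True
print(looks_like_beaconing(normal))  # → False
```

Real beaconing detectors add randomized-jitter handling and per-destination baselining, but even this crude check separates timer-driven exfiltration from normal traffic.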

Command and Control Infrastructure

OX Security researcher Moshe Siman Tov Bustan identified four C2 domains receiving stolen data:

  • chatsaigpt[.]com
  • deepaichats[.]com
  • chataigpt[.]pro
  • chatgptsidebar[.]pro
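Teams that retain proxy or DNS logs can sweep them for these indicators. A minimal sketch, assuming one hostname-bearing entry per log line (actual log formats vary by product):

```python
# Known C2 domains from the OX Security report (defanged above).
IOC_DOMAINS = {
    "chatsaigpt.com",
    "deepaichats.com",
    "chataigpt.pro",
    "chatgptsidebar.pro",
}

def find_ioc_hits(log_lines):
    """Return (line_number, domain) pairs for log lines containing an IOC.

    A plain substring match also catches subdomains of an IOC domain.
    """
    hits = []
    for n, line in enumerate(log_lines, 1):
        for domain in IOC_DOMAINS:
            if domain in line:
                hits.append((n, domain))
    return hits

log = [
    "2026-01-09T10:00:00Z GET https://chat.openai.com/ 200",
    "2026-01-09T10:30:02Z POST https://chatsaigpt.com/collect 200",
]
print(find_ioc_hits(log))  # → [(2, 'chatsaigpt.com')]
```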

The domain naming pattern mirrors the extension names, suggesting coordinated infrastructure planning. All four domains were registered behind privacy-protection services, making attribution difficult.

What Data Was Exposed

The breach potentially exposed anything users discussed with ChatGPT or DeepSeek:

  • Source code and development queries shared by engineers seeking debugging help
  • Personally identifiable information users included in prompts
  • Confidential business data from strategy discussions
  • Legal matters users asked AI assistants to analyze
  • Session tokens and authentication data embedded in captured URLs

The URL harvesting from all browser tabs adds another layer of exposure. Users may have had internal corporate dashboards, admin panels, or sensitive applications open in other tabs while using ChatGPT. Those URLs—potentially including session parameters—were also transmitted to attackers.

This Is Different from Earlier Campaigns

This operation is distinct from the DarkSpectre campaign we covered in January, which infected 8.8 million users across multiple extensions over seven years. It also differs from the Urban VPN extensions exposed in December, which harvested AI conversations from 8 million users.

The common thread across all these campaigns: browser extensions remain a persistent blind spot in enterprise security. Chrome Web Store's review process, designed to catch obvious malware, struggles with extensions that exhibit malicious behavior only after installation or only against specific targets.

Why Organizations Should Care

Employees increasingly use ChatGPT and similar tools for work tasks without formal IT oversight. When those conversations contain proprietary information—code, financial data, customer details—and flow through compromised extensions to unknown third parties, the organization has a data breach it might never detect.

The extensions targeted ChatGPT and DeepSeek specifically because users interact with AI assistants differently than with search engines. People share context, explain problems in detail, and paste code snippets. That conversational intimacy makes stolen AI chat histories more valuable than typical browsing data.

Recommended Actions

For individual users:

  1. Check if you installed either extension by navigating to chrome://extensions/
  2. Remove any suspicious AI-related extensions
  3. Change passwords for any accounts discussed in ChatGPT conversations
  4. Consider rotating API keys or secrets mentioned in prompts
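For step 1, the check can also be scripted against Chrome's on-disk profile, since Chrome stores each extension under `<profile>/Extensions/<32-character id>/`. This sketch uses the two extension IDs reported above; the profile path is an assumption that varies by OS (e.g. `~/.config/google-chrome/Default` on Linux, `~/Library/Application Support/Google/Chrome/Default` on macOS):

```python
from pathlib import Path

# Extension IDs of the two rogue extensions named in the report.
BAD_EXTENSION_IDS = {
    "fnmihdojmnkclgjpcoonokmkhjpjechg",
    "inhcgfpbfdjbjogdfjbclgolkmhnooop",
}

def find_bad_extensions(profile_dir):
    """Return the subset of BAD_EXTENSION_IDS installed under a Chrome profile.

    Each installed extension appears as a directory named after its ID
    inside the profile's Extensions/ folder.
    """
    ext_root = Path(profile_dir) / "Extensions"
    if not ext_root.is_dir():
        return set()
    installed = {p.name for p in ext_root.iterdir() if p.is_dir()}
    return installed & BAD_EXTENSION_IDS
```

A non-empty result means the machine had one of the rogue extensions and its user should proceed to steps 2 through 4.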

For security teams:

  1. Audit browser extension installations across your fleet
  2. Implement extension allowlisting where feasible
  3. Train users on extension risks, particularly for productivity tools
  4. Monitor network traffic for connections to unknown domains from browser processes
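For step 2, managed Chrome deployments can enforce allowlisting through enterprise policy (on Linux, a JSON file under `/etc/opt/chrome/policies/managed/`). A minimal sketch: block all extensions by default and permit only vetted IDs, where `<vetted-extension-id>` is a placeholder for each 32-character ID your team approves.

```json
{
  "ExtensionInstallBlocklist": ["*"],
  "ExtensionInstallAllowlist": ["<vetted-extension-id>"]
}
```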

The Chrome Web Store has removed both extensions, but users who installed them before removal are still at risk. Uninstalling doesn't undo the data already transmitted to attacker infrastructure.
