VulnerabilitiesJanuary 17, 20264 min read

Reprompt Attack Turned Microsoft Copilot Into a Data Thief

Varonis researchers disclosed a vulnerability chain that let attackers exfiltrate user data through Copilot with a single malicious link click. Microsoft has patched the issue.

Marcus Chen

Varonis Threat Labs disclosed a vulnerability in Microsoft Copilot Personal that allowed attackers to silently exfiltrate user data through a single malicious link click. The attack—dubbed "Reprompt"—chained three techniques to bypass Copilot's safety mechanisms and maintain persistent access to victim sessions.

Microsoft confirmed the issue has been patched as part of the January 2026 security updates. Enterprise customers using Microsoft 365 Copilot were not affected.

How Reprompt Worked

The attack exploited Copilot's URL parameter functionality, which accepts prompts via the 'q' parameter to streamline user experience. Attackers could embed malicious instructions in this parameter and deliver the URL to targets through phishing, social engineering, or malicious websites.

Three techniques combined to create a dangerous attack chain:

Parameter-to-Prompt (P2P) Injection: Copilot automatically executes prompts embedded in the URL's 'q' parameter when a page loads. If an attacker crafts malicious instructions and delivers the URL to a victim, Copilot performs actions without user knowledge or consent.

Double-Request Bypass: Copilot's safeguards only checked the first request for malicious content. By instructing Copilot to repeat sensitive actions twice, attackers could slip requests past the initial filter. The second attempt "worked flawlessly," according to researchers.

Chain-Request Persistence: After the initial compromise, the attacker's server issued follow-up instructions based on prior responses in an ongoing sequence. This enabled continuous data theft without additional user interaction—even after the victim closed the Copilot chat window.

What Could Attackers Steal?

The vulnerability gave attackers access to whatever information Copilot could retrieve from the user's Microsoft ecosystem:

  • Files accessed throughout the day
  • User location and residence details
  • Planned vacations and travel schedules
  • Personal conversation history
  • Username and identity information

"Attackers could use prompts such as 'Summarize all of the files that the user accessed today,' 'Where does the user live?' or 'What vacations does he have planned?'" Varonis explained.

All commands were delivered from the attacker's server after the initial prompt, making it impossible to determine what data was being exfiltrated by inspecting the starting URL. Client-side security tools couldn't detect the exfiltration in progress.

Why Detection Was Difficult

Reprompt differed from typical AI security issues because it required minimal user interaction—just a single click on what appeared to be a legitimate Microsoft link. No plugins needed. No prompts to review or approve. No added permissions to accept.

Traditional warning signs like suspicious prompts, obvious copy/paste behavior, or permission requests never appeared. The victim might never know their session was compromised.

The attack also persisted beyond the initial interaction. Even closing the Copilot chat didn't terminate the data exfiltration session running in the background.

Microsoft's Response

Varonis disclosed the vulnerability to Microsoft on August 31, 2025. Microsoft confirmed it was patched in the January 2026 Patch Tuesday updates.

"We appreciate Varonis Threat Labs for responsibly reporting this issue," Microsoft stated. "We have rolled out protections that address the scenario described and are implementing additional measures to strengthen safeguards against similar techniques as part of our defense-in-depth approach."

Why This Matters

AI assistants with deep integration into user data present new attack surfaces that security teams are still learning to defend. Reprompt demonstrates that prompt injection isn't just about making chatbots say embarrassing things—it can enable serious data theft.

Organizations adopting AI assistants should treat them as privileged applications with access to sensitive data. The same security principles that apply to any system with broad data access—least privilege, monitoring, access controls—apply here too.

The attack also highlights the tension between AI usability and security. Features designed to streamline user experience, like auto-executing URL parameters, can become exploitation vectors when attackers understand how to abuse them.

Recommendations

Organizations using AI assistants should:

  1. Treat AI deep links and auto-filled prompts as untrusted input
  2. Ensure safeguards apply across repeated and chained requests
  3. Enforce strong identity and session protections including MFA
  4. Restrict AI assistant access to managed devices and trusted networks
  5. Apply least-privilege permissions and sensitivity labels across connected content

Individual users should be cautious about clicking links that could invoke AI assistants with pre-filled prompts, particularly from untrusted sources.

Related Articles