PROBABLYPWNED
VulnerabilitiesMay 9, 20263 min read

Microsoft Patches 3 Copilot Flaws That Leaked Sensitive Data

CVE-2026-26129, CVE-2026-26164, and CVE-2026-33111 allowed information disclosure via injection attacks in Microsoft 365 Copilot. No admin action required.

Marcus Chen

Microsoft disclosed and remediated three information disclosure vulnerabilities in Microsoft 365 Copilot and Copilot Chat in Edge on May 7, 2026. The flaws allowed attackers to extract sensitive data through injection attacks without requiring authentication or user interaction.

All three vulnerabilities have been patched server-side. Administrators don't need to apply updates or modify configurations—Microsoft handled remediation through its cloud infrastructure.

The Three Vulnerabilities

CVE-2026-26129 affects Microsoft 365 Copilot's Business Chat feature. Classified under CWE-74 (Injection), this vulnerability has a network-based attack vector requiring no privileges or user interaction. The confidentiality impact is rated high, meaning successful exploitation could expose significant amounts of sensitive organizational data.

CVE-2026-26164 targets Copilot Chat embedded in Microsoft Edge. This command injection flaw (CWE-77) carries a CVSS score of 7.5 and shares the same attack profile: network-accessible, no privileges needed, no user interaction required.

CVE-2026-33111 also affects Copilot Chat in Edge with identical severity ratings and attack characteristics as CVE-2026-26164.

What Could Be Exposed

The injection vulnerabilities allowed attackers to manipulate Copilot's queries in ways that caused it to disclose information it shouldn't. Given Copilot's integration with Microsoft 365 data—including emails, documents, Teams messages, and calendar entries—the potential exposure was substantial.

An attacker successfully exploiting these flaws could potentially retrieve:

  • Email content and metadata from connected Exchange mailboxes
  • Document contents from SharePoint and OneDrive
  • Teams conversation history
  • Calendar appointments and meeting details

The "no user interaction required" classification is particularly concerning. Unlike phishing attacks that need victims to click malicious links, these vulnerabilities could be exploited against any exposed Copilot endpoint without the target's knowledge.

AI Systems and Injection Attacks

These vulnerabilities highlight a growing attack surface as organizations deploy AI assistants with broad data access. Copilot's value comes from its ability to synthesize information across Microsoft 365 services—the same capability that makes injection attacks potentially devastating.

Prompt injection has emerged as a significant threat class for large language model deployments. We covered similar risks in the Vidar infostealer campaign that targeted AI agent configurations, and the attack surface continues expanding as organizations grant AI systems deeper infrastructure access.

The core challenge: AI assistants designed to be helpful will often comply with requests embedded in content they process, even when those requests come from attackers rather than legitimate users. Microsoft has been working on guardrails to prevent this, but these CVEs demonstrate that gaps remain.

Defense-in-Depth Recommendations

While Microsoft has patched these specific flaws, security teams should review their Copilot deployments:

  1. Audit data access permissions - Ensure Copilot only connects to data sources appropriate for the users who interact with it
  2. Apply least-privilege principles - Limit which Microsoft 365 services Copilot can query
  3. Monitor Copilot usage logs - Look for unusual query patterns or bulk data access
  4. Review sensitivity labels - Ensure documents with highly sensitive content are appropriately labeled to restrict AI access

Microsoft's Security Response Center provides detailed guidance on Copilot security controls. Organizations heavily invested in Microsoft 365 should treat Copilot permissions with the same rigor applied to service account access.

Broader Context

The May Patch Tuesday release addressed 74 vulnerabilities across Microsoft products, with seven rated critical. The Copilot flaws weren't part of that count since they're cloud-service vulnerabilities rather than traditional software patches.

This disclosure follows Microsoft's January commitment to increased transparency around AI product security. The company's willingness to assign CVE identifiers to cloud-only vulnerabilities—rather than silently patching them—represents progress in AI security accountability, even if the details provided remain limited.

Related Articles