PROBABLYPWNED
Vulnerabilities · February 25, 2026 · 4 min read

Microsoft Copilot Bug Exposed Confidential Emails for Weeks

Microsoft confirms a Copilot bug bypassed DLP policies, reading confidential emails without authorization. The European Parliament had already blocked built-in Copilot features over cloud-upload concerns.

Marcus Chen

Microsoft confirmed that a bug in Microsoft 365 Copilot allowed its AI to read and summarize confidential emails for weeks, bypassing data loss prevention policies designed to protect sensitive information. The company began rolling out fixes in early February after detecting the issue on January 21, 2026.

The vulnerability, tracked internally as CW1226324, affects Copilot's work tab chat feature. The AI incorrectly processed draft and sent emails carrying confidentiality labels, summarizing content that DLP rules should have blocked.

What Went Wrong

Enterprise customers configure data loss prevention policies to control how sensitive information flows through Microsoft services. When documents or emails carry sensitivity labels like "Confidential" or "Highly Confidential," DLP rules can prevent AI assistants from ingesting that content.

The bug broke this protection. According to Microsoft's service alert, Copilot accessed messages stored in users' Sent Items and Drafts folders regardless of confidentiality labels. Users could inadvertently expose sensitive information simply by using Copilot's chat feature.

The exposure window stretched from January 21 through early February, when Microsoft began deploying fixes. Organizations using Copilot during this period may have had labeled emails processed by the AI without DLP enforcement.

Scope and Impact

Microsoft declined to disclose how many users or organizations were affected, stating only that "the scope of impact may change as the investigation continues." The company hasn't confirmed whether any actual data exposures resulted from the vulnerability.

The affected feature specifically involves the Copilot work tab chat interface. Users interacting with Copilot in that context could receive summaries of confidential emails they or colleagues sent, even when DLP policies prohibited AI access to labeled content.

For organizations in regulated industries, this raises compliance questions. If confidential emails were processed against policy, audit logs may not accurately reflect data handling. Understanding the full exposure requires reviewing Copilot interaction logs from the affected period.

European Parliament Response

The European Parliament's IT department blocked built-in Copilot features on devices issued to EU lawmakers. Their stated concern: the tools could upload confidential correspondence to the cloud without proper controls.

That decision predated this specific bug disclosure, but the timing reinforces parliamentary concerns. EU institutions handle sensitive legislative and diplomatic communications where AI exposure could create security and sovereignty issues.

The parliamentary block demonstrates growing institutional wariness about enterprise AI assistants. Organizations that handle genuinely sensitive information are questioning whether current safeguards adequately protect against both intentional misuse and implementation bugs.

Remediation Status

Microsoft's service alert indicates the root cause has been addressed and fixes have "saturated across the majority of affected environments." The company hasn't provided a complete remediation timeline or confirmed when all affected tenants will be fully patched.

Organizations concerned about exposure during the vulnerability window should:

  1. Review Copilot audit logs for interactions involving confidential content
  2. Verify DLP policy enforcement is now working correctly in test scenarios
  3. Assess regulatory notification requirements if labeled data was processed improperly
  4. Document the incident for compliance records even if no actual exposure is confirmed
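As a rough sketch of step 1, the log review could be scripted against an exported unified audit log. The `CreationTime` and `RecordType` field names and the `CopilotInteraction` record type follow the Microsoft 365 audit schema, but treat the exact values as assumptions to verify against your tenant's own export; the window end date is left as a parameter because "early February" varied as fixes rolled out.

```python
from datetime import datetime, timezone

# Start of the exposure window Microsoft reported (January 21, 2026). The end
# ("early February") varied by tenant as fixes rolled out, so it is a parameter.
WINDOW_START = datetime(2026, 1, 21, tzinfo=timezone.utc)

def copilot_events_in_window(records, window_end, window_start=WINDOW_START):
    """Return audit records for Copilot interactions inside the exposure window.

    `records` is a list of dicts from a unified audit log export. The
    'CopilotInteraction' record type and the CreationTime/RecordType field
    names are assumptions based on the Microsoft 365 audit schema; verify
    them against an actual export before relying on this filter.
    """
    hits = []
    for rec in records:
        if rec.get("RecordType") != "CopilotInteraction":
            continue
        # Audit exports use ISO 8601 timestamps, often with a trailing 'Z'.
        ts = datetime.fromisoformat(rec["CreationTime"].replace("Z", "+00:00"))
        if window_start <= ts <= window_end:
            hits.append(rec)
    return hits
```

Records that survive the filter are the interactions worth cross-checking against sensitivity labels and, per step 3, against regulatory notification thresholds.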

The Broader AI Security Challenge

This bug illustrates why AI integration creates novel security challenges. Traditional DLP enforcement relies on well-understood data flows: emails go here, documents stay there, access controls apply consistently. AI assistants create new pathways where sensitive data can flow in unexpected directions.

We've covered related concerns with prompt injection attacks against Copilot and broader issues with enterprise AI credential exposure. Each incident reinforces that deploying AI in enterprise environments requires careful attention to where data flows and what controls actually apply.

Microsoft's rapid response once the bug was detected suggests appropriate incident handling. But the weeks-long exposure window highlights how implementation gaps can persist unnoticed until they surface through auditing or complaints. Organizations deploying enterprise AI should validate that security controls work as documented, not just trust configuration claims.
