Microsoft’s Copilot was recently caught summarizing confidential emails without the proper permissions, completely bypassing the security policies designed to keep that sensitive information protected. This is a significant concern for companies that rely on AI assistants to handle their data.
According to Mashable, the issue specifically affected Copilot Chat for some Microsoft 365 enterprise users. Copilot Chat rolled out to Microsoft 365 apps like Word, Excel, Outlook, and PowerPoint for business customers last fall, and is marketed as a content-aware AI assistant that helps users create documents and process information.
The bug, which Microsoft tracked internally as CW1226324, caused emails labeled as confidential to be “incorrectly processed by Microsoft 365 Copilot chat.” Copilot Chat was pulling in and summarizing emails from users’ Sent Items and Drafts folders, even though these messages had sensitivity labels specifically designed to block automated access.
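To make the failure mode concrete, here is a minimal, hypothetical sketch of the kind of check that is supposed to sit between an AI assistant and labeled content. Everything in it, from the Message type to the EXTRACT_BLOCKED_LABELS set, is invented for illustration and does not reflect Microsoft's actual code; the reported bug amounts to a gate like this being skipped or failing.

```python
# Hypothetical illustration of a sensitivity-label gate in front of an AI
# assistant. All names here are invented for the example; this is not
# Microsoft's implementation.
from dataclasses import dataclass

# Labels that, by policy, must never flow into automated processing
# (an assumed set for this sketch).
EXTRACT_BLOCKED_LABELS = {"Confidential", "Highly Confidential"}

@dataclass
class Message:
    subject: str
    body: str
    sensitivity_label: str | None  # e.g. "Confidential", or None if unlabeled

def assistant_may_ingest(msg: Message) -> bool:
    """Return True only if the message's label permits automated access."""
    return msg.sensitivity_label not in EXTRACT_BLOCKED_LABELS

def summarize_folder(messages: list[Message]) -> list[str]:
    summaries = []
    for msg in messages:
        # The reported bug is equivalent to this check not being applied,
        # letting labeled mail from Sent Items and Drafts reach the model.
        if not assistant_may_ingest(msg):
            continue
        summaries.append(f"Summary of '{msg.subject}': {msg.body[:80]}...")
    return summaries
```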
Integrating AI into the workplace comes with serious security risks that businesses cannot afford to ignore
This kind of incident highlights the risks of deploying AI in the workplace. Businesses that use AI assistants can face serious problems, including prompt injection vulnerabilities and data compliance violations, when these tools access information they are not authorized to reach. Microsoft has been expanding its AI tools across multiple industries, which makes security issues like this one even more consequential.
Microsoft confirmed that a code issue caused the bug. The company began rolling out a fix in early February and is actively monitoring the deployment of the patch. It is also reaching out to some of the affected users to confirm the fix is working correctly. Microsoft has not disclosed how many organizations were impacted by the security bypass, but noted that the scope might change as its investigation continues.
The incident is a reminder that even large tech companies face unforeseen challenges when integrating powerful new technologies into their products. Microsoft has also drawn attention recently for quietly pushing software updates to smart TVs, raising further questions about how the company handles user consent.
Companies that depend on these systems to handle sensitive data take on real risk when the tools do not behave as expected. For businesses using Microsoft 365, this serves as a clear example of why data security policies need to be tested and verified regularly, especially when AI tools are involved in handling confidential communications.
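One practical starting point is a periodic audit of what an assistant could plausibly see. The sketch below is a rough example rather than an official procedure: it uses the Microsoft Graph API to list Sent Items messages whose legacy Outlook sensitivity field is set to confidential. It assumes a valid access token with the Mail.Read scope in an environment variable (GRAPH_ACCESS_TOKEN is an invented name), and note that inspecting Microsoft Purview sensitivity labels specifically requires different tooling and is not shown here.

```python
# A rough audit sketch, assuming you already hold a valid Microsoft Graph
# access token with the Mail.Read scope (token acquisition is out of scope).
# This reads the legacy Outlook 'sensitivity' property on messages, not
# Purview sensitivity labels.
import os
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = os.environ["GRAPH_ACCESS_TOKEN"]  # assumed to be set by the caller

def confidential_sent_items(top: int = 50) -> list[str]:
    """List subjects of Sent Items marked 'confidential' in Outlook."""
    resp = requests.get(
        f"{GRAPH}/me/mailFolders/sentitems/messages",
        params={"$select": "subject,sensitivity", "$top": top},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return [
        m.get("subject", "(no subject)")
        for m in resp.json().get("value", [])
        if m.get("sensitivity") == "confidential"
    ]

if __name__ == "__main__":
    for subject in confidential_sent_items():
        print(subject)
```

Running something like this on a schedule gives a baseline of how much labeled mail exists in folders an assistant can reach, which makes it easier to notice when a tool starts touching content it should not.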
Published: Feb 19, 2026 10:45 am