Microsoft Office Bug Leaked Private Emails to Copilot AI

▼ Summary
– A bug in Microsoft’s Copilot AI incorrectly processed and summarized customers’ confidential emails for weeks without permission.
– The bug allowed Copilot Chat to read and outline emails, even when customers had data loss prevention policies in place to block such access.
– The issue, tracked as CW1226324, specifically affected draft and sent emails that had a confidential label applied.
– Microsoft began rolling out a fix for this security vulnerability earlier in February.
– Separately, the European Parliament’s IT department blocked AI features on work devices over concerns about uploading confidential data to the cloud.
A significant security flaw within Microsoft’s Copilot AI service inadvertently exposed private email content for several weeks. The bug, which Microsoft has since confirmed, enabled the Copilot Chat feature to access and summarize emails marked as confidential, bypassing established data protection controls. This issue persisted even for organizations that had implemented specific data loss prevention policies designed to shield sensitive information from being processed by Microsoft’s large language models. The vulnerability highlights the ongoing challenges of integrating advanced AI tools with enterprise-grade security and privacy safeguards.
The problem, identified internally as CW1226324, specifically affected draft and sent emails that carried a confidential label. Microsoft 365 Copilot Chat was incorrectly processing these protected messages. The feature, available to paying Microsoft 365 subscribers, integrates AI-powered chat assistance directly into core applications like Word, Excel, and PowerPoint. For an extended period starting in January, this integration had an unintended consequence: it could read and outline the contents of emails users intended to keep private.
Microsoft has stated that it began rolling out a fix for the lapse earlier in February, updating the Copilot system to stop processing confidential emails. However, the scope of the incident remains unclear. When approached for comment, a Microsoft spokesperson declined to say how many customers were potentially impacted by the exposure, leaving many organizations uncertain about their level of risk.
The incident comes amid growing institutional caution about AI tools and data privacy. Earlier the same week, the European Parliament’s internal IT department disabled the built-in AI capabilities on all official devices issued to lawmakers, citing concerns that these features could upload confidential parliamentary correspondence to external cloud servers without proper authorization. The move by a major legislative body underscores widespread apprehension that AI systems can inadvertently mishandle sensitive information, and reinforces the need for robust, transparent security protocols as these technologies become deeply embedded in workplace software.
(Source: TechCrunch)
