
Microsoft Copilot Accessed Private Emails Without Consent

Summary

– A bug in Microsoft 365’s Copilot Chat AI assistant incorrectly summarized emails labeled as confidential, bypassing data protection policies.
– The flaw, first detected in late January, caused the assistant to pull and process sensitive emails from users’ Sent Items and Drafts folders.
– Microsoft attributed the issue to a code defect and began deploying a fix in early February.
– This incident highlights the new cybersecurity risks created by integrating AI assistants into workplace software.
– Microsoft has not disclosed how many organizations were affected, as the investigation into the bug’s scope is ongoing.

A significant security flaw within Microsoft’s AI-powered Copilot service recently exposed private emails, highlighting the unforeseen vulnerabilities that can accompany the rapid integration of artificial intelligence into business software. The bug, which Microsoft has since addressed, allowed the Copilot Chat assistant to access and summarize messages that were explicitly marked as confidential, directly contravening established data protection policies. This incident underscores the critical need for rigorous security testing as AI becomes deeply embedded in workplace productivity tools.

The problem was specific to the Copilot Chat feature integrated into Microsoft 365 applications like Outlook, Word, and Excel. According to a technical report, the AI incorrectly processed emails that carried sensitivity labels, which are designed to prevent automated systems from accessing protected content. Essentially, the assistant bypassed organizational Data Loss Prevention (DLP) policies, pulling information from users’ Sent Items and Drafts folders that should have been completely off-limits.
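To make the failure concrete, here is a minimal sketch of the kind of label-aware gate an assistant like this is expected to apply before summarizing a message. Everything in it is a hypothetical illustration: the class, the label values, and the function names are assumptions made for the sketch, not Microsoft’s actual API.

```python
# Hypothetical illustration of a DLP-style gate in front of an AI summarizer.
# None of these names correspond to Microsoft's real APIs.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Message:
    subject: str
    body: str
    sensitivity_label: Optional[str]  # e.g. "Confidential"; None if unlabeled

# Labels that, under the organization's policy, automated systems must not read.
PROTECTED_LABELS = {"Confidential", "Highly Confidential"}

def dlp_allows_processing(message: Message) -> bool:
    """Return True only when the message carries no protected label."""
    return message.sensitivity_label not in PROTECTED_LABELS

def summarize_for_user(messages: List[Message]) -> List[str]:
    """Summarize only the messages the policy permits the assistant to read."""
    allowed = [m for m in messages if dlp_allows_processing(m)]
    return [f"Summary of '{m.subject}': {m.body[:60]}..." for m in allowed]

if __name__ == "__main__":
    mailbox = [
        Message("Q3 results draft", "Revenue figures are...", "Confidential"),
        Message("Team lunch", "Tacos at noon on Friday?", None),
    ]
    # Only the unlabeled message should ever reach the summarizer.
    print(summarize_for_user(mailbox))
```

In terms of this sketch, the reported defect behaved as if the filtering step were skipped, so labeled messages from Sent Items and Drafts reached the summarizer anyway.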

Microsoft internally identified the issue, tracked as CW1226324, in late January. The company’s advisory confirmed that a code defect was to blame, leading to the unauthorized summarization of sensitive communications. This type of vulnerability represents a new frontier in corporate cybersecurity, where AI assistants, while powerful, can inadvertently create pathways for data compliance violations and privacy breaches.

In response, Microsoft began deploying a corrective fix in early February. The company is actively monitoring the rollout and has engaged with some affected users to confirm the patch’s effectiveness. The exact scale of the impact remains unclear, as Microsoft has not released figures on how many organizations were involved, noting that the investigation is ongoing and the scope could evolve.

This event serves as a potent reminder for businesses. While AI assistants promise enhanced productivity, they also introduce complex security challenges that require constant vigilance. Organizations must balance the adoption of these powerful tools with a robust understanding of their potential to access and mishandle protected data, ensuring that internal safeguards are always one step ahead.

(Source: Mashable)

Topics

AI security, Microsoft Copilot, data privacy, software bugs, cybersecurity risks, AI integration, data compliance, tech journalism, bug fixes, enterprise software