Microsoft Copilot Bug Exposes Confidential Emails to AI Access

Microsoft’s Copilot, an AI assistant built into its productivity applications, has been found to access and summarize email messages marked as “confidential,” despite controls intended to prevent exactly that. The company notified affected users of a flaw in Microsoft 365 Copilot chat (the “work” tab), which should have excluded messages labeled as sensitive.

According to a message shared with users, a coding error allowed the AI to process emails in the Sent Items and Drafts folders, in violation of confidentiality protocols. Microsoft stated, “Users’ email messages with a confidential label applied are being incorrectly processed by Microsoft 365 Copilot chat.” The company is investigating the source and potential impact of the bug, and a fix has been rolling out since early February 2024.
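Microsoft has not described the underlying defect, but the symptom — labeled messages in Sent Items and Drafts being processed while the same labels work elsewhere — is consistent with a filter that enforces sensitivity labels in some folders but not others. A purely hypothetical sketch of that bug class (all names and structures invented, not Microsoft’s code):

```python
# Hypothetical sketch: a label check scoped to one folder, so labeled
# messages in "Sent Items" and "Drafts" slip through to the AI.
CONFIDENTIAL = "confidential"

def eligible_for_ai(message: dict) -> bool:
    """Return True if the AI assistant may process this message."""
    # BUG: the sensitivity-label check is only applied to the Inbox,
    # so confidential messages in other folders are not excluded.
    if message["folder"] == "Inbox":
        return message.get("label") != CONFIDENTIAL
    return True  # Sent Items / Drafts bypass the check entirely

def eligible_for_ai_fixed(message: dict) -> bool:
    """Corrected: apply the label check regardless of folder."""
    return message.get("label") != CONFIDENTIAL

msgs = [
    {"folder": "Inbox", "label": "confidential"},
    {"folder": "Sent Items", "label": "confidential"},
]
assert eligible_for_ai(msgs[0]) is False        # correctly excluded
assert eligible_for_ai(msgs[1]) is True         # the leak
assert eligible_for_ai_fixed(msgs[1]) is False  # excluded after fix
```

The point of the sketch is only that a correct exclusion rule must be a property of the message (its label), not of where the message happens to live.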

### Investigation and Response

The problem was first identified on January 21, 2024, and is tracked internally by Microsoft under the identifier CW1226324. The company has not disclosed how many organizations were affected, but it has committed to notifying users once the fix is fully deployed and the bug is confirmed resolved.

Microsoft 365 Copilot was introduced in September 2023 as a tool that lets users interact with an AI agent across Microsoft applications such as Word and Excel, assisting them by retrieving information from emails, documents, and chats. To protect privacy, Microsoft provides administrative controls that restrict the AI’s access to sensitive material. This bug defeated those controls, allowing Copilot to summarize all emails, including those marked confidential.
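The administrative controls described above amount to label-aware gating: content carrying a restricted sensitivity label should be filtered out before it ever reaches the language model. A minimal, generic sketch of that design (the class and function names are illustrative, not Microsoft’s actual API):

```python
# Hypothetical label-aware retrieval gate: drop restricted documents
# before handing anything to the model for summarization.
from dataclasses import dataclass
from typing import Optional

RESTRICTED_LABELS = {"confidential", "highly confidential"}

@dataclass
class Document:
    text: str
    label: Optional[str] = None  # sensitivity label, if any

def gate_for_model(docs: list[Document]) -> list[Document]:
    """Keep only documents whose label permits AI processing."""
    return [
        d for d in docs
        if (d.label or "").lower() not in RESTRICTED_LABELS
    ]

docs = [
    Document("Q3 acquisition plan", label="Confidential"),
    Document("Cafeteria menu for Friday"),
]
visible = gate_for_model(docs)
assert [d.text for d in visible] == ["Cafeteria menu for Friday"]
```

Placing the check at the retrieval boundary, rather than inside the model prompt, is what makes it enforceable: content the model never sees cannot be summarized or leaked.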

### Broader Security Concerns

The incident raises significant concerns regarding the security of generative AI in corporate environments. There are growing fears of breaches in confidentiality, especially in sensitive industries. This situation is further complicated by the use of shadow AI, where employees utilize AI tools without official approval or oversight from IT departments, thereby circumventing established data-protection guidelines.

A report from Netskope indicates that nearly one-third of employees engage with AI technologies covertly at work, driving a sharp rise in data-policy violations. Nor is this issue isolated: in 2024, researchers uncovered security flaws in Microsoft’s retrieval-augmented generation (RAG) systems, highlighting ongoing challenges in ensuring confidentiality in Copilot.

As businesses increasingly adopt AI technologies, the urgency of addressing these security risks has never been clearer. With AI capabilities evolving rapidly, organizations must remain vigilant and proactive in safeguarding sensitive information from unauthorized access. This incident is a reminder of the potential pitfalls of integrating AI into the workplace.