American tech giant Microsoft has admitted that a bug in its Microsoft 365 Copilot Chat service allowed the AI assistant to access confidential email messages. The flaw let the tool analyze and summarize content from the "Sent" and "Drafts" email folders, bypassing safeguards that protect sensitive data. The problem, identified in January 2026, primarily affected corporate users of the cloud-based office suite.

Breach of Correspondence Confidentiality

A bug in Microsoft 365 Copilot allowed the AI assistant to scan draft and sent emails without user consent.

Ineffectiveness of DLP Filters

Data Loss Prevention (DLP) safeguards failed to stop the AI from processing information marked as confidential.

Identification of a Technical Vulnerability

The issue, tracked as CW1226324, persisted from January 21 until February 19, 2026, when Microsoft deployed a patch.

Microsoft has officially confirmed a security vulnerability in its Copilot AI assistant that compromised the privacy of electronic correspondence for many business users. A malfunction in the Copilot Chat component allowed the tool to read and then summarize confidential emails, even when they were covered by dedicated data protection policies. Findings indicate the problem primarily concerned messages in the "Sent" and "Drafts" folders, while the inbox appears to have remained secure. The bug, tracked under code CW1226324, was first noticed on January 21, 2026, but establishing the full scale of the issue and deploying a patch took the company several weeks.

Artificial intelligence in office tools relies on large language models, which require constant access to user data, a fact that has raised concerns among experts about the protection of corporate secrets from the start. Although Microsoft shipped a corrective update in February, cybersecurity experts note that the incident undermines trust in generative AI in the workplace. The DLP mechanisms that should block the transmission of sensitive information to the language model proved ineffective in this case. Enterprises using Microsoft 365 have been instructed to verify their privacy settings. Microsoft emphasizes that the incident was technical in nature, an error in the integration of the assistant with Outlook mail protocols, and not the result of a deliberate hacker attack.

30 days: AI had uncontrolled access to emails

The introduction of Copilot into daily office tasks was meant to revolutionize productivity, but incidents like this one show the risks of granting automation excessive access to digital resources. The Redmond-based company assures that it is now monitoring its systems for similar anomalies. End users may not have been aware that their unfinished drafts or privately sent correspondence were being processed by algorithms to build responses to colleagues' queries.

"The error caused the tool to display information from messages stored in drafts to some corporate users," a Microsoft spokesperson said.

Copilot Security Incident Timeline:
January 21, 2026: Bug detected
February 18, 2026: Microsoft confirms the issue
February 19, 2026: Patch deployed
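To make the failed safeguard concrete: a DLP layer conceptually sits between the mailbox and the model and refuses to forward anything flagged as protected. The sketch below is a hypothetical, simplified illustration of that gatekeeping in Python; the field names and sensitivity markers are assumptions for the example, not Microsoft's actual DLP pipeline.

```python
# Hypothetical illustration of DLP-style gatekeeping: content flagged as
# protected is withheld before it can reach a language model.
# Field names and marker values are assumptions for this example.

PROTECTED_SENSITIVITIES = {"private", "confidential"}

def clear_for_assistant(messages):
    """Return only the messages whose sensitivity flag permits AI processing."""
    return [
        msg for msg in messages
        if msg.get("sensitivity", "normal") not in PROTECTED_SENSITIVITIES
    ]

mailbox_sample = [
    {"subject": "Team lunch", "sensitivity": "normal"},
    {"subject": "Unannounced merger draft", "sensitivity": "confidential"},
]

# Only the unprotected message should ever be summarized by the assistant.
assert [m["subject"] for m in clear_for_assistant(mailbox_sample)] == ["Team lunch"]
```

The reporting suggests that, for the Drafts and Sent folders, the equivalent of this check did not take effect, which is how labeled content still reached the model.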
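For administrators asked to verify their settings, one practical first step is to inventory which messages in the affected folders carry a confidentiality flag. The following sketch uses the public Microsoft Graph API (the mailFolders endpoint, the well-known folder names, and the message sensitivity property are documented Graph features); the token placeholder is an assumption, in practice obtained via MSAL with the Mail.Read delegated permission, and the script is an illustration, not Microsoft's recommended remediation.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token>"  # placeholder: acquire via MSAL with the Mail.Read scope

def confidential_messages(folder):
    """List messages in a well-known mail folder flagged as confidential."""
    url = f"{GRAPH}/me/mailFolders/{folder}/messages"
    params = {"$select": "subject,sensitivity", "$top": "50"}
    headers = {"Authorization": f"Bearer {TOKEN}"}
    flagged = []
    while url:
        resp = requests.get(url, params=params, headers=headers, timeout=30)
        resp.raise_for_status()
        page = resp.json()
        flagged.extend(
            m for m in page.get("value", [])
            if m.get("sensitivity") == "confidential"
        )
        url = page.get("@odata.nextLink")  # server-side paging link
        params = None  # the nextLink URL already carries the query options
    return flagged

# "drafts" and "sentitems" are the well-known names of the two folders
# reported as affected by the bug.
for folder in ("drafts", "sentitems"):
    for msg in confidential_messages(folder):
        print(f"{folder}: {msg['subject']}")
```

The sensitivity filter is applied client-side after each page is fetched, and the loop follows Graph's paging links so mailboxes larger than one page are covered.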