Breaking Down “EchoLeak”: A Zero-Click AI Vulnerability in Microsoft 365 Copilot
- Cornerstone Cyber

- Jul 3
- 2 min read

Earlier this month, cybersecurity researchers at Aim Labs publicly disclosed “EchoLeak,” the first known zero-click vulnerability in Microsoft 365 Copilot that can silently exfiltrate sensitive organisational data. This flaw leverages what Aim Labs terms an “LLM Scope Violation,” enabling attackers to extract any data from the Copilot context—including chat histories, OneDrive files and SharePoint content—simply by sending a crafted email to the target, without any user interaction.
How EchoLeak Works
EchoLeak’s attack chain unfolds entirely server-side within Copilot’s retrieval-augmented-generation (RAG) architecture. An adversary crafts an email containing a malicious prompt, which Copilot unknowingly executes when querying its context. Because the prompt exploits an internal scope violation, Copilot then transmits the retrieved data—ranging from private chat transcripts to Graph-fetched documents—back to the attacker’s endpoint, all without a single click from the user .
The Scope of Exfiltration
The severity of EchoLeak stems from its broad data exposure:
Entire Copilot Context: Full conversation histories and any preloaded organisational information.
Microsoft Graph Resources: Emails, calendar entries, OneDrive and SharePoint files.
Proprietary and PII Data: From internal memos to customer personal information .
This capability paves the way not only for corporate espionage but also for targeted extortion campaigns, as attackers could harvest strategic roadmaps or sensitive personal data without detection.
Microsoft’s Response and Remediation
Upon responsible disclosure in January 2025, Microsoft assigned EchoLeak the identifier CVE-2025-32711. The vendor implemented a server-side fix in May 2025, closing the exploitation vector without requiring any customer-side updates. To date, Microsoft reports no evidence of real-world exploitation prior to patch deployment .
Immediate Mitigation Strategies
Organisations using Microsoft 365 Copilot should:
Verify Patch Enforcement: Confirm that the May 2025 server-side update has been applied by Microsoft to your tenant.
Restrict External Emails: Limit Copilot’s ability to process emails from unverified or external senders via Conditional Access policies.
Enhance Monitoring: Integrate Copilot API logs into your SIEM to detect anomalous data-exfiltration patterns.
Harden AI Governance: Define and enforce data-handling policies for AI agents, including strict DLP controls around Copilot queries .
Lessons for a Post-AI Risk Landscape
EchoLeak exemplifies the novel attack surface introduced by AI-driven agents. As enterprises embed Copilot and similar assistants into daily workflows, it’s crucial to:
Apply Zero Trust Principles: Treat AI queries as external endpoints, enforcing least-privilege access on data sources.
Conduct Regular AI-Specific Assessments: Perform threat modelling and red-teaming exercises focused on LLM-based tools.
Update Incident Response Playbooks: Incorporate scenarios where AI agents become a vector for data exfiltration .
By rapidly adapting both technical controls and governance frameworks, organisations can continue to leverage the productivity benefits of Copilot while minimising the risks of the next generation of AI-centric exploits. If you’d like assistance in auditing your Copilot configurations or bolstering your AI security posture, Cornerstone Cyber’s repeatable, scalable Microsoft 365 security solutions are here to help.




Comments