Latest news with #AimLabs


The Verge
13-06-2025
- The Verge
Security researchers found a zero-click vulnerability in Microsoft 365 Copilot.
The vulnerability, called 'EchoLeak,' lets attackers 'automatically exfiltrate sensitive and proprietary information' from Microsoft 365 Copilot without knowledge of the user, according to findings from Aim Labs. An attacker only needs to send their victim a malicious prompt injection disguised as a normal email, which covertly instructs Copilot to pull sensitive information from a user's account. Microsoft has since fixed the critical flaw and given it the identifier CVE-2025-32711. It also hasn't been exploited in the wild.


Time of India
13-06-2025
- Time of India
Researchers find 'dangerous' AI data leak flaw in Microsoft 365 Copilot: What the company has to say
A critical artificial intelligence (AI) vulnerability has been discovered in Microsoft 365 Copilot, raising new concerns about data security in AI-integrated enterprise environments. The flaw, dubbed 'EchoLeak', which enabled attackers to exfiltrate sensitive user data with zero-click interaction, has been devised by Aim Labs researchers in January 2025. According to a report by Bleeping Computer, Aim Labs promptly reported their findings to Microsoft, which rated it as critical. Microsoft swiftly addressed the issue, implementing a server-side fix in May 2025. This means that no user action is required to patch the vulnerability. Microsoft has also stated there is no evidence of any real-world exploitation, essentially confirming that no customers were impacted by this flaw. What is EchoLeak attack and how it worked by Taboola by Taboola Sponsored Links Sponsored Links Promoted Links Promoted Links You May Like Trade Bitcoin & Ethereum – No Wallet Needed! IC Markets Start Now Undo The EchoLeak attack commenced with a malicious email sent to the target. This email contained text seemingly unrelated to Copilot, designed to resemble a typical business document. It embedded a hidden prompt injection crafted to instruct Copilot's underlying LLM to extract sensitive internal data. Because this hidden prompt was phrased like a normal message, it cleverly bypassed Microsoft's existing XPIA (cross-prompt injection attack) classifier protections. Microsoft 365 Copilot, an AI assistant integrated into Office applications like Word, Excel, Outlook, and Teams, leverages OpenAI's GPT models and Microsoft Graph to help users generate content, analyse data and answer questions based on their organisation's internal files, emails, and chats. When the user prompted Copilot with a related business question, Microsoft's Retrieval-Augmented Generation (RAG) engine retrieved the malicious email into the LLM's prompt context due to its apparent relevance and formatting. Once inside the LLM's active context, the malicious injection "tricked" the AI into pulling sensitive internal data and embedding it into a specially crafted link or image. This led to unintentional leaks of internal data without explicit user intent or interaction.


The Verge
13-06-2025
- The Verge
Posted Jun 13, 2025 at 10:51 AM EDT 0 Comments
Security researchers found a zero-click vulnerability in Microsoft 365 Copilot. The vulnerability, called 'EchoLeak,' lets attackers 'automatically exfiltrate sensitive and proprietary information' from Microsoft 365 Copilot without knowledge of the user, according to findings from Aim Labs. An attacker only needs to send their victim a malicious prompt injection disguised as a normal email, which covertly instructs Copilot to pull sensitive information from a user's account. Microsoft has since fixed the critical flaw and given it the identifier CVE-2025-32711. It also hasn't been exploited in the wild.


Hans India
12-06-2025
- Hans India
AI Security Alarm: Microsoft Copilot Vulnerability Exposed Sensitive Data via Zero-Click Email Exploit
In a major first for the AI security landscape, researchers have identified a critical vulnerability in Microsoft 365 Copilot that could have allowed hackers to steal sensitive user data—without the user ever clicking a link or opening an attachment. Known as EchoLeak, this zero-click flaw revealed how deeply embedded AI assistants can be exploited through subtle prompts hidden in regular-looking emails. The vulnerability was discovered by Aim Labs in January 2025 and promptly reported to Microsoft. It was fixed server-side in May, meaning users didn't need to take any action themselves. Microsoft emphasized that no customers were affected, and there's no evidence that the flaw was exploited in real-world scenarios. Still, the discovery marks a historic moment, as EchoLeak is believed to be the first-ever zero-click vulnerability targeting a large language model (LLM)-based assistant. How EchoLeak Worked Microsoft 365 Copilot integrates across Office applications like Word, Excel, Outlook, and Teams. It utilizes AI, powered by OpenAI's models and Microsoft Graph, to help users by analyzing data and generating content based on internal emails, documents, and chats. EchoLeak took advantage of this feature. Here's a breakdown of the exploit process: A malicious email is crafted to look legitimate but contains a hidden prompt embedded in the message. When a user later asks Copilot a related question, the AI, using Retrieval-Augmented Generation (RAG), pulls in the malicious email thinking it's relevant. The concealed prompt is then activated, instructing Copilot to leak internal data through a link or image. As the email is displayed, the link is automatically accessed by the browser, silently transferring internal data to the attacker's server. Researchers noted that certain markdown image formats used in the email could trigger browsers to send automatic requests, enabling the leak. While Microsoft's Content Security Policies (CSP) block most unknown web requests, services like Teams and SharePoint are considered trusted by default—offering a way in for attackers. The Bigger Concern: LLM Scope Violations The vulnerability isn't just a technical bug—it signals the emergence of a new category of threats called LLM Scope Violations. These occur when language models unintentionally expose data through their internal processing mechanisms, even without direct user commands. 'This attack chain showcases a new exploitation technique... by leveraging internal model mechanics,' Aim Labs stated in their report. They also cautioned that similar risks could be present in other RAG-based AI systems, not just Microsoft Copilot. Microsoft assigned the flaw the ID CVE-2025-32711 and categorized it as critical. The company reassured users that the issue has been resolved and that there were no known incidents involving the vulnerability. Despite the fix, the warning from researchers is clear: "The increasing complexity and deeper integration of LLM applications into business workflows are already overwhelming traditional defences,' their report concludes. As AI agents become more integrated into enterprise systems, EchoLeak is a stark reminder that security in the age of intelligent software needs to evolve just as fast as the technology itself.

The Hindu
12-06-2025
- The Hindu
Researchers discover zero-click vulnerability in Microsoft Copilot
Researchers have said that Microsoft Copilot had a critical zero-click AI vulnerability that was fixed before hackers stole sensitive data. Called 'EchoLeak,' the attack was mounted by Aim Labs researchers in January this year and then reported to Microsoft later. In a blog posted by the research team, they said that EchoLeak was the first zero-click attack on an AI agent and could hack remotely via an email. The vulnerability was given the identifier CVE-2025-32711 and rated critical and fixed eventually in May. The researchers have categorised EchoLeak under a new class of vulnerabilities called 'LLM Scope Violation,' which can lead a large language model to leak internal data without any interaction with the hacker. Although Microsoft acknowledged the security flow, it confirmed that there had been no instance of exploitation which had impacted users. Users receive an email that's been designed to look like a business document embedded with a hidden prompt injection that instructs the LLM to extract and exfiltrate sensitive data. When the user asks Copilot a query the email is retrieved into the LLM prompt by Retrieval-Augmented Generation or RAG.