logo
AI Security Alarm: Microsoft Copilot Vulnerability Exposed Sensitive Data via Zero-Click Email Exploit

AI Security Alarm: Microsoft Copilot Vulnerability Exposed Sensitive Data via Zero-Click Email Exploit

Hans India12-06-2025

In a major first for the AI security landscape, researchers have identified a critical vulnerability in Microsoft 365 Copilot that could have allowed hackers to steal sensitive user data—without the user ever clicking a link or opening an attachment. Known as EchoLeak, this zero-click flaw revealed how deeply embedded AI assistants can be exploited through subtle prompts hidden in regular-looking emails.
The vulnerability was discovered by Aim Labs in January 2025 and promptly reported to Microsoft. It was fixed server-side in May, meaning users didn't need to take any action themselves. Microsoft emphasized that no customers were affected, and there's no evidence that the flaw was exploited in real-world scenarios.
Still, the discovery marks a historic moment, as EchoLeak is believed to be the first-ever zero-click vulnerability targeting a large language model (LLM)-based assistant.
How EchoLeak Worked
Microsoft 365 Copilot integrates across Office applications like Word, Excel, Outlook, and Teams. It utilizes AI, powered by OpenAI's models and Microsoft Graph, to help users by analyzing data and generating content based on internal emails, documents, and chats.
EchoLeak took advantage of this feature. Here's a breakdown of the exploit process:
A malicious email is crafted to look legitimate but contains a hidden prompt embedded in the message.
When a user later asks Copilot a related question, the AI, using Retrieval-Augmented Generation (RAG), pulls in the malicious email thinking it's relevant.
The concealed prompt is then activated, instructing Copilot to leak internal data through a link or image.
As the email is displayed, the link is automatically accessed by the browser, silently transferring internal data to the attacker's server.
Researchers noted that certain markdown image formats used in the email could trigger browsers to send automatic requests, enabling the leak. While Microsoft's Content Security Policies (CSP) block most unknown web requests, services like Teams and SharePoint are considered trusted by default—offering a way in for attackers.
The Bigger Concern: LLM Scope Violations
The vulnerability isn't just a technical bug—it signals the emergence of a new category of threats called LLM Scope Violations. These occur when language models unintentionally expose data through their internal processing mechanisms, even without direct user commands.
'This attack chain showcases a new exploitation technique... by leveraging internal model mechanics,' Aim Labs stated in their report. They also cautioned that similar risks could be present in other RAG-based AI systems, not just Microsoft Copilot.
Microsoft assigned the flaw the ID CVE-2025-32711 and categorized it as critical. The company reassured users that the issue has been resolved and that there were no known incidents involving the vulnerability.
Despite the fix, the warning from researchers is clear: "The increasing complexity and deeper integration of LLM applications into business workflows are already overwhelming traditional defences,' their report concludes.
As AI agents become more integrated into enterprise systems, EchoLeak is a stark reminder that security in the age of intelligent software needs to evolve just as fast as the technology itself.

Orange background

Try Our AI Features

Explore what Daily8 AI can do for you:

Comments

No comments yet...

Related Articles

AI now writes 25% of code in the US: Should Computer Science students rethink their career plans?
AI now writes 25% of code in the US: Should Computer Science students rethink their career plans?

Time of India

time2 hours ago

  • Time of India

AI now writes 25% of code in the US: Should Computer Science students rethink their career plans?

Artificial Intelligence is no longer just supporting programmers—it's actively writing code. According to insights published by The Atlantic , major U.S. tech companies like Microsoft and Alphabet now rely on artificial intelligence to generate nearly 25% of their code. As generative tools become deeply integrated into software development workflows, they're not only boosting productivity—but also raising difficult questions about the future of entry-level tech jobs. Tech jobs shift as AI takes over AI's growing role in software development isn't just a behind-the-scenes shift—it's showing up in employment data. According to The Atlantic , the number of 22–27-year-olds employed in computer science and math roles has dropped by 8% in recent years. While some of this is attributed to tech layoffs, automation is also playing a central role. Even tech companies acknowledge the shift. Executives at Microsoft and Google's parent company Alphabet have already confirmed the impact of AI on their code output. Meanwhile, at startups like Anthropic, AI models are replacing the need for junior-level coders altogether. Software jobs seen as most at risk These fears aren't just limited to hiring managers and academics. by Taboola by Taboola Sponsored Links Sponsored Links Promoted Links Promoted Links You May Like The Mess-Free Dog Toy That's Safer for Heavy Chewers Petsume Undo A 2025 Pew Research Center survey found that 48% of Americans believe software engineers will be among the professions most affected by AI in the coming years. That's a higher percentage than for teachers, journalists, or accountants. While manual labor was long seen as most vulnerable to automation, high-skilled roles are now increasingly at risk—starting with tech. Why students are dropping CS While employees in the tech market are worried, the impact of this phenomenon is also seen among tech students. After years of explosive growth, computer science enrollment is flattening. According to recent data referenced in The Atlantic , national growth in CS majors in the U.S. has slowed to just 0.2% this year. At elite institutions such as Princeton and Stanford, once considered pipelines to Silicon Valley, the number of CS undergraduates has either plateaued or started to decline. Princeton's department, for instance, anticipates a nearly 25% drop in majors within two years. Students have now become increasingly cautious. With mass layoffs in big tech, changing visa norms, and rising uncertainty around the long-term role of junior programmers, CS is no longer the default 'safe bet' it once seemed. The road ahead for Computer Science majors The shifting ground poses serious questions for universities and future students. Should colleges reduce CS department sizes? Are interdisciplinary programs—like CS with ethics, bioinformatics, or design—better suited for an AI-enhanced future? And for students: If AI can write your code, what skills will set you apart? The answer may lie in hybrid expertise—combining technical literacy with creativity, strategy, and human-centered design. The next generation of engineers may need to be less about syntax and more about systems thinking. To be clear, computer science isn't dying—but it's evolving. Demand for AI-literate engineers, machine learning experts, and cybersecurity professionals remains strong. However, the pathway to these roles is becoming steeper and more selective. Is your child ready for the careers of tomorrow? Enroll now and take advantage of our early bird offer! Spaces are limited.

Former Meta VP Karandeep Anand takes on CEO role at Character. ai
Former Meta VP Karandeep Anand takes on CEO role at Character. ai

Time of India

time3 hours ago

  • Time of India

Former Meta VP Karandeep Anand takes on CEO role at Character. ai

Google-backed AI chatbot service Character. ai has appointed Karandeep Anand as its next chief executive officer on Friday. Prior to this, he was vice president and head of business products at Meta . He has also held executive roles at Microsoft. In the new role, Anand will focus on advancing Character. ai's long term strategy to enhance multimodal-AI technology and expand the user base. Anand has been a board advisor to Character. ai for the last nine months. In a note, he laid out plans for the company over the next 60 days. These plans include working on refining open source models in an attempt to improve memory and overall model quality. He also aims to improve search and discoverability features to help users navigate better. In parallel, Anand hinted at expanding Character. ai's creative toolkit to help creators design richer, immersive characters, with audio and video capabilities. Live Events To give users better control, he said he is going to make the content filters less overbearing to ease out restrictions. Additionally, he aims to roll out 'Archive' option to allow users to hide or archive characters if they wish to. Discover the stories of your interest Blockchain 5 Stories Cyber-safety 7 Stories Fintech 9 Stories E-comm 9 Stories ML 8 Stories Edtech 6 Stories The company also announced Dominic Perella as chief legal officer and senior vice president (SVP) of global affairs. Character. ai uses deep learning models similar to GPT-type models, offering conversational AI characters while also allowing character creation. However, it does not support generating images or code, making it a solely text-based model.

AI meets adult content: THIS platform is a ‘lovechild between OnlyFans and OpenAI'
AI meets adult content: THIS platform is a ‘lovechild between OnlyFans and OpenAI'

Mint

time6 hours ago

  • Mint

AI meets adult content: THIS platform is a ‘lovechild between OnlyFans and OpenAI'

Ever since OpenAI introduced the general world to the many possibilities of artificial intelligence (AI), developers have been experimenting with ways the technology can change the overall user experience. In one such experiment, a start-up with over 2,00,000 users in the United States, brought together the endlessness of AI and fame, and merged it with the "spicy fantasies" of OnlyFans users. OhChat, a platform its creator described as the 'lovechild between OnlyFans and OpenAI,' uses artificial intelligence to build lifelike digital duplicates of public figures. These AI avatars of adult content celebrities don't eat, sleep or breathe, but 'remember you, desire you and never log off'. In an interview with CNN, OhChat CEO Nic Young said goes a step further than platforms such as OnlyFans, where users pay to gain access to adult content from content creators. Once activated, the avatars run autonomously, offering 'infinite personalised content' for subscribers. OhChat 'is an incredibly powerful tool, and tools can be used however the human behind it wants to be used,' he said. 'We could use this in a really scary way, but we're using it in a really, I think, good, exciting way.' Young told CNN that OhChat works on a tiered subscription model wherein a user pays $4.99 ( ₹ 430) per month for unlimited texts on demand, $9.99 ( ₹ 865) for capped access to voice notes and images, or $29.99 ( ₹ 2,600) for unlimited VIP interaction. According to Young, platform creators receive an 80 per cent cut from the revenue their AI avatar generates. OhChat keeps the remaining 20 per cent. 'You have literally unlimited passive income without having to do anything again,' Young told CNN. Since launching OhChat in October 2024, the company has signed 20 creators, including 'Baywatch' actress Carmen Electra, and former British glamour model Katie Price – Jordan. Some of the creators are already earning thousands of dollars per month, Young said. Nic Young said that to build a digital twin, OhChat asks its creators to submit 30 images of themselves and speak to a bot for 30 minutes. The platform can then generate the digital replica 'within hours' using Meta's large language model. For example, the AI avatar of Jordan is trained to mimic her voice, appearance and mannerisms. She can 'sext' users, send voice notes and images, and provide on-demand intimacy at scale – all without her lifting a finger. The platform was categorised with their AI avatars on an internal scale to rank the intensity and explicitness of their interactions. Creators contributing to the platform decide which level their avatar will be.

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into a world of global content with local flavor? Download Daily8 app today from your preferred app store and start exploring.
app-storeplay-store