logo
Aim Security Launches Aim Labs with Elite Researchers from Google and Israel's Unit 8200 to Advance AI Security

Aim Security Launches Aim Labs with Elite Researchers from Google and Israel's Unit 8200 to Advance AI Security

Yahoo11-06-2025

Unique AI Vulnerability Research Yields Breakthrough 'EchoLeak' Discovery: First Zero-Click AI Vulnerability in Microsoft 365 Copilot
NEW YORK, June 11, 2025--(BUSINESS WIRE)--Aim Security, the fastest-growing AI Security Platform, today announced the launch of Aim Labs, a new advanced vulnerability research division dedicated to uncovering and mitigating the most sophisticated threats targeting AI technologies.
Led by former Google leaders and top alumni from Israel's elite Unit 8200, Aim Labs unites a rare combination of deep AI research and advanced cybersecurity expertise to drive innovation and set new standards for real-time defense through the proactive sharing of high quality threat intelligence.
In concert with the launch, Aim Labs also released groundbreaking research detailing the first-of-its-kind "zero-click" attack chain on an AI agent. The critical vulnerability in Microsoft 365 Copilot, dubbed 'EchoLeak', allows attackers to automatically exfiltrate sensitive and proprietary information from M365 Copilot—without any user interaction, or reliance on specific victim behavior. The attack is initiated simply by sending an email to a target within an organization, regardless of sender restrictions or admin configurations. Aim Labs worked closely with Microsoft's Security Response Center to responsibly disclose the vulnerability and issue a fix. It is the first AI vulnerability to receive a no-action CVE from Microsoft (CVE-2025-32711).
"AI is fundamentally re-writing the security playbook. EchoLeak is a reminder that even robust, enterprise-grade AI tools can be leveraged for sophisticated and automated data theft," said Itay Ravia, Head of Aim Labs. "This discovery underscores just how rapidly the threat landscape is evolving, reinforcing the urgent need for continuous innovation in AI security—the very mission driving Aim Labs."
Aim Labs will serve as Aim's dedicated research hub, tackling the unique security challenges introduced by AI adoption across critical sectors including banking, healthcare, insurance, manufacturing, and defense. Trusted by Fortune 500 companies, the Aim platform's unique runtime detection capabilities will be leveraged by the Aim Engine to mitigate emerging vulnerabilities and exploitation methods in real-time. Through continuous threat discovery and openly sharing cutting-edge research and best practices, Aim Labs will empower organizations to confidently and securely harness the power of AI.
"As AI becomes integral to business operations, organizations face unprecedented risks related to data exposure, supply chain vulnerabilities, and emerging threats like prompt injection and jailbreaks," said Matan Getz, CEO and Co-founder of Aim Security. "Aim Labs is our commitment to staying ahead of these evolving threats by fostering continuous innovation and sharing actionable insights with the global security community."
Further Aim Labs research can be found on the Aim Labs website. For more information about Aim Security visit www.aim.security.
About Aim Security
The Age of AI is radically transforming the traditional security stack. Aim Security is the enterprise-trusted partner to secure AI adoption, equipping security leaders with the ability to drive business productivity while providing the right guardrails and ensuring proactive protection for all use cases across the entire organization, whether enterprise use or production use. Leading CISOs and security practitioners on their secure AI journey, Aim empowers enterprises to unlock the full potential of AI technology without compromising security.
View source version on businesswire.com: https://www.businesswire.com/news/home/20250611349150/en/
Contacts
Media Contact: Susie DoughertyMarketbridge for Aim SecurityE: aim@marketbridge.com

Orange background

Try Our AI Features

Explore what Daily8 AI can do for you:

Comments

No comments yet...

Related Articles

Windows parental controls are crashing Chrome — here's the workaround
Windows parental controls are crashing Chrome — here's the workaround

Tom's Guide

timean hour ago

  • Tom's Guide

Windows parental controls are crashing Chrome — here's the workaround

Windows 11's Family Safety feature is supposed to block certain websites from children, but apparently it's also been causing issues with Google's Chrome browser, a (vastly more popular) competitor to Microsoft's own Edge. The problem first surfaced on Windows on June 3, per The Verge, when several users started noticing they couldn't open Chrome or their browser would crash randomly. Restarting their computer or reinstalling Chrome didn't fix the issue, and other browsers like Firefox and Opera appeared unaffected. On Monday, a Google spokesperson posted in the company's community forum that it had investigated these reports and found the issues were linked to Microsoft's new Windows Family Safety feature. This optional feature is primarily used by parents and schools to manage children's screen time, filter their web browsing, and monitor their online activity. Curiously, the bug has been going on for weeks now, and Microsoft still hasn't issued a patch. 'We've not heard anything from Microsoft about a fix being rolled out,' wrote a Chromium engineer in a bug tracking thread on June 10. 'They have provided guidance to users who contact them about how to get Chrome working again, but I wouldn't think that would have a large effect.' While this issue could be an innocent bug, Microsoft has a history of placing annoying hurdles between Edge and Chrome to entice users to stick with its browser. So anytime a technical snafu makes Chrome run worse on Windows PCs, Microsoft understandably gets some serious side eye. Thankfully, there seem to be two ways to get around this bug while we wait for Microsoft to issue a fix, and they're both fairly simple. The most straightforward is to turn off the "Filter Inappropriate Websites" setting. Head to the Family Safety mobile app or Family Safety web portal, select a user's account, and choose to disable "Filter inappropriate websites" under the Edge tab. However, that'll remove the guardrails on Chrome and let your child access any website, including the ones you were trying to block in the first place. Get instant access to breaking news, the hottest reviews, great deals and helpful tips. If you want to keep the guardrails on and still use Chrome, some users reported that altering the name in your Chrome folder (to something like Chrome1, for example), got the browser to work again even with the Family Safety feature enabled.

Why is AI halllucinating more frequently, and how can we stop it?
Why is AI halllucinating more frequently, and how can we stop it?

Yahoo

timean hour ago

  • Yahoo

Why is AI halllucinating more frequently, and how can we stop it?

When you buy through links on our articles, Future and its syndication partners may earn a commission. The more advanced artificial intelligence (AI) gets, the more it "hallucinates" and provides incorrect and inaccurate information. Research conducted by OpenAI found that its latest and most powerful reasoning models, o3 and o4-mini, hallucinated 33% and 48% of the time, respectively, when tested by OpenAI's PersonQA benchmark. That's more than double the rate of the older o1 model. While o3 delivers more accurate information than its predecessor, it appears to come at the cost of more inaccurate hallucinations. This raises a concern over the accuracy and reliability of large language models (LLMs) such as AI chatbots, said Eleanor Watson, an Institute of Electrical and Electronics Engineers (IEEE) member and AI ethics engineer at Singularity University. "When a system outputs fabricated information — such as invented facts, citations or events — with the same fluency and coherence it uses for accurate content, it risks misleading users in subtle and consequential ways," Watson told Live Science. Related: Cutting-edge AI models from OpenAI and DeepSeek undergo 'complete collapse' when problems get too difficult, study reveals The issue of hallucination highlights the need to carefully assess and supervise the information AI systems produce when using LLMs and reasoning models, experts say. The crux of a reasoning model is that it can handle complex tasks by essentially breaking them down into individual components and coming up with solutions to tackle them. Rather than seeking to kick out answers based on statistical probability, reasoning models come up with strategies to solve a problem, much like how humans think. In order to develop creative, and potentially novel, solutions to problems, AI needs to hallucinate —otherwise it's limited by rigid data its LLM ingests. "It's important to note that hallucination is a feature, not a bug, of AI," Sohrob Kazerounian, an AI researcher at Vectra AI, told Live Science. "To paraphrase a colleague of mine, 'Everything an LLM outputs is a hallucination. It's just that some of those hallucinations are true.' If an AI only generated verbatim outputs that it had seen during training, all of AI would reduce to a massive search problem." "You would only be able to generate computer code that had been written before, find proteins and molecules whose properties had already been studied and described, and answer homework questions that had already previously been asked before. You would not, however, be able to ask the LLM to write the lyrics for a concept album focused on the AI singularity, blending the lyrical stylings of Snoop Dogg and Bob Dylan." In effect, LLMs and the AI systems they power need to hallucinate in order to create, rather than simply serve up existing information. It is similar, conceptually, to the way that humans dream or imagine scenarios when conjuring new ideas. However, AI hallucinations present a problem when it comes to delivering accurate and correct information, especially if users take the information at face value without any checks or oversight. "This is especially problematic in domains where decisions depend on factual precision, like medicine, law or finance," Watson said. "While more advanced models may reduce the frequency of obvious factual mistakes, the issue persists in more subtle forms. Over time, confabulation erodes the perception of AI systems as trustworthy instruments and can produce material harms when unverified content is acted upon." And this problem looks to be exacerbated as AI advances. "As model capabilities improve, errors often become less overt but more difficult to detect," Watson noted. "Fabricated content is increasingly embedded within plausible narratives and coherent reasoning chains. This introduces a particular risk: users may be unaware that errors are present and may treat outputs as definitive when they are not. The problem shifts from filtering out crude errors to identifying subtle distortions that may only reveal themselves under close scrutiny." Kazerounian backed this viewpoint up. "Despite the general belief that the problem of AI hallucination can and will get better over time, it appears that the most recent generation of advanced reasoning models may have actually begun to hallucinate more than their simpler counterparts — and there are no agreed-upon explanations for why this is," he said. The situation is further complicated because it can be very difficult to ascertain how LLMs come up with their answers; a parallel could be drawn here with how we still don't really know, comprehensively, how a human brain works. In a recent essay, Dario Amodei, the CEO of AI company Anthropic, highlighted a lack of understanding in how AIs come up with answers and information. "When a generative AI system does something, like summarize a financial document, we have no idea, at a specific or precise level, why it makes the choices it does — why it chooses certain words over others, or why it occasionally makes a mistake despite usually being accurate," he wrote. The problems caused by AI hallucinating inaccurate information are already very real, Kazerounian noted. "There is no universal, verifiable, way to get an LLM to correctly answer questions being asked about some corpus of data it has access to," he said. "The examples of non-existent hallucinated references, customer-facing chatbots making up company policy, and so on, are now all too common." Both Kazerounian and Watson told Live Science that, ultimately, AI hallucinations may be difficult to eliminate. But there could be ways to mitigate the issue. Watson suggested that "retrieval-augmented generation," which grounds a model's outputs in curated external knowledge sources, could help ensure that AI-produced information is anchored by verifiable data. "Another approach involves introducing structure into the model's reasoning. By prompting it to check its own outputs, compare different perspectives, or follow logical steps, scaffolded reasoning frameworks reduce the risk of unconstrained speculation and improve consistency," Watson, noting this could be aided by training to shape a model to prioritize accuracy, and reinforcement training from human or AI evaluators to encourage an LLM to deliver more disciplined, grounded responses. RELATED STORIES —AI benchmarking platform is helping top companies rig their model performances, study claims —AI can handle tasks twice as complex every few months. What does this exponential growth mean for how we use it? —What is the Turing test? How the rise of generative AI may have broken the famous imitation game "Finally, systems can be designed to recognise their own uncertainty. Rather than defaulting to confident answers, models can be taught to flag when they're unsure or to defer to human judgement when appropriate," Watson added. "While these strategies don't eliminate the risk of confabulation entirely, they offer a practical path forward to make AI outputs more reliable." Given that AI hallucination may be nearly impossible to eliminate, especially in advanced models, Kazerounian concluded that ultimately the information that LLMs produce will need to be treated with the "same skepticism we reserve for human counterparts."

Bosses want you to know AI is coming for your job
Bosses want you to know AI is coming for your job

Yahoo

timean hour ago

  • Yahoo

Bosses want you to know AI is coming for your job

SAN FRANCISCO - Top executives at some of the largest American companies have a warning for their workers: Artificial intelligence is a threat to your job. CEOs from Amazon to IBM, Salesforce and JPMorgan Chase are telling their employees to prepare for disruption as AI either transforms or eliminates their jobs in the future. Subscribe to The Post Most newsletter for the most important and interesting stories from The Washington Post. AI will 'improve inventory placement, demand forecasting and the efficiency of our robots,' Amazon CEO Andy Jassy said in a Tuesday public memo that predicted his company's corporate workforce will shrink 'in the next few years.' He joins a string of other top executives that have recently sounded the alarm about AI's impact in the workplace. Economists say there aren't yet strong signs that AI is driving widespread layoffs across industries. But there is evidence that workers across the United States are increasingly using AI in their jobs and the technology is starting to transform some roles such as computer programming, marketing and customer service. At the same time, CEOs are under pressure to show they are embracing new technology and getting results - incentivizing attention-grabbing predictions that can create additional uncertainty for workers. 'It's a message to shareholders and board members as much as it is to employees,' Molly Kinder, a Brookings Institution fellow who studies the impact of AI, said of the CEO announcements, noting that when one company makes a bold AI statement, others typically follow. 'You're projecting that you're out in the future, that you're embracing and adopting this so much that the footprint [of your company] will look different.' Some CEOs fear they could be ousted from their job within two years if they don't deliver measurable AI-driven business gains, a Harris Poll survey conducted for software company Dataiku showed. Tech leaders have sounded some of the loudest warnings - in line with their interest in promoting AI's power. At the same time, the industry has been shedding workers the last few years after big hiring sprees during the height of the coronavirus pandemic and interest rate hikes by the Federal Reserve. At Amazon, Jassy told the company's workers that AI would in 'the next few years' reduce some corporate roles like customer service representatives and software developers, but also change work for those in the company's warehouses. IBM, which recently announced job cuts, said it replaced a couple hundred human resource workers with AI 'agents' for repetitive tasks such as onboarding and scheduling interviews. In January, Meta CEO Mark Zuckerberg suggested on Joe Rogan's podcast that the company is building AI that might be able to do what some human workers do by the end of the year. 'We, at Meta as well as the other companies working on this, are going to have an AI that can effectively be sort of a mid-level engineer at your company,' Zuckerberg said. 'Over time we'll get to the point where a lot of the code in our apps … is actually going to be built by AI engineers instead of people engineers.' Dario Amodei, CEO of Anthropic, maker of the chatbot Claude, boldly predicted last month that half of all white-collar entry-level jobs may be eliminated by AI within five years. Leaders in other sectors have also chimed in. Marianne Lake, JPMorgan's CEO of consumer and community banking, told an investor meeting last month that AI could help the bank cut headcount in operations and account services by 10 percent. The CEO of BT Group Allison Kirkby suggested that advances in AI would mean deeper cuts at the British telecom company. Even CEOs who reject the idea of AI replacing humans on a massive scale are warning workers to prepare for disruption. Jensen Huang, CEO of AI chip designer Nvidia said last month, 'You're not going to lose your job to an AI, but you're going to lose your job to someone who uses AI.' Google CEO Sundar Pichai said at Bloomberg's tech conference this month that AI will help engineers be more productive but that his company would still add more human engineers to its team. Meanwhile, Microsoft is planning more layoffs amid heavy investment in AI, Bloomberg reported this week. Other tech leaders at Shopify, Duolingo and Box have told workers they are now required to use AI at their jobs, and some will monitor usage as part of performance reviews. Some companies have indicated that AI could slow hiring. Salesforce CEO Marc Benioff recently called Amodei's prognosis 'alarmist' on an earnings call, but on the same call chief operating and financial officer Robin Washington said that an AI agent has helped to reduce hiring needs and bring $50 million in savings. Despite corporate leaders' warnings, economists don't yet see broad signs that AI is driving humans out of work. 'We have little evidence of layoffs so far,' said Columbia Business School professor Laura Veldkamp, whose research explores how companies' use of AI affects the economy. 'What I'd look for are new entrants with an AI-intensive business model, entering and putting the existing firms out of business.' Some researchers suggest there is evidence AI is playing a role in the drop in openings for some specific jobs, like computer programming, where AI tools that generate code have become standard. Google's Pichai said last year that more than a quarter of new code at the company was initially suggested by AI. Many other workers are increasingly turning to AI tools, for everything from creating marketing campaigns to helping with research - with or without company guidance. The percentage of American employees who use AI daily has doubled in the last year to 8 percent, according to a Gallup poll released this week. Those using it at least a few times a week jumped from 12 percent to 19 percent. Some AI researchers say the poll may not actually reflect the total number of workers using AI as many may use it without disclosing it. 'I would suspect the numbers are actually higher,' said Ethan Mollick, co-director of Wharton School of Business' generative AI Labs, because some workers avoid disclosing AI usage, worried they would be seen as less capable or breaching corporate policy. Only 30 percent of respondents to the Gallup survey said that their company had general guidelines or formal policies for using AI. OpenAI's ChatGPT, one of the most popular chatbots, has more than 500 million weekly users around the globe, the company has said. It is still unclear what benefits companies are reaping from employees' use of AI, said Arvind Karunakaran, a faculty member of Stanford University's Center for Work, Technology, and Organization. 'Usage does not necessarily translate into value,' he said. 'Is it just increasing productivity in terms of people doing the same task quicker or are people now doing more high value tasks as a result?' Lynda Gratton, a professor at London Business School, said predictions of huge productivity gains from AI remain unproven. 'Right now, the technology companies are predicting there will be a 30% productivity gain. We haven't yet experienced that, and it's not clear if that gain would come from cost reduction … or because humans are more productive.' The pace of AI adoption is expected to accelerate even further if more companies use advanced tools such as AI agents and they deliver on their promise of automating work, Mollick said. AI labs are hoping to prove their agents are reliable within the next year or so, which will be a bigger disrupter to jobs, he said. While the debate continues over whether AI will eliminate or create jobs, Mollick said 'the truth is probably somewhere in between.' 'A wave of disruption is going to happen,' he said. Related Content 3-pound puppy left in trash is rescued, now thriving How to meet street cats around the world 'Jaws' made people fear sharks. 50 years later, can it help save them?

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into a world of global content with local flavor? Download Daily8 app today from your preferred app store and start exploring.
app-storeplay-store