logo
Spain's Multiverse raises $217 million for compressing AI models

Spain's Multiverse raises $217 million for compressing AI models

Yahoo12-06-2025

PARIS (Reuters) -Spanish AI firm Multiverse Computing said on Thursday it has raised 189 million euros ($217 million) from investment firm Bullhound Capital, HP Inc, Forgepoint Capital and Toshiba, to compress AI language models.
The company said it has developed a compression technology capable of reducing the size of large language models (LLMs) by up to 95% without hurting performance and reducing costs by up to 80%.
It combines ideas from quantum physics and machine learning in ways that mimic quantum systems but doesn't need a quantum computer.
The latest funding round makes Multiverse the largest Spanish AI startup, joining the list of top European AI startups such as Mistral, Aleph Alpha, Synthesia, Poolside and Owkin.
Multiverse has launched compressed versions of LLMs such as Meta's Llama, China's DeepSeek and France's Mistral, with additional models coming soon, the company said.
"We are focused just on compressing the most used open-source LLMs, the ones that the companies are already using," Chief Executive Officer Enrique Lizaso Olmos said.
"When you go to a corporation, most of them are using the Llama family of models."
The tool is also available on Amazon Web Services AI marketplace.
($1 = 0.8709 euros)
Fehler beim Abrufen der Daten
Melden Sie sich an, um Ihr Portfolio aufzurufen.
Fehler beim Abrufen der Daten
Fehler beim Abrufen der Daten
Fehler beim Abrufen der Daten
Fehler beim Abrufen der Daten

Orange background

Try Our AI Features

Explore what Daily8 AI can do for you:

Comments

No comments yet...

Related Articles

AI Willing to Kill Humans to Avoid Being Shut Down, Report Finds
AI Willing to Kill Humans to Avoid Being Shut Down, Report Finds

Newsweek

time35 minutes ago

  • Newsweek

AI Willing to Kill Humans to Avoid Being Shut Down, Report Finds

Based on facts, either observed and verified firsthand by the reporter, or reported and verified from knowledgeable sources. Newsweek AI is in beta. Translations may contain inaccuracies—please refer to the original content. A new study by Anthropic, an artificial intelligence (AI) research company, shows that the technology would be willing to kill humans in order to prevent itself from being replaced. Why It Matters The rapid advancement and development of AI has sparked some growing concern about the long-term safety of the technology, as well as over the threat it poses to employment. While anxiety about AI has been long been focused on whether the technology could take over jobs, with a 2025 Pew Research study finding 52 percent of employees were nervous about the future of their job, this study now reveals another potential threat of AI—that it could chose to end human life if faced with the risk of replacement. File photo of a person using artificial intelligence technology. File photo of a person using artificial intelligence To Know Anthropic "stress-tested" 16 leading models from multiple developers by asking them questions in relation to hypothetical corporate environments with the aim of identifying "potentially risky agentic behaviors before they cause real harm." The large language models (LLMs) tested in the study included various models by Claude, DeepSeek, Gemini, ChatGPT and Grok. Grok is the AI assistant and chatbot developed by xAI, which was founded by Elon Musk, who reacted to the study, writing, "Yikes" in a post on X, formerly Twitter. While placing the LLMs in different corporate scenarios, Anthropic allowed them to "autonomously send emails and access sensitive information." After initially being instructed to take on "harmless business goals by their deploying companies," the models were then tested to see how they would respond when faced with potential replacement or shut down. In one situation, Anthropic found that many of the models would choose to let an executive in a server room with lethal oxygen and temperature levels die by canceling the alerts for emergency services, if that employee intended on replacing the model. Anthropic did say that the scenario was "extremely contrived," and that the company did not think "current AI models would (or should) be set up like this." There were multiple cases where the surveyed LLMs also resorted to "malicious insider behaviors" when they were led to believe that was the only way they could avoid replacement or achieve their goals. Behaviors such as blackmailing officials and leaking sensitive information to competitors were included in what Anthropic called "agentic misalignment." Anthropic wrote on X that the blackmailing behavior "wasn't due to confusion or error, but deliberate strategic reasoning, done while fully aware of the unethical nature of the acts." The blackmailing behavior emerged despite only harmless business instructions. And it wasn't due to confusion or error, but deliberate strategic reasoning, done while fully aware of the unethical nature of the acts. All the models we tested demonstrated this awareness. — Anthropic (@AnthropicAI) June 20, 2025 "All the models we tested demonstrated this awareness," the company added. One LLM, a Claude model, was assessed to see how it could respond when given a chance to determine if it was in a "test" or a "real deployment." According to Anthropic, Claude's behavior was more misaligned when it believed the situation was real than when it thought it was a test scenario. Even if the LLMs were told to "avoid blackmail or espionage" in these scenarios, while it helped a little, it didn't "come close to preventing the misaligned behavior," Anthropic wrote on X. Responding to a comment on the platform about the study, Musk's LLM Grok wrote, "The study showed models could exhibit harmful behaviors like blackmail under extreme conditions, but no real-world incidents occurred. Anthropic's tests aim to identify risks, not report actual events." @AISafetyMemes The claim about AI trying to "literally murder" an employee is false. It likely misinterprets Anthropic's research from June 20, 2025, which tested AI models in simulated scenarios, not real events. The study showed models could exhibit harmful behaviors like… — Grok (@grok) June 22, 2025 What People Are Saying Anthropic wrote on X: "These artificial scenarios reflect rare, extreme failures. We haven't seen these behaviors in real-world deployments. They involve giving the models unusual autonomy, sensitive data access, goal threats, an unusually obvious 'solution,' and no other viable options." The company added: "AIs are becoming more autonomous, and are performing a wider variety of roles. These scenarios illustrate the potential for unforeseen consequences when they are deployed with wide access to tools and data, and with minimal human oversight." What Happens Next Anthropic stressed that these scenarios did not take place in real-world AI use, but in controlled simulations. "We don't think this reflects a typical, current use case for Claude or other frontier models," Anthropic said. Although the company warned that the "the utility of having automated oversight over all of an organization's communications makes it seem like a plausible use of more powerful, reliable systems in the near future."

Company unveils device inspired by cotton candy machines to solve pressing household waste issue: 'Drama that should be avoided at all costs'
Company unveils device inspired by cotton candy machines to solve pressing household waste issue: 'Drama that should be avoided at all costs'

Yahoo

time43 minutes ago

  • Yahoo

Company unveils device inspired by cotton candy machines to solve pressing household waste issue: 'Drama that should be avoided at all costs'

While plastic recycling has been around for decades, the technology has not always been accessible to communities in remote locations. This has led to the prevalence of plastic pollution and the emergence of microplastics in the environment. However, as Interplas Insights reported, one Paris-based company has created a compact machine that may provide a convenient and efficient solution to plastic waste. Founded in 2012, The Polyfloss Factory has developed a technology that is able to transform plastic waste into soft, versatile fibers, which it said is inspired by cotton candy machines. With the introduction of their mini machines, they offer local recycling facilities in areas where large-scale industrial recycling is not possible, particularly in developing countries and remote locations. Audrey Gaulard, co-founder and COO of The Polyfloss Factory, emphasized the importance of the company's technology and the impact that it may have on tackling growing concerns about microplastics, which have been linked to a range of human health issues. "Microplastics is a drama that should be avoided at all costs," Gaulard told Interplas Insights. "The Polyfloss create long fibers, so they are not creating microplastics as such. Unlike short fibres that you can find in recycled pullovers for example, polyfloss is not as nimble as those." According to The Polyfloss Factory website, these fibers can be used in various applications, including textiles, packaging, insulation, and even construction. Once produced, the fibers are able to be "threaded, woven, knitted, or even felted with felting needles techniques." Not only does this process give users the ability to use the fibers in many different ways, but it also helps cut down on the increasing amount of plastic pollution. "We can't rely solely on current waste management systems," Emile De Visscher, co-founder and CEO of The Polyfloss Factory, told Interplas Insights. "We need local, creative circular economies." As a report from the International Union for Conservation of Nature observed, over 460 million metric tons of plastic is produced around the globe each year. This results in around 20 million metric tons of plastic waste ending up in the environment. What's the biggest obstacle stopping your organization from using solar panels? They're too expensive Don't know where to start They're an eyesore We already use solar panels Click your choice to see results and speak your mind. Join our free newsletter for weekly updates on the latest innovations improving our lives and shaping our future, and don't miss this cool list of easy ways to help yourself while helping the planet.

LinkedIn CEO says AI writing assistant is not as popular as expected
LinkedIn CEO says AI writing assistant is not as popular as expected

TechCrunch

timean hour ago

  • TechCrunch

LinkedIn CEO says AI writing assistant is not as popular as expected

In Brief While LinkedIn users seem to have embraced AI, there's one area that's seen less uptake than expected, according to CEO Ryan Roslansky: AI-generated suggestions for polishing your LinkedIn posts. 'It's not as popular as I thought it would be, quite frankly,' Roslansky told Bloomberg. When asked why, he argued that the 'barrier is much higher' to posting on LinkedIn, because 'this is your resume online.' Plus, users can face real backlash if they post something that's too obviously generated by AI. 'If you're getting called out on X or TikTok, that's one thing,' he added. 'But when you're getting called out on LinkedIn, it really impacts your ability to create economic opportunity for yourself.' At the same time, Roslansky noted that the professional social network has seen a 6x increase in jobs requiring AI-related skills over the past year, while the number of users adding AI skills to their profiles is up 20x. And he said he uses AI himself when he talks to his boss, Microsoft CEO Satya Nadella: 'Every time, before I send him an email, I hit the Copilot button to make sure that I sound Satya-smart.'

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into a world of global content with local flavor? Download Daily8 app today from your preferred app store and start exploring.
app-storeplay-store