
WEKA and Nebius Partner to Catalyze AI Innovation With Ultra-High-Performance Cloud Infrastructure Solution
CAMPBELL, Calif., June 11, 2025 /PRNewswire/ -- WEKA, the AI-native data platform company, and Nebius (NASDAQ: NBIS), a leading AI infrastructure company, today announced a partnership that delivers a powerful GPU-as-a-Service (GPUaaS) solution integrating WEKA's advanced data storage software with Nebius' full-stack AI cloud platform. The collaboration enables customers to scale compute and storage resources on demand with ultra-high performance and microsecond latency for efficient AI model training and precision AI inference.
Organizations that run AI model training and inference processes at scale often face challenges related to compute, memory, storage, and data management, which can impede innovation. As demand for modern AI infrastructure grows, organizations are embracing specialized neoclouds for access to turnkey infrastructure that can power their AI ambitions. In turn, neoclouds are seeking innovative ways to optimize the performance and efficiency of their GPU data center infrastructure.
Nebius AI Cloud delivers a cutting-edge, cost-optimized neocloud environment that empowers innovators of all sizes — from enterprises to startups to research institutions — to operationalize AI workloads. To fuel the premium tier of its next-generation platform, Nebius selected WEKA's high-performance storage software to turbocharge its AI Cloud performance while effortlessly scaling from petabytes to exabytes of data.
A leading research institution selected Nebius' purpose-built AI infrastructure to power its large-scale experimentation and model development efforts, reserving a multi-thousand-GPU cluster and leaning on Nebius AI Cloud's developer-friendly platform optimized for AI/ML workloads.
To further tailor the environment to its exacting operational needs, the customer requested the integration of WEKA's data platform, citing previous success with WEKA and the need for features such as user and directory quotas. With 2PB of WEKA storage deployed alongside Nebius' compute infrastructure, the institution now benefits from a high-performance, scalable, and fully managed platform that supports the rigorous demands of cutting-edge AI research.
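For readers unfamiliar with the feature, a directory quota simply caps how much capacity a given directory tree may consume, so individual users or projects cannot crowd out the shared storage pool. The short Python sketch below is a generic illustration of that policy only; it is not WEKA's implementation or API, and the mount path and 50 TB limit are hypothetical examples.

from pathlib import Path

def directory_usage_bytes(root: Path) -> int:
    # Sum the sizes of all regular files beneath the directory tree.
    return sum(f.stat().st_size for f in root.rglob("*") if f.is_file())

def check_quota(root: Path, limit_bytes: int) -> bool:
    # Report usage against the limit and return True while within quota.
    used = directory_usage_bytes(root)
    print(f"{root}: {used / 1e12:.2f} TB used of {limit_bytes / 1e12:.2f} TB")
    return used <= limit_bytes

if __name__ == "__main__":
    # Hypothetical example: cap one research project's directory at 50 TB.
    check_quota(Path("/mnt/weka/project-alpha"), limit_bytes=50 * 10**12)

In practice a storage platform enforces such limits natively at the filesystem layer rather than by scanning directories as this sketch does; the code is only meant to make the quota concept concrete.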
"WEKA exceeded every expectation and requirement we had," said Danila Shtan, CTO at Nebius. "The WEKA solution not only delivers outstanding throughput, IOPS, and low latency at scale while effortlessly managing mixed read and write workloads, but it also provides exceptional metadata management and streamlined multitenancy."
"We are proud to be collaborating with Nebius to deliver high-performance, cloud-based solutions that maximize their AI innovation while minimizing infrastructure complexity," said Liran Zvibel, cofounder and CEO at WEKA. "Together, Nebius and WEKA are redefining what's possible when high-performance storage meets AI-first infrastructure, providing a unified solution that is a catalyst for enterprise AI and agentic AI innovation."
Learn more about the Nebius AI Cloud solution powered by WEKA: https://www.weka.io/customers/nebius/.
About Nebius
Nebius is a technology company building full-stack infrastructure to service the explosive growth of the global AI industry, including large-scale GPU clusters, an AI-native cloud platform, and tools and services for developers. Headquartered in Amsterdam and listed on Nasdaq, the Company has a global footprint with R&D hubs across Europe, North America and Israel.
The Nebius AI Cloud platform has been built from the ground up for intensive AI workloads. With proprietary cloud software architecture and hardware designed in-house, Nebius gives AI builders the compute, storage, managed services and tools they need to build, tune and run their models.
To learn more, visit www.nebius.com.
About WEKA
WEKA is architecting a new approach to the enterprise data stack built for the era of agentic AI. The WEKA® Data Platform sets the standard for AI infrastructure, providing a cloud- and AI-native foundation for enterprise AI that can be deployed anywhere, with seamless data portability across on-premises, cloud, and edge environments. It transforms legacy data silos into dynamic data pipelines that dramatically increase GPU utilization and make AI model training, inference, and HPC workloads run faster and more efficiently, delivering microsecond-latency performance at scale. WEKA helps the world's most innovative enterprises and research organizations, including 12 of the Fortune 50, accelerate time to market, discovery, and insights with AI. Visit www.weka.io to learn more, or connect with WEKA on LinkedIn and X.
