Latest news with #DGXCloudLepton


Forbes
5 days ago
- Business
- Forbes
Is Nvidia Competing With Its GPU Cloud Partners?
Nvidia headquarters in Santa Clara, CA.

Nvidia recently announced two new cloud initiatives. First, the company announced DGX Cloud Lepton, designed to connect artificial intelligence developers with Nvidia's wide network of cloud providers. Second, Nvidia announced a new cloud service, the Industrial AI Cloud, intended to provide AI services to manufacturing companies in Europe. While these moves pit Nvidia against its cloud partners, the larger cloud service providers (CSPs) chose to compete with Nvidia using their in-house GPU alternatives: Google has the TPU, Amazon has Trainium, and Microsoft has Maia. (Nvidia is a client of Cambrian-AI Research.)

Turnabout is fair play, and Nvidia is helping its cloud partners sell AI services that keep their GPUs running at high utilization, maximizing profit, while also helping developers access a broader inventory of scarce and expensive GPUs. Much to the consternation of its cloud partners, Nvidia launched the new DGX Cloud Lepton service at Computex this year and has already signed up a healthy roster of CSPs. While Oracle and Google have yet to publicly join Lepton, Amazon AWS and Microsoft Azure have done so; they see the benefits of having their clouds accessible and promoted by Nvidia. The smaller GPU cloud players have also joined the party, including CoreWeave, Crusoe, Firmus, Foxconn, GMI Cloud, Lambda, Yotta Data Services, Nebius, Nscale, Firebird, Fluidstack, Hydra Host, Scaleway, Together AI, Mistral AI, and SoftBank Corp. These providers offer both on-demand and long-term GPU access, supporting a wide range of AI development and deployment needs. Other CSPs won't want to miss the train and will likely join soon.

At the Paris GTC, Nvidia CEO Jensen Huang announced that Nvidia and Deutsche Telekom were building an AI cloud for European manufacturing companies.
The Industrial Cloud will provide access to state-of-the-art AI infrastructure and Nvidia's rich portfolio of software, with support for CAD, CAE, Omniverse, robotics, and autonomous vehicles. The cloud is fully configured to support Nvidia's optimized enterprise AI software portfolio and should be open for business in early 2026. Nvidia's Industrial Cloud for Europe represents a major step in building sovereign, AI-powered infrastructure for the continent's industrial sector. By providing secure, high-performance compute resources and a robust AI software ecosystem, the initiative aims to propel European manufacturing into the next era of digital innovation.

Nvidia is partnering with Deutsche Telekom to build the first Industrial AI Cloud for European manufacturing companies.

The Industrial Cloud will be powered by 10,000 Nvidia GPUs, including the latest DGX B200 systems and RTX PRO servers, making it one of the largest industrial AI deployments in Germany. Think of it as a manufacturing-focused sovereign data center, managed and operated by Deutsche Telekom to ensure data sovereignty and compliance with European regulations, addressing concerns about dependency on non-European cloud providers. The lack of NVL72 racks tells us that Nvidia expects customers to fine-tune models and serve AI inference, not create new foundation models. Users will have access to Nvidia's CUDA-X libraries and workloads accelerated by Nvidia GPUs and Omniverse, supporting a wide range of industrial applications such as simulation, digital twins, robotics, design, engineering, and factory planning. The cloud will also support applications from leading industrial software providers including Siemens, Ansys, Cadence, and Rescale, enabling advanced manufacturing workflows for companies such as BMW, Maserati, Mercedes-Benz, and Schaeffler.
What does all this tell us? First, Nvidia isn't afraid to compete with its cloud partners in its quest to provide end users with access to state-of-the-art AI infrastructure. As we noted, the larger CSPs chose to develop competing AI accelerators, so they should not be surprised. Second, in reality Lepton doesn't compete with the CSPs; it provides aggregated access to their massive arrays of Nvidia GPUs, not a cloud owned and operated by Nvidia. And the Industrial Cloud fills a gap left by the CSPs, providing focused, sovereign resources for the European manufacturing base. Customers will love it, and so will the ISVs whose software has been optimized to run on Nvidia GPUs.
Yahoo
12-06-2025
- Business
- Yahoo
Nvidia collaborates for sovereign LLMs
Nvidia has partnered with various model builders and cloud service providers across Europe and the Middle East to advance the development of sovereign large language models (LLMs). The collaboration aims to accelerate AI adoption in industries such as manufacturing, robotics, healthcare, finance, energy, and the creative sectors.

Key partners include the Barcelona Supercomputing Center (BSC), Dicta, H Company and Domyn. Other key players include LightOn, the National Academic Infrastructure for Supercomputing in Sweden (NAISS), KBLab at the National Library of Sweden, the Technology Innovation Institute (TII), University College London, the University of Ljubljana, and UTTER. These partners are using Nvidia Nemotron techniques to enhance their models, focusing on cost efficiency and accuracy for enterprise AI workloads, including agentic AI. The models support Europe's 24 official languages and reflect local languages and cultures, Nvidia said. Some models, developed by H Company and LightOn in France, Dicta in Israel, Domyn in Italy, in Poland, BSC in Spain, NAISS and KBLab in Sweden, TII in the UAE, and University College London in the UK, specialise in national language and culture.

The optimised models will run on AI infrastructure from Nvidia Cloud Partners (NCPs) such as Nebius, Nscale, and Fluidstack through the Nvidia DGX Cloud Lepton marketplace. The LLMs will be distilled using Nvidia Nemotron techniques, including neural architecture search, reinforcement learning, and post-training with Nvidia-curated synthetic data. These processes aim to reduce operational costs and improve token generation speed during inference. Developers can deploy these models as Nvidia NIM microservices on AI factories, both on-premises and across cloud platforms, supporting more than 100,000 LLMs hosted on Hugging Face. A new Hugging Face integration with DGX Cloud Lepton will allow companies to fine-tune models on local NCP infrastructure.
Perplexity, an AI-powered answer engine processing over 150 million questions weekly, will integrate these models to enhance search query accuracy and AI outputs.

Nvidia founder and CEO Jensen Huang said: 'Together with Europe's model builders and cloud providers, we're building an AI ecosystem where intelligence is developed and served locally to provide a foundation for Europe to thrive in the age of AI — transforming every industry across the region.'

Recently, Nvidia announced multiple partnerships in the UK to boost AI capabilities, aligning with the start of London Tech Week. "Nvidia collaborates for sovereign LLMs" was originally created and published by Verdict, a GlobalData owned brand.
Yahoo
11-06-2025
- Business
- Yahoo
NVIDIA DGX Cloud Lepton Connects Europe's Developers to Global NVIDIA Compute Ecosystem
NVIDIA DGX Cloud Lepton

Mistral AI, Nebius, Nscale, Firebird, Fluidstack, Hydra Host, Scaleway and Together AI — Along With AWS and Microsoft Azure — Bring Compute Resources to DGX Cloud Lepton Marketplace to Meet AI Demand

Hugging Face Integrates DGX Cloud Lepton Into Training Cluster as a Service, Expanding AI Researcher Access to Scalable Compute for Model Training

NVIDIA and Leading European Venture Capitalists Offer Marketplace Credits to Portfolio Companies to Accelerate Startup Ecosystem

PARIS, June 11, 2025 (GLOBE NEWSWIRE) -- NVIDIA GTC Paris at VivaTech -- NVIDIA today announced the expansion of NVIDIA DGX Cloud Lepton™ — an AI platform featuring a global compute marketplace that connects developers building agentic and physical AI applications — with GPUs now available from a growing network of cloud providers.

Mistral AI, Nebius, Nscale, Firebird, Fluidstack, Hydra Host, Scaleway and Together AI are now contributing NVIDIA Blackwell and other NVIDIA architecture GPUs to the marketplace, expanding regional access to high-performance compute. AWS and Microsoft Azure will be the first large-scale cloud providers to participate in DGX Cloud Lepton. These companies join CoreWeave, Crusoe, Firmus, Foxconn, GMI Cloud, Lambda and Yotta Data Services in the marketplace.

To make accelerated computing more accessible to the global AI community, Hugging Face is introducing Training Cluster as a Service. This new offering integrates with DGX Cloud Lepton to seamlessly connect AI researchers and developers building foundation models with the NVIDIA compute ecosystem. NVIDIA is also working with leading European venture capital firms Accel, Elaia, Partech and Sofinnova Partners to offer DGX Cloud Lepton marketplace credits to portfolio companies, enabling startups to access accelerated computing resources and scale regional development.

'DGX Cloud Lepton is connecting Europe's developers to a global AI infrastructure,' said Jensen Huang, founder and CEO of NVIDIA.
'With partners across the region, we're building a network of AI factories that developers, researchers and enterprises can harness to scale local breakthroughs into global innovation.'

DGX Cloud Lepton simplifies the process of accessing reliable, high-performance GPU resources within specific regions by unifying cloud AI services and GPU capacity from across the NVIDIA compute ecosystem onto a single platform. This enables developers to keep their data local, supporting data governance and sovereign AI requirements. In addition, by integrating with the NVIDIA software suite — including NVIDIA NIM™ and NeMo™ microservices and NVIDIA Cloud Functions — DGX Cloud Lepton streamlines and accelerates every stage of AI application development and deployment, at any scale. The marketplace works with a new NIM microservice container, which includes support for a broad range of large language models, including the most popular open LLM architectures and more than a million models hosted publicly and privately on Hugging Face.

For cloud providers, DGX Cloud Lepton includes management software that continuously monitors GPU health in real time and automates root-cause analysis, minimizing manual intervention and reducing downtime. This streamlines operations for providers and ensures more reliable access to high-performance computing for customers.

NVIDIA DGX Cloud Lepton Speeds Training and Deployment

Early-access DGX Cloud Lepton customers using the platform to accelerate their strategic AI initiatives include:

- Basecamp Research, which is speeding the discovery and design of new biological solutions for pharmaceuticals, food, and industrial and environmental biotechnology by harnessing its 9.8 billion-protein database to pretrain and deploy large biological foundation models.
- EY, which is standardizing multi-cloud access across the global organization to accelerate the development of AI agents for domain- and sector-specific solutions.
- Outerbounds, which enables customers to build differentiated, production-grade AI products powered by the proven reliability of open-source Metaflow.
- Prima Mente, which is advancing neurodegenerative disease research at scale by pretraining large brain foundation models to uncover new disease mechanisms and tools to stratify patient outcomes in clinical settings.
- Reflection, which is building superintelligent autonomous coding systems that handle the most complex enterprise engineering tasks.

Hugging Face Developers Get Access to Scalable AI Training Across Clouds

Integrating DGX Cloud Lepton with Hugging Face's Training Cluster as a Service offering gives AI builders streamlined access to the GPU marketplace, making it easy to reserve, access and use NVIDIA compute resources in specific regions, close to their training data. Connected to a global network of cloud providers, Hugging Face customers can quickly secure the necessary GPU capacity for training runs using DGX Cloud Lepton.

Mirror Physics, Project Numina and the Telethon Institute of Genetics and Medicine will be among the first Hugging Face customers to access Training Cluster as a Service, with compute resources provided through DGX Cloud Lepton. They will use the platform to advance state-of-the-art AI models in chemistry, materials science, mathematics and disease research.

'Access to large-scale, high-performance compute is essential for building the next generation of AI models across every domain and language,' said Clément Delangue, cofounder and CEO of Hugging Face. 'The integration of DGX Cloud Lepton with Training Cluster as a Service will remove barriers for researchers and companies, unlocking the ability to train the most advanced models and push the boundaries of what's possible in AI.'
DGX Cloud Lepton Boosts AI Startup Ecosystem

NVIDIA is working with Accel, Elaia, Partech and Sofinnova Partners to offer up to $100,000 in GPU capacity credits and support from NVIDIA experts to eligible portfolio companies through DGX Cloud Lepton. BioCorteX, Bioptimus and Latent Labs will be among the first to access DGX Cloud Lepton, where they can discover and purchase compute capacity and use NVIDIA software, services and AI expertise to build, customize and deploy applications across a global network of cloud providers.

Availability

Developers can sign up for early access to NVIDIA DGX Cloud Lepton. Watch the NVIDIA GTC Paris keynote from Huang at VivaTech, and explore GTC Paris sessions.

About NVIDIA

NVIDIA (NASDAQ: NVDA) is the world leader in accelerated computing.

For further information, contact:
Natalie Hereth
NVIDIA Corporation
+1-360-581-1088
nhereth@

Certain statements in this press release including, but not limited to, statements as to: DGX Cloud Lepton connecting Europe's developers to a global AI infrastructure; with partners across the region, NVIDIA building a network of AI factories that developers, researchers and enterprises can harness to scale local breakthroughs into global innovation; the benefits, impact, performance, and availability of NVIDIA's products, services, and technologies; expectations with respect to NVIDIA's third party arrangements, including with its collaborators and partners; expectations with respect to technology developments; and other statements that are not historical facts are forward-looking statements within the meaning of Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934, as amended, which are subject to the 'safe harbor' created by those sections based on management's beliefs and assumptions and on information currently available to management and are subject to risks and uncertainties that could cause results to be materially different than expectations.
Important factors that could cause actual results to differ materially include: global economic and political conditions; NVIDIA's reliance on third parties to manufacture, assemble, package and test NVIDIA's products; the impact of technological development and competition; development of new products and technologies or enhancements to NVIDIA's existing product and technologies; market acceptance of NVIDIA's products or NVIDIA's partners' products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of NVIDIA's products or technologies when integrated into systems; and changes in applicable laws and regulations, as well as other factors detailed from time to time in the most recent reports NVIDIA files with the Securities and Exchange Commission, or SEC, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q. Copies of reports filed with the SEC are posted on the company's website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances. Many of the products and features described herein remain in various stages and will be offered on a when-and-if-available basis. The statements above are not intended to be, and should not be interpreted as a commitment, promise, or legal obligation, and the development, release, and timing of any features or functionalities described for our products is subject to change and remains at the sole discretion of NVIDIA. NVIDIA will have no liability for failure to deliver or delay in the delivery of any of the products, features or functions set forth herein. © 2025 NVIDIA Corporation. All rights reserved. 
NVIDIA, the NVIDIA logo, DGX Cloud Lepton, NVIDIA NeMo and NVIDIA NIM are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated. Features, pricing, availability and specifications are subject to change without notice. A photo accompanying this announcement is available at


Business Wire
11-06-2025
- Business
- Business Wire
Sofinnova Partners Collaborates With NVIDIA to Accelerate European Life Sciences Startups
PARIS--(BUSINESS WIRE)-- Sofinnova Partners ("Sofinnova"), a leading European life sciences venture capital firm based in Paris, London, and Milan, today announced a collaboration with NVIDIA to support its portfolio of life sciences startups. The collaboration delivers significant graphics processing unit (GPU) credits to select Sofinnova portfolio companies, effectively giving them access to the same computational firepower used by tech titans in Silicon Valley.

Amid increasing demand for computational resources driven by AI, Sofinnova's portfolio companies will be able to access NVIDIA Blackwell and other NVIDIA architecture GPUs via NVIDIA DGX Cloud Lepton, an AI platform and marketplace connecting developers to global AI infrastructure. BioCorteX, Bioptimus, Cure51 and Latent Labs — four of Europe's most promising digital medicine startups — will gain access to computing resources through DGX Cloud Lepton, enabling them to complete in days the processing of biological data sets and computational models that would previously have taken months.

Cure51, a pioneer in decoding the biology of exceptional cancer survivors, tested NVIDIA Parabricks, a GPU-accelerated genomics toolkit, and achieved up to 17x faster processing with NVIDIA H100 GPUs and more than 2x cost savings with NVIDIA L4 GPUs compared to its CPU baseline — dramatically accelerating its ability to analyze complex genomic data and scale its survivor-based insights.

"This collaboration supercharges computation for life sciences innovation," said Antoine Papiernik, Chairman and Managing Partner at Sofinnova Partners. "The convergence of biology, AI, computation, and data isn't just our investment thesis — it's the defining battleground of the next decade. From our Digital Medicine strategy to our own proprietary AI platform, we recognize the transformative potential of AI across our entire domain.
By securing access to NVIDIA's infrastructure, we're not just funding companies; we're empowering them with the computational backbone needed to outperform incumbents and redefine what's possible in drug discovery, precision medicine, and scalable solutions that address both human health and sustainability." Learn more about NVIDIA DGX Cloud Lepton and view the official NVIDIA announcement. ### About Sofinnova Partners Sofinnova Partners is a leading European venture capital firm in life sciences, specializing in healthcare and sustainability. Based in Paris, London and Milan, the firm brings together a team of professionals from all over the world with strong scientific, medical and business expertise. Sofinnova Partners is a hands-on company builder across the entire value chain of life sciences investments, from seed to later-stage. Founded in 1972, Sofinnova Partners is a deeply established venture capital firm in Europe, with 50 years of experience backing over 500 companies and creating market leaders around the globe. Today, Sofinnova Partners manages over €4 billion in assets. For more information, please visit:


CNBC
11-06-2025
- Business
- CNBC
Nvidia makes big play for Europe with infrastructure deals
Nvidia on Wednesday announced a slew of partnerships with European countries and companies, spanning infrastructure to software, as it looks to keep itself at the center of the global artificial intelligence story. Chief Executive Jensen Huang continued his tour of Europe with a keynote at Nvidia's GTC event in Paris, where he laid out some key European partnerships.

Nvidia has been keen to position itself as an infrastructure company that can help countries and governments build data centers using its graphics processing units to unlock the potential of AI for local economies and populations. As part of that effort, Huang recently carried out a similar whirlwind trip to the Middle East, where Nvidia is planning to sell its latest chips as part of big data center buildouts in Saudi Arabia and the United Arab Emirates.

"Every industrial revolution begins with infrastructure. AI is the essential infrastructure of our time, just as electricity and the internet once were," Huang said in a Wednesday press release. "Europe has now awakened to the importance of these AI factories, the importance of this AI infrastructure," Huang said during a separate presentation on Wednesday. "AI factories" is the term Nvidia uses for massive data centers containing its GPUs. Huang added that AI computing capacity in Europe will grow by a factor of 10 in the next two years.

The tech giant seeks to expand its international footprint and embed itself in national-level AI infrastructure. That push into new markets is even more critical as U.S. export restrictions on Nvidia's most advanced chips have cost the company revenue in China. Nvidia said it is working with national governments, regional cloud and telecommunications firms, and technology centers in Europe. One of the key partnerships announced is between Nvidia and French startup Mistral, which will build an "AI cloud" that will deploy 18,000 Nvidia Grace Blackwell chips.
This will allow businesses to develop and use AI through Mistral's models, Nvidia said. Nvidia also announced infrastructure projects in Italy and Armenia. Orange and Telefonica are among the telecommunications companies also working with Nvidia in areas such as deploying AI applications and large language models as part of the newly announced deals. In Germany, Nvidia said it is building what it has dubbed an "industrial cloud," which will feature 10,000 GPUs and will be specifically designed to provide services for European manufacturers.

The big focus from Nvidia in Europe is so-called "sovereign AI": the idea that the data centers and servers providing services to users in the European Union are actually located in the region rather than abroad. Nvidia also announced so-called "tech centers" in Europe, which will focus on advanced research, upskilling workforces and accelerating scientific breakthroughs in countries including the U.K., France, Spain and Germany.

Nvidia also expanded a product called DGX Cloud Lepton — something of a marketplace for GPUs — with new cloud providers and integrated it with AI model repository Hugging Face. DGX Cloud Lepton works by allowing developers to access GPUs across the world to run AI applications.

While Nvidia is best known for its hardware — its famous GPUs — the technology giant has ramped up its focus on its software offerings to help keep the company at the center of fast-moving AI development. That software push has continued into Europe. Last year, Nvidia announced a product called Nvidia NIM, effectively a pre-packaged AI model that can be quickly deployed and that lets developers build apps on it. Nvidia on Wednesday announced that any large language model available on Hugging Face can also be deployed as a NIM. Rather than creating their own models, developers can easily access these options via Nvidia's NIM service.
Nvidia's strategy is to link its hardware to all of this software, giving it an edge over rivals in a bid to cement its dominance so far in AI.