
Latest news with #NVIDIAHopperGPUs

AI Innovators Worldwide Choose Oracle for AI Training and Inferencing

Yahoo

4 days ago

  • Business
  • Yahoo


Fireworks AI, Hedra, Numenta, and Soniox experience accelerated performance and cost efficiency with Oracle Cloud Infrastructure

AUSTIN, Texas, June 18, 2025 /PRNewswire/ -- AI innovators across the world are using Oracle Cloud Infrastructure (OCI) AI infrastructure and OCI Supercluster to train AI models and deploy AI inference and applications. Fireworks AI, Hedra, Numenta, Soniox, and hundreds of other leading AI innovators have selected OCI for its scalability, performance, cost efficiency, choice of compute instances, and control over where to run their AI workloads.

As industries rapidly adopt AI to help drive innovation and efficiency, the AI companies providing these services require reliable, secure, and highly available cloud and AI infrastructure that enables them to quickly and economically scale out GPU instances. With OCI AI infrastructure, AI companies gain access to high-performance GPU clusters and the scalable computing power needed for AI training, AI inference, digital twins, and massively parallel HPC applications.

"Among AI innovators, OCI has rapidly become the destination of choice for training and inferencing needs of all sizes," said Chris Gandolfo, executive vice president, Oracle Cloud Infrastructure and AI. "OCI AI infrastructure delivers ultra high-speed networking, optimized storage, and cutting-edge GPUs that AI companies rely on to power the next wave of innovation."

Global AI Innovators Choose Oracle

Fireworks AI is an inference platform that empowers developers and businesses to build highly optimized and production-ready generative AI applications, serving over 100 state-of-the-art open models in text, image, audio, embedding, and multi-modal formats. Fireworks AI uses OCI Compute bare metal instances accelerated by NVIDIA Hopper GPUs and OCI Compute with AMD MI300X GPUs to help it serve over two trillion inference tokens daily on its platform and scale its services globally.

"Developers rely on Fireworks AI to integrate generative AI into their products, optimized for latency, throughput, and cost per token," said Lin Qiao, co-founder and CEO, Fireworks AI. "With OCI AI infrastructure, we can deliver the ultra-fast response times and production-grade stability that developers expect. We're able to process AI workloads efficiently, minimize downtime, and help ensure AI applications run smoothly at scale so that our customers can focus on innovation without worrying about the underlying infrastructure."

Hedra, an AI-driven video creation company, enables users to create videos with life-like characters. By deploying its multimodal foundation models for generative image, video, and audio on OCI Compute bare metal instances accelerated by NVIDIA Hopper GPUs, Hedra reduced its GPU costs, experienced faster training speeds, and reduced its model iteration time.

"Creating expressive character videos at scale requires immense computational power and efficient multimodal processing," said Michael Lingelbach, founder and CEO, Hedra. "OCI handles our model training and inferencing across video, audio, and image data, while providing the rapid processing required for real-time character rendering and meeting the high storage demands of large datasets. This enabled us to release our latest model, Character-3, and content creation platform, Hedra Studio, quickly and without a hitch."

Numenta is an AI technology company focused on maximizing the performance and efficiency of deep learning systems. By using OCI Compute bare metal instances accelerated by NVIDIA GPUs, Numenta gained access to a range of reliable and high-performance training instances, achieving faster training speeds and increased cycles of learning.

"OCI provides the high-performance infrastructure and strong operational support we need to push the boundaries of AI without compromising speed or accuracy," said Dan Steere, CEO, Numenta. "With OCI, we've been able to confidently accelerate the development of our next-generation technology, taking significant steps forward in Efficient Intelligence™."

Soniox, an AI company at the forefront of audio and speech AI, pioneers foundational AI models for audio, speech, and language comprehension. With its new universal multilingual speech AI model hosted on OCI, Soniox uses OCI Compute bare metal instances accelerated by NVIDIA Hopper GPUs to train its model to seamlessly recognize and understand speech across 60 languages in real time with low latency and higher accuracy.

"A high-performance infrastructure that provides improved accuracy, speed, and cost efficiency was top of mind when selecting a cloud provider to support our growth," said Klemen Simonic, founder and CEO, Soniox. "OCI gives us access to the latest AI innovations that allow us to push the boundaries of speech recognition and audio understanding while significantly reducing deployment time and operational costs."

Additional Resources

  • Learn more about Oracle Cloud Infrastructure
  • Learn more about OCI AI infrastructure
  • Learn more about OCI Generative AI
  • Learn more about Oracle's AI strategy

About Oracle

Oracle offers integrated suites of applications plus secure, autonomous infrastructure in the Oracle Cloud. For more information about Oracle (NYSE: ORCL), please visit the company's website.

Trademarks

Oracle, Java, MySQL and NetSuite are registered trademarks of Oracle Corporation. NetSuite was the first cloud company, ushering in the new era of cloud computing.

SOURCE Oracle
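For a sense of the scale behind the Fireworks AI figure quoted above, the short Python sketch below converts the stated "over two trillion inference tokens daily" into an average per-second throughput and a rough daily cost at an illustrative per-token price. The price and the assumption of a perfectly uniform load are hypothetical and are not taken from the announcement.

```python
# Back-of-the-envelope scale check for the "two trillion tokens per day" figure.
# The per-token price below is purely illustrative, not a quoted Fireworks AI rate.

TOKENS_PER_DAY = 2_000_000_000_000     # 2 trillion, as stated in the release
SECONDS_PER_DAY = 24 * 60 * 60

avg_tokens_per_second = TOKENS_PER_DAY / SECONDS_PER_DAY
print(f"Average throughput: {avg_tokens_per_second:,.0f} tokens/s")
# -> roughly 23.1 million tokens/s, assuming a perfectly uniform load

# Illustrative cost sensitivity at a hypothetical $0.20 per million tokens
price_per_million = 0.20
daily_cost = TOKENS_PER_DAY / 1_000_000 * price_per_million
print(f"Illustrative daily cost at ${price_per_million:.2f}/M tokens: ${daily_cost:,.0f}")
# -> about $400,000/day at that hypothetical rate
```

Even under these simplifying assumptions, the stated volume works out to tens of millions of tokens per second on average, which is why the quote emphasizes latency, throughput, and cost per token.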

Nebius Group (NBIS) and Saturn Cloud Partner to Deliver Turnkey AI/ML Infrastructure Solution Built on NVIDIA Hopper GPUs

Yahoo

5 days ago

  • Business
  • Yahoo


Nebius Group N.V. (NASDAQ:NBIS) is one of the top hot large-cap stocks to invest in. On June 11, Nebius Group N.V. (NASDAQ:NBIS) announced a partnership with Saturn Cloud to deliver a turnkey AI/ML infrastructure solution that is built on NVIDIA Hopper GPUs, with support for the NVIDIA AI Enterprise software stack. Saturn Cloud is the MLOps platform for AI/ML engineers. The collaboration would enable AI engineers to access an enterprise-grade AI/ML infrastructure-in-a-box with on-demand access to an enterprise-ready MLOps platform and NVIDIA Hopper GPUs.

The solution would combine Saturn Cloud's engineer-loved MLOps platform with the flexibility and power of Nebius's AI cloud, allowing anyone to instantly sign up and run jobs and deployments, use Jupyter notebooks or other IDEs, and manage cloud resources on NVIDIA Hopper GPUs through Nebius Group's (NASDAQ:NBIS) infrastructure. The solution is a compelling option for all use case types because of its notably lower cost compared to traditional cloud service providers.

Nebius Group N.V. (NASDAQ:NBIS) takes the ninth spot on our list of the top hot large-cap stocks to invest in. Nebius Group N.V. (NASDAQ:NBIS) is a technology company that provides services and infrastructure to AI builders across the globe. The company's offerings include Nebius AI, an AI-centric cloud platform that offers full-stack infrastructure, including large-scale GPU clusters, cloud services, and developer tools.

While we acknowledge the potential of NBIS as an investment, we believe certain AI stocks offer greater upside potential and carry less downside risk. If you're looking for an extremely undervalued AI stock that also stands to benefit significantly from Trump-era tariffs and the onshoring trend, see our free report on the best short-term AI stock.

READ NEXT: The Best and Worst Dow Stocks for the Next 12 Months and 10 Unstoppable Stocks That Could Double Your Money.

Disclosure: None.

Sensible Biotechnologies Slashes Time Taken to Optimize Novel Cell-Based mRNA Therapeutics by More Than 90% with AI Platform Built on NVIDIA Technology

Business Wire

12-06-2025

  • Business
  • Business Wire


BOSTON--(BUSINESS WIRE)-- Sensible Biotechnologies, a biotechnology company pioneering a cell-based platform for the design and manufacturing of mRNA therapeutics, today announced an initiative to accelerate the development of next-generation mRNA medicines through AI-driven molecular design powered by NVIDIA accelerated computing and AI.

'By combining our cell-based platform with NVIDIA accelerated computing and AI, we're laying the foundation for programmable, high-performance RNA medicines.' – Miroslav Gasparek, CEO and co-founder of Sensible Biotechnologies

Sensible has developed a unique design, optimization, and manufacturing process for producing mRNA drugs within living cells. The mRNA platform, which can include a range of novel modifications, creates a pure and low-immunogenicity drug product, unlocking new therapeutic areas with safe, effective, affordable, and repeatable mRNA therapeutics. This replaces traditional in vitro transcription methods, which are unable to meet the demands of next-generation therapies for cancer, rare diseases, gene editing, and other unmet medical needs.

To enable predictive and precise mRNA sequence design, Sensible's proprietary closed-loop AI-powered design and optimization platform models RNA folding from raw sequence to stable conformation. It integrates NVIDIA technology, including the NVIDIA BioNeMo framework and NVIDIA NIM microservices on the NVIDIA DGX Cloud platform with NVIDIA Hopper GPUs, to apply physics-based simulations and machine learning to understand how RNA folds and behaves in complex biological environments.

Using DGX Cloud with NVIDIA H100 80 GB GPUs and double-precision performance, Sensible achieves over 11,000 simulation steps per second. This performance enables the team to screen multiple RNA candidates in parallel, reducing simulation runtimes from overnight to under an hour per sequence. By incorporating NVIDIA technology, Sensible has cut its mRNA optimization cycles from 15 days to just one – a reduction of more than 90%. This efficiency enables faster iteration and informed sequence optimization, giving Sensible the ability to rapidly manipulate mRNA design space and advance the next generation of life-saving medicines.

'Producing therapeutic mRNA inside living cells requires solving intricate biological problems, from RNA folding to packaging,' said Krishna Motheramgari, PhD, Principal Computational Scientist at Sensible Biotechnologies. 'With advanced NVIDIA accelerated computing and AI, we can simulate these dynamics rapidly and accurately, accelerating our design cycle dramatically.'

'This technology integration brings us closer to our vision of rationally designed mRNA therapeutics,' said Miroslav Gasparek, Chief Executive Officer and co-founder of Sensible Biotechnologies. 'By combining our cell-based platform with NVIDIA accelerated computing and AI, we're laying the foundation for programmable, high-performance RNA medicines.'

About Sensible Biotechnologies, Inc.

Sensible Biotechnologies is developing the first cell-based platform for the design, optimization, and manufacturing of highly functional, low-immunogenic mRNA, unlocking the therapeutic potential of mRNA for pharma and biotech partners across a wide range of applications. Based in Boston, Oxford, and Bratislava, Sensible has raised more than $13M from leading life science and deep tech investors, including Recode Ventures, BlueYard Capital, Kaya VC, Backed VC, Y Combinator, ZAKA VC, and Onsight – the VC fund founded by the co-founder of BioNTech, Christoph Huber, and the co-founder of ARM, Hermann Hauser.
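The performance claims in the release lend themselves to a quick arithmetic check: going from a 15-day optimization cycle to a 1-day cycle is roughly a 93% reduction, consistent with the stated "more than 90%", and at the quoted 11,000+ simulation steps per second a run finishing in under an hour corresponds to a workload on the order of tens of millions of steps. The steps-per-sequence value in the sketch below is a hypothetical illustration, not a figure from the announcement.

```python
# Sanity-check the reduction claim and what 11,000 steps/s implies for runtime.
# The steps-per-sequence value is a hypothetical illustration, not from the release.

old_cycle_days, new_cycle_days = 15, 1
reduction = (old_cycle_days - new_cycle_days) / old_cycle_days
print(f"Optimization cycle reduction: {reduction:.1%}")   # 93.3%, i.e. "more than 90%"

steps_per_second = 11_000                  # lower bound stated in the release
steps_per_sequence = 30_000_000            # hypothetical workload size
runtime_hours = steps_per_sequence / steps_per_second / 3600
print(f"Runtime at {steps_per_second:,} steps/s: {runtime_hours:.2f} h per sequence")
# ~0.76 h, i.e. "under an hour" for a ~30M-step simulation at that rate
```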

Nebius and Saturn Cloud Launch First-in-Class AI MLOps Cloud with Support for NVIDIA AI Enterprise

Yahoo

11-06-2025

  • Business
  • Yahoo


Collaboration gives AI engineers enterprise-grade AI/ML infrastructure-in-a-box with on-demand access to NVIDIA Hopper GPUs and an enterprise-ready MLOps platform

Amsterdam, Netherlands--(Newsfile Corp. - June 11, 2025) - Leading AI infrastructure provider Nebius (NASDAQ: NBIS) is partnering with Saturn Cloud, the MLOps platform for AI/ML engineers, to deliver a turnkey AI/ML infrastructure solution built on NVIDIA Hopper GPUs with support for the NVIDIA AI Enterprise software stack.

The solution brings together the power and flexibility of Nebius's AI cloud with Saturn Cloud's engineer-loved MLOps platform. It enables anyone to sign up instantly to use Jupyter notebooks or other IDEs, run jobs and deployments, and manage cloud resources on NVIDIA Hopper GPUs via Nebius's infrastructure. The service is available at significantly lower cost than traditional cloud service providers, providing a compelling solution for all types of use cases.

Companies can deploy an enterprise-grade application of Saturn Cloud in a Nebius virtual private cloud environment that complies with enterprise IT security standards, along with access to powerful, cost-efficient GPUs. This is complete with enterprise-grade SLAs, single sign-on (SSO), and other IT security integrations documented on the respective companies' documentation pages.

Additionally, individuals and teams can join the 100,000+ users of Saturn Cloud's hosted tiers, which leverage Nebius hardware to enable instantaneous sign-up and deployment of cloud resources. By simply providing a credit card, users can benefit from the competitive pricing of Nebius's AI cloud with the full suite of Saturn Cloud's MLOps tool chain.

In addition to competitive pricing and fully integrated MLOps, the solution offers access to Nebius's AI Cloud accelerated by NVIDIA. Teams can harness the computational power driving today's most sophisticated AI systems without making long-term investments in accelerated computing systems. Capacity reservations are also possible and deliver additional cost savings.

"The combination of Nebius's full-stack AI infrastructure and Saturn Cloud's MLOps platform opens up access to enterprise-grade AI capabilities for organizations of all sizes," said Danila Shtan, CTO of Nebius. "Nebius offers Saturn Cloud customers flexible access to a highly performant, built-for-AI cloud at high levels of reliability and cost efficiency."

"Together, Nebius and Saturn Cloud enable organizations to become AI-driven," said M. Sebastian Metti, CEO of Saturn Cloud. "Previously limited by GPU economics or DevOps, our customers can build and run scalable, GPU-accelerated AI workloads without significant financial or engineering investment."

Beyond cost-effective access to powerful compute, the offering enhances Saturn Cloud's MLOps platform with support for NVIDIA AI Enterprise software, including NVIDIA NIM microservices, NVIDIA NeMo, NVIDIA RAPIDS, and more, all running on Nebius's high-performance infrastructure.

The joint solution is now generally available to customers globally. AI developers can access these capabilities immediately by signing up for Saturn Cloud Pro and start developing on NVIDIA Hopper GPUs in minutes. Enterprise deployments are also available for teams who would like to use Saturn Cloud within their Nebius accounts.

About Nebius

Nebius is a technology company building full-stack cloud infrastructure to serve the global AI industry, including large-scale GPU clusters, cloud platforms, and tools and services for developers. Headquartered in Amsterdam and listed on Nasdaq, Nebius has a global footprint with R&D hubs across Europe, North America, and Israel. The team includes around 400 highly skilled hardware and software engineers, as well as an in-house AI R&D team.

Nebius's AI cloud platform delivers a true hyperscale cloud experience for AI innovators. With fully featured cloud software and hardware designed in-house, Nebius gives AI builders the compute, storage, managed services, and tools they need to build, tune, and run models and apps in one place.

About Saturn Cloud

Saturn Cloud is the world's only MLOps platform with a multi-cloud offering of GPUs on demand. AI/ML and data science teams can now simplify the development, deployment, and management of machine learning models at scale – all at an affordable price, with no commitment required.

Contacts

Nebius: media@
Saturn Cloud: media@
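As a rough illustration of the kind of workflow the release describes (signing up for Saturn Cloud Pro and developing against NVIDIA NIM microservices on Nebius-hosted Hopper GPUs), the sketch below calls a NIM microservice through its OpenAI-compatible chat completions API from a notebook. The endpoint URL, API key variable, and model name are placeholders; neither company's announcement specifies these deployment details.

```python
# Minimal sketch: querying a NIM microservice's OpenAI-compatible endpoint from a notebook.
# The base_url, API key, and model name are placeholders; a real deployment's values would
# come from the Saturn Cloud / Nebius environment, which the release does not detail.
import os
from openai import OpenAI  # pip install openai

client = OpenAI(
    base_url="http://nim-llm.example.internal:8000/v1",  # hypothetical NIM endpoint
    api_key=os.environ.get("NIM_API_KEY", "not-needed-for-local-nim"),
)

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",  # example NIM model id; the actual catalog may differ
    messages=[{"role": "user", "content": "Summarize what an MLOps platform does."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```

In a hosted Saturn Cloud environment, the actual base URL and credentials would come from the deployment itself rather than the hard-coded placeholders shown here.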

