
Latest news with #NVIDIA

F5 expands performance, multi-tenancy, and security capabilities for fast-evolving AI landscape with NVIDIA

Time of India

2 hours ago

  • Business
  • Time of India

F5 expands performance, multi-tenancy, and security capabilities for fast-evolving AI landscape with NVIDIA

F5, the global leader in delivering and securing every app and API, today announced new capabilities for F5 BIG-IP Next for Kubernetes accelerated with NVIDIA BlueField-3 DPUs and the NVIDIA DOCA software framework, underscored by customer Sesterce's validation deployment. Sesterce is a leading European operator specializing in next-generation infrastructures and sovereign AI, designed to meet the needs of accelerated computing and artificial intelligence. As part of the F5 Application Delivery and Security Platform, BIG-IP Next for Kubernetes running natively on NVIDIA BlueField-3 DPUs delivers high-performance traffic management and security for large-scale AI infrastructure, unlocking greater efficiency, control, and performance for AI applications. In tandem with the compelling performance advantages announced alongside general availability earlier this year, Sesterce has successfully completed validation of the F5 and NVIDIA solution across a number of key capabilities, including the following areas:

- Enhanced performance, multi-tenancy, and security to meet cloud-grade expectations, initially showing a 20% improvement in GPU utilization.
- Integration with NVIDIA Dynamo and KV Cache Manager to reduce latency for reasoning in large language model (LLM) inference systems and to optimize GPU and memory resources.
- Smart LLM routing on BlueField DPUs, running effectively with NVIDIA NIM microservices for workloads requiring multiple models, giving customers the best of all available models.
- Scaling and securing Model Context Protocol (MCP) deployments, including reverse proxy capabilities and protections for more scalable and secure LLMs, enabling customers to swiftly and safely utilize the power of MCP servers.
- Powerful data programmability with robust F5 iRules capabilities, allowing rapid customization to support AI applications and evolving security requirements.
'Integration between F5 and NVIDIA was enticing even before we conducted any tests,' said Youssef El Manssouri, CEO and Co-Founder at Sesterce. 'Our results underline the benefits of F5's dynamic load balancing with high-volume Kubernetes ingress and egress in AI environments. This approach empowers us to more efficiently distribute traffic and optimize the use of our GPUs while allowing us to bring additional and unique value to our customers. We are pleased to see F5's support for a growing number of NVIDIA use cases, including enhanced multi-tenancy, and we look forward to additional innovation between the companies in supporting next-generation AI infrastructure.'

Highlights of new solution capabilities include:

LLM Routing and Dynamic Load Balancing with BIG-IP Next for Kubernetes

With this collaborative solution, simple AI-related tasks can be routed to less expensive, lightweight LLMs in supporting generative AI, while advanced models are reserved for complex queries. This level of customizable intelligence also enables routing functions to leverage domain-specific LLMs, improving output quality and significantly enhancing customer experiences. F5's advanced traffic management ensures queries are sent to the most suitable LLM, lowering latency and improving time to first token.

'Enterprises are increasingly deploying multiple LLMs to power advanced AI experiences—but routing and classifying LLM traffic can be compute-heavy, degrading performance and user experience,' said Kunal Anand, Chief Innovation Officer at F5. 'By programming routing logic directly on NVIDIA BlueField-3 DPUs, F5 BIG-IP Next for Kubernetes is the most efficient approach for delivering and securing LLM traffic. This is just the beginning. Our platform unlocks new possibilities for AI infrastructure, and we're excited to deepen co-innovation with NVIDIA as enterprise AI continues to scale.'
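The routing pattern described above can be illustrated with a short sketch. This is not F5's implementation (which runs as routing logic on BlueField-3 DPUs); it is a minimal, hypothetical example of cost-aware model selection, where the model names and the complexity heuristic are invented for illustration.

```python
# Hypothetical sketch of cost-aware LLM routing: simple queries go to a
# lightweight model, complex ones to an advanced model. The classifier
# heuristic and model names are illustrative assumptions, not F5's logic.

def classify_complexity(prompt: str) -> str:
    """Crude heuristic: long or multi-step prompts count as complex."""
    markers = ("step by step", "analyze", "compare", "prove")
    if len(prompt.split()) > 100 or any(m in prompt.lower() for m in markers):
        return "complex"
    return "simple"

ROUTES = {
    "simple": "lightweight-llm",   # cheaper, lower latency
    "complex": "advanced-llm",     # reserved for hard queries
}

def route(prompt: str) -> str:
    """Return the model a request should be sent to."""
    return ROUTES[classify_complexity(prompt)]
```

In a production system the classifier itself is the compute-heavy part; the article's point is that offloading this step to the DPU frees host CPUs and GPUs for inference.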
Optimizing GPUs for Distributed AI Inference at Scale with NVIDIA Dynamo and KV Cache Integration

Earlier this year, NVIDIA Dynamo was introduced, providing a supplementary framework for deploying generative AI and reasoning models in large-scale distributed environments. NVIDIA Dynamo streamlines the complexity of running AI inference in distributed environments by orchestrating tasks like scheduling, routing, and memory management to ensure seamless operation under dynamic workloads. Offloading specific operations from CPUs to BlueField DPUs is one of the core benefits of the combined F5 and NVIDIA solution. With F5, the Dynamo KV Cache Manager feature can intelligently route requests based on capacity, using Key-Value (KV) caching to accelerate generative AI use cases by retaining information from previous operations rather than requiring resource-intensive recomputation. From an infrastructure perspective, organizations storing and reusing KV cache data can do so at a fraction of the cost of using GPU memory for this purpose.

'BIG-IP Next for Kubernetes accelerated with NVIDIA BlueField-3 DPUs gives enterprises and service providers a single point of control for efficiently routing traffic to AI factories to optimize GPU efficiency and to accelerate AI traffic for data ingestion, model training, inference, RAG, and agentic AI,' said Ash Bhalgat, Senior Director of AI Networking and Security Solutions, Ecosystem and Marketing at NVIDIA. 'In addition, F5's support for multi-tenancy and enhanced programmability with iRules continues to provide a platform that is well-suited for continued integration and feature additions, such as support for NVIDIA Dynamo Distributed KV Cache Manager.'

Improved Protection for MCP Servers with F5 and NVIDIA

Model Context Protocol (MCP) is an open protocol developed by Anthropic that standardizes how applications provide context to LLMs.
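The cache-aware routing idea above can be sketched in a few lines. This is a toy model of the concept, not the Dynamo KV Cache Manager itself: worker names, capacities, and the cache representation are hypothetical, and real KV caches hold per-token attention tensors rather than Python objects.

```python
# Illustrative sketch of KV-cache-aware request routing: prefer a worker
# that already holds the request prefix's KV cache (avoiding expensive
# recomputation); otherwise pick the worker with the most free capacity.
# All names and numbers here are invented for illustration.

class Worker:
    def __init__(self, name: str, kv_capacity: int):
        self.name = name
        self.kv_capacity = kv_capacity  # free KV-cache slots (stand-in metric)
        self.cache = {}                 # prompt prefix -> cached KV state

    def has_cached(self, prefix: str) -> bool:
        return prefix in self.cache

def pick_worker(workers, prefix: str):
    """Route to a cache hit if one exists, else to the least-loaded worker."""
    cached = [w for w in workers if w.has_cached(prefix)]
    if cached:
        return cached[0]
    return max(workers, key=lambda w: w.kv_capacity)
```

The economics follow directly: a cache hit skips the prefill recomputation for the shared prefix, so GPU time is spent only on new tokens.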
Deploying the combined F5 and NVIDIA solution in front of MCP servers allows F5 technology to serve as a reverse proxy, bolstering security capabilities for MCP solutions and the LLMs they support. In addition, the full data programmability enabled by F5 iRules promotes rapid adaptation and resilience for fast-evolving AI protocol requirements, as well as additional protection against emerging cybersecurity risks.

'Organizations implementing agentic AI are increasingly relying on MCP deployments to improve the security and performance of LLMs,' said Greg Schoeny, SVP, Global Service Provider at World Wide Technology. 'By bringing advanced traffic management and security to extensive Kubernetes environments, F5 and NVIDIA are delivering integrated AI feature sets—along with programmability and automation capabilities—that we aren't seeing elsewhere in the industry right now.'

F5 BIG-IP Next for Kubernetes deployed on NVIDIA BlueField-3 DPUs is generally available now. Further details on the technology and deployment benefits can be found in a companion blog from F5.
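The reverse-proxy pattern in front of an MCP server can be sketched minimally. This is a conceptual example only, assuming a JSON-RPC-style request shape and an invented allow-list policy; it is not F5's product behavior, and real deployments would also handle authentication, rate limiting, and payload inspection.

```python
# Minimal sketch of a policy-enforcing reverse proxy in front of an MCP
# server: the proxy inspects each request, blocks disallowed methods, and
# forwards the rest upstream. The method names and error code mimic the
# JSON-RPC convention MCP uses; the policy itself is a hypothetical example.

ALLOWED_METHODS = {"tools/list", "tools/call", "resources/read"}

def proxy_request(request: dict, forward) -> dict:
    """Apply policy, then hand allowed requests to the upstream server.

    `forward` is a callable standing in for the connection to the real
    MCP server.
    """
    method = request.get("method", "")
    if method not in ALLOWED_METHODS:
        # Reject at the edge; the MCP server never sees the request.
        return {"error": {"code": -32601, "message": "method blocked by proxy"}}
    return forward(request)
```

Because the proxy terminates client connections, policy changes (the article's iRules point) can be made at the edge without touching the MCP servers behind it.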

The Week in AI: "All incumbents are gonna get nuked."

Globe and Mail

15 hours ago

  • Business
  • Globe and Mail

The Week in AI: "All incumbents are gonna get nuked."

Welcome back to The Week in AI. I'm Kevin Cook, your field guide and storyteller for the fascinating arena of artificial intelligence. On Friday, my colleague Ethan Feller and I ran through a dozen developments that are transforming the economy right before our eyes. Here were 7 of the highlights...

1) Jensen at NVIDIA GTC Paris: "We are going to sell hundreds of billions worth of GB200/300."

CEO Jensen Huang has forecast that spending on AI-enabled data centers will double to $2 trillion over the next four to five years. As Grace Blackwell systems deploy, with 208 billion transistors per GPU -- or nearly 15 trillion per GB200 NVL72 rack system -- NVIDIA NVDA engineers are building the roadmap for Rubin and Feynman systems with likely orders of magnitude greater power. This is something I've talked about repeatedly for the past year: Wall Street analysts and investors are vastly underestimating the potential of the AI economy and the upgrades in infrastructure that need to occur to support self-driving cars, humanoid robots, and other autonomous machines. And this doesn't include sovereign nation-states that need to build their own AI infrastructure for security and growth.

If you ever need clarity about the AI Revolution, or just to recalibrate your expectations and convictions, there is one place you need to visit: the NVIDIA Newsroom -- especially around a GPU Tech Conference (GTC). (I show you where in the video.) For last week's Paris GTC, they rolled out 6 press releases and 19 blogs covering as many new innovations and partnerships across industry, enterprise, science and healthcare.

Nobody Wanted AI GPUs in 2016

Jensen also retells the story of the first DGX-1 in 2016. It was the mini supercomputer about the size of a college dorm fridge, and it held 8 Volta GPUs with 21 billion transistors each. And nobody wanted it. Except a little startup called OpenAI. I like to use this story as an example of how NVIDIA has been in a very unique position ever since.
They don't have to find "product-market fit" like most companies. Instead, they have been inventing a stack that developers didn't know they needed. Get the whole story in the replay of last Friday's The Week in AI: The Reasoning Wars, Sam's Love Letter, Zuck's Land Grab. Even if you don't have time for the 60-minute replay, at least do a quick scroll of the comments, where I post all the relevant links to the topics we discussed. With over 25 links, you are guaranteed to find something that answers your top questions about the AI revolution!

2) The New Civil War in AI: Not Safety, But Efficacy

There are many exciting debates going on in "the revolution" right now. A recent hot conflict is over whether or not LLMs (large language models) are doing real reasoning, and even thinking. This one heated up after Apple AAPL researchers released their paper "The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models." We are amazed by the research, writing, pattern-finding and puzzle-solving of these models. But Apple researchers found some limitations where the models "give up" on large problems without enough context. And it's worth pondering whether they are simply "token prediction" machines that eventually get wrapped around their own axles. I've experienced this with some of the "vibe-coding" app developer tools like Replit and Bolt. But other analysts and papers quickly responded, surfacing the "limitations" of the Apple research and suggesting that the imposed cost limits were the defining factor in the models giving up. One of the rebuttal papers was titled "The Illusion of the Illusion of Thinking." Again, all these links are in the comments section of The Week in AI.

3) Google Offers Buyouts: AI Headcount Crunch Beginning?

My third topic was once again about the employment impact of generative AI and agentic AI being adopted in corporations.
I ran a query on ChatGPT for the "top 100 jobs most likely to be disrupted" in the next 3 years. You can find the link in the comments of the X Space. Another tangible angle on job displacement was the revolutionary ad during the NBA Finals by the prediction market platform Kalshi. It was created using the new Veo 3 video generator from Google by a filmmaker named PJ Ace. Ethan and I discussed how this innovation is certain to disrupt advertising, marketing, and film, as the machines can do in minutes what used to take a team of people weeks. And wait until you see the new Veo 3 ad from a Los Angeles dentist that is taking social media by storm. We'll talk about that in this Friday's Space.

Welcome to the Machine

But the most eye-opening news flash for me was the story on a company called Mechanize. While lots of job displacement will happen organically, this outfit is like a mercenary going after headcount. The New York Times titled its article "This A.I. Company Wants to Take Your Job." And here's how an X post described the piece about the startup that wants to automate white-collar work "as fast as possible"...

"Mechanize wants to abolish all jobs. They make no secret of this. They are developing an AI program that is extremely promising and is being financed by everyone from Google to Stripe."

Then there is Anthropic co-founder Ben Mann saying we'll know AI is transformative when it passes the "Economic Turing Test": Give an AI agent a job for a month. Let the hiring manager choose: human or machine? When they pick the machine more often than not, we've crossed the threshold.

I have several posts in the comments of "The Week in AI" X Space on the employment wars. Plus, just about every post is from a particular source of AI insight or expertise whose account you should be following on X.

4) Marc Andreessen: "All incumbents are gonna get nuked. Everything gets rebuilt."

Translation: AI isn't an economic upgrade. It's a total reset.
Which brings me to my favorite part of our Friday X Space...

Cooker's RANT of the WEEK: "The Magical AI Transformation Won't Be So Gentle"

Here I take the other side of Sam Altman's blog post from last week titled "The Gentle Singularity." I call it his "love letter" not to make fun of him, but to highlight his optimism in the face of brewing storms. A few weeks ago it was Anthropic CEO Dario Amodei warning us about the rapid disruption of work and its impacts on citizens and families, not just the economy. Then the old wise man of AI, Geoffrey Hinton, shared these sentiments in a recent interview: the best-case future is a "symbiosis between people and AI," where machines handle the mundane and humans live more interesting lives. But in the kind of society we have now, he warns, AI won't free most people. It will concentrate power, and as massive productivity increases create joblessness, it will mostly benefit the rich.

This sober view instantly made me think of the 2016 book by Yuval Noah Harari, Homo Deus, in which the historian described how technology usually gets concentrated in the hands of the rich and powerful. It's just how economics works, no matter the political flavor. In this way, AI can move quickly beyond issues of personal safety to those of economic security. In the X Space replay and the comments below it, I discuss the implications of "post-labor economics" as well as share more expert resources on these topics. Be sure to catch the replay of The Week in AI to hear my sense of the "not-so-gentle" transition we are headed into.

5) Apple WWDC: The Non-Event of the Week in AI

For what to expect (or not) from Apple in AI innovation, I always turn to Robert Scoble on X @Scobleizer. Here were some of his summary posts...

Cynical take on Apple's WWDC: just doing things Microsoft did back in 2003. Liquid glass. Menus on tablets.

Dark take: it's way behind in AI, and didn't demonstrate any attempt to catch up.
Light take: Lots of new AI features, like your phone will wait on hold for you now.

Hopeful take: the new design joins Apple Vision Pro into its ecosystem, showing that the Apple Vision Pro is the future of Apple.

Scoble adds: I really hate the recorded product demos and the old people showing new features and attempting to be "hip." On a more Apple-positive note, Scoble is looking forward to the next devices, which should be coming in the AR space: later this year both Apple and Google are introducing heavyweight category wearables, lighter than the first Vision Pro. We will judge them by who has the best AI inside. That is more important than resolution. Google, today, looks like it is way ahead and pulling further away because this is a game of exponents. I will buy both anyway. :-) (end of @Scobleizer rants)

Many experts are sensing that Alphabet GOOGL is "firing on all cylinders across AI," as we've discussed previously. From Gemini 2.5 Pro and the astonishing new Veo 3 to building AI capabilities with their own TPUs (instead of relying on NVIDIA GPUs), they're the only vertically integrated player across all realms of tech. Google will probably also figure out the shift from classic search to generative search, as Daniel Newman of the Futurum technology research group says. Reports of Google's demise have been greatly exaggerated, according to @DanielNewmanUV, and I wish I had been listening before I sold my shares on the last "search is dead" scare.

6) Zuck Splashes the Pot with $14.3 Billion

Meta Platforms META plunked down that amount for only 49% of a private company called Scale AI. But the price tag made it the biggest pure-AI acquisition, following OpenAI's $6 billion purchase of Jony Ive's company. Just like Sam wasn't waiting around to find out what AI-native device Apple will build, so too Zuck isn't waiting around for permission to have access to the premier company in the data supply chain -- what some are calling the oil refinery of the AI economy.
What does that mean? Well, if you think about data as various grades of crude oil, it needs to be cleaned and prepped in a number of ways before it can be "mined and modeled" for quality results. That's where Scale AI comes in with data prep and labeling, because major AI models need structured and labeled training data to generate knowledge tokens, insights, and deep learning.

Scale AI is a San Francisco-based artificial intelligence company founded in 2016 by Alexandr Wang and Lucy Guo. The company specializes in providing high-quality data labeling, annotation, and model evaluation services that are essential for training advanced AI models, including large language models (LLMs) and generative AI systems. Scale AI is known for its robust data engine, which powers AI development for leading tech firms, government agencies, and startups worldwide. Its research division, the Safety, Evaluation and Alignment Lab (SEAL), focuses on evaluating and aligning AI models for safety and reliability.

7) AMD Unveils AI Server Rack, Sam on Stage with Lisa

I am still shaking my head at all the stuff that happened last week! As if all of the above wasn't enough, Advanced Micro Devices AMD held its annual Advancing AI conference last Thursday with a product roadmap for hyperscale inferencing that caught investor attention. In addition to leaps forward in performance for the existing Instinct MI350 Series GPU systems, AMD CEO Lisa Su unveiled the Helios AI rack-scale architecture supporting up to 72 MI400 GPUs, with 432GB of HBM4 memory per GPU and 19.6 TB/sec bandwidth. Available in 2026, this is clearly an answer to NVIDIA's GB200/300 series rack systems.

AI Market Growth: CEO Lisa Su projected an 80% increase in AI inference demand by 2026, driven by the rapid adoption and expansion of AI applications in enterprise and cloud environments.
Roadmap: AMD reaffirmed its commitment to an annual cadence of AI chip releases, with the MI400 and MI450 series already in development and expected to challenge Nvidia's flagship offerings in 2026 and beyond.

And then Sam Altman showed up during Lisa's keynote. Since he clearly can't get enough compute or GPUs, he's as tight with Lisa as he is with Jensen. Lisa welcomed the founder and CEO of OpenAI as a key design partner for AMD's upcoming MI450 GPU, one who will help shape the next generation of AMD's AI hardware. OpenAI will use AMD GPUs and Helios servers for advanced AI workloads, including ChatGPT. And AMD's other happy customers continue to come back for more, with Meta deploying AMD Instinct MI300X GPUs for Llama 3/4 inference and collaborating on future MI350/MI400 platforms. Meanwhile, Microsoft Azure runs proprietary and open-source models on AMD Instinct MI300X GPUs in production, and Oracle Cloud Infrastructure will deploy zettascale AI clusters with up to 131,072 MI355X GPUs, offering massive AI compute capacity to customers. This event made AMD shares a clear buy last week -- and this week if you can still grab some under $130!

OLD RANT: The Fundamental Difference

Finally, did you hear what another OpenAI co-founder said at the University of Toronto commencement address? Ilya Sutskever, the OpenAI architect and deep learning pioneer who in 2024 started his own model firm, Safe Superintelligence, spoke these words to the new grads...

"The day will come when AI will do all the things we can do. The reason is the brain is a biological computer, so why can't the digital computer do the same things? It's funny that we are debating if AI can truly think or give the illusion of thinking, as if our biological brain is superior or fundamentally different from a digital brain."

I had to dig out my old rant about the fundamental difference(s) between human brains and computer "thinking."
If you haven't heard me on this, you owe it to yourself, so you can easily explain the differences to other "intelligence experts" telling you how consciousness works.

Bottom line: To stay informed in AI, listen to The Week in AI replay, or just go to that post to see all the links and sources. And be sure to follow me on X @KevinBCook so you see the announcement for the new live Space every Friday.

NVIDIA (NasdaqGS:NVDA) Collaborates With Tech Soft 3D And Trend Micro For AI Solutions

Yahoo

16 hours ago

  • Business
  • Yahoo

NVIDIA (NasdaqGS:NVDA) Collaborates With Tech Soft 3D And Trend Micro For AI Solutions

NVIDIA recently announced a collaboration with Tech Soft 3D and a partnership with Dell Technologies and Trend Micro, focusing on enhancing interoperability and AI-powered cybersecurity solutions, respectively. These strategic moves likely supported the company's notable 23% price increase over the last quarter. Additional factors such as the company's Q1 earnings report, which revealed significant revenue and net income growth, might have also bolstered this trend, despite a broadly flat market. NVIDIA's proactive expansions in AI and digital innovation align with industry growth forecasts, contributing positively to its market performance.

The recent collaborations NVIDIA announced, focusing on enhancing AI-powered cybersecurity and interoperability solutions, could substantially impact the company's future revenue and earnings potential. These partnerships aim to expand NVIDIA's presence in the cybersecurity and AI sectors, aligning with trends that support growth in data center and AI workloads. The quarterly price increase of 23% is influenced by these strategic alliances, adding to the company's robust performance over the past five years, where total returns reached a very large percentage. Over this longer period, NVIDIA's shares exhibited phenomenal growth, outpacing many within the broader market. Over the past year, NVIDIA's returns contrasted with the broader US market, which saw a more modest 9.9% gain.
Analysts anticipate these partnerships with Tech Soft 3D and Dell Technologies, combined with NVIDIA's expansion into the automotive sector through alliances with Toyota and Uber, will positively influence revenue and earnings forecasts. With revenue at US$148.52 billion and earnings at US$76.77 billion, the projected growth trends appear promising. As analysts predict future growth trajectories, the current share price indicates expectations of further price appreciation. Based on the consensus analyst price target of US$172.65, the share price reflects a discount, highlighting potential upside. This price movement demonstrates optimism around the anticipated financial performance, driven by NVIDIA's strategic initiatives and continued innovation across its key sectors. Our valuation report unveils the possibility NVIDIA's shares may be trading at a premium.

This article by Simply Wall St is general in nature. We provide commentary based on historical data and analyst forecasts only, using an unbiased methodology, and our articles are not intended to be financial advice. It does not constitute a recommendation to buy or sell any stock, and does not take account of your objectives or your financial situation. We aim to bring you long-term focused analysis driven by fundamental data. Note that our analysis may not factor in the latest price-sensitive company announcements or qualitative material. Simply Wall St has no position in any stocks mentioned. Companies discussed in this article include NasdaqGS:NVDA. This article was originally published by Simply Wall St.

Schneider Electric Launches New Data Centre Solutions to Meet Challenges of High-Density AI and Accelerated Compute Applications

Yahoo

18 hours ago

  • Business
  • Yahoo

Schneider Electric Launches New Data Centre Solutions to Meet Challenges of High-Density AI and Accelerated Compute Applications

- Innovative prefabricated data centre architecture provides critical IT infrastructure for high-density computing clusters.
- New rack PDUs and rack systems are built for increased size and weight support, and feature direct-to-chip liquid cooling.
- Schneider Electric launches new Open Compute Project (OCP) inspired rack system to support NVIDIA MGX architecture.

MISSISSAUGA, Ontario, June 19, 2025--(BUSINESS WIRE)--Schneider Electric, the leader in the digital transformation of energy management and automation, today announced new data centre solutions specifically engineered to meet the intensive demands of next-generation AI cluster architectures. Evolving its EcoStruxure™ Data Centre Solutions portfolio, Schneider Electric introduced a Prefabricated Modular EcoStruxure Pod Data Centre solution that consolidates infrastructure for liquid cooling, high-power busway and high-density NetShelter Racks. In addition, EcoStruxure Rack Solutions incorporate detailed rack configurations and frameworks designed to accelerate High Performance Computing (HPC) and AI data centre deployments. The new EcoStruxure Pod Data Centre and EcoStruxure Rack Solutions are now available globally.

Organizations are deploying AI clusters and grappling with extreme rack power densities, which are projected to reach 1MW and beyond. Schneider Electric's new line of solutions equips customers with integrated, data-validated, and easily scaled white space solutions that address new challenges in pod and rack design, power distribution and thermal management.

"The sheer power and density required for AI clusters create bottlenecks that demand a new approach to data centre architecture," said Himamshu Prasad, senior vice president of EcoStruxure IT, Transactional & Edge and Energy Storage Centre of Excellence at Schneider Electric.
"Customers need integrated infrastructure solutions that not only handle extreme thermal loads and dynamic power profiles but also deploy rapidly, scale predictably, and operate efficiently and sustainably. Our innovative next-generation EcoStruxure solutions that support NVIDIA technology address these critical requirements head on." New Product Overview Prefabricated Modular EcoStruxure Pod Data Centre: Prefabricated, scalable pod architecture enables operators to deploy high-density racks, supporting pods up to 1MW and beyond, at scale. Engineered-to-order, the new pod infrastructure offers flexibility and supports liquid cooling, power busway, complex cabling, as well as hot aisle containment, InRow and rear door heat exchanger cooling architectures. The Prefabricated Modular EcoStruxure Pod Data Centre is now shipping pre-designed and pre-assembled with all components for rapid deployment to support high-density workloads. EcoStruxure Rack Solutions: These reliable, high-density rack systems adapt to EIA, ORV3 and NVIDIA MGX modular design standards approved by leading IT chip and server manufacturers. Configurations accommodate a wide array of power and cooling distribution schemes and employ Motivair by Schneider Electric in-rack liquid cooling, as well as new and expanded rack and power distribution products, including: NetShelter SX Advanced Enclosure: This new line features taller, deeper, and stronger racks to support increased weight, cabling and infrastructure. NetShelter SX Advanced features a reinforced shipload rating and is safeguarded with shock packaging, ensuring secure transport of AI servers and liquid cooling systems. NetShelter Rack PDU Advanced: These power distribution units have been updated to support the high-current power needs of AI servers. Designed for efficient rack layouts, the NetShelter Rack PDU Advanced offers compact vertical and horizontal models with higher counts of dedicated circuits. 
Intelligent operational features, now enabled by Schneider Electric's Network Management Card, enhance security and provide seamless integration with EcoStruxure IT.

NetShelter Open Architecture: This Open Compute Project (OCP) inspired rack architecture is available as a configure-to-order solution and includes open rack standards, power shelf and in-rack busbar. As part of this, a new Schneider Electric rack system has also been developed to support the NVIDIA GB200 NVL72 system that utilizes the NVIDIA MGX architecture in its rack design, integrating Schneider Electric into NVIDIA's HGX and MGX ecosystems for the first time.

"Schneider Electric's innovative solutions provide the reliable, scalable infrastructure our customers need to accelerate their AI initiatives," said Vladimir Troy, vice president of data centre engineering, operations, enterprise software and cloud services at NVIDIA. "Together, we're addressing the rapidly growing demands of AI factories — from kilowatt to megawatt-scale racks — and delivering future-proof solutions that maximize scalability, density and efficiency."

The new solutions and suite of engineered data centre reference designs equip data centre operators and Schneider Electric's partner ecosystem with the infrastructure and information needed to deploy powerful AI clusters faster and more reliably while addressing common barriers to adoption, including:

- Reliable power and cooling for AI workloads
- Deployment complexity and risk
- Speed to market and supply chain resilience
- Skills gap in managing advanced infrastructure

These enhanced EcoStruxure offerings add to Schneider Electric's robust line of fully integrated, end-to-end AI infrastructure solutions — spanning advanced hardware, intelligent software, services such as EcoCare™ and EcoConsult for Data Centres, and strategic industry partnerships with key IT players. Schneider Electric is the partner of choice for building efficient, resilient, scalable and AI-optimized data centres.
About Schneider Electric Schneider's purpose is to create Impact by empowering all to make the most of our energy and resources, bridging progress and sustainability for all. At Schneider, we call this Life Is On. Our mission is to be the trusted partner in Sustainability and Efficiency. We are a global industrial technology leader bringing world-leading expertise in electrification, automation and digitization to smart industries, resilient infrastructure, future-proof data centres, intelligent buildings, and intuitive homes. Anchored by our deep domain expertise, we provide integrated end-to-end lifecycle AI-enabled Industrial IoT solutions with connected products, automation, software and services, delivering digital twins to enable profitable growth for our customers. We are a people company with an ecosystem of 150,000 colleagues and more than a million partners operating in over 100 countries to ensure proximity to our customers and stakeholders. Learn more at or follow them on Instagram, X, Facebook, and LinkedIn at @SchneiderElectricCA. For media resources, visit Schneider Electric's online newsroom, View source version on Contacts For more information: Jodi Smith-MeisnerSchneider Electric Samiha FarihaSchneider Electric Canada1-647-268-6687sfariha@ Error in retrieving data Sign in to access your portfolio Error in retrieving data Error in retrieving data Error in retrieving data Error in retrieving data

Schneider Electric Launches New Data Centre Solutions to Meet Challenges of High-Density AI and Accelerated Compute Applications

National Post

18 hours ago



- Innovative prefabricated data centre architecture provides critical IT infrastructure for high-density computing clusters.
- New rack PDUs and rack systems are built for increased size and weight support and feature direct-to-chip liquid cooling.
- Schneider Electric launches a new Open Compute Project (OCP)-inspired rack system to support the NVIDIA MGX architecture.

MISSISSAUGA, Ontario — Schneider Electric, the leader in the digital transformation of energy management and automation, today announced new data centre solutions specifically engineered to meet the intensive demands of next-generation AI cluster architectures. Evolving its EcoStruxure™ Data Centre Solutions portfolio, Schneider Electric introduced a Prefabricated Modular EcoStruxure Pod Data Centre solution that consolidates infrastructure for liquid cooling, high-power busway and high-density NetShelter Racks. In addition, EcoStruxure Rack Solutions incorporate detailed rack configurations and frameworks designed to accelerate High Performance Computing (HPC) and AI data centre deployments. The new EcoStruxure Pod Data Centre and EcoStruxure Rack Solutions are now available globally.

Organizations deploying AI clusters are grappling with extreme rack power densities, which are projected to reach 1MW and beyond. Schneider Electric's new line of solutions equips customers with integrated, data-validated, and easily scaled white space solutions that address new challenges in pod and rack design, power distribution and thermal management.

'The sheer power and density required for AI clusters create bottlenecks that demand a new approach to data centre architecture,' said Himamshu Prasad, senior vice president of EcoStruxure IT, Transactional & Edge and Energy Storage Centre of Excellence at Schneider Electric.
'Customers need integrated infrastructure solutions that not only handle extreme thermal loads and dynamic power profiles but also deploy rapidly, scale predictably, and operate efficiently and sustainably. Our innovative next-generation EcoStruxure solutions that support NVIDIA technology address these critical requirements head on.'

New Product Overview

Prefabricated Modular EcoStruxure Pod Data Centre: This prefabricated, scalable pod architecture enables operators to deploy high-density racks at scale, supporting pods up to 1MW and beyond. Engineered to order, the new pod infrastructure offers flexibility and supports liquid cooling, power busway and complex cabling, as well as hot aisle containment, InRow and rear door heat exchanger cooling architectures. The Prefabricated Modular EcoStruxure Pod Data Centre now ships pre-designed and pre-assembled with all components for rapid deployment to support high-density workloads.

EcoStruxure Rack Solutions: These reliable, high-density rack systems adapt to EIA, ORV3 and NVIDIA MGX modular design standards approved by leading IT chip and server manufacturers. Configurations accommodate a wide array of power and cooling distribution schemes and employ Motivair by Schneider Electric in-rack liquid cooling, as well as new and expanded rack and power distribution products, including:

NetShelter SX Advanced Enclosure: This new line features taller, deeper and stronger racks to support increased weight, cabling and infrastructure. NetShelter SX Advanced features a reinforced shipload rating and is safeguarded with shock packaging, ensuring secure transport of AI servers and liquid cooling systems.

NetShelter Rack PDU Advanced: These power distribution units have been updated to support the high-current power needs of AI servers. Designed for efficient rack layouts, the NetShelter Rack PDU Advanced offers compact vertical and horizontal models with higher counts of dedicated circuits.
Intelligent operational features, now enabled by Schneider Electric's Network Management Card, enhance security and provide seamless integration with EcoStruxure IT.

NetShelter Open Architecture: This Open Compute Project (OCP)-inspired rack architecture is available as a configure-to-order solution and includes open rack standards, a power shelf and an in-rack busbar. As part of this, a new Schneider Electric rack system has also been developed to support the NVIDIA GB200 NVL72 system, which utilizes the NVIDIA MGX architecture in its rack design, integrating Schneider Electric into NVIDIA's HGX and MGX ecosystems for the first time.

'Schneider Electric's innovative solutions provide the reliable, scalable infrastructure our customers need to accelerate their AI initiatives,' said Vladimir Troy, vice president of data centre engineering, operations, enterprise software and cloud services at NVIDIA. 'Together, we're addressing the rapidly growing demands of AI factories, from kilowatt- to megawatt-scale racks, and delivering future-proof solutions that maximize scalability, density and efficiency.'

The new solutions and suite of engineered data centre reference designs equip data centre operators and Schneider Electric's partner ecosystem with the infrastructure and information needed to deploy powerful AI clusters faster and more reliably while addressing common barriers to adoption, including:

- Reliable power and cooling for AI workloads
- Deployment complexity and risk
- Speed to market and supply chain resilience
- Skills gap in managing advanced infrastructure

These enhanced EcoStruxure offerings add to Schneider Electric's robust line of fully integrated, end-to-end AI infrastructure solutions, spanning advanced hardware, intelligent software, services such as EcoCare™ and EcoConsult for Data Centres, and strategic industry partnerships with key IT players. Schneider Electric is the partner of choice for building efficient, resilient, scalable and AI-optimized data centres.
About Schneider Electric

Schneider's purpose is to create Impact by empowering all to make the most of our energy and resources, bridging progress and sustainability for all. At Schneider, we call this Life Is On. Our mission is to be the trusted partner in Sustainability and Efficiency. We are a global industrial technology leader bringing world-leading expertise in electrification, automation and digitization to smart industries, resilient infrastructure, future-proof data centres, intelligent buildings, and intuitive homes. Anchored by our deep domain expertise, we provide integrated end-to-end lifecycle AI-enabled Industrial IoT solutions with connected products, automation, software and services, delivering digital twins to enable profitable growth for our customers. We are a people company with an ecosystem of 150,000 colleagues and more than a million partners operating in over 100 countries to ensure proximity to our customers and stakeholders. Learn more at or follow them on Instagram, X, Facebook, and LinkedIn at @SchneiderElectricCA. For media resources, visit Schneider Electric's online newsroom.

Contacts

For more information:
Jodi Smith-Meisner, Schneider Electric
Samiha Fariha, Schneider Electric Canada, 1-647-268-6687, sfariha@
