
F5 expands performance, multi-tenancy, and security capabilities for fast-evolving AI landscape with NVIDIA
F5, the global leader in delivering and securing every app and API, today announced new capabilities for F5 BIG-IP Next for Kubernetes accelerated with NVIDIA BlueField-3 DPUs and the NVIDIA DOCA software framework, underscored by customer Sesterce's validation deployment. Sesterce is a leading European operator specializing in next-generation infrastructures and sovereign AI, designed to meet the needs of accelerated computing and artificial intelligence.
Extending the F5 Application Delivery and Security Platform, BIG-IP Next for Kubernetes running natively on NVIDIA BlueField-3 DPUs delivers high-performance traffic management and security for large-scale AI infrastructure, unlocking greater efficiency, control, and performance for AI applications. In tandem with the compelling performance advantages announced along with general availability earlier this year, Sesterce has successfully completed validation of the F5 and NVIDIA solution across a number of key capabilities, including the following areas:
- Enhanced performance, multi-tenancy, and security to meet cloud-grade expectations, initially showing a 20% improvement in GPU utilization.
- Integration with NVIDIA Dynamo and KV Cache Manager to reduce latency for large language model (LLM) inference and reasoning systems and to optimize GPU and memory resources.
- Smart LLM routing on BlueField DPUs, running effectively with NVIDIA NIM microservices for workloads requiring multiple models, providing customers the best of all available models.
- Scaling and securing Model Context Protocol (MCP) traffic, including reverse proxy capabilities and protections for more scalable and secure LLMs, enabling customers to swiftly and safely utilize the power of MCP servers.
- Powerful data programmability with robust F5 iRules capabilities, allowing rapid customization to support AI applications and evolving security requirements.
'Integration between F5 and NVIDIA was enticing even before we conducted any tests,' said Youssef El Manssouri, CEO and Co-Founder at Sesterce. 'Our results underline the benefits of F5's dynamic load balancing with high-volume Kubernetes ingress and egress in AI environments. This approach empowers us to more efficiently distribute traffic and optimize the use of our GPUs while allowing us to bring additional and unique value to our customers. We are pleased to see F5's support for a growing number of NVIDIA use cases, including enhanced multi-tenancy, and we look forward to additional innovation between the companies in supporting next-generation AI infrastructure.'
Highlights of new solution capabilities include:
LLM Routing and Dynamic Load Balancing with BIG-IP Next for Kubernetes
With this collaborative solution, simple AI-related tasks in generative AI workloads can be routed to less expensive, lightweight LLMs, while advanced models are reserved for complex queries. This level of customizable intelligence also enables routing functions to leverage domain-specific LLMs, improving output quality and significantly enhancing customer experiences. F5's advanced traffic management ensures queries are sent to the most suitable LLM, lowering latency and improving time to first token.
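The tiered routing pattern described above can be sketched in a few lines of Python. This is a minimal illustration only, not F5's implementation: the model names, cost tiers, and complexity heuristic are all hypothetical, and a production router on a BlueField-3 DPU would classify traffic far more efficiently than this word-level check.

```python
# Minimal sketch of complexity-based LLM routing (hypothetical tiers/heuristic).

LIGHT_MODEL = "light-llm"        # inexpensive, lightweight model for simple tasks
ADVANCED_MODEL = "advanced-llm"  # reserved for complex queries

# Crude proxy for "complex query": length plus a few trigger phrases.
COMPLEX_HINTS = ("prove", "analyze", "step by step", "compare")

def route(query: str) -> str:
    """Pick a model tier from a simple complexity heuristic."""
    is_long = len(query.split()) > 50
    has_hint = any(hint in query.lower() for hint in COMPLEX_HINTS)
    return ADVANCED_MODEL if (is_long or has_hint) else LIGHT_MODEL

print(route("What time is it in Tokyo?"))           # light-llm
print(route("Analyze this contract step by step"))  # advanced-llm
```

The point of the pattern is that the classification decision happens before any GPU is touched, so cheap queries never occupy an expensive model.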
'Enterprises are increasingly deploying multiple LLMs to power advanced AI experiences—but routing and classifying LLM traffic can be compute-heavy, degrading performance and user experience,' said Kunal Anand, Chief Innovation Officer at F5. 'By programming routing logic directly on NVIDIA BlueField-3 DPUs, F5 BIG-IP Next for Kubernetes is the most efficient approach for delivering and securing LLM traffic. This is just the beginning. Our platform unlocks new possibilities for AI infrastructure, and we're excited to deepen co-innovation with NVIDIA as enterprise AI continues to scale.'
Optimizing GPUs for Distributed AI Inference at Scale with NVIDIA Dynamo and KV Cache Integration
Earlier this year, NVIDIA Dynamo was introduced, providing a supplementary framework for deploying generative AI and reasoning models in large-scale distributed environments. NVIDIA Dynamo streamlines the complexity of running AI inference in distributed environments by orchestrating tasks like scheduling, routing, and memory management to ensure seamless operation under dynamic workloads. Offloading specific operations from CPUs to BlueField DPUs is one of the core benefits of the combined F5 and NVIDIA solution. With F5, the Dynamo KV Cache Manager feature can intelligently route requests based on capacity, using Key-Value (KV) caching to accelerate generative AI use cases by speeding up processes based on retaining information from previous operations (rather than requiring resource-intensive recomputation). From an infrastructure perspective, organizations storing and reusing KV cache data can do so at a fraction of the cost of using GPU memory for this purpose.
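The reuse-over-recompute idea behind KV caching can be illustrated with a toy cache. This sketch is purely conceptual: real KV caching retains attention key/value tensors inside the model, and Dynamo's KV Cache Manager additionally tiers that state across GPU memory and cheaper storage, none of which appears here.

```python
# Toy illustration of KV-cache reuse: state for a shared prompt prefix is
# computed once and retained, instead of recomputed for every request.

cache: dict[str, str] = {}
compute_calls = 0

def expensive_prefill(prefix: str) -> str:
    """Stand-in for the costly prefill pass over a prompt prefix."""
    global compute_calls
    compute_calls += 1
    return f"state({prefix})"

def get_prefill(prefix: str) -> str:
    if prefix not in cache:           # miss: pay the compute cost once
        cache[prefix] = expensive_prefill(prefix)
    return cache[prefix]              # hit: reuse the retained state

get_prefill("You are a helpful assistant.")
get_prefill("You are a helpful assistant.")  # served from cache, no recompute
print(compute_calls)  # 1
```

The infrastructure saving follows the same shape: the second request costs a lookup, not a recomputation, and the cached state can live in memory cheaper than GPU HBM.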
'BIG-IP Next for Kubernetes accelerated with NVIDIA BlueField-3 DPUs gives enterprises and service providers a single point of control for efficiently routing traffic to AI factories to optimize GPU efficiency and to accelerate AI traffic for data ingestion, model training, inference, RAG, and agentic AI,' said Ash Bhalgat, Senior Director of AI Networking and Security Solutions, Ecosystem and Marketing at NVIDIA. 'In addition, F5's support for multi-tenancy and enhanced programmability with iRules continue to provide a platform that is well-suited for continued integration and feature additions such as support for NVIDIA Dynamo Distributed KV Cache Manager.'
Improved Protection for MCP Servers with F5 and NVIDIA
Model Context Protocol (MCP) is an open protocol developed by Anthropic that standardizes how applications provide context to LLMs. Deploying the combined F5 and NVIDIA solution in front of MCP servers allows F5 technology to serve as a reverse proxy, bolstering security capabilities for MCP solutions and the LLMs they support. In addition, the full data programmability enabled by F5 iRules promotes rapid adaptation and resilience for fast-evolving AI protocol requirements, as well as additional protection against emerging cybersecurity risks.
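One concrete job of a reverse proxy in front of an MCP server is screening inbound messages before they reach the backend. The sketch below shows that idea in Python against MCP's JSON-RPC 2.0 framing; the method allowlist and the blocked method name are hypothetical policy choices, not part of the F5 solution or the MCP specification.

```python
# Sketch of a request-screening policy a reverse proxy might enforce in
# front of an MCP server. Allowlist contents are a hypothetical policy.
import json

ALLOWED_METHODS = {"initialize", "tools/list", "tools/call"}

def screen(raw: bytes) -> bool:
    """Return True if the JSON-RPC message may be forwarded upstream."""
    try:
        msg = json.loads(raw)
    except (json.JSONDecodeError, UnicodeDecodeError):
        return False                  # drop malformed payloads outright
    return msg.get("jsonrpc") == "2.0" and msg.get("method") in ALLOWED_METHODS

print(screen(b'{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}'))  # True
print(screen(b'{"jsonrpc": "2.0", "method": "admin/shutdown"}'))       # False
```

In the F5 deployment this kind of per-message policy is where iRules programmability comes in, letting operators adapt the screening logic as the protocol and threat landscape evolve.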
'Organizations implementing agentic AI are increasingly relying on MCP deployments to improve the security and performance of LLMs,' said Greg Schoeny, SVP, Global Service Provider at World Wide Technology. 'By bringing advanced traffic management and security to extensive Kubernetes environments, F5 and NVIDIA are delivering integrated AI feature sets—along with programmability and automation capabilities—that we aren't seeing elsewhere in the industry right now.'
F5 BIG-IP Next for Kubernetes deployed on NVIDIA BlueField-3 DPUs is generally available now. For additional technology details and deployment benefits, go to www.f5.com; further details can also be found in a companion blog from F5.