
iGenius, Vertiv and NVIDIA Partner on AI Supercomputer Colosseum
Vertiv has announced a groundbreaking collaboration with NVIDIA and AI pioneer iGenius to deploy Colosseum, one of the world's largest NVIDIA DGX AI supercomputers built with NVIDIA Grace Blackwell Superchips. Set for deployment in Italy in 2025, Colosseum will redefine the digital landscape as a first-of-its-kind sovereign AI data center for regulated workloads.
Designed to address the demands of highly regulated industries such as finance, healthcare, and public administration, Colosseum will embody a fusion of transformative computational power, energy efficiency, and data sovereignty, while balancing stringent data security requirements.
Colosseum, an NVIDIA DGX SuperPOD, is the latest advancement in a long-standing collaboration between Vertiv and NVIDIA. It is strategically positioned in southern Italy to address regional government requirements, marking a significant milestone in Europe's AI landscape.
'Harnessing the power of NVIDIA's cutting-edge accelerated computing and Vertiv's innovative infrastructure expertise, Colosseum stands as a testament to the transformative potential of sovereign AI,' said Uljan Sharka, CEO of iGenius. 'We're demonstrating how modular systems and software-specific infrastructure enable a new era of mission-critical AI.'
Modular by Design. Engineered for Efficiency.
Colosseum combines Vertiv's infrastructure management expertise, NVIDIA accelerated computing, and the NVIDIA Omniverse Blueprint for AI factory design and operations. The deployment will leverage Vertiv's 360AI reference architecture, a data center power and cooling infrastructure platform designed for the NVIDIA GB200 NVL72, which was co-developed with NVIDIA and released in late 2024.
This modular and scalable system positions iGenius to deploy one of the fastest hyperscale AI supercomputers and one of the largest supporting sovereign AI.
Vertiv has also extended its reference design library on its AI Hub with the co-developed data center power and cooling design for NVIDIA GB300 NVL72. By staying one GPU generation ahead, Vertiv enables customers to plan infrastructure before silicon lands, with deployment-ready designs that anticipate increased rack power densities and repeatable templates for AI factories at scale.
'The unit of compute is no longer the chip — it's the system, the AI Factory,' said Karsten Winther, president of Vertiv, EMEA. 'Through our collaboration with NVIDIA and visionary AI player iGenius, we are proving the efficiency and system-level maturity of delivering the data center as a unit of compute, unlocking rapid adoption of AI-native power and cooling infrastructure as a catalyst for AI at scale.'
Simulate with NVIDIA Omniverse. Deliver with Speed.
'AI is reshaping the data center landscape, demanding new levels of scale, efficiency and adaptability for global AI factories,' said Charlie Boyle, vice president of DGX platforms at NVIDIA. 'With physically-based digital twins enabled by NVIDIA Omniverse technologies and Vertiv's modular design for the iGenius DGX SuperPOD data center, Colosseum sets a new standard for building supercomputers for the era of AI.'
Colosseum was co-designed as a physically accurate digital twin developed with NVIDIA Omniverse technologies, enabling real-time collaboration between Vertiv, iGenius and NVIDIA to accelerate system-level decisions and compress the design-to-deploy cycle. The Omniverse Blueprint enables real-time simulation, letting engineers test and refine designs instantly instead of waiting on lengthy simulation runs, cutting simulation times from months to hours. Vertiv's manufacturing and factory integration processes reduce deployment time by up to 50% compared to traditional data center builds.
This collaborative 3D design process validated the entire infrastructure stack, enabling predictive modeling of thermal load, electrical flow, and site layout, from 132 kW liquid-cooled racks to modular power systems, before a single module was built.
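As a rough illustration of the thermal modeling involved (a back-of-envelope sketch, not Vertiv's actual methodology), the coolant flow needed to carry away a 132 kW rack's heat at an assumed loop temperature rise follows from Q = ṁ·c·ΔT:

```python
# Back-of-envelope liquid-cooling sizing for a single rack.
# Illustrative only; real designs account for coolant type,
# pressure drop, redundancy, and facility water temperatures.

RACK_POWER_W = 132_000          # 132 kW rack, per the article
SPECIFIC_HEAT_WATER = 4186.0    # J/(kg*K)
DELTA_T_K = 10.0                # assumed supply/return temperature rise

# Q = m_dot * c * dT  =>  m_dot = Q / (c * dT)
mass_flow_kg_s = RACK_POWER_W / (SPECIFIC_HEAT_WATER * DELTA_T_K)
flow_l_min = mass_flow_kg_s * 60.0  # ~1 kg of water per litre

print(f"required flow: {mass_flow_kg_s:.2f} kg/s (~{flow_l_min:.0f} L/min)")
```

Even this toy calculation shows why rack densities at this level push designs toward liquid cooling: roughly 190 litres of water per minute per rack at a 10 K rise, far beyond what air handling can carry.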
Designed with Intelligence. Unified by Software.
Vertiv's AI-ready prefabricated modular data center solution is designed, manufactured, delivered, installed and commissioned by Vertiv. It includes power, cooling, management, monitoring, service and maintenance offerings, with power and cooling capacity supporting up to 132 kW per rack initially and the ability to scale as required for future designs. The building shell integrates prefabricated white space inside while deploying fully modular grey space outside. This approach offers exceptional scalability and energy efficiency, transforming the way data centers are built and deployed.
Colosseum will leverage NVIDIA Mission Control for data center operations and orchestration, and Vertiv Unify to simplify and synchronize building management for AI factories. Vertiv Unify provides:
● Real-time orchestration across power, cooling, and compute
● Digital twin synchronization for closed-loop optimization
● AI-ready capabilities that support autonomous decision-making
Through its integration of NVIDIA Omniverse technologies, Vertiv Unify enables real-time updates between physical systems and digital models — allowing predictive maintenance, what-if simulations, and scenario testing before operational risk occurs.
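The closed-loop pattern described above, comparing live telemetry against the twin's prediction and flagging drift before it becomes operational risk, can be sketched in miniature. All names and numbers below are hypothetical; Vertiv Unify's actual interfaces are not described in this article:

```python
# Minimal sketch of closed-loop digital-twin synchronization:
# compare live telemetry against the twin's prediction and flag
# drift for predictive maintenance. All names are hypothetical.

from dataclasses import dataclass

@dataclass
class TwinModel:
    """Predicts expected coolant return temp from rack power draw."""
    base_temp_c: float = 30.0
    degrees_per_kw: float = 0.05

    def predict_return_temp(self, rack_power_kw: float) -> float:
        return self.base_temp_c + self.degrees_per_kw * rack_power_kw

def check_drift(twin: TwinModel, rack_power_kw: float,
                measured_temp_c: float, tolerance_c: float = 2.0) -> bool:
    """Return True if the physical reading drifts from the twin."""
    expected = twin.predict_return_temp(rack_power_kw)
    return abs(measured_temp_c - expected) > tolerance_c

twin = TwinModel()
# Healthy reading: 132 kW rack, return temp near the predicted 36.6 C
print(check_drift(twin, 132.0, 36.5))   # within tolerance
# Degraded cooling: return temp runs hot, so flag for maintenance
print(check_drift(twin, 132.0, 41.0))
```

A production system would run this comparison continuously across thousands of sensors and feed confirmed drift back into the twin, which is the "closed loop" the bullet list refers to.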
The Blueprint for AI Factories Globally
Colosseum is more than a data center. It's the template for scalable, repeatable, sovereign AI factories. By combining cloud-scale density, local data control, and modular deployment, it signals the next phase of AI: where inference must be secure, fast, compliant, and distributed.
This is not a one-off project — it's a reference point. iGenius is building a blueprint with Colosseum designed to be repeated globally, with Vertiv and NVIDIA aligned on future platform support, including DGX GB300 systems and beyond. The future of sovereign AI is no longer theoretical — it's being built now.
Related Articles


Tahawul Tech
2 days ago
SandboxAQ improves drug discovery with data creation
SandboxAQ, an artificial intelligence startup, recently released a wealth of data in hopes it will speed up the discovery of new medical treatments. The goal is to help scientists predict whether a drug will bind to its target in the human body. But while the data is backed up by real-world scientific experiments, it did not come from a lab.
Instead, SandboxAQ, which has raised nearly $1 billion in venture capital, generated the data using Nvidia's chips and will feed it back into AI models that it hopes scientists can use to rapidly predict whether a small-molecule pharmaceutical will bind to the protein that researchers are targeting, a key question that must be answered before a drug candidate can move forward. For example, if a drug is meant to inhibit a biological process like the progression of a disease, scientists can use the tool to predict whether the drug molecule is likely to bind to the proteins involved in that process.
The approach is an emerging field that combines traditional scientific computing techniques with advancements in AI. In many fields, scientists have long had equations that can precisely predict how atoms combine into molecules. But even for relatively small three-dimensional pharmaceutical molecules, the potential combinations become far too vast to calculate manually, even with today's fastest computers.
So SandboxAQ's approach was to use existing experimental data to calculate about 5.2 million new, 'synthetic' three-dimensional molecules – molecules that haven't been observed in the real world, but were calculated with equations based on real-world data. That synthetic data, which SandboxAQ is releasing publicly, can be used to train AI models that can predict whether a new drug molecule is likely to stick to the protein researchers are targeting in a fraction of the time it would take to calculate it manually, while retaining accuracy.
SandboxAQ will charge money for its own AI models developed with the data, which it hopes will get results that rival running lab experiments, but virtually. 'This is a long-standing problem in biology that we've all, as an industry, been trying to solve for', said Nadia Harhen, general manager of AI simulation at SandboxAQ. 'All of these computationally generated structures are tagged to ground-truth experimental data, and so when you pick this data set and you train models, you can actually use the synthetic data in a way that's never been done before'.
Source: Reuters
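The general pattern described here, labeling data with an expensive physics-style calculation and then training a cheap model to approximate it, can be sketched as a toy. The features, scoring function, and classifier below are invented for illustration and bear no relation to SandboxAQ's actual pipeline:

```python
# Toy illustration: label data with an expensive physics-style
# calculation, then fit a cheap surrogate model on those computed
# ("synthetic") labels. Entirely illustrative; not SandboxAQ's
# method, features, or data.

import random

random.seed(0)

def physics_binds(polarity: float, size: float) -> bool:
    """Stand-in for an expensive first-principles binding calculation."""
    return (-3.0 * polarity + 1.5 * size) < -0.5

# 1. Generate synthetic training data with computed labels.
train = [(random.random(), random.random()) for _ in range(1000)]
labels = [physics_binds(p, s) for p, s in train]

# 2. Fit a deliberately simple surrogate: a single threshold on one
#    feature, chosen to best reproduce the computed labels.
def fit_threshold() -> float:
    candidates = [i / 100 for i in range(101)]
    def errors(t: float) -> int:
        return sum((p > t) != binds for (p, _), binds in zip(train, labels))
    return min(candidates, key=errors)

best_t = fit_threshold()

# 3. Evaluate the cheap surrogate on held-out molecules.
test = [(random.random(), random.random()) for _ in range(200)]
accuracy = sum((p > best_t) == physics_binds(p, s) for p, s in test) / len(test)
print(f"surrogate accuracy vs. physics labels: {accuracy:.2f}")
```

The point of the pattern is the cost asymmetry: the "physics" labeling is run once to build the dataset, after which the learned surrogate answers new queries in a fraction of the time, which is the trade-off the article describes.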


Web Release
2 days ago
VAST Data Powers Smarter, Evolving AI Agents with NVIDIA Data Flywheel
VAST Data, the AI Operating System company, announced today that it is delivering a complete data and compute platform that enables AI intelligence to continuously evolve. The VAST AI OS, combined with NVIDIA AI Enterprise, which includes NeMo microservices that power a data flywheel for continuous model improvement, creates a unified environment where AI pipelines can constantly learn, adapt, and improve. This reference workflow provides a self-optimizing foundation for scalable AI, laying the groundwork for billions of intelligent agents to fine-tune and evolve from their data and experiences.
The solution provides enterprises with a converged software platform for data management, database services, and AI compute orchestration. Additionally, the VAST AI OS AgentEngine uniquely closes the feedback loop by mapping the intricate web of agent-data interactions through production logs. This granular traceability allows the flywheel to dissect multi-step interactions and accurately identify which specific elements require adjustment to enhance outcomes, accelerating model performance and accuracy at scale.
'AI-powered businesses need thinking machines designed for a future where billions of AI agents learn from their own experiences, fine-tune in real time, and create new possibilities through collaboration,' said Jeff Denworth, Co-Founder of VAST Data. 'By unifying NVIDIA's AI software and hardware technologies within the core of the VAST AI Operating System, we are giving customers the foundation to operationalize continuous improvements in AI intelligence at scale, with the security, governance, and service delivery tools required to manage these intelligent agents and the data they rely upon.'
As AI moves from isolated projects to always-on infrastructure, businesses need systems that evolve in real time with every data point while addressing new security and governance challenges around fine-tuned models, agent interactions, and decentralized data pipelines. This collaboration makes it possible to run continuous, automated AI pipelines – from ingestion to inference to retraining – all managed within the VAST AI Operating System.
Among the first to embrace this strategy is CACEIS, one of Europe's largest asset servicing firms. In collaboration with VAST and NVIDIA, CACEIS is exploring a real-time AI platform concept designed to securely capture, transcribe, and analyze 100% of client meetings. The vision is for the system to instantly generate meeting minutes, surface actionable insights, and deliver anonymized trend data, all seamlessly integrated into its sovereign CRM. With an end-to-end security model at its foundation, the platform is being designed to safeguard client privacy and data integrity while continuously evolving through every interaction. 'AI will be a game-changer, highlighting trends in current needs by analysing meeting reports so we can better serve clients,' said Arnaud Misset, Chief Digital Officer, CACEIS.
Using VAST's AgentEngine, which leverages the NVIDIA AI-Q Blueprint, CACEIS is developing a platform proof of concept that would enable AI agents to assist relationship managers in real time and help uncover new business opportunities. Built on NVIDIA NeMo microservices and the NVIDIA data flywheel blueprint, the envisioned CACEIS AI factory would continuously capture data and insights from every customer interaction. These feedback loops are intended to drive ongoing model refinement and training, allowing the system to improve and adapt with each meeting.
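The ingestion-to-inference-to-retraining loop described here can be sketched generically. The class and function names below are hypothetical, and the "model" is a trivial lookup table; this is not the VAST or NVIDIA NeMo API:

```python
# Generic sketch of a data-flywheel loop: serve a model, log every
# interaction with feedback, and retrain on the corrections so the
# system improves with use. All names are hypothetical.

class FlywheelModel:
    """A trivial 'model' that memorizes corrected answers."""
    def __init__(self):
        self.knowledge = {}

    def infer(self, query: str) -> str:
        return self.knowledge.get(query, "unknown")

def run_flywheel(model, interactions):
    """One flywheel turn: infer, collect feedback, retrain on errors."""
    logs = []
    for query, expected in interactions:
        answer = model.infer(query)
        logs.append((query, answer, expected))  # production trace

    # Retrain only on traces where feedback flagged a wrong answer.
    corrections = [(q, exp) for q, ans, exp in logs if ans != exp]
    model.knowledge.update(corrections)
    return len(corrections)

model = FlywheelModel()
meetings = [("client asked about fees?", "fee schedule v2"),
            ("renewal date?", "2026-01-15")]

first_pass = run_flywheel(model, meetings)   # everything is new
second_pass = run_flywheel(model, meetings)  # learned from feedback
print(first_pass, second_pass)
```

The production-log step is the part the article emphasizes: it is the traceability of each interaction that lets the flywheel identify which elements to adjust on the next turn.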
As AI agents learn from one another and from human counterparts, this concept sets the stage for new ideas, collective intelligence, and enterprise-wide knowledge sharing to take shape. The capability was showcased during a presentation by NVIDIA's Kari Briski at GTC Europe in Paris.
This collaboration signals a broader shift in enterprise AI, from one-size-fits-all models to dynamic ecosystems of intelligent agents that continuously fine-tune, collaborate, and generate new ideas from their own data and interactions. Managing these agents at scale requires fine-grained security, governance, and access controls to ensure they operate safely and within defined boundaries. It also demands scalable, dynamic infrastructure capable of handling the varied and unpredictable demands of AI agents as they interact with diverse data tools and services. The VAST AI Operating System provides this real-time data infrastructure, along with compute orchestration, QoS tools that enforce fairness as different agents run within the environment, and a security framework to refine intelligence and operationalize AI innovation in a secure, scalable, and adaptive system.
'Data flywheels leverage each interaction with an AI agent to continuously improve system intelligence and value,' said Kari Briski, Vice President of Generative AI Software at NVIDIA. 'CACEIS is an exemplary pioneer with a vision of building an agentic AI data flywheel with NVIDIA and VAST to supercharge productivity for financial services in Europe.'
Additional Resources:
● VAST + NVIDIA
● DEMO: AI Agents Unlocked: CACEIS Redefines Client Conversations With VAST Data and NVIDIA
● NVIDIA BLOG: Chat with Your Enterprise Data: Open-Source AI-Q NVIDIA Blueprint Puts Institutional Knowledge at Your Fingertips
● NVIDIA BLOG: Sovereign AI Agents Think Local, Act Global With NVIDIA AI Factories
● NVIDIA BLOG: Build Efficient AI Agents Through Model Distillation With NVIDIA's Data Flywheel Blueprint


Web Release
3 days ago
Path Tracing Comes to DOOM: The Dark Ages, Plus DLSS 4 with Multi Frame Generation launching with FBC: Firebreak and a new GeForce Game Ready Driver
This week, DLSS 4 with Multi Frame Generation and full ray tracing is launching in FBC: Firebreak, while DOOM: The Dark Ages receives a path tracing upgrade that adds DLSS Ray Reconstruction, amplifying image quality in the critically acclaimed shooter. NVIDIA is also releasing a new GeForce Game Ready Driver that includes day-zero support for FBC: Firebreak, DOOM: The Dark Ages' new update and REMATCH, a new multiplayer sports game featuring DLSS 4 with Multi Frame Generation.
Remedy Entertainment's new three-player cooperative first-person shooter, FBC: Firebreak, is set in the same Federal Bureau of Control players reclaimed from supernatural invaders in the graphically spectacular Control. This time, as one of the FBC's fearless first responders, gamers and their teams are on call to confront everything from reality-warping Corrupted Items to otherworldly monsters, no matter the odds.
FBC: Firebreak on PC features the full suite of RTX technology developed for Remedy's Alan Wake 2, giving GeForce RTX gamers the definitive PC gaming experience. Activate DLSS 4 for the highest levels of performance, enable DLSS Ray Reconstruction to enhance ray tracing fidelity and frame rates, and crank the Ray Tracing Preset to max to enable full ray tracing. All GeForce RTX gamers benefit from NVIDIA RTX Mega Geometry when ray tracing is enabled, which reduces CPU and GPU Bounding Volume Hierarchy build and update times while also reducing VRAM consumption.
On average, DLSS 4 with Multi Frame Generation, DLSS Super Resolution and DLSS Ray Reconstruction multiply performance by 9.3X at 4K max settings on GeForce RTX 50 Series desktop GPUs. Gamers can play FBC: Firebreak with full ray tracing at almost 200 frames per second on the GeForce RTX 5070 Ti, at nearly 250 frames per second on the GeForce RTX 5080 and at 360 frames per second on the GeForce RTX 5090, the fastest consumer gaming graphics card available.
Bethesda Softworks and id Software are adding path tracing and DLSS Ray Reconstruction to DOOM: The Dark Ages on June 18th, making the battle against Hell all the more immersive. Path tracing takes the quality of ray-traced lighting to the next level, reflecting additional detail and game elements on surfaces. NVIDIA Spatial Hash Radiance Cache (SHaRC) technology is leveraged to efficiently compute path-traced light, NVIDIA Shader Execution Reordering further accelerates performance on GeForce RTX GPUs, and DLSS Ray Reconstruction further enhances image quality and performance. Using DLSS 4 with Multi Frame Generation, DLSS Super Resolution and DLSS Ray Reconstruction, performance at 4K is multiplied by an average of 6.8X on the GeForce RTX 5090 and GeForce RTX 5080, enabling Ultra Preset, path-traced DOOM: The Dark Ages gameplay at up to 230 frames per second.
Wired Productions and Caged Element's Warhammer 40,000: Speed Freeks is an action combat racing game that recently exited Early Access. Now, GeForce RTX gamers joining the high-Orktane racing will discover support for DLSS Super Resolution, significantly accelerating frame rates.
Sloclap, creators of the critically acclaimed Sifu, are launching REMATCH on June 19th, with Advanced Access available now via the purchase of the Pro and Elite editions of the game. This 5v5 multiplayer football/soccer sports game sees players compete online in fast-paced, skill-based matches, free from offsides and fouls. When the game boots up, players can enable DLSS 4 with Multi Frame Generation, DLSS Frame Generation and DLSS Super Resolution to accelerate the frame rates of each football match. Editor's Notes: