
Dell Technologies Unveils New AI Factory Upgrades
Dell Technologies has announced new innovations across its Dell AI Factory in partnership with NVIDIA. These updates aim to help enterprises accelerate AI adoption and reduce time to value.
The company reported that demand for accessible AI skills and technologies is growing rapidly as enterprises shift from experimentation to implementation.
To meet this demand, Dell and NVIDIA have introduced enhanced AI infrastructure, solutions, and services. These updates are designed to streamline deployment and improve scalability, efficiency, and performance.
Dell revealed its latest PowerEdge servers supporting NVIDIA's new Blackwell GPUs:
- The air-cooled Dell PowerEdge XE9780 and XE9785 simplify integration into existing data centers.
- The liquid-cooled XE9780L and XE9785L accelerate rack-scale deployment and support up to 256 NVIDIA Blackwell Ultra GPUs per Dell IR7000 rack.
These servers succeed the PowerEdge XE9680, Dell's fastest-ramping solution to date, and enable up to 4x faster large language model training with the 8-way NVIDIA HGX B300.
Dell also introduced the PowerEdge XE9712 with NVIDIA GB300 NVL72. This server offers up to 50x more inference output and 5x better throughput. It includes Dell PowerCool technology to enhance power efficiency.
By July 2025, the PowerEdge XE7745 will support NVIDIA RTX Pro 6000 Blackwell Server Edition GPUs. This platform meets the needs of physical and agentic AI applications, including robotics, digital twins, and multi-modal AI.
Dell plans to support the NVIDIA Vera CPU and the NVIDIA Vera Rubin platform through a new PowerEdge XE server, built for Dell Integrated Rack Scalable Systems.
For networking, Dell expanded its portfolio with the PowerSwitch SN5600 and SN2201 Ethernet switches, part of the NVIDIA Spectrum-X platform. These are joined by NVIDIA Quantum-X800 InfiniBand switches, which offer up to 800 Gbps of throughput. Dell ProSupport and Deployment Services will assist customers at every stage of deployment.
The company confirmed that Dell AI Factory with NVIDIA now supports the NVIDIA Enterprise AI Factory validated design. This includes Dell and NVIDIA compute, networking, storage, and AI Enterprise software for a complete AI solution.
Dell Technologies also revealed updates to its AI Data Platform, which now gives AI applications constant access to high-quality data. New features include:
- Dell ObjectScale with NVIDIA BlueField-3 and Spectrum-4 integration for better scalability.
- A new solution combining PowerScale, Project Lightning, and PowerEdge XE servers with KV cache and NVIDIA's NIXL libraries, well suited to distributed inference (see the first sketch after this list).
- S3 over RDMA support for ObjectScale, delivering up to 230% higher throughput, 80% lower latency, and a 98% reduction in CPU load (see the second sketch after this list).
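To make the KV-cache point concrete: during autoregressive decoding, a model can append each new token's attention keys and values to a cache instead of recomputing them for the entire history, and libraries such as NIXL move those cached tensors between GPUs, nodes, and storage tiers. Below is a minimal, framework-free numpy sketch of the caching idea itself; the shapes, projections, and loop are illustrative assumptions, not Dell or NVIDIA APIs.

```python
import numpy as np

def attention(q, K, V):
    """Scaled dot-product attention for a single query vector."""
    scores = K @ q / np.sqrt(q.shape[-1])    # one score per cached token
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    return weights @ V

d = 64                      # head dimension (illustrative)
K_cache = np.empty((0, d))  # keys for all previously decoded tokens
V_cache = np.empty((0, d))  # values for all previously decoded tokens

rng = np.random.default_rng(0)
for step in range(8):       # toy autoregressive decode loop
    x = rng.standard_normal(d)         # stand-in for the new token's hidden state
    q, k, v = x, x, x                  # real models apply learned projections here
    K_cache = np.vstack([K_cache, k])  # append only the new key/value...
    V_cache = np.vstack([V_cache, v])  # ...instead of recomputing the history
    out = attention(q, K_cache, V_cache)
```

At scale, that cache grows with context length, which is why moving and tiering it efficiently across servers and storage matters for distributed inference.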
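On the S3-over-RDMA item, the practical upshot is that ObjectScale continues to present the standard S3 API, so existing object-storage clients work unchanged while the RDMA data path accelerates transfers underneath. A minimal boto3 sketch against a hypothetical ObjectScale endpoint; the endpoint URL, bucket, and credentials are placeholders:

```python
import boto3

# ObjectScale presents the standard S3 API, so a stock S3 client works
# unchanged; the RDMA data path (where supported) sits below this layer.
# Endpoint, bucket, and credentials below are placeholders, not real values.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objectscale.example.internal",
    aws_access_key_id="ACCESS_KEY_PLACEHOLDER",
    aws_secret_access_key="SECRET_KEY_PLACEHOLDER",
)

# Upload a training-data shard, then stream it back.
s3.upload_file("shard-0001.parquet", "ai-datasets", "shards/0001.parquet")
obj = s3.get_object(Bucket="ai-datasets", Key="shards/0001.parquet")
data = obj["Body"].read()
```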
The company also introduced an integrated solution using the NVIDIA AI Data Platform. It is designed to accelerate insights and support agentic AI tools.
On the software side, Dell announced that the NVIDIA AI Enterprise platform is now available directly from Dell. It includes NVIDIA NIM and NeMo microservices, NVIDIA Blueprints, NeMo Retriever for retrieval-augmented generation (RAG), and Llama Nemotron reasoning models.
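NIM microservices serve an OpenAI-compatible HTTP interface, so applications can typically talk to a deployed NIM endpoint with a standard client. A minimal sketch, assuming a locally deployed endpoint; the base URL and model identifier below are placeholders:

```python
from openai import OpenAI

# NIM exposes an OpenAI-compatible API; base_url and model name below are
# placeholders for whatever the deployed endpoint actually serves.
client = OpenAI(
    base_url="http://nim.example.internal:8000/v1",
    api_key="not-used-for-local-deployments",
)

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",  # example NIM model identifier
    messages=[{"role": "user", "content": "Summarise this quarter's support tickets."}],
    max_tokens=200,
)
print(response.choices[0].message.content)
```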
Additionally, Red Hat OpenShift is now available on the Dell AI Factory with NVIDIA, offering flexibility and security for business-critical AI deployments.
Dell has also launched new Managed Services for the AI Factory. These services cover the entire NVIDIA AI stack, including 24/7 monitoring, updates, and proactive support.
Michael Dell, Chairman and CEO, said the goal is to make AI more accessible. 'With the Dell AI Factory with NVIDIA, enterprises can manage the entire AI lifecycle at any scale,' he stated.
Jensen Huang, CEO of NVIDIA, added, 'AI factories are the infrastructure of modern industry. With Dell Technologies, we're offering the broadest line of Blackwell AI systems for use in the cloud, enterprise, and edge.'
Related Articles


Arabian Post
Hyperscalers Form ASIC Coalition to Challenge NVIDIA Dominance
Cloud computing giants AWS, Google, Microsoft, Meta and OpenAI are accelerating in-house development of custom application-specific integrated circuits, aiming to erode NVIDIA's dominance in high-performance AI datacentres. Industry reports highlight a projected annual growth rate of around 50% for ASIC purchases by hyperscalers, marking a strategic pivot in the AI hardware landscape.

NVIDIA's premium-priced solutions, including Blackwell GPUs, have placed pressure on hyperscalers to secure more cost-efficient, scalable systems. With single GPUs ranging from $70,000 to $80,000 and fully configured servers tallying up to $3 million, these companies are betting on internal design to manage costs and supply risks.

Amazon Web Services has notably moved ahead with its in-house chips, Trainium for training and Inferentia for inference, reporting 30–40% greater cost efficiency compared with NVIDIA hardware. AWS is also collaborating with Marvell and Taiwan's Alchip on next-generation Trainium versions. Internal indications suggest AWS may deploy as many as half a million ASIC units in its data centres, an expansive scale-up that could rival NVIDIA's installed base.

Google, meanwhile, has scaled its TPU v6 Trillium chips, transitioning from single-supplier to dual-supplier design by partnering with MediaTek. With deployments reportedly hitting 100,000-unit clusters to support Gemini 2.0 workloads, Google claims competitive cost-performance metrics relative to NVIDIA GPUs.

Microsoft's forthcoming Maia 200 chip, co-designed with GUC using TSMC's 3 nm process, is scheduled for commercial release in 2026. Meta's Meta Training and Inference Accelerator, developed alongside Broadcom, Socionext and GUC, is expected in early 2026 on TSMC's 3 nm node, featuring HBM3e memory, another step towards self-sufficiency in AI compute. OpenAI has also announced a proprietary training processor, with mass production anticipated at TSMC by 2026.

Market projections reflect this tectonic shift. ASICs are poised to claim between $100 billion and $130 billion of custom AI accelerator spend by 2030, with Broadcom estimating a market of $60 billion to $90 billion by 2027. Traditional ASIC powerhouses, including Broadcom, Marvell, MediaTek, Alchip and GUC, are experiencing surging demand as they support hyperscaler transitions.

Despite these developments, hyperscalers continue to reserve capacity for NVIDIA chips, recognising the GPU giant's entrenched ecosystem, especially its CUDA software stack, and the steep technical barriers to any immediate elimination of GPU dependencies.

The trend resembles historical transitions in specialised compute. Just as cryptocurrency mining moved from GPUs to ASICs for lower costs and greater efficiency, hyperscalers now aim to fragment the AI compute supply chain and diversify their hardware portfolios.

TSMC stands to benefit significantly, serving as the foundry for both NVIDIA's mass-market GPUs and hyperscaler ASICs. Its chairman emphasises that the competition between NVIDIA and cloud-designed chips is ultimately beneficial to TSMC, ensuring a broad customer base.

Broadcom has emerged as a frontrunner, with its ASIC and networking chipset revenues soaring 220% to $12.2 billion in 2024. Hyperscalers are investing in clusters featuring up to one million custom XPUs over open-Ethernet networks, an architecture that places Broadcom and Marvell in strategic positions. Networking ASICs are expected to account for 15–20% of AI datacentre silicon budgets, rising from the 5–10% range.
Revenue trends reflect these structural shifts. Marvell has secured a multi-year AI chip deal with AWS and anticipates its AI silicon revenue jumping from $550 million in 2024 to over $2.5 billion in 2026. Broadcom, similarly, is redirecting significant investment toward hyperscaler ASIC demand.

Nevertheless, NVIDIA retains a commanding lead in AI training and general-purpose GPU compute. Its end-to-end platform, from hardware to software, remains deeply embedded in the AI ecosystem. Custom ASICs, by contrast, offer task-specific gains but lack the breadth of software compatibility that NVIDIA enables.

Analysts caution that the AI compute landscape is evolving toward a more fragmented, mixed-architecture model combining GPUs and ASICs. Hyperscalers' shift signals strategic recognition of rising costs, supply constraints, and performance demands. Yet it also underscores persistent obstacles: software ecosystem maturity, long development cycles, and the complexity of large-scale deployment.

Questions remain regarding the timeframe in which hyperscalers can meaningfully shift workloads away from NVIDIA GPUs. Industry roadmaps project new ASIC deployments through 2026–27. Analysts expect GPU market-share erosion may begin toward the end of the decade, provided in-house ASICs deliver consistent performance and efficiency.

The stage is set for a multi-year contest in datacentre compute. NVIDIA faces increasing pressure from hyperscalers building bespoke chips to optimise workloads and control supply. The next evolution of AI infrastructure may look less like a GPU-centric world and more like a diverse ecosystem of specialised, interlocking processors.


Zawya
Techies highlight AI-driven innovations at SRTI Park event under the slogan 'Born in Sharjah, Built for the World'
SHARJAH, UAE: Tech experts and in-house companies showcased an array of AI advances and innovations at a Business Breakfast event organized by the Sharjah Research Technology and Innovation Park (SRTI Park) under the slogan 'Born in Sharjah, Built for the World'. Experts from companies including Al Hathboor Bikal, NVIDIA, HPE and Qamia shared insights into AI and quantum computing and pointed to the pioneering innovations being created at SRTI Park.

The speakers discussed the AI Factory, a structured system for end-to-end lifecycle management of AI development and deployment. It serves as an AI-centered decision-making engine that optimizes operations using machine learning algorithms.

Speaking at the event, Hussain Al Mahmoudi, CEO of SRTI Park, pointed to the UAE's ambition to be a leader in AI and stressed the need for more investment in the field. He said, 'This event is a platform to demonstrate how far Sharjah has come to create innovations for the world. Our aim is to connect these great companies with investors. We are constantly fine-tuning our strategy and attracting startups in our focus areas, which include healthcare, sustainability, advanced manufacturing, mobility, and transport.'

The panel speakers included Ahmed Mustafa, Regional AI Adoption and Development Lead, NVIDIA; Mujahid Khaled, Sales Lead, AI and HPC, MEA, Hewlett Packard Enterprise; Dr. Raouf Dridi, CEO, Qamia; and Raj Sandhu, CEO, Al Hathboor Bikal (AHB).

The panelists highlighted three elements of building an AI factory: people, technology, and economy. They emphasized the importance of partnerships with data scientists, product managers, and CXOs, and noted that the technology stack required for an AI factory includes data centers, intelligent compute stacks, orchestration platforms, and machine learning operations. Particular attention was paid to how startups could use these advanced technologies to expand beyond their local markets and reach global audiences.

Among the products in the limelight was AHB's Dialog XR, an LLM-based chatbot built entirely at SRTI Park. In a presentation, CEO Raj Sandhu explained that the domain-knowledge-based LLM creates a conversational interaction platform that can serve as a multi-disciplinary tool for internal and external purposes to elevate organizational performance. He offered insights into how AI factories could drive regional innovation and contribute to globally scalable solutions, explaining the concept's focus on data centers, HPC, and accelerated compute. He added that the Park's proximity to universities allows for collaboration with global institutions, offering HPC as a service and rentable GPUs.

As part of the event, SRTI Park officials introduced attendees to the ecosystem's infrastructure and offerings, and took them on a tour of the facilities, including the SoiLab (Sharjah Open Innovation Lab).

SRTI Park is one of the fastest-growing technology parks in the Middle East, dynamically shaping the future of research and technology. It drives an innovation ecosystem that promotes research and development and supports enterprise activity through the triple-helix collaboration of industry, government and academia, providing an environment conducive to creativity and innovation with world-class infrastructure and services.


Tahawul Tech
U.S. invests in domestic semiconductor manufacturing
Texas Instruments has announced an immense $60 billion investment to expand its fabrication facilities, which is expected to create 60,000 domestic jobs in construction and operations. The investment is part of the company's commitment to US chip production and covers seven fabs across three sites in the US states of Texas and Utah. Texas Instruments branded the plan the 'largest investment in foundational semiconductor manufacturing in US history'.

Up to $40 billion is to go towards four facilities in Texas, with two more fabs to be added to a pair already being built; Texas Instruments said another site in the state 'continues to ramp to full production'. The company is also building a second 300mm wafer fab in Utah while increasing the output of an existing facility at the site, with the pair ultimately to be connected.

The US government welcomed the investment: Secretary of Commerce Howard Lutnick noted that President Donald Trump 'has made it a priority to increase semiconductor manufacturing' in the nation, and said the collaboration between the government and Texas Instruments would 'support US chip manufacturing for decades to come'.

Texas Instruments president and CEO Haviv Ilan said the new 300mm capacity will help 'deliver the analogue and embedded processing chips that are vital for nearly every type of electronic system'. Companies including Apple, Nvidia and SpaceX were tipped by Texas Instruments as among the key beneficiaries of the investment.

Although the move was welcomed by the current US leadership, the investment was spurred under the previous administration as part of the broader CHIPS and Science Act, enacted in 2022.

Source: Mobile World Live