
AMD launches EPYC 4005 Series for entry-level server market
AMD has introduced its EPYC 4005 Series processors, targeting the entry-level server market with new offerings based on its 'Zen 5' architecture.
The EPYC 4005 Series is designed for small businesses and hosted IT service providers, aiming to deliver enterprise-grade features while focusing on affordability and reduced deployment complexity. AMD has positioned the range as an option for those seeking performance, dependability, and efficiency within budget constraints.
The processors make use of the AM5 socket, which is also employed in the existing EPYC 4004 Series, allowing for deployment in a variety of form factors including servers, blades, and towers. The company has highlighted that the new solutions strip away what it describes as "unnecessary features and complexity," in a bid to control costs for enterprise customers at the entry level.
In testing with the Phoronix suite, AMD reports that the 16-core EPYC 4565P outperformed the top-of-stack 6th generation Intel Xeon 6300P by approximately 1.83 times, according to the company's own benchmarking data.
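For readers who want to sanity-check claims like this on their own hardware, the sketch below shows one way a comparison run might be scripted with the Phoronix Test Suite. It is a minimal illustration only: the specific tests, the result name, and the batch-mode setup are assumptions, since the article does not state which Phoronix workloads AMD used.

```python
# Minimal sketch (not AMD's published methodology): run the same Phoronix Test
# Suite workloads on two machines and compare the reported scores afterwards.
# Assumes the `phoronix-test-suite` CLI is installed and that batch mode has
# already been configured with `phoronix-test-suite batch-setup`.
import os
import subprocess

# Illustrative CPU-bound tests; the article does not list AMD's actual selection.
TESTS = ["pts/compress-7zip", "pts/openssl"]

def run_benchmarks(result_name: str) -> None:
    """Install each test, then run them non-interactively under a shared result name."""
    env = dict(os.environ, TEST_RESULTS_NAME=result_name)
    for test in TESTS:
        subprocess.run(["phoronix-test-suite", "install", test], check=True)
    subprocess.run(["phoronix-test-suite", "batch-benchmark", *TESTS],
                   check=True, env=env)

if __name__ == "__main__":
    run_benchmarks("epyc-4565p")  # repeat with e.g. "xeon-6300p" on the comparison system
```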
Derek Dicker, Corporate Vice President of the Enterprise and HPC Business Group at AMD, said: "Growing businesses and dedicated hosters often face significant constraints around budget, complexity, and deployment timelines."
"With the latest AMD EPYC 4005 Series CPUs, we are delivering the right balance of performance, simplicity, and affordability, giving our customers and system partners the ability to deploy enterprise-class solutions that solve everyday business challenges."
The EPYC 4005 Series launches with broad partner support, with server, cloud, and hardware vendors such as Altos, ASRock Rack, Gigabyte, Lenovo, MiTAC, MSI, Newegg, OVHcloud, Supermicro, and Vultr participating in the rollout.
Senthil Reddy, Executive Director of Product Management for Infrastructure Solutions Group at Lenovo, commented: "With AMD EPYC 4005 Series processors, Lenovo is providing tailored solutions that prepare small businesses for the AI era."
"Together, we're enabling cost-effective, reliable systems that provide enterprise-class features for growing businesses."
Yaniv Fdida, Chief Product and Technology Officer of OVHcloud, stated: "The AMD EPYC 4005 Series CPUs deliver the compute performance and energy efficiency that our customers have come to expect, in a streamlined platform that supports cost-effective, always-on services. Coupled with OVHcloud's Open and Trusted Cloud infrastructure, these solutions provide an outstanding price-performance ratio and scalability for innovative and demanding workloads."
Vik Malyala, President and Managing Director EMEA and SVP, Technology & AI at Supermicro, added: "We're excited to expand our portfolio with systems powered by AMD EPYC 4005 Series processors, bringing new levels of value to customers seeking efficient, cost-optimized performance. From our 3U MicroCloud multi-node platforms to our 1U and 2U mainstream server families, these solutions offer a compelling mix of performance, power efficiency, and deployment flexibility. With support for technologies like PCIe 5.0 and DDR5 memory, we're enabling IT administrators to deliver more services at lower latency."
J.J. Kardwell, CEO of Vultr, also commented: "Vultr is pleased to announce the immediate availability of Bare Metal and Cloud Compute instances featuring AMD EPYC 4005 Series processors."
"The AMD EPYC 4005 Series provides straightforward deployment, scalability, high clock speed, energy efficiency, and best-in-class performance. Whether you are a business striving to scale reliably or a developer crafting the next groundbreaking innovation, these solutions are designed to deliver exceptional value and meet demanding requirements now and in the future."
Related Articles


Techday NZ | 5 days ago
Vultr launches early access to AMD Instinct MI355X GPU for AI
Vultr has announced the availability of the AMD Instinct MI355X GPU as part of its cloud infrastructure services. As one of the first cloud providers to integrate the new AMD Instinct MI355X GPU, Vultr is now taking pre-orders for early access, with global availability scheduled for the third quarter of the year. The GPU forms part of AMD's latest focus on high-capacity computational demands, catering to artificial intelligence (AI) workloads as well as enterprise-scale applications.

Product features
The AMD Instinct MI355X GPU is based on AMD's 4th Generation CDNA architecture. According to Vultr, this GPU features 288 GB of HBM3E memory, delivers up to 8 TB/s of memory bandwidth, and supports expanded datatypes such as FP6 and FP4. These improvements are designed to address complex tasks ranging from AI training and inference to scientific simulations within high-performance computing (HPC) environments.

For customers operating within higher-density data environments, the Instinct MI355X supports direct liquid cooling (DLC). This enhancement offers increased thermal efficiency, which is intended to unlock greater computing performance per rack and facilitate advanced, scalable cooling strategies. The GPU is also supported by the latest version of AMD's ROCm software, which further optimises tasks related to AI inference, training, and compatibility with various frameworks. This results in improved throughput and reduced latency for critical operations.

AMD and Vultr partnership
Vultr's portfolio already includes other AMD offerings, such as the AMD EPYC 9004 Series and EPYC 7003 Series central processing units (CPUs), as well as previous GPU models like the Instinct MI325X and MI300X. Customers using the MI355X in combination with AMD EPYC 4005 Series CPUs will benefit from a fully supported computing stack across both processing and acceleration functions, streamlining high-powered workloads from end to end.

Negin Oliver, Corporate Vice President of Business Development, Data Centre GPU Business at AMD, stated: "AMD is the trusted AI solutions provider of choice, enabling customers to tackle the most ambitious AI initiatives, from building large-scale AI cloud deployments to accelerating AI-powered scientific discovery. AMD Instinct MI350 series GPUs paired with AMD ROCm software provide the performance, flexibility, and security needed to deliver tailored AI solutions that meet the diverse demands of the modern AI landscape."

The collaboration builds on Vultr's efforts to support a range of AMD solutions tailored for enterprise, HPC, and AI sectors, reinforcing the company's capacity to cater to evolving customer workloads.

Cloud market implications
J.J. Kardwell, Chief Executive Officer of Vultr, highlighted the alignment of the new GPU with market requirements. Kardwell commented: "AMD MI355X GPUs are designed to meet the diverse and complex demands of today's AI workloads, delivering exceptional value and flexibility. As AI development continues to accelerate, the scalability, security, and efficiency these GPUs deliver are more essential than ever. We are proud to be among the first cloud providers worldwide to offer AMD MI355X GPUs, empowering our customers with next-generation AI infrastructure."

AMD is recognised as a member of the Vultr Cloud Alliance, which supports a collaborative ecosystem of technology providers focused on offering integrated cloud computing solutions.
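To put the memory figures quoted above in perspective, the following back-of-envelope sketch derives two illustrative numbers from the 288 GB HBM3E capacity, 8 TB/s bandwidth, and FP4 support. It deliberately ignores activation memory, KV caches, and framework overhead, so treat it as a rough guide rather than a vendor figure.

```python
# Back-of-envelope figures derived from the specs quoted above (288 GB HBM3E,
# 8 TB/s memory bandwidth, FP4 support). Rough illustration only.
HBM_CAPACITY_GB = 288
HBM_BANDWIDTH_GBS = 8_000  # 8 TB/s expressed in GB/s

# Time to stream the entire HBM contents once at peak bandwidth: a loose lower
# bound on a single memory-bound pass over weights held fully in memory.
sweep_time_ms = HBM_CAPACITY_GB / HBM_BANDWIDTH_GBS * 1000
print(f"Full-memory sweep at peak bandwidth: ~{sweep_time_ms:.0f} ms")  # ~36 ms

# Largest model whose weights alone fit in HBM at 4-bit (FP4) precision,
# i.e. 0.5 bytes per parameter, ignoring activations and KV cache.
params_fp4_billions = HBM_CAPACITY_GB / 0.5
print(f"Weights-only capacity at FP4: ~{params_fp4_billions:.0f}B parameters")  # ~576B
```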
The introduction of the MI355X GPU follows a period of upgrades across AMD's GPU lineup, including a greater emphasis on catering to both inferencing and enterprise-scale workloads. Vultr's offering is aimed at organisations seeking advanced compute resources for AI-driven applications and scientific tasks requiring significant computational capacity. Vultr's global network reportedly serves hundreds of thousands of customers across 185 countries, supplying services in cloud compute, GPU, bare metal infrastructure and cloud storage. The addition of AMD's latest GPU to its infrastructure underlines Vultr's commitment to providing a variety of options for businesses and developers pursuing AI and HPC advancements.


Techday NZ | 13-06-2025
Oracle unveils AMD-powered zettascale AI cluster for OCI cloud
Oracle has announced it will be one of the first hyperscale cloud providers to offer artificial intelligence (AI) supercomputing powered by AMD's Instinct MI355X GPUs on Oracle Cloud Infrastructure (OCI). The forthcoming zettascale AI cluster is designed to scale up to 131,072 MI355X GPUs, specifically architected to support high-performance, production-grade AI training, inference, and new agentic workloads. The cluster is expected to offer over double the price-performance compared to the previous generation of hardware.

Expanded AI capabilities
The announcement highlights several key hardware and performance enhancements. The MI355X-powered cluster provides 2.8 times higher throughput for AI workloads. Each GPU features 288 GB of high-bandwidth memory (HBM3E) and eight terabytes per second (TB/s) of memory bandwidth, allowing larger models to be held entirely in memory and boosting both inference and training speeds. The GPUs also support the FP4 compute standard, a four-bit floating-point format that enables more efficient, high-speed inference for large language and generative AI models.

The cluster's infrastructure includes dense, liquid-cooled racks, each housing 64 GPUs and consuming up to 125 kilowatts per rack to maximise performance density for demanding AI workloads. This marks the first deployment of AMD's Pollara AI NICs to enhance RDMA networking, offering next-generation high-performance, low-latency connectivity.

Mahesh Thiagarajan, Executive Vice President, Oracle Cloud Infrastructure, said: "To support customers that are running the most demanding AI workloads in the cloud, we are dedicated to providing the broadest AI infrastructure offerings. AMD Instinct GPUs, paired with OCI's performance, advanced networking, flexibility, security, and scale, will help our customers meet their inference and training needs for AI workloads and new agentic applications."

The zettascale OCI Supercluster with AMD Instinct MI355X GPUs delivers a high-throughput, ultra-low-latency RDMA cluster network architecture for up to 131,072 MI355X GPUs. AMD claims the MI355X provides almost three times the compute power and a 50 percent increase in high-bandwidth memory over its predecessor.

Performance and flexibility
Forrest Norrod, Executive Vice President and General Manager, Data Center Solutions Business Group, AMD, commented on the partnership: "AMD and Oracle have a shared history of providing customers with open solutions to accommodate high performance, efficiency, and greater system design flexibility. The latest generation of AMD Instinct GPUs and Pollara NICs on OCI will help support new use cases in inference, fine-tuning, and training, offering more choice to customers as AI adoption grows."

The Oracle platform aims to support customers running the largest language models and diverse AI workloads. OCI users leveraging the MI355X-powered shapes can expect performance increases of up to 2.8 times in throughput, resulting in faster results, lower latency, and the capability to run larger models. AMD's Instinct MI355X provides customers with substantial memory and bandwidth enhancements, designed to enable both fast training and efficient inference for demanding AI applications. The new support for the FP4 format allows for cost-effective deployment of modern AI models, enhancing speed and reducing hardware requirements.
The dense, liquid-cooled infrastructure supports 64 GPUs per rack, each operating at up to 1,400 watts, and is engineered to optimise training times and throughput while reducing latency. A powerful head node, equipped with an AMD Turin high-frequency CPU and up to 3 TB of system memory, is included to help users maximise GPU performance via efficient job orchestration and data processing.

Open-source and network advances
AMD emphasises broad compatibility and customer flexibility through the inclusion of its open-source ROCm stack. This allows customers to use flexible architectures and reuse existing code without vendor lock-in, with ROCm encompassing popular programming models, tools, compilers, libraries, and runtimes for AI and high-performance computing development on AMD hardware.

Network infrastructure for the new supercluster will feature AMD's Pollara AI NICs that provide advanced RDMA over Converged Ethernet (RoCE) features, programmable congestion control, and support for open standards from the Ultra Ethernet Consortium to facilitate low-latency, high-performance connectivity among large numbers of GPUs.

The new Oracle-AMD collaboration is expected to provide organisations with enhanced capacity to run complex AI models, speed up inference times, and scale up production-grade AI workloads economically and efficiently.
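As a rough consistency check on the rack figures reported above (64 GPUs per rack at up to 1,400 watts each, within a rack budget of up to 125 kW, scaling to 131,072 GPUs), the short calculation below works out the GPU-only power per rack, the headroom left over, and the rack count at full scale. How that headroom is split between head nodes, NICs, and cooling is not itemised in the article, so the breakdown is an assumption.

```python
# Arithmetic based only on the figures quoted in the article above.
GPUS_PER_RACK = 64
GPU_POWER_W = 1_400        # per-GPU power quoted for the MI355X racks
RACK_BUDGET_KW = 125       # quoted per-rack consumption ceiling
CLUSTER_GPUS = 131_072     # maximum cluster scale quoted by Oracle

gpu_power_kw = GPUS_PER_RACK * GPU_POWER_W / 1000   # 89.6 kW of GPU draw per rack
headroom_kw = RACK_BUDGET_KW - gpu_power_kw          # ~35 kW left for everything else
racks_at_full_scale = CLUSTER_GPUS // GPUS_PER_RACK  # 2,048 racks

print(f"GPU draw per rack: {gpu_power_kw:.1f} kW "
      f"(~{headroom_kw:.1f} kW remaining for head node, NICs, and cooling)")
print(f"Racks needed at the full 131,072-GPU scale: {racks_at_full_scale}")
```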


Techday NZ | 11-06-2025
AMD supercomputers lead Top500 rankings with record exaflops
El Capitan and Frontier, both powered by AMD processors and accelerators, have retained the top two positions on the latest Top500 list of the world's most powerful supercomputers.

Supercomputing leadership
The recently released Top500 rankings show that El Capitan, based at Lawrence Livermore National Laboratory, remains the fastest system globally, registering a High Performance Linpack (HPL) score of 1.742 exaflops. Frontier, situated at Oak Ridge National Laboratory, holds the second position with an HPL result of 1.353 exaflops.

Both supercomputers were constructed by HPE and utilise AMD hardware at their core. El Capitan uses AMD Instinct MI300A accelerated processing units (APUs), integrating CPU and GPU functionality within a single package, aimed at supporting large-scale artificial intelligence and scientific workloads. Frontier leverages AMD EPYC CPUs alongside AMD Instinct MI250X GPUs for a variety of advanced computational research needs, including modelling in energy, climate, and next-generation artificial intelligence.

Broader AMD presence
AMD technologies now underpin 172 of the 500 systems on the latest Top500 list, more than a third of all the high-performance systems measured. Notably, 17 new systems joined the list this year running on AMD processors, five of which use the latest 5th Gen AMD EPYC architecture. The expanded presence spans institutions such as the University of Stuttgart's High-Performance Computing Center, where the Hunter system is powered by AMD Instinct MI300A APUs; the University of Hull's Viper supercomputer; and Italy's new EUROfusion Pitagora system at CINECA, powered by 5th Gen AMD EPYC CPUs.

Performance and efficiency
In addition to sheer computational power, AMD's showing on the Top500 list extends to energy efficiency. According to the most recent Green500 list, 12 of the 20 most energy-efficient supercomputers globally use AMD EPYC processors and AMD Instinct accelerators. El Capitan and Frontier ranked 26th and 32nd respectively on the Green500 index, reflecting strong performance per watt given their computing output.

This was echoed in other benchmarks. On the HPL-MxP test, which measures mixed-precision computing suited for artificial intelligence workloads, El Capitan debuted at the top, reaching 16.7 exaflops, with Frontier in third place and LUMI, another AMD-powered system, in fourth. The HPCG (High-Performance Conjugate Gradient) test, a complementary performance metric for scientific applications, saw El Capitan post the highest score of 17.4 petaflops, a result enabled by the memory bandwidth of the Instinct MI300A architecture.

Institutional perspectives
"From El Capitan to Frontier, AMD continues to power the world's most advanced supercomputers, delivering record-breaking performance and leadership energy efficiency," said Forrest Norrod, Executive Vice President and General Manager, Data Center Solutions Group, AMD. "With the latest Top500 list, AMD not only holds the top two spots but now powers 172 of the world's fastest systems, more than ever before, underscoring our accelerating momentum and the trust HPC leaders place in our CPUs and GPUs to drive scientific discovery and AI innovation."
Rob Neely, Associate Director for Weapon Simulation and Computing at Lawrence Livermore National Laboratory, described the impact of El Capitan: "El Capitan is a transformative national resource that will dramatically expand the computational capabilities of the NNSA labs at Livermore, Los Alamos and Sandia in support of our national security and science missions. With AMD's advanced APU architecture, we can now perform simulations with the precision and confidence we set as a goal 15 years ago, when the path to exascale was difficult to foresee. As a bonus, this platform is a true 'two-fer': an HPC and AI powerhouse that will fundamentally reshape how we fulfill our mission."

Future direction
The distinction on the Top500 and Green500 lists coincides with a broader shift within high-performance computing, as artificial intelligence and traditional HPC workloads increasingly converge. AMD's presence in the sector demonstrates demand for scalable and efficient compute platforms amid growing power requirements for data-intensive scientific and industrial workloads. The results also point to the role of a portfolio spanning CPUs, GPUs, and APUs in accelerating developments across domains ranging from nuclear safety and climate modelling to training large language models and generative artificial intelligence inference.