
Latest news with #InstinctMI350X

AMD: 256-Core Epyc CPUs Are Coming in 2026

Yahoo

5 days ago



The next generation of AMD's Epyc data center CPUs is on track and slated for 2026. The processors didn't get as much love at AMD's recent 2025 Advancing AI event as, say, the new Instinct MI350X and MI355X AI GPUs, but AMD made clear that Epyc is also a priority. The company pointed to 2026 for the launch of Epyc Venice and noted that the upcoming CPU will have 256 Zen 6 cores, along with significant bandwidth for AI workloads.

AMD's current 5th Gen Epyc Turin CPUs are capable processors that are powering a growing number of data centers, but the 6th generation sounds like it will dwarf them in performance. At least, that's how things look for now, before the public gets its hands on the processors. AMD says the new Venice Epycs can provide up to a 70% performance increase over Turin CPUs, though, as Tom's Hardware notes, AMD hasn't provided much information about how that testing was done.

Venice will support 1.6TBps of memory bandwidth, up from Turin's 614GBps, and the processors will also have twice the CPU-to-GPU bandwidth. AMD announced in April that its 6th-generation Epyc CPU had been taped out, meaning the finished design has been handed off for manufacturing. TSMC will produce the chips using its new N2 (2nm-class) node in Taiwan. The foundry's growing fab cluster in Arizona, known as Fab 21, isn't yet capable of producing chips with the N2 process, though it likely will be down the road.

Dr. Lisa Su holding the current "Turin" Epyc server CPU. Credit: AMD

As we covered recently, the success of Epyc CPUs has helped drive AMD's stunning success in pulling server CPU market share away from Intel. In 2017, AMD CEO Dr. Lisa Su started AMD on a path that led from 0% server CPU market share to nearly 40% at the end of Q1 this year. As its competitor, Intel, struggled to get aboard the AI train and saw its CEO exit at the end of 2024, AMD has worked with foundry TSMC to produce several generations of powerful Epyc processors.
It's worth noting, though, that Intel has a new CEO, Lip-Bu Tan, and it's not just sitting around while AMD scores wins. Intel recently dropped the prices on its Xeon 6 CPUs, cutting MSRPs by as much as 30% in some cases, a stunning move and a direct challenge to AMD's Epyc chips.
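To put the bandwidth figures in perspective, here is a quick back-of-the-envelope comparison, plain arithmetic on the numbers quoted above:

```python
# Back-of-the-envelope comparison of the memory bandwidth figures
# quoted for AMD's Epyc generations (values from the article).
turin_gbps = 614    # 5th Gen "Turin", in GB/s
venice_gbps = 1600  # 6th Gen "Venice", 1.6 TB/s expressed in GB/s

ratio = venice_gbps / turin_gbps
print(f"Venice offers roughly {ratio:.1f}x Turin's memory bandwidth")
# roughly 2.6x
```

That is a roughly 2.6x uplift in memory bandwidth, on top of the separately claimed doubling of CPU-to-GPU bandwidth.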

AMD Advancing AI event: 6 key details you need to know

Hindustan Times

7 days ago



At AMD's Advancing AI event in San Jose, California, the brand unveiled a range of hardware as well as software-centric announcements, which together lay out its vision for an open AI ecosystem. Here are the key announcements:

  • AMD Instinct MI350 series GPUs announced: AMD announced the Instinct MI350X and Instinct MI355X GPUs and platforms, which the company claims deliver a 4x generation-on-generation AI compute boost for better AI solutions across various industries.
  • End-to-end open-standards rack-scale infrastructure: At the keynote, AMD also showed its open-standards rack-scale AI infrastructure, which is already rolling out with AMD Instinct MI350 series accelerators, 5th Gen AMD EPYC processors, and AMD Pensando Pollara NICs in hyperscaler deployments, including Oracle Cloud Infrastructure. The company says broad availability is expected in the second half of 2025.
  • AMD unveils its next-generation AI rack, Helios: AMD stated that Helios will be built on the next-gen AMD Instinct MI400 series GPUs. Compared to the previous generation, it can allow for up to 10x more performance when running inference on mixture-of-experts models.
  • AMD announced the broad availability of its Developer Cloud: The company says this is purpose-built for fast, high-performance AI development, giving developers access to a fully managed cloud environment with the tools for rapid development. The company says the combination of ROCm 7 and the AMD Developer Cloud expands access to next-generation compute, and it is already collaborating with AI leaders like Hugging Face, OpenAI, and Grok.
  • ROCm 7, the latest version of AMD's AI software stack: The company says this is built to serve the growing demands of generative AI and the demand for more compute, will enhance the experience for developers, and features improved support for industry-standard frameworks and more.
  • AMD also revealed its partner ecosystem: Seven of the ten largest AI model builders, including the likes of Meta, OpenAI, Microsoft, and xAI, are now on board with AMD for training their AI models, the company said. For instance, it was detailed how the Instinct MI300X is used for Llama 3 and Llama 4 inference. Sam Altman, OpenAI CEO, also discussed hardware, software, and more, and other companies like Microsoft and Red Hat joined them.

AMD Unveils Vision For Open AI Ecosystem

Channel Post MEA

13-06-2025



AMD delivered its comprehensive, end-to-end integrated AI platform vision and introduced its open, scalable rack-scale AI infrastructure built on industry standards at its 2025 Advancing AI event. Dr. Lisa Su, chairman and CEO of AMD, emphasized the company's role in accelerating AI innovation. 'We are entering the next phase of AI, driven by open standards, shared innovation and AMD's expanding leadership across a broad ecosystem of hardware and software partners who are collaborating to define the future of AI,' Su said.

AMD announced a broad portfolio of hardware, software and solutions to power the full spectrum of AI:

  • AMD unveiled the Instinct MI350 Series GPUs, setting a new benchmark for performance, efficiency and scalability in generative AI and high-performance computing. The MI350 Series, consisting of both Instinct MI350X and MI355X GPUs and platforms, delivers a 4x generation-on-generation AI compute increase and a 35x generational leap in inferencing, paving the way for transformative AI solutions across industries. MI355X also delivers significant price-performance gains, generating up to 40% more tokens-per-dollar compared to competing solutions.
  • AMD demonstrated end-to-end, open-standards rack-scale AI infrastructure, already rolling out with AMD Instinct MI350 Series accelerators, 5th Gen AMD EPYC processors and AMD Pensando Pollara NICs in hyperscaler deployments such as Oracle Cloud Infrastructure (OCI), and set for broad availability in 2H 2025.
  • AMD also previewed its next-generation AI rack, 'Helios.' It will be built on the next-generation AMD Instinct MI400 Series GPUs, which compared to the previous generation are expected to deliver up to 10x more performance running inference on Mixture of Experts models, along with the 'Zen 6'-based AMD EPYC 'Venice' CPUs and AMD Pensando 'Vulcano' NICs.
  • The latest version of the AMD open-source AI software stack, ROCm 7, is engineered to meet the growing demands of generative AI and high-performance computing workloads while dramatically improving the developer experience. ROCm 7 features improved support for industry-standard frameworks, expanded hardware compatibility and new development tools, drivers, APIs and libraries to accelerate AI development and deployment.
  • The Instinct MI350 Series exceeded AMD's five-year goal to improve the energy efficiency of AI training and high-performance computing nodes by 30x, ultimately delivering a 38x improvement. AMD also unveiled a new 2030 goal to deliver a 20x increase in rack-scale energy efficiency from a 2024 base year, enabling a typical AI model that today requires more than 275 racks to be trained in fewer than one fully utilized rack by 2030, using 95% less electricity.
  • AMD announced the broad availability of the AMD Developer Cloud for the global developer and open-source communities. Purpose-built for rapid, high-performance AI development, it gives users access to a fully managed cloud environment with the tools and flexibility to start AI projects and grow without limits. With ROCm 7 and the AMD Developer Cloud, AMD is lowering barriers and expanding access to next-gen compute. Strategic collaborations with leaders like Hugging Face, OpenAI and Grok are proving the power of co-developed, open solutions.

Broad Partner Ecosystem Showcases AI Progress Powered by AMD

Today, seven of the 10 largest model builders and AI companies are running production workloads on Instinct accelerators. Among those companies are Meta, OpenAI, Microsoft and xAI, who joined AMD and other partners at Advancing AI to discuss how they are working with AMD on AI solutions to train today's leading AI models, power inference at scale and accelerate AI exploration and development:

  • Meta detailed how Instinct MI300X is broadly deployed for Llama 3 and Llama 4 inference. Meta shared excitement for MI350 and its compute power, performance-per-TCO and next-generation memory, and continues to collaborate closely with AMD on AI roadmaps, including plans for the Instinct MI400 Series platform.
  • OpenAI CEO Sam Altman discussed the importance of holistically optimized hardware, software and algorithms and OpenAI's close partnership with AMD on AI infrastructure, with research and GPT models on Azure in production on MI300X, as well as deep design engagements on MI400 Series platforms.
  • Oracle Cloud Infrastructure (OCI) is among the first industry leaders to adopt the AMD open rack-scale AI infrastructure with AMD Instinct MI355X GPUs. OCI leverages AMD CPUs and GPUs to deliver balanced, scalable performance for AI clusters, and announced it will offer zettascale AI clusters accelerated by the latest AMD Instinct processors with up to 131,072 MI355X GPUs to enable customers to build, train and inference AI at scale.
  • HUMAIN discussed its landmark agreement with AMD to build open, scalable, resilient and cost-efficient AI infrastructure leveraging the full spectrum of computing platforms only AMD can provide.
  • Microsoft announced Instinct MI300X is now powering both proprietary and open-source models in production on Azure.
  • Cohere shared that its high-performance, scalable Command models are deployed on Instinct MI300X, powering enterprise-grade LLM inference with high throughput, efficiency and data privacy.
  • Red Hat described how its expanded collaboration with AMD enables production-ready AI environments, with AMD Instinct GPUs on Red Hat OpenShift AI delivering powerful, efficient AI processing across hybrid cloud environments.
  • Astera Labs highlighted how the open UALink ecosystem accelerates innovation and delivers greater value to customers, and shared plans to offer a comprehensive portfolio of UALink products to support next-generation AI infrastructure.
  • Marvell joined AMD to highlight its collaboration as part of the UALink Consortium developing an open interconnect, bringing the ultimate flexibility for AI infrastructure.
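The 2030 goal's numbers imply more than the 20x hardware figure alone: shrinking a 275-plus-rack training job to fewer than one rack is an over-275x reduction, so the claim presumably folds software and algorithmic gains on top of the 20x rack-scale efficiency. A back-of-the-envelope reading of the figures quoted above:

```python
# Interpretive sketch: reading AMD's 2030 goal arithmetically.
# Figures come from the article; the split between hardware and
# software contributions is an inference, not an AMD statement.
racks_today = 275   # racks needed to train a typical model today
hw_gain = 20        # claimed rack-scale energy-efficiency gain by 2030

implied_total_gain = racks_today / 1         # >= 275x to fit in one rack
extra_factor = implied_total_gain / hw_gain  # gain needed beyond hardware
print(f"Implied software/algorithm factor: ~{extra_factor:.0f}x")
# ~14x on top of the 20x hardware improvement
```

Note that the "95% less electricity" figure corresponds to the 20x energy-efficiency claim on its own (1/20 = 5%), while the rack-count reduction reflects the larger combined gain.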
