
Latest news with #AMDInstinctGPUs

Watch These AMD Price Levels as Stock Hits 5-Month High Following Last Week's AI Showcase

Yahoo

4 days ago

AMD shares continued gaining ground Tuesday, boosted by upbeat Wall Street commentary following the chipmaker's "Advancing AI" event last week. The stock broke out from a pennant pattern earlier this month and closed above the closely watched 200-day moving average in Monday's trading session. Investors should watch crucial overhead areas on AMD's chart around $145, $160 and $175, while also monitoring support levels near $115 and $108.

Advanced Micro Devices (AMD) shares continued gaining ground Tuesday, boosted by upbeat Wall Street commentary following the chipmaker's "Advancing AI" event last week. Piper Sandler on Monday raised its price target for the stock and expressed enthusiasm for AMD's recently unveiled Helios server rack architecture, which will combine the company's next-generation AMD MI400 chips into one larger system. The investment bank pointed out that the hardware, anticipated for release in 2026, is "pivotal" for the growth of AMD Instinct GPUs. Meanwhile, analysts at Bank of America speculate that the chipmaker could announce Amazon (AMZN) as a partner after the tech giant's cloud unit, Amazon Web Services (AWS), was a key sponsor of last week's event.

AMD shares rose 0.6% to around $127 on Tuesday, after surging nearly 8% on Monday to pace S&P 500 advancers. The stock is up 66% from its early-April low, though it has gained just 5% since the start of 2025 amid uncertainty over chip export curbs and the company's ability to capture a greater share of the lucrative AI chip market now dominated by Nvidia (NVDA). Below, we break down the technicals on AMD's chart and point out crucial price levels worth watching.

After hitting their May high, AMD shares formed a pennant, a chart pattern that signals a continuation of the uptrend that started in early April. Indeed, the stock broke out from the pattern earlier this month and staged a volume-backed close above the closely watched 200-day moving average in Monday's trading session. Moreover, the relative strength index indicates bullish momentum, with a reading just below the indicator's overbought threshold. Let's identify three crucial overhead areas on AMD's chart to watch and also point out support levels worth monitoring.

Follow-through buying could trigger an initial rally toward $145. This area may provide overhead selling pressure near several peaks and troughs that formed on the chart between April and December last year. A decisive close above this crucial area may see the shares test resistance around $160. Investors could seek to lock in profits in this location near a trendline that connects a range of corresponding trading activity that developed on the chart from April to October last year. The next overhead area to watch sits at $175. The shares may run into sellers in this region near prominent peaks that emerged in May and October last year.

During retracements in the stock, it's first worth monitoring the $115 level. The stock could encounter support near last week's retest of the pennant pattern's breakout point, which also closely aligns with a range of price action stretching back to mid-January. Finally, selling below this level opens the door for a drop to lower support around $108. Investors could seek to accumulate AMD shares in this location near the low of the pennant pattern and the early-February low.

The comments, opinions, and analyses expressed on Investopedia are for informational purposes only. Read our warranty and liability disclaimer for more info.
As of the date this article was written, the author does not own any of the above securities. Read the original article on Investopedia

AMD Stock Soars as Piper Sandler Raises Price Target After 'Advancing AI' Event

Yahoo

5 days ago

Advanced Micro Devices shares jumped nearly 10% to lead S&P 500 gainers Monday as Piper Sandler raised its price target for the chipmaker's stock. The move comes after AMD's "Advancing AI" event last week, which saw the reveal of next-generation server rack architecture. Bank of America analysts expect a partnership to be announced between AMD and Amazon Web Services.

Advanced Micro Devices (AMD) shares popped nearly 10% to lead S&P 500 gainers Monday as Piper Sandler analysts raised their price target for the stock coming out of the chipmaker's "Advancing AI" event. Piper raised its target to $140 from $125 and maintained an "overweight" rating for AMD stock. Shares of AMD were at about $127 in recent trading, making Piper's target a roughly 10% premium.

The analysts came away "enthused" by the firm's newly unveiled Helios server rack architecture, which they called "pivotal" for the growth of AMD Instinct GPUs. Helios will combine next-generation AMD MI400 chips into one larger system, the company said, and is expected in 2026. AMD highlighted its partnerships with ChatGPT maker OpenAI, Meta Platforms (META), Oracle (ORCL), Microsoft (MSFT), and others at the event.

Bank of America analysts believe there's another high-profile partner announcement to come: Amazon (AMZN). Amazon Web Services (AWS) was "a key sponsor for the event," BofA said. However, AWS typically uses its own events to announce new engagements, making a future reveal likely, the bank added. BofA maintained a "buy" rating and a price target of $130 following the event. For comparison, the analyst consensus price target from Visible Alpha is about $124.

Read the original article on Investopedia

Oracle And AMD Collaborate To Deliver Breakthrough Performance In AI Workloads

Channel Post MEA

6 days ago

Oracle and AMD have announced that AMD Instinct MI355X GPUs will be available on Oracle Cloud Infrastructure (OCI), giving customers more choice and more than 2X better price-performance for large-scale AI training and inference workloads compared with the previous generation. Oracle will offer zettascale AI clusters accelerated by the latest AMD Instinct processors with up to 131,072 MI355X GPUs to enable customers to build, train, and run inference on AI at scale.

"To support customers that are running the most demanding AI workloads in the cloud, we are dedicated to providing the broadest AI infrastructure offerings," said Mahesh Thiagarajan, executive vice president, Oracle Cloud Infrastructure. "AMD Instinct GPUs, paired with OCI's performance, advanced networking, flexibility, security, and scale, will help our customers meet their inference and training needs for AI workloads and new agentic applications."

To support new AI applications that require larger and more complex datasets, customers need AI compute solutions that are specifically designed for large-scale AI training. The zettascale OCI Supercluster with AMD Instinct MI355X GPUs meets this need by providing a high-throughput, ultra-low-latency RDMA cluster network architecture for up to 131,072 MI355X GPUs. AMD Instinct MI355X delivers nearly triple the compute power and a 50 percent increase in high-bandwidth memory compared with the previous generation.

"AMD and Oracle have a shared history of providing customers with open solutions to accommodate high performance, efficiency, and greater system design flexibility," said Forrest Norrod, executive vice president and general manager, Data Center Solutions Business Group, AMD. "The latest generation of AMD Instinct GPUs and Pollara NICs on OCI will help support new use cases in inference, fine-tuning, and training, offering more choice to customers as AI adoption grows."

AMD Instinct MI355X Coming to OCI

AMD Instinct MI355X-powered shapes are designed with superior value, cloud flexibility, and open-source compatibility, making them well suited for customers running today's largest language models and AI workloads. With AMD Instinct MI355X on OCI, customers will be able to benefit from:

Significant performance boost: Helps customers increase performance for AI deployments with up to 2.8X higher throughput. To enable AI innovation at scale, customers can expect faster results, lower latency, and the ability to run larger AI workloads.

Larger, faster memory: Allows customers to execute large models entirely in memory, enhancing inference and training speeds for models that require high memory bandwidth. The new shapes offer 288 gigabytes of HBM3 high-bandwidth memory and up to eight terabytes per second of memory bandwidth.

New FP4 support: Allows customers to deploy modern large language and generative AI models cost-effectively with the new 4-bit floating point compute (FP4) standard, enabling ultra-efficient and high-speed inference.

Dense, liquid-cooled design: Enables customers to maximize performance density at 125 kilowatts per rack for demanding AI workloads. With 64 GPUs per rack at 1,400 watts each, customers can expect faster training times with higher throughput and lower latency (a rough power calculation appears after this list).

Built for production-scale training and inference: Supports customers deploying new agentic applications with a faster time-to-first-token (TTFT) and high tokens-per-second throughput, with improved price-performance for both training and inference workloads.

Powerful head node: Assists customers in optimizing their GPU performance by enabling efficient job orchestration and data processing with an AMD Turin high-frequency CPU and up to three terabytes of system memory.

Open-source stack: Enables customers to leverage flexible architectures and easily migrate their existing code with no vendor lock-in through AMD ROCm, an open software stack that includes popular programming models, tools, compilers, libraries, and runtimes for AI and HPC solution development on AMD GPUs.

Network innovation with AMD Pollara: Provides customers with advanced RoCE functionality that enables innovative network fabric designs. Oracle will be the first to deploy AMD Pollara AI NICs on backend networks, providing advanced RoCE functions such as programmable congestion control and support for open industry standards from the Ultra Ethernet Consortium (UEC) for high-performance, low-latency networking.
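As a rough sanity check on the density figures quoted above (64 GPUs per rack at 1,400 watts each against a 125-kilowatt rack budget), here is a minimal back-of-envelope sketch in Python. The split of the remaining headroom between head node, NICs, and cooling overhead is an illustrative assumption, not a figure from Oracle or AMD.

```python
# Back-of-envelope rack power check for the figures quoted above.
# Assumption (not from Oracle/AMD): all non-GPU components fit inside
# whatever remains of the stated 125 kW rack budget.

GPUS_PER_RACK = 64
GPU_POWER_W = 1_400          # watts per MI355X, as quoted
RACK_BUDGET_KW = 125         # kilowatts per rack, as quoted

gpu_power_kw = GPUS_PER_RACK * GPU_POWER_W / 1_000
headroom_kw = RACK_BUDGET_KW - gpu_power_kw

print(f"GPU power per rack: {gpu_power_kw:.1f} kW")      # 89.6 kW
print(f"Headroom for head node, NICs, cooling, etc.: {headroom_kw:.1f} kW")
```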

Oracle unveils AMD-powered zettascale AI cluster for OCI cloud

Techday NZ

13-06-2025

Oracle has announced it will be one of the first hyperscale cloud providers to offer artificial intelligence (AI) supercomputing powered by AMD's Instinct MI355X GPUs on Oracle Cloud Infrastructure (OCI). The forthcoming zettascale AI cluster is designed to scale up to 131,072 MI355X GPUs, specifically architected to support high-performance, production-grade AI training, inference, and new agentic workloads. The cluster is expected to offer over double the price-performance compared to the previous generation of hardware.

Expanded AI capabilities

The announcement highlights several key hardware and performance enhancements. The MI355X-powered cluster provides 2.8 times higher throughput for AI workloads. Each GPU features 288 GB of high-bandwidth memory (HBM3) and eight terabytes per second (TB/s) of memory bandwidth, allowing for the execution of larger models entirely in memory and boosting both inference and training speeds. The GPUs also support the FP4 compute standard, a four-bit floating point format that enables more efficient and high-speed inference for large language and generative AI models.

The cluster's infrastructure includes dense, liquid-cooled racks, each housing 64 GPUs and consuming up to 125 kilowatts per rack to maximise performance density for demanding AI workloads. This marks the first deployment of AMD's Pollara AI NICs to enhance RDMA networking, offering next-generation high-performance, low-latency connectivity.

Mahesh Thiagarajan, Executive Vice President, Oracle Cloud Infrastructure, said: "To support customers that are running the most demanding AI workloads in the cloud, we are dedicated to providing the broadest AI infrastructure offerings. AMD Instinct GPUs, paired with OCI's performance, advanced networking, flexibility, security, and scale, will help our customers meet their inference and training needs for AI workloads and new agentic applications."

The zettascale OCI Supercluster with AMD Instinct MI355X GPUs delivers a high-throughput, ultra-low latency RDMA cluster network architecture for up to 131,072 MI355X GPUs. AMD claims the MI355X provides almost three times the compute power and a 50 percent increase in high-bandwidth memory over its predecessor.

Performance and flexibility

Forrest Norrod, Executive Vice President and General Manager, Data Center Solutions Business Group, AMD, commented on the partnership: "AMD and Oracle have a shared history of providing customers with open solutions to accommodate high performance, efficiency, and greater system design flexibility. The latest generation of AMD Instinct GPUs and Pollara NICs on OCI will help support new use cases in inference, fine-tuning, and training, offering more choice to customers as AI adoption grows."

The Oracle platform aims to support customers running the largest language models and diverse AI workloads. OCI users leveraging the MI355X-powered shapes can expect significant performance increases of up to 2.8 times greater throughput, resulting in faster results, lower latency, and the capability to run larger models. AMD's Instinct MI355X provides customers with substantial memory and bandwidth enhancements, designed to enable both fast training and efficient inference for demanding AI applications. The new support for the FP4 format allows for cost-effective deployment of modern AI models, enhancing speed and reducing hardware requirements.
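To make the "larger models entirely in memory" claim concrete, the short Python sketch below estimates how many FP4 parameters could fit in the quoted 288 GB of HBM3 per GPU. The 0.5-bytes-per-parameter figure follows directly from the 4-bit format; the overhead fraction reserved for KV cache, activations, and runtime buffers is an illustrative assumption, not a figure from Oracle or AMD.

```python
# Rough estimate: how many 4-bit (FP4) weights fit in 288 GB of HBM3?
# The overhead fraction is an assumption for illustration only.

HBM_GB = 288                  # quoted per-GPU HBM3 capacity
BYTES_PER_PARAM_FP4 = 0.5     # 4 bits = 0.5 bytes per weight
OVERHEAD_FRACTION = 0.25      # assumed share kept for KV cache, activations, buffers

usable_bytes = HBM_GB * 1e9 * (1 - OVERHEAD_FRACTION)
max_params = usable_bytes / BYTES_PER_PARAM_FP4

print(f"~{max_params / 1e9:.0f}B FP4 parameters per GPU")   # ~432B with these assumptions
```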
The dense, liquid-cooled infrastructure supports 64 GPUs per rack, each operating at up to 1,400 watts, and is engineered to optimise training times and throughput while reducing latency. A powerful head node, equipped with an AMD Turin high-frequency CPU and up to 3 TB of system memory, is included to help users maximise GPU performance via efficient job orchestration and data processing.

Open-source and network advances

AMD emphasises broad compatibility and customer flexibility through the inclusion of its open-source ROCm stack. This allows customers to use flexible architectures and reuse existing code without vendor lock-in, with ROCm encompassing popular programming models, tools, compilers, libraries, and runtimes for AI and high-performance computing development on AMD hardware (a minimal sketch of this code reuse follows at the end of this article).

Network infrastructure for the new supercluster will feature AMD's Pollara AI NICs, which provide advanced RDMA over Converged Ethernet (RoCE) features, programmable congestion control, and support for open standards from the Ultra Ethernet Consortium to facilitate low-latency, high-performance connectivity among large numbers of GPUs.

The Oracle-AMD collaboration is expected to provide organisations with enhanced capacity to run complex AI models, speed up inference times, and scale up production-grade AI workloads economically and efficiently.
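As an illustration of the "reuse existing code without vendor lock-in" point above: a ROCm build of PyTorch exposes AMD GPUs through the same torch.cuda device API that CUDA-targeted scripts already use, so existing code typically runs unchanged. The sketch below only detects which backend is present; it assumes a PyTorch installation built for ROCm and is not an Oracle- or OCI-specific example.

```python
# Minimal check that an existing PyTorch script can target an AMD GPU via ROCm.
# Assumes a ROCm build of PyTorch; not specific to OCI or the MI355X shapes.
import torch

if torch.cuda.is_available():
    # On ROCm builds, torch.version.hip is set and the usual "cuda" device
    # API maps onto AMD GPUs, so CUDA-targeted code runs as-is.
    backend = "ROCm/HIP" if getattr(torch.version, "hip", None) else "CUDA"
    device = torch.device("cuda")
    print(f"Backend: {backend}, device: {torch.cuda.get_device_name(0)}")
else:
    device = torch.device("cpu")
    print("No GPU backend found; falling back to CPU.")

# Unmodified model/tensor code then just moves to the detected device:
x = torch.randn(1024, 1024, device=device)
y = x @ x.T
print(y.shape)
```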

AMD Acquires AI Software Startup Brium

Channel Post MEA

05-06-2025

AMD has announced the acquisition of Brium, a company specializing in compiler technology and AI software. The move is intended to expand AMD's ability to deliver optimized AI solutions and foster the development of an open AI software ecosystem.

Anush Elangovan, Corporate VP of Software Development at AMD, says Brium adds advanced software capabilities that will enhance the efficiency and flexibility of AMD's AI platform. Brium's work in compiler technology, model execution frameworks, and end-to-end AI inference optimization is considered critical to these improvements.

The acquisition of Brium is the latest in a series of targeted investments, following the acquisitions of Silo AI and Mipsology, that together advance AMD's ability to support the open-source software ecosystem and deliver optimized performance on AMD hardware. What makes Brium unique is its ability to optimize the entire inference stack before the model reaches the hardware. This reduces dependence on specific hardware configurations and enables faster, more efficient out-of-the-box AI performance across a wide range of deployments.

The Brium team is expected to contribute to key projects such as OpenAI Triton, WAVE DSL, and SHARK/IREE. This work is essential to enabling faster, more efficient execution of AI models on AMD Instinct GPUs and also includes a focus on new precision formats such as MX FP4 and FP6. The acquisition strengthens this vision by bringing in deep expertise to accelerate the open-source tools that power AMD's AI software stack. Brium's cross-domain capabilities, with expertise in libraries, compilers, build systems, distributed systems, and machine learning techniques, should result in more integrated solutions for developers and customers.

As AI becomes increasingly central to industries such as healthcare, life sciences, finance, and manufacturing, AMD says it is committed to meeting the specialized needs of customers in these verticals, and that the acquisition brings exactly the kind of expertise needed to advance this mission. Brium's successful porting of the Deep Graph Library (DGL) to the AMD Instinct platform is a clear example of how it enables cutting-edge AI applications in the health sciences.

AMD says the acquisition is another step forward in its mission to empower developers with an open, scalable AI software platform that unlocks the full potential of their hardware. The company will continue to invest in people, tools, and technologies that strengthen its ability to support the AI developer community and enable breakthroughs across industries.
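For context on the OpenAI Triton project mentioned above: Triton lets developers write GPU kernels in Python that its compiler lowers to vendor-specific code, including AMD GPUs via the ROCm backend. The sketch below is a generic, minimal Triton vector-add kernel showing the style of code this compiler stack targets; it is not Brium's or AMD's code, and it assumes a working Triton installation with a supported GPU.

```python
# Minimal Triton kernel: elementwise vector addition.
# Generic illustration of the Python-level kernels the Triton compiler lowers
# to GPU code (NVIDIA, or AMD via ROCm); not Brium- or AMD-specific.
import torch
import triton
import triton.language as tl


@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)                       # which block this program handles
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements                       # guard the ragged final block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)


def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = out.numel()
    grid = lambda meta: (triton.cdiv(n, meta["BLOCK_SIZE"]),)
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out


if __name__ == "__main__":
    a = torch.randn(4096, device="cuda")   # "cuda" maps to the AMD GPU on ROCm builds
    b = torch.randn(4096, device="cuda")
    assert torch.allclose(add(a, b), a + b)
```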
