
AMD Stakes Future on Open AI Infrastructure
Advanced Micro Devices set out ambitious expectations for its artificial intelligence business at its Advancing AI event in San Jose on 12 June 2025, emphasising system-level openness and ecosystem collaboration. CEO Dr Lisa Su unveiled the Instinct MI350 accelerator series, previewed the Helios rack-scale AI server due in 2026, and detailed upgrades to AMD's software stack intended to challenge the sector's incumbent leaders.
Top-tier AI customers including OpenAI, Meta, Microsoft, Oracle, xAI and Crusoe pledged significant investments. OpenAI CEO Sam Altman joined Su on stage, confirming the firm's adoption of MI400-class chips and its collaboration on the MI450 design. Crusoe disclosed a $400 million commitment to the platform.
The MI350 series, comprising the MI350X and MI355X, is now shipping to hyperscalers and delivers a sharp generational performance leap: roughly four times the compute of prior-generation chips, 288 GB of HBM3e memory, and up to 40% better token-per-dollar performance than Nvidia's B200 models. Initial deployments are expected in Q3 2025 in both air- and liquid-cooled configurations, with racks supporting up to 128 GPUs and producing some 2.6 exaflops of FP4 compute.
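As a quick sanity check of that rack-level figure, the arithmetic below simply divides the quoted 2.6 exaflops across the quoted 128 GPUs; treating the number as aggregate peak FP4 throughput for a fully populated rack is our assumption, not a published per-GPU specification.

```python
# Back-of-the-envelope check of the rack-level figure quoted above.
# Assumption (ours, not AMD's spec sheet): 2.6 exaflops is the aggregate
# peak FP4 throughput of a fully populated 128-GPU rack.

RACK_FP4_EXAFLOPS = 2.6   # quoted rack-scale FP4 compute
GPUS_PER_RACK = 128       # quoted maximum GPUs per rack

per_gpu_petaflops = RACK_FP4_EXAFLOPS * 1000 / GPUS_PER_RACK
print(f"Implied peak FP4 per GPU: ~{per_gpu_petaflops:.1f} PFLOPS")
# -> Implied peak FP4 per GPU: ~20.3 PFLOPS
```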
Looking further ahead, AMD previewed 'Helios', a fully integrated rack combining MI400 GPUs, Zen 6-based EPYC 'Venice' CPUs and Pensando Vulcano NICs. Each rack houses 72 GPUs and offers up to 50% more HBM memory bandwidth, alongside system-scale networking improvements over current architectures. Helios is slated for launch in 2026, with a more advanced MI500-based variant expected around 2027.
Dr Su underscored openness as AMD's competitive lever. Unlike Nvidia's proprietary NVLink interface, AMD's designs will adhere to open industry standards—extending availability of networking architectures to rivals such as Intel. Su argued this approach would accelerate innovation, citing historical parallels from open Linux and Android ecosystems.
On the software front, the ROCm 7 stack is being upgraded with enterprise AI and MLOps features, including integrated tools from VMware, Red Hat, Canonical and others. ROCm Enterprise AI, launching in Q3 or early Q4, aims to match or exceed Nvidia's CUDA-based offerings in usability and integration.
Strategic acquisitions underpin AMD's infrastructure ambitions. The purchase of ZT Systems in March 2025 brought over 1,000 engineers to accelerate rack-scale system builds. Meanwhile, AMD has onboarded engineering talent from Untether AI and Lamini to enrich its AI software capabilities.
Market reaction was muted; AMD shares fell roughly 1–2% on the event day, with analysts noting that while the announcements are ambitious, immediate market share gains are uncertain.
Financially, AMD projects AI data centre revenues growing from over $5 billion in 2024 to tens of billions annually, anticipating the AI chip market reaching around $500 billion by 2028.
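For rough context, the sketch below works out the compound annual growth rate implied by one possible reading of that projection; the $20 billion target for 2028 is a hypothetical placeholder for "tens of billions", not AMD guidance.

```python
# Illustrative only: the article gives "over $5 billion" for 2024 and
# "tens of billions annually" without a year, so the 2028 target below
# is a hypothetical placeholder, not a figure from AMD.

base_revenue_bn = 5.0     # 2024 AI data centre revenue (lower bound quoted)
target_revenue_bn = 20.0  # hypothetical "tens of billions" figure for 2028
years = 2028 - 2024

cagr = (target_revenue_bn / base_revenue_bn) ** (1 / years) - 1
print(f"Implied compound annual growth rate: ~{cagr:.0%}")
# -> Implied compound annual growth rate: ~41%
```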
These developments position AMD as a serious contender in the AI infrastructure arena. Its push for rack‑scale systems and open‑standard platforms aligns with the growing trend toward modular, interoperable computing. Competition with Nvidia will intensify through 2026 and 2027, centred on performance per dollar in large‑scale deployments.
Related Articles


Tahawul Tech · a day ago
OpenAI enters into lucrative deal with U.S. government
OpenAI recently secured a $200 million contract with the US Department of Defence (DoD) to deploy AI tools across administrative functions, part of an initiative to integrate AI technology within government operations.

In a statement, the company explained it is launching the OpenAI for Government initiative to provide AI expertise to public servants and support the US government in deploying the technology for the public good. The overriding aim of its work is to enhance the capabilities of government workers, helping to cut down on red tape and paperwork.

Its first partnership under the new initiative will be a pilot programme with the DoD through its Chief Digital and Artificial Intelligence Office. It will help identify and prototype how frontier AI could transform administrative operations, including improving how US military members and their families access healthcare, streamlining programme and data acquisition, and supporting proactive cyber defence.

OpenAI added that the initiative consolidates its existing government projects under one umbrella, including the development of a version of ChatGPT for government workers, along with its work with space agency NASA and the Air Force Research Laboratory. Through the initiative, OpenAI will offer US federal, state and local governments access to its most advanced ChatGPT models, custom set-ups for national security, hands-on support and insight into future AI advancements.

OpenAI added it is just getting started and is looking forward to helping US government leaders 'harness AI to better serve the public'.

Source: Mobile World Live
Image Credit: Stock Image/OpenAI


Arabian Post · 2 days ago
AI Copyright Quietly Redrawing Legal Lines
Twelve consolidated copyright suits filed by US authors and news outlets against OpenAI and Microsoft have landed in the Southern District of New York, elevating the question of whether the extent of human input in AI training crosses the threshold of lawful fair use. The judicial panel cited shared legal and technical claims involving unauthorised use of copyrighted material, notably books and newspapers, as justifying centralised legal proceedings.

The US Copyright Office added its authoritative voice in May, questioning whether AI training on copyrighted texts can be deemed fair use, particularly in commercial contexts. The Office clarified that while transformative use may be permissible in research, mass replication or competition with original works likely exceeds established boundaries. Its report highlighted that the crux lies in purpose, source, market impact and guardrails on output, variables that may render AI models liable under copyright law.

A pivotal case involving Thomson Reuters and Ross Intelligence offers early legal clarity: a court ruled that Ross improperly used Westlaw content, rejecting its fair use defence. The judgement centred on the need for AI systems to 'add something new' and avoid copying wholesale, reinforcing the rights of content owners. This ruling is being cited alongside the US Copyright Office's latest guidance as foundational in shaping how courts may assess generative AI.

Legal practitioners are now navigating uncharted terrain. Lawyers such as Brenda Sharton from Dechert and Andy Gass of Latham & Watkins are at the cutting edge in helping judges understand core AI mechanics, from training data ingestion to output generation, while balancing copyright protection and technological progress. Their work emphasises that this wave of litigation may not be resolvable in a single sweeping judgment, but will evolve incrementally.

At the heart of many discussions lies the condition for copyright protection: human authorship. The US Copyright Office reaffirmed in a February report that merely issuing a prompt does not satisfy the originality requirement. It stated that current systems offer insufficient control for human authors to claim sole credit, and that copyright should be considered case-by-case, grounded in Feist's minimum creativity standard. Critics argue this stance lacks clarity, as no clear threshold for the level of human input has been defined.

Certain jurisdictions are taking diverse approaches. China's Beijing Internet Court recently ruled in Li v Liu that an AI-generated image was copyrightable because the plaintiff had provided substantial prompts and adjustments (around 30 prompts and over 120 negative prompts), demonstrating skill, judgment and aesthetic choice. In the United Kingdom, the Copyright, Designs and Patents Act 1988 attributes authorship to the person who undertakes 'arrangements necessary' for a computer-generated work, hinting that both programmers and users may qualify as authors depending on context.

In contrast, India's legal framework remains unsettled. Courts have emphasised human creativity in ruling on computer-generated works, as seen in Rupendra Kashyap v Jiwan Publishing and Navigators Logistics Ltd v Kashif Qureshi. ANI, India's largest news agency, has brought forward a high-profile case against OpenAI, with hearings held on 19 November 2024 and 28 January 2025. The Delhi High Court has appointed an amicus curiae to navigate this untested area of copyright, with Indian lawyers emphasising that the outcome could shape licensing practices and data-mining norms. India reserves copyright protection for creations exhibiting a 'minimal degree of creativity' under Supreme Court rulings such as Eastern Book Co v Modak. In February 2025, experts noted that determining whether AI training qualifies as fair dealing, or whether generative AI outputs amount to derivative works, will be pivotal. Currently, scraping content for AI training falls outside clear exemptions under Indian law, though the Delhi case could catalyse policy reform.

Amid these legal battles, signs point toward statutory intervention. In the US, the Generative AI Copyright Disclosure Act would require developers to notify the Copyright Office of copyrighted works used in training models at least 30 days before public release. While UK policymakers are consulting on a specialised code of practice, India lacks similar formal mechanisms.

The evolving legal framework confronts a fundamental philosophical and commercial dilemma: making space for generative AI's potential innovation without undermining creators' rights. AI developers contend that mass text and data mining fuels advanced models, while authors and journalists argue such training must be controlled to safeguard original expression. Courts appear poised to strike a balance by scrutinising the nuance of human input, purpose and impact, rather than by enacting sweeping exclusions.


Tahawul Tech · 2 days ago
SandboxAQ improves drug discovery with data creation
SandboxAQ, an artificial intelligence startup, recently released a wealth of data in the hope it will speed up the discovery of new medical treatments. The goal is to help scientists predict whether a drug will bind to its target in the human body. But while the data is backed by real-world scientific experiments, it did not come from a lab.

Instead, SandboxAQ, which has raised nearly $1 billion in venture capital, generated the data using Nvidia's chips and will feed it back into AI models that it hopes scientists can use to rapidly predict whether a small-molecule pharmaceutical will bind to the protein that researchers are targeting, a key question that must be answered before a drug candidate can move forward. For example, if a drug is meant to inhibit a biological process such as the progression of a disease, scientists can use the tool to predict whether the drug molecule is likely to bind to the proteins involved in that process.

The approach belongs to an emerging field that combines traditional scientific computing techniques with advances in AI. In many fields, scientists have long had equations that can precisely predict how atoms combine into molecules. But even for relatively small three-dimensional pharmaceutical molecules, the potential combinations become far too vast to calculate manually, even with today's fastest computers.

So SandboxAQ's approach was to use existing experimental data to calculate about 5.2 million new, 'synthetic' three-dimensional molecules: molecules that haven't been observed in the real world, but were calculated with equations based on real-world data. That synthetic data, which SandboxAQ is releasing publicly, can be used to train AI models that predict whether a new drug molecule is likely to stick to the protein researchers are targeting in a fraction of the time it would take to calculate manually, while retaining accuracy. SandboxAQ will charge money for its own AI models developed with the data, which it hopes will deliver results that rival lab experiments, but virtually.

'This is a long-standing problem in biology that we've all, as an industry, been trying to solve for', said Nadia Harhen, general manager of AI simulation at SandboxAQ. 'All of these computationally generated structures are tagged to a ground-truth experimental data, and so when you pick this data set and you train models, you can actually use the synthetic data in a way that's never been done before'.

Source: Reuters
Image Credit: Stock Image
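For readers curious how the workflow described above, training models on computed structures tied to experimental ground truth, might look in code, the sketch below fits a toy binding classifier on invented features. It is a minimal illustration of the general pattern under our own assumptions, not SandboxAQ's actual pipeline; every feature, label and parameter in it is fabricated for demonstration.

```python
# Hypothetical sketch of the general pattern described in the article:
# train a model on computed (synthetic) structures labelled with
# experimentally grounded binding outcomes, then score new candidates.
# This is NOT SandboxAQ's pipeline; all data here is randomly generated.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for descriptors derived from computed 3D structures
# (e.g. interaction energies, pocket geometry, physicochemical terms).
n_samples, n_features = 2000, 32
X = rng.normal(size=(n_samples, n_features))

# Stand-in for binary "binds / does not bind" labels tied to experiment.
y = (X[:, :4].sum(axis=1) + rng.normal(scale=0.5, size=n_samples) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

print(f"Held-out accuracy on the toy data: {model.score(X_test, y_test):.2f}")
# A real pipeline would replace the random features with structure-derived
# descriptors and validate predictions against wet-lab measurements.
```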