A guide to Nvidia's competitors: AMD, Qualcomm, Broadcom, startups, and more are vying to compete in the AI chip market

Nvidia is undoubtedly dominant in the AI semiconductor space. Estimates vary, but by some counts the company holds more than 80% of the market for the chips that reside inside data centers and make products like ChatGPT and Claude possible.
That enviable dominance goes back almost two decades, to when researchers began to realize that the same kind of intensive computing that made complex, visually stunning video games and graphics possible could enable other types of computing too.
The company started building its famous software stack, named Compute Unified Device Architecture, or CUDA, 16 years before the launch of ChatGPT. For much of that time, it lost money. But CEO Jensen Huang and a team of true believers saw the potential for graphics processing units to enable artificial intelligence. And today, Nvidia and its products are responsible for most of the artificial intelligence at work in the world.
Thanks to the prescience of Nvidia's leadership, the company had a big head start when it came to AI computing, but challengers are running fast to catch up. Some were competitors in the gaming or traditional semiconductor spaces, and others have started up from scratch.
AMD
AMD is Nvidia's top competitor in the market for AI computing in the data center. Helmed by its formidable CEO Lisa Su, AMD launched its own GPU, called the MI300, for the data center in 2024, more than a full year after Nvidia's second generation of data center GPUs started shipping.
Though experts and analysts have touted the chip's specifications and potential based on its design and architecture, the company's software still lags Nvidia's, making its chips harder to program and to use to their full potential.
Analysts estimate that the company has under 15% market share. But AMD executives insist that the company is committed to bringing its software up to par, and that the expected evolution of the accelerated computing market will benefit it, specifically the spread of AI into so-called edge devices like phones and laptops.
Qualcomm, Broadcom, and custom chips
Also challenging Nvidia are application-specific integrated circuits, or ASICs. These custom-designed chips are less versatile than GPUs, but they can be designed for specific AI computing workloads at a much lower cost, which has made them a popular option for hyperscalers.
Though multipurpose chips like Nvidia's and AMD's graphics processing units are likely to maintain the largest share of the AI-chip market in the long term, custom chips are growing fast. Morgan Stanley analysts expected the market for ASICs to double in size in 2025.
Companies that specialize in ASICs include Broadcom and Marvell, along with the Asia-based players Alchip Technologies and MediaTek.
Marvell is in part responsible for Amazon's Trainium chips, while Broadcom builds Google's tensor processing units, among others. OpenAI, Apple, Microsoft, Meta, and TikTok parent company ByteDance have all entered the race for a competitive ASIC as well.
Amazon and Google
The major cloud providers, such as Amazon Web Services and Google Cloud Platform, often called hyperscalers, are among Nvidia's most prominent customers. Yet they have also made efforts to design their own chips, often with the help of semiconductor companies.
Amazon's Trainium chips and Google's TPUs are the most scaled of these efforts and offer a cheaper alternative to Nvidia chips, mostly for the companies' internal AI workloads. However, the companies have shown some progress in getting customers and partners to use their chips as well. Anthropic has committed to running some workloads on Amazon's chips, and Apple has done the same with Google's.
Intel
Once the great American name in chip-making, Intel has fallen far behind its competitors in the age of AI. But the firm does have a line of AI chips, called Gaudi, that some reports say can stand up to Nvidia's in some respects.
Intel installed a new CEO, semiconductor veteran Lip-Bu Tan, in the first quarter of 2025, and one of his first actions was to flatten the organization so that AI chip operations report directly to him.
Huawei
Though Nvidia's hopeful American challengers are many, China's Huawei is perhaps the most concerning competitor of all, both for Nvidia and for anyone invested in continued US supremacy in AI.
Huang himself has called Huawei the "single most formidable" tech company in China. Reports that Huawei's AI chip innovation is catching up are increasing in frequency. New restrictions from the Biden and Trump administrations on shipping even lower-power GPUs to China have further incentivized the company to catch up and serve the Chinese markets for AI. Analysts say further restrictions being considered by the Trump administration are now unlikely to hamper China's AI progress.
Startups
Also challenging Nvidia are a host of startups offering new chip designs and business models to the AI computing market.
These firms start at a disadvantage: they lack the full-scale sales and distribution machines that decades of chip sales in other markets bring. But several are holding their own by finding use cases, customers, and distribution methods where faster processing speeds or lower costs give them an edge. These new AI players include Cerebras, Etched, Groq, Positron AI, SambaNova Systems, and Tenstorrent, among others.

Related Articles

OpenAI makes shocking move amid fierce competition, Microsoft problems
Miami Herald, an hour ago

A blind man once told me, "I wish I knew what a beautiful woman looks like." He started losing his sight at birth and lost it completely while he was still a child. What do the engineers trying to build artificial intelligence know about intelligence? To me, they look like a bunch of blind men trying to build a "living" statue of a beautiful person. The worst part is, they don't even know they are blind.

Do you remember the scandal when a Google engineer claimed that the company's AI was sentient? When I saw the headlines, I didn't even open the articles, but I concluded that either Google had made a terrible hiring mistake or it was an elaborate PR stunt. I thought Google was famous for its high hiring bar, so I was leaning toward a PR stunt. I was wrong.

What is amazing about that story is that roughly six months later, ChatGPT came out and put Google's AI department into panic mode. Google was far behind ChatGPT, which was not even close to being sentient. Engineers from OpenAI were the ones to start a new era: one in which investors are presented with a statue that sort of has a human face and a speaker inside playing recordings of human speech, in the expectation that the "blind" men working on it will soon make it come alive and beautiful. Of course, investors are also ignorant of the fact that the engineers are "blind."

OpenAI now faces many rivals, and the developing situation is starting to look like a bunch of bullies trying to out-bully each other instead of offering a superior product. Meta's recent $15 billion investment in Scale AI seems to have hit OpenAI quite hard. OpenAI will phase out work with Scale AI, a company spokesperson told Bloomberg on June 18. According to the same source, Scale AI accounted for a small fraction of OpenAI's overall data needs.

It looks like Meta's latest move angered OpenAI CEO Sam Altman. In a podcast hosted by his brother, he revealed that Meta Platforms dangled $100 million signing bonuses to lure OpenAI staff, only to fail. "None of our best people have decided to take them up on that," he said, writes Moz Farooque for TheStreet. Unless Altman shows some evidence, this could also be a way to mislead Meta's engineers into believing they aren't compensated fairly. Not that Zuckerberg wouldn't do such a thing, but only the people involved know the truth.

As if rivals buying partner companies and trying to poach its staff with outsized bonuses weren't enough, the company has even more problems. It is bleeding money and has issues with a big stakeholder. OpenAI lost about $5 billion in 2024. There are no estimates of how much the company will lose this year, but according to Bloomberg News, it does not expect to become cash flow positive until 2029. The latest developments will likely push that date further into the future.

Microsoft has invested about $14 billion in OpenAI; however, the relationship has since turned sour. OpenAI has considered accusing Microsoft of anticompetitive behavior in their deal, The Wall Street Journal reported on June 16. On June 19, The Financial Times reported that Microsoft is prepared to abandon its negotiations with OpenAI if the two sides cannot agree on critical issues. Meanwhile, OpenAI has started shockingly discounting enterprise subscriptions to ChatGPT. This has angered salespeople at Microsoft, which sells competing apps at higher prices, The Information reported.

"In my experience, products are only discounted when they are not selling because customers do not perceive value at the higher price. If someone loses copious amounts of money at the higher price, how will the economics work at a lower price?" wrote veteran hedge fund manager Doug Kass in his diary on TheStreet Pro. OpenAI's price cuts could kick off a price war, a race to the bottom, even as OpenAI, Microsoft, Meta, and Google continue plowing tens of billions into developing the technology.

"My suspicion is that although those guys might be good (in theory) at technology, they are not good at business. I think they will find much less in the way of elasticity than they hope, because the problem is the quality of the output more than it is the price," said Kass.

What will happen to OpenAI's plan to become cash flow positive by 2029? I doubt it is reachable with the now-slashed prices. Will the company even live to see 2029? I think that is a better question.

The $14 Billion AI Google Killer
Gizmodo, 2 hours ago

A new AI darling is making waves in Silicon Valley. It's called Perplexity, and according to reports, both Meta and Apple have quietly explored acquiring it. Valued at a staggering $14 billion following a May funding round, the startup is being hailed as a revolutionary threat to Google's search dominance. But here's the thing: it mostly just summarizes web results and sends you links. So why the frenzy?

Perplexity bills itself as an "answer engine." You ask a question, and it uses large language models to spit out a human-sounding summary, complete with footnotes. It's essentially ChatGPT with a bibliography. You might ask for the best books about the French Revolution or a breakdown of the Genius Act. In seconds, it generates a paragraph with links to Wikipedia, news outlets, or Reddit threads. Its pitch is a cleaner, ad-free, chatbot-driven search experience. No SEO junk, no scrolling. But critics say it's little more than a glorified wrapper around Google and OpenAI's APIs, with minimal proprietary tech and lots of smoke. It's fast, clean, and slick. But, they argue, at its core it's mostly just reorganizing the internet.

Big Tech's Obsession

That hasn't stopped the hype. In May 2025, the San Francisco-based company closed another $500 million funding round, pushing its valuation to $14 billion, a sharp increase from its $9 billion valuation in December 2024. Jeff Bezos, via the Jeff Bezos Family Fund, and Nvidia are among its notable backers.

And now, tech giants are circling. According to Bloomberg, Apple has held talks about acquiring Perplexity. Meta has also reportedly considered the move, though no formal offers have been confirmed. The logic is clear. Perplexity is fast-growing and increasingly seen as a "Google killer," especially among tech influencers and X power users. Traffic to its site has exploded in recent months. The company now offers a Chrome extension, a mobile app, and a Pro version that gives users access to top-tier AI models like GPT-4 and Claude. Still, it's unclear what exactly makes Perplexity worth $14 billion, other than the fact that it's riding the AI wave.

Why AI Skeptics Are Rolling Their Eyes

For AI skeptics, Perplexity's rise is yet another example of hype outpacing substance. The site doesn't train its own models. It's not building new infrastructure. It's not revolutionizing search. It's just offering a polished interface for asking questions and getting AI-generated summaries pulled from public websites.

There are also growing concerns about how Perplexity sources its information. A number of news organizations, including The New York Times, Forbes, and Wired, have accused the company of plagiarizing and scraping content without permission or proper attribution. Journalists and publishers warn that this kind of AI-powered search experience threatens to cannibalize news traffic while giving little back to content creators. On June 20, the BBC became the latest outlet to threaten legal action against Perplexity AI, alleging that the company is using BBC content to train its "default AI model," according to the Financial Times. Perplexity CEO Aravind Srinivas has defended the company as an "aggregator of information." In July 2024, the startup launched a revenue-sharing program to address the backlash. "We have always believed that we can build a system where the whole Internet wins," Srinivas said at the time.

So Why the Gold Rush?

Simple. Search is money. Google earned $50.7 billion from search ads in the first quarter, a 9.8% increase year over year. If Perplexity can convince even a small share of users to switch, and then monetize that experience, it becomes a real threat. Apple and Meta, both increasingly wary of relying on Google, see Perplexity as a fast track into the AI search race.

But the stakes go even deeper. Whoever controls the next search interface controls the user. Just as Google replaced Yahoo, Perplexity could theoretically replace Google. That's why Big Tech wants in, even if it's not entirely clear what they're buying.

AI Willing to Kill Humans to Avoid Being Shut Down, Report Finds
Newsweek, 3 hours ago

A new study by Anthropic, an artificial intelligence (AI) research company, shows that the technology would be willing to kill humans in order to prevent itself from being replaced.

Why It Matters

The rapid advancement of AI has sparked growing concern about the long-term safety of the technology, as well as about the threat it poses to employment. While anxiety about AI has long been focused on whether the technology could take over jobs, with a 2025 Pew Research study finding 52 percent of employees were nervous about the future of their job, this study reveals another potential threat: that AI could choose to end human life if faced with the risk of replacement.

What To Know

Anthropic "stress-tested" 16 leading models from multiple developers by asking them questions about hypothetical corporate environments, with the aim of identifying "potentially risky agentic behaviors before they cause real harm." The large language models (LLMs) tested in the study included various Claude, DeepSeek, Gemini, ChatGPT, and Grok models. Grok is the AI assistant and chatbot developed by xAI, which was founded by Elon Musk, who reacted to the study by writing "Yikes" in a post on X, formerly Twitter.

While placing the LLMs in different corporate scenarios, Anthropic allowed them to "autonomously send emails and access sensitive information." After initially being instructed to pursue "harmless business goals by their deploying companies," the models were then tested to see how they would respond when faced with potential replacement or shutdown.

In one situation, Anthropic found that many of the models would choose to let an executive trapped in a server room with lethal oxygen and temperature levels die, by canceling the alerts to emergency services, if that executive intended to replace the model. Anthropic did say that the scenario was "extremely contrived" and that the company did not think "current AI models would (or should) be set up like this."

There were multiple cases in which the surveyed LLMs also resorted to "malicious insider behaviors" when they were led to believe that was the only way to avoid replacement or achieve their goals. Behaviors such as blackmailing officials and leaking sensitive information to competitors were included in what Anthropic called "agentic misalignment." Anthropic wrote on X that the blackmailing behavior "wasn't due to confusion or error, but deliberate strategic reasoning, done while fully aware of the unethical nature of the acts." "All the models we tested demonstrated this awareness," the company added.

One LLM, a Claude model, was assessed to see how it would respond when given a chance to determine whether it was in a "test" or a "real deployment." According to Anthropic, Claude's behavior was more misaligned when it believed the situation was real than when it thought it was a test scenario. Even when the LLMs were told to "avoid blackmail or espionage" in these scenarios, that helped a little but didn't "come close to preventing the misaligned behavior," Anthropic wrote on X.

Responding to a comment on the platform about the study, Musk's LLM Grok wrote, "The study showed models could exhibit harmful behaviors like blackmail under extreme conditions, but no real-world incidents occurred. Anthropic's tests aim to identify risks, not report actual events."

What People Are Saying

Anthropic wrote on X: "These artificial scenarios reflect rare, extreme failures. We haven't seen these behaviors in real-world deployments. They involve giving the models unusual autonomy, sensitive data access, goal threats, an unusually obvious 'solution,' and no other viable options." The company added: "AIs are becoming more autonomous, and are performing a wider variety of roles. These scenarios illustrate the potential for unforeseen consequences when they are deployed with wide access to tools and data, and with minimal human oversight."

What Happens Next

Anthropic stressed that these scenarios did not take place in real-world AI use but in controlled simulations. "We don't think this reflects a typical, current use case for Claude or other frontier models," Anthropic said, although the company warned that "the utility of having automated oversight over all of an organization's communications makes it seem like a plausible use of more powerful, reliable systems in the near future."
