Ian Buck built Nvidia's secret weapon. He may spend the rest of his career defending it.

Ian Buck, Nvidia's vice president of hyperscale and high-performance computing, felt a twinge of nostalgia during CEO Jensen Huang's keynote presentation at the company's GTC conference in March.
Huang spent nearly eight minutes on a single slide listing software products.
"This slide is genuinely my favorite," said Huang onstage in front of 17,000 people. "A long time ago — 20 years ago — this slide was all we had," the CEO continued.
Buck said he was instantly transported back to 2004, when he started building the company's breakthrough software, Compute Unified Device Architecture, or CUDA.
Back then, the team had two people and two libraries. Today, CUDA supports more than 900 libraries and artificial intelligence models. Each library corresponds to an industry using Nvidia technology.
"It is a passionate and very personal slide for me," Buck told reporters the next day.
The 48-year-old's contribution to Nvidia is hard-coded into the company's history. But his influence is just beginning. CUDA is the platform that propelled Nvidia to, at one point, a 90% share of the AI computing market. CUDA is how the company defends its moat.
One architecture to rule them all
Dion Harris, Nvidia's senior director of high-performance computing and AI factory solutions, sometimes forgets that he's in the room with the Dr. Ian Buck. Then it hits him that his boss, and friend, is a computing legend.
Buck has focused on graphics since his undergrad days at Princeton in the late 1990s — a particularly punishing field within computer science with no obvious connection to AI at the time.
"Computer graphics was such a dweebie field," said Stephen Witt, the author of " The Thinking Machine," which details Nvidia's rise from obscurity to the most valuable company in the world.
"There was a stigma to working in computer graphics — you were maybe some kind of man-child if this was your focus," Witt said.
While getting his Ph.D. at Stanford, Buck connected multiple graphics processing units with the aim of stretching them to their limits. He had interned at Nvidia before pursuing his Ph.D., so he was familiar with the GPU. Initially, he used it for graphics like everyone else.
Buck has said that he and his cohort would use the chips to play video games such as "Quake" and "Doom," but eventually, he started asking himself what else his gaming setup could do.
He became fixated on proving that you could use GPUs for anything and everything. He received funding from Nvidia and the Defense Advanced Research Projects Agency, among others, to develop tools to turn a GPU into a general-purpose supercomputing machine. When Nvidia saw Brook, Buck's attempt at a programming language that applied the power of GPUs beyond graphics, the company hired him.
He wasn't alone. John Nickolls, a hardware expert and then director of architecture for GPU computing, was also instrumental in building CUDA. Buck might have been forever paired with Nickolls had the latter not died of cancer in 2011.
"Both Nickolls and Buck had this obsession with making computers go faster in the way that a Formula 1 mechanic would have an obsession with making the race car go faster," Witt told BI. (The author said Huang expressed frustration that Nickolls doesn't get the recognition he deserves since his passing.)
Buck, Nickolls, and a small team of experts built a framework that allowed developers to use an existing coding language, C, to harness the GPU's ability to run immense numbers of calculations simultaneously rather than one at a time, and to apply that power to any field. The result was CUDA, a vehicle to bring parallel computing to the masses.
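To give a sense of what that looks like in practice, here is a minimal, illustrative sketch, not drawn from Nvidia's codebase: ordinary C extended with a CUDA kernel launch, so that a simple element-wise addition runs across thousands of GPU threads at once instead of looping through one element at a time.

    // Illustrative only: a textbook-style CUDA program, not Nvidia's original code.
    #include <cstdio>
    #include <cuda_runtime.h>

    // __global__ marks a function that runs on the GPU; each thread adds one element.
    __global__ void add(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;                     // about a million elements
        float *a, *b, *c;
        cudaMallocManaged(&a, n * sizeof(float));  // unified memory keeps the sketch short
        cudaMallocManaged(&b, n * sizeof(float));
        cudaMallocManaged(&c, n * sizeof(float));
        for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

        // Launch enough 256-thread blocks to cover all n elements in parallel.
        add<<<(n + 255) / 256, 256>>>(a, b, c, n);
        cudaDeviceSynchronize();

        printf("c[0] = %.1f\n", c[0]);             // expect 3.0
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }

Compiled with Nvidia's nvcc compiler, the program reads like C; the parallelism comes from the runtime spreading the kernel across the GPU's cores.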
The rise of CUDA as an essential element of the AI world wasn't inevitable. Huang insisted on making every chip compatible with the software even though hardly anyone was using it, despite it being free. In fact, Nvidia lost millions of dollars on CUDA for more than a decade.
The rest is lore. When ChatGPT launched, Nvidia was already powering the AI computing revolution that is now the focus of $7 trillion in infrastructure spending, much of which eventually goes to Nvidia.
King of the nerds
Buck's smarts do have limits. He joked to an eager audience at GTC that quantum chromodynamics, a field of particle physics, just won't stick in his brain. But he thrives in Nvidia's notoriously rigorous environment.
The Santa Clara, California, company has an intense culture that eschews one-on-one meetings and airs missteps and disagreements in public. It might sound terrifying, but for those with the brains to keep up, the directness and the rigor are ideal.
For this report, Business Insider spoke with four people who have worked directly with Buck at Stanford, Nvidia, or both.
Those who know Buck personally describe him as gregarious and easygoing but capable of intensity when goals are on the line. He's focused on results rather than theory.
In his public remarks, at panels and in interviews on behalf of Nvidia, Buck pivots from rapid, technical lines of thought to slower, simpler descriptions in layman's terms.
At a GTC press conference, he detailed the latest development in convolutional neural networks and then described proteins as "complicated 3D squigglies in your body." He describes the tiny, sensitive interconnects between parts of an Nvidia chipset like the Geek Squad explaining the back of a TV from memory — it's all in his head.
Harris said storytelling ability is particularly important in the upper ranks at Nvidia. Because the company spent years with a promising technology waiting for a market, Huang still sees being early as a business strategy. He has branded it "going after zero-billion-dollar markets." The potential of AI, "AI factories," and the infrastructure spending that goes with them is a story Nvidia can't stop telling.
Buck's shilling skills have improved over the years. But even in 15-year-old footage, he's most animated when explaining the inner workings of Nvidia's technology.
"A lot of developers are amazing, but they say, 'Leave me alone. I'm going to write my code in the mountains somewhere," Paul Bloch, president of Nvidia partner DDN, told BI. Nvidia's leaders aren't like that, he said. Much of Nvidia's upper echelon may have the skills to match the reclusive set, but they don't choose between showmanship and code.
Ian Buck's next act
Ian Buck's work at Nvidia began with a simple mandate: make the GPU work for every industry. That mission is very nearly accomplished. There are hundreds of CUDA libraries targeting industries from weather forecasting to medical imaging.
"The libraries are really there to connect the dots so that every business doesn't have to learn CUDA," Harris said.
CUDA draws its strength from millions of developers, amassed over decades, who constantly innovate and improve on the platform. So far, no one has caught up to Nvidia, but the competition is coming faster than ever.
Even as Buck spoke at GTC, developers around the world were trying to break CUDA's dominance. On the first night of the conference, a cast of competitors convened by TensorWave, an AI cloud company exclusively using chips from Nvidia's only US rival, AMD, held an event titled "Beyond CUDA."
TensorWave cofounder Jeff Tatarchuck said it included more than "24 presenters talking about what they're doing to overcome the CUDA moat."
AMD, which also presented at the event, is making an explicit effort to catch up on the software side of AI computing, but industry analysts say it isn't there yet.
Harris told BI that Buck's team spends a lot of time speaking with researchers to stay on top. That's always been true, but the nature of the work has changed. A decade ago, Buck was convincing researchers to apply accelerated computing to their problems; now the tables have turned.
"One of the most challenging parts of my job often is to try to predict the future, but AI is always surprising us," Buck said at a Bank of America conference this month. Understanding what the smartest minds in AI need from Nvidia is paramount.
Many saw DeepSeek, the company that spooked markets with its ostensibly cheap reasoning AI model, as a threat to Nvidia since the team bypassed CUDA to squeeze out the performance gains that allowed it to achieve competitive results with less compute.
But Buck recently described the Chinese team as "one of the best CUDA developers out there."
AI is entering a new phase as more companies commercialize their tools. Even with Nvidia's enormous head start, in part built by Buck, the pace is intense.
For example, one of the products Nvidia debuted at GTC, Dynamo, is an inference computing platform designed to adapt to the rise of reasoning models. Nvidia launched Dynamo a couple of months after the DeepSeek earthquake, but some users had already built their own versions. That's how fast AI is evolving.
"Inference is really hard. It's wickedly hard," said Buck at GTC.
Talent is also a big part of how Nvidia is going to try to maintain its dominance, and another place where, Witt says, Buck has value beyond his technical skills. He's not exactly a household name, even at Stanford.
But for a certain type of developer, the ones who can play in Buck's extremely complex sandbox, he is a draw.
"Everyone's trying to hire these guys, especially after DeepSeek," said Witt. "This was not a sexy domain in computer science for a long time. Now it is white hot."
"Now these guys are going for big money. So, I think Ian Buck has to be out there advertising what his group is doing," Witt continued.
Nvidia declined to make Ian Buck available for an interview with Business Insider and declined to comment on this report.
Who's Nvidia's next CEO?
Buck is more than a decade younger than Huang, who is 62, and doesn't plan on going anywhere anytime soon. Yet, questions about succession are inevitable.
Lip-Bu Tan, a semiconductor industry legend who recently became Intel's CEO, told BI that Buck is one of a handful of true collaborators for Huang, who has more than 60 direct reports.
"Jensen has three right-hand men," Tan told BI before he took over at Intel. Buck is one. Vice President of GPU Engineering Jonah Alben is another. And CFO Colette Kress, though clearly not a man, is the third, Tan said.
Jay Puri, Nvidia's executive vice president for worldwide field operations, and Sanja Fidler, vice president of AI research, are also names that come up in such conversations.
"I don't think Ian spends a lot of time doing business strategy. He's more like the world's best mechanic," Witt said.


