Latest news with #ComputeUnifiedDeviceArchitecture

Business Insider
13-06-2025
Ian Buck built Nvidia's secret weapon. He may spend the rest of his career defending it.
Ian Buck, Nvidia's vice president of hyperscale and high-performance computing, felt a twinge of nostalgia during CEO Jensen Huang's keynote presentation at the company's GTC conference in March. Huang spent nearly eight minutes on a single slide listing software products. "This slide is genuinely my favorite," said Huang onstage in front of 17,000 people. "A long time ago — 20 years ago — this slide was all we had," the CEO continued.

Buck said he was instantly transported back to 2004, when he started building the company's breakthrough software, called Compute Unified Device Architecture. Back then, the team had two people and two libraries. Today, CUDA supports more than 900 libraries and artificial intelligence models. Each library corresponds to an industry using Nvidia technology. "It is a passionate and very personal slide for me," Buck told reporters the next day.

The 48-year-old's contribution to Nvidia is hard-coded into the company's history. But his influence is just beginning. CUDA is the platform that propelled Nvidia to, at one point, a 90% share of the AI computing market. CUDA is how the company defends its moat.

One architecture to rule them all

Dion Harris, Nvidia's senior director of high-performance computing and AI factory solutions, sometimes forgets that he's in the room with the Dr. Ian Buck. Then it hits him that his boss, and friend, is a computing legend.

Since his undergraduate days at Princeton in the late 1990s, Buck has been focused on graphics — a particularly punishing field within computer science with no obvious connection to AI at the time. "Computer graphics was such a dweebie field," said Stephen Witt, the author of "The Thinking Machine," which details Nvidia's rise from obscurity to the most valuable company in the world. "There was a stigma to working in computer graphics — you were maybe some kind of man-child if this was your focus," Witt said.

While getting his Ph.D. at Stanford, Buck connected multiple graphics processing units with the aim of stretching them to their limits. He had interned at Nvidia before pursuing his Ph.D., so he was familiar with the GPU. Initially, he used it for graphics like everyone else. Buck has said that he and his cohort would use the chips to play video games such as "Quake" and "Doom," but eventually he started asking himself what else his gaming setup could do. He became fixated on proving that you could use GPUs for anything and everything. He received funding from Nvidia and the Defense Advanced Research Projects Agency, among others, to develop tools to turn a GPU into a general-purpose supercomputing machine.

When the company saw Brook, Buck's attempt at a programming language that applied the power of GPUs beyond graphics, it hired him. He wasn't alone. John Nickolls, a hardware expert and then director of architecture for GPU computing, was also instrumental in building CUDA. Buck might have been forever paired with Nickolls had the latter not died of cancer in 2011. "Both Nickolls and Buck had this obsession with making computers go faster in the way that a Formula 1 mechanic would have an obsession with making the race car go faster," Witt told BI. (The author said Huang has expressed frustration that Nickolls hasn't received the recognition he deserves since his death.)

Buck, Nickolls, and a small team of experts built a framework that allowed developers to use an existing coding language, C, to harness the GPU's ability to run immense calculations simultaneously rather than one at a time, and to apply it to any field.
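What that looked like in practice is easiest to see in code. The sketch below is a minimal, illustrative CUDA program in the spirit of that framework, not drawn from Nvidia's own libraries: ordinary C with a few extensions, in which each of roughly a million GPU threads handles one element of an array sum instead of a CPU stepping through the elements one at a time.

```cuda
#include <stdio.h>
#include <cuda_runtime.h>

// A CUDA kernel: plain C extended with the __global__ qualifier.
// Each GPU thread computes one element of the result in parallel,
// instead of a CPU looping over elements one at a time.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's index
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}

int main(void) {
    const int n = 1 << 20;  // about a million elements
    size_t bytes = n * sizeof(float);

    // Allocate and fill host (CPU) arrays.
    float *a = (float *)malloc(bytes);
    float *b = (float *)malloc(bytes);
    float *c = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

    // Allocate device (GPU) memory and copy the inputs over.
    float *dA, *dB, *dC;
    cudaMalloc((void **)&dA, bytes);
    cudaMalloc((void **)&dB, bytes);
    cudaMalloc((void **)&dC, bytes);
    cudaMemcpy(dA, a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, b, bytes, cudaMemcpyHostToDevice);

    // Launch enough threads to cover all n elements.
    int threadsPerBlock = 256;
    int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    vectorAdd<<<blocks, threadsPerBlock>>>(dA, dB, dC, n);

    // Copy the result back and spot-check it.
    cudaMemcpy(c, dC, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", c[0]);  // expect 3.0

    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    free(a); free(b); free(c);
    return 0;
}
```

The triple-chevron launch line is the CUDA extension that fans the loop body out across thousands of GPU threads at once; exposing that parallelism to ordinary C programmers was the point of the framework.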
The result was CUDA, a vehicle to bring parallel computing to the masses. The rise of CUDA as an essential element in the world of AI wasn't inevitable. Huang insisted on making every chip compatible with the software, though hardly anyone was using it, even though it was free. In fact, Nvidia lost millions of dollars for more than a decade because of CUDA. The rest is lore. When ChatGPT launched, Nvidia was already powering the AI computing revolution that is now the focus of $7 trillion in infrastructure spending, much of which eventually goes to Nvidia.

King of the nerds

Buck's smarts do have limits. He joked to an eager audience at GTC that quantum chromodynamics, a field of particle physics, just won't stick in his brain. But he thrives in Nvidia's notoriously rigorous environment. The Santa Clara, California, company has an intense culture that eschews one-on-one meetings and airs missteps and disagreements in public. It might sound terrifying, but for those with the brains to keep up, the directness and the rigor are ideal. For this report, Business Insider spoke with four people who have worked directly with Buck at Stanford, Nvidia, or both.

Those who know Buck personally describe him as gregarious and easygoing but capable of intensity when goals are on the line. He's focused on results rather than theory. In his public remarks, at panels and in interviews on behalf of Nvidia, Buck volleys from rapid, technical lines of thought to slower, simple descriptions in layman's terms. At a GTC press conference, he detailed the latest developments in convolutional neural networks and then described proteins as "complicated 3D squigglies in your body." He describes the tiny, sensitive interconnects between parts of an Nvidia chipset like the Geek Squad explaining the back of a TV from memory — it's all in his head.

Harris said storytelling ability is particularly important in the upper ranks at Nvidia. Since the company essentially spent years with a promising technology waiting for a market, Huang still sees being early as a business strategy. He has branded it "going after zero billion-dollar markets." The potential of AI, "AI factories," and the infrastructure spending that goes with them is a story Nvidia can't stop telling. Buck's shilling skills have improved over the years. But even in 15-year-old footage, he's most animated when explaining the inner workings of Nvidia's technology.

"A lot of developers are amazing, but they say, 'Leave me alone. I'm going to write my code in the mountains somewhere,'" Paul Bloch, president of Nvidia partner DDN, told BI. Nvidia's leaders aren't like that, he said. Much of Nvidia's upper echelon may have the skills to match the reclusive set, but they don't choose between showmanship and code.

Ian Buck's next act

Ian Buck's work at Nvidia began with a simple mandate: make the GPU work for every industry. That mission is very nearly accomplished. There are hundreds of CUDA libraries targeting industries from weather forecasting to medical imaging. "The libraries are really there to connect the dots so that every business doesn't have to learn CUDA," Harris said. CUDA draws its strength from millions of developers, amassed over decades, who constantly innovate and improve on the platform. So far, no one has caught Nvidia's heels, but the competition is coming faster than ever. Even as Buck spoke at GTC, developers across the world were trying to break through CUDA's dominance.
The first night of the conference, a cast of competitors convened by TensorWave, an AI cloud company exclusively using chips from Nvidia's chief US rival, AMD, held an event entitled "Beyond CUDA." TensorWave cofounder Jeff Tatarchuck said it included more than "24 presenters talking about what they're doing to overcome the CUDA moat." AMD, which also presented at the event, is making an explicit effort to catch up on the software side of AI computing, but industry analysts say it isn't there yet; a short sketch after this article illustrates why CUDA code doesn't simply run on rival hardware.

Harris told BI that Buck's team spends a lot of time speaking with researchers to stay on top. That's always been true, but the nature of the work has changed. A decade ago, Buck was convincing researchers to apply accelerated computing to their problems; now the tables have turned. "One of the most challenging parts of my job often is to try to predict the future, but AI is always surprising us," Buck said at a Bank of America conference this month. Understanding what the smartest minds in AI need from Nvidia is paramount.

Many saw DeepSeek, the company that spooked markets with its ostensibly cheap reasoning AI model, as a threat to Nvidia, since the team bypassed CUDA to squeeze out the performance gains that allowed it to achieve competitive results with less compute. But Buck recently described the Chinese team as "one of the best CUDA developers out there."

AI is entering a new phase as more companies commercialize their tools. Even with Nvidia's enormous head start, in part built by Buck, the pace is intense. For example, one of the products Nvidia debuted at GTC, Dynamo, is an inference computing platform designed to adapt to the rise of reasoning models. Nvidia launched Dynamo a couple of months after the DeepSeek earthquake, but some users had already built their own versions. That's how fast AI is evolving. "Inference is really hard. It's wickedly hard," said Buck at GTC.

Talent is also a big part of how Nvidia will try to maintain its dominance, and another place where, Witt says, Buck has value beyond his technical skills. He's not exactly a household name, even at Stanford. But for a certain type of developer, the ones who can play in Buck's extremely complex sandbox, he is a draw. "Everyone's trying to hire these guys, especially after DeepSeek," said Witt. "This was not a sexy domain in computer science for a long time. Now it is white hot." "Now these guys are going for big money. So, I think Ian Buck has to be out there advertising what his group is doing," Witt continued.

Nvidia declined to make Ian Buck available for an interview with Business Insider and declined to comment on this report.

Who's Nvidia's next CEO?

Buck is more than a decade younger than Huang, who is 62 and doesn't plan on going anywhere anytime soon. Yet questions about succession are inevitable. Lip-Bu Tan, a semiconductor industry legend who recently became Intel's CEO, told BI that Buck is one of a handful of true collaborators for Huang, who has more than 60 direct reports. "Jensen has three right-hand men," Tan told BI before he took over at Intel. Buck is one. Vice President of GPU Engineering Jonah Alben is another. And CFO Colette Kress, though clearly not a man, is the third, Tan said. Jay Puri, Nvidia's executive vice president for worldwide field operations, and Sanja Fidler, vice president of AI research, are also names that come up in such conversations. "I don't think Ian spends a lot of time doing business strategy. He's more like the world's best mechanic," Witt said.
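The sketch promised above gives a concrete sense of why the moat is hard to cross: even a trivial CUDA program is tied to Nvidia-specific runtime calls and launch syntax at every step, and the tuned libraries layered on top (cuBLAS, cuDNN, and the hundreds of CUDA libraries mentioned earlier) run only on Nvidia hardware. The example is illustrative only; the comments note the approximate HIP equivalents that AMD's porting tools, such as hipify, would substitute.

```cuda
#include <stdio.h>
#include <cuda_runtime.h>

// Doubling an array: the kernel logic is plain C, but everything around it
// is Nvidia-specific and must be translated to run on rival hardware.
__global__ void scale(float *x, float s, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= s;
}

int main(void) {
    const int n = 1024;
    float host[1024];
    for (int i = 0; i < n; i++) host[i] = (float)i;

    float *dev;
    cudaMalloc((void **)&dev, n * sizeof(float));   // HIP: hipMalloc
    cudaMemcpy(dev, host, n * sizeof(float),
               cudaMemcpyHostToDevice);             // HIP: hipMemcpy

    // CUDA's triple-chevron launch extension. HIP offers a close analogue,
    // but the code must still be ported and rebuilt with a different toolchain.
    scale<<<(n + 255) / 256, 256>>>(dev, 2.0f, n);

    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("host[3] = %f\n", host[3]);              // expect 6.0
    cudaFree(dev);                                  // HIP: hipFree
    return 0;
}
```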
Yahoo
04-06-2025
Here's the Salary You Could Earn Working for Tesla
Technology companies have long been popular for their high salaries and competitive benefits. And companies that expect to grow quickly need to pay well in order to hire the best and the brightest, in large numbers. So it's not surprising that Tesla pays comparably well for many positions. But exactly how much can you make working for the EV, solar and AI giant?

Engineers are critical to Tesla's business, as they design cars, trucks and solar panels, as well as AI and robotics. A senior software engineer earns an average salary of $171,638 at Tesla. A software engineer can expect to earn about $133,589. A generalist software engineer in AI inference would need to be proficient in Python and C++, as well as with a machine learning framework such as PyTorch. This position also requires experience with training and deploying neural networks for AI and experience with CUDA (Compute Unified Device Architecture). The compensation for this job, according to Tesla, is $118,000 to $390,000, plus cash and stock awards and benefits.

A machine learning engineer working on self-driving vehicles can expect to earn between $140,000 and $420,000 per year, plus stock and cash awards and benefits, according to Tesla. To qualify for this position, you'll need experience with Python, a major deep learning framework and software engineering best practices, along with experience deploying machine learning models for self-driving, robotics or natural language processing.

An intern who works as a system software engineer can expect to earn $36.05 to $56.13 per hour, plus benefits. These interns work 40 hours per week, onsite, for a minimum of 12 weeks. This type of internship is for students who are currently enrolled in an academic program, are pursuing a computer engineering or computer science degree, and have experience with C/C++ and Linux.

Tesla requires skilled workers in its manufacturing facilities to assemble electric vehicles, solar equipment, satellites and more. A quality technician can expect to earn $49,000 to $67,000 per year. These technicians perform inspections, audits and testing; analyze quality issues; implement corrective action; and maintain documentation on quality and performance. Technicians are required to be proficient with quality systems and tools. An equipment maintenance technician earns an average of $36.88 per hour. This position requires the technical skills needed to install, repair and maintain mechanical and electrical systems, robotics and factory automation. Experience as an automotive technician or technical military experience is required.

As an innovative company with leading-edge technologies, Tesla needs sales personnel to get its products into the right hands. Sales advisors for Tesla make between $41,000 and $62,000 per year. Those who sell more cars or who have been with the company for several years may earn more. Candidates for a sales position are expected to have a year or more of sales or customer service experience. General managers in sales can earn between $102,000 and $153,000 per year, plus cash and stock awards and benefits, according to Tesla. GMs oversee sales and service operations and manage and develop sales staff.

As nice as it is to have a high salary, a fat paycheck isn't the only reason to work for a particular company. There are many other factors to be considered as well, such as benefits, work-life balance, advancement potential and job security.
Glassdoor and Indeed allow employees to post anonymous reviews of the companies they work — or worked — for. These reviews offer a less biased view of what it's like to work at Tesla, though it must be noted that people are generally more likely to leave negative reviews than positive ones. For example, just over half (55%) of Tesla employees on Glassdoor feel they are compensated fairly. About the same number (56%) said they would recommend working at Tesla. The overall rating given to the company by employees was 3.5 out of 5 stars.

The sentiment was similar on Indeed, where the overall rating was 3.3 out of 5 stars. Indeed breaks down the ratings further, and the breakdown shows that the company is more highly regarded for pay and benefits than anything else. Tesla was rated 3.6 out of 5 stars for pay and benefits, but 3.2 for culture, 2.9 for work-life balance, 2.8 for job security and advancement and 2.8 for management.

Working at Tesla, or any other company, is about more than just the salary, although that is a big consideration. Finding a job where you are valued, both financially and as a contributor, is the key factor.

Editor's note: Salary information was sourced from Indeed and Glassdoor unless otherwise stated and is accurate as of June 3, 2025.

This article originally appeared on GOBankingRates.com: Here's the Salary You Could Earn Working for Tesla
Yahoo
02-06-2025
Prediction: Nvidia Will Beat the Market. Here's Why
Nvidia has consistently outperformed the S&P 500 by a wide margin. Its AI accelerator business is still growing like a weed. And it still looks reasonably valued relative to its long-term growth potential.

Nvidia (NASDAQ: NVDA), the world's largest producer of discrete graphics processing units (GPUs), saw its stock surge 25,250% over the past 10 years as the S&P 500 advanced less than 180%. From fiscal 2015 to fiscal 2025 (which ended this January), its revenue rose at a compound annual growth rate (CAGR) of 39% as its net income increased at a CAGR of 61%.

That explosive growth was initially fueled by brisk sales of gaming GPUs, which were also used to mine certain cryptocurrencies. But over the past few years, its expansion was primarily driven by soaring shipments of data center GPUs for the artificial intelligence (AI) market. Unlike central processing units (CPUs), which process single pieces of data at a time, GPUs process broad ranges of integers and floating-point numbers simultaneously. That advantage makes them better suited than stand-alone CPUs for complex AI workloads, so the rapid expansion of the AI market generated explosive tailwinds for Nvidia's data center GPU sales.

But since the start of 2025, Nvidia's stock rose less than 4% as the S&P 500 stayed nearly flat. The Trump administration's unpredictable tariffs, tighter curbs on chip exports, and delays for its latest Blackwell chips all caused Nvidia to lose its luster. However, I believe Nvidia's stock can stay ahead of the S&P 500 this year for five simple reasons.

Nvidia controlled 82% of the discrete GPU market at the end of 2024, according to JPR. Its closest competitor, AMD, held a 17% share, while Intel -- which returned to the discrete GPU market in 2022 -- controlled just 1%. Nvidia also controls about 98% of the data center GPU market, according to TechInsights; the remaining 2% is split between AMD and Intel. Nvidia's dominance of that booming market, which is supported by the widespread usage of its older A100 chips and current-gen H100 and H200 chips, makes it tough for competitors to gain a meaningful foothold.

The global AI market could still expand at a CAGR of 31% from 2025 to 2032, according to Markets and Markets. If Nvidia merely matches that growth rate, its annual revenue would surge from $130.5 billion in fiscal 2025 to $1.31 trillion by fiscal 2032. So, assuming it maintains roughly the same valuations, its stock still has a clear path toward delivering a ten-bagger gain over the next seven years.

Nvidia reinforces its dominance through its proprietary Compute Unified Device Architecture (CUDA) programming platform. When software developers write their AI applications as parallel code on CUDA (from languages such as C++ or Python), those applications become optimized for Nvidia's GPUs and can only be executed on its chips. If a developer wants to run the same application on an AMD or Intel GPU, it needs to be rewritten in another framework. In addition, most libraries, frameworks, and deep learning models are optimized for CUDA instead of other platforms. That stickiness should keep Nvidia well ahead of its competitors for the foreseeable future.

China accounted for just 12.5% of Nvidia's revenue in fiscal 2025, compared to 16.9% in fiscal 2024 and 21.5% in fiscal 2023. That decline was mainly caused by America's tighter export curbs on high-end data center GPU shipments to China.
Nvidia tried to counter those challenges by selling less powerful, modified versions of its flagship GPUs. However, those versions (like the scaled-back H20 variant of its H100 and H200 chips) were also recently added to the growing list of U.S. chips banned for shipment to China. That sounds like grim news for Nvidia, but it can still easily offset its declining revenue in China with growth in other, less controversial markets. That's why its revenue grew at a CAGR of 120% from fiscal 2023 to fiscal 2025, even as the export curbs choked its Chinese business.

Nvidia generated 89% of its revenue from its data center chips in the first quarter of fiscal 2026. However, its smaller gaming, professional visualization, automotive, and OEM segments also grew year over year alongside its core growth engine. Its gaming business benefited from the rollout of its new RTX Super GPUs. Its professional visualization segment grew as it launched more design-oriented chips and expanded its Omniverse platform for digital projects, and its automotive chip sales improved as more Chinese automakers integrated its Drive platform into their electric vehicles. These oft-overlooked businesses should continue expanding in the shadow of its massive AI data center business.

From fiscal 2025 to fiscal 2028, analysts expect Nvidia's revenue and earnings per share to grow at CAGRs of 31% and 29%, respectively. Yet its stock still looks reasonably valued at 34 times this year's earnings. So once investors realize that its near-term issues won't affect its long-term growth, Nvidia's stock should outperform the market for the rest of the year.

Leo Sun has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Advanced Micro Devices, Intel, and Nvidia. The Motley Fool recommends the following options: short August 2025 $24 calls on Intel. The Motley Fool has a disclosure policy. "Prediction: Nvidia Will Beat the Market. Here's Why" was originally published by The Motley Fool.

Business Insider
11-05-2025
A guide to Nvidia's competitors: AMD, Qualcomm, Broadcom, startups, and more are vying to compete in the AI chip market
Nvidia is undoubtedly dominant in the AI semiconductor space. Estimates fluctuate, but by some measures the company holds more than 80% of the market for the chips that reside inside data centers and make products like ChatGPT and Claude possible. That enviable dominance goes back almost two decades, to when researchers began to realize that the same kind of intensive computing that made complex, visually stunning video games and graphics possible could enable other types of computing too. The company started building its famous software stack, named Compute Unified Device Architecture, or CUDA, 16 years before the launch of ChatGPT. For much of that time, it lost money. But CEO Jensen Huang and a team of true believers saw the potential for graphics processing units to enable artificial intelligence. And today, Nvidia and its products are responsible for most of the artificial intelligence at work in the world. Thanks to the prescience of Nvidia's leadership, the company had a big head start when it came to AI computing, but challengers are running fast to catch up. Some were competitors in the gaming or traditional semiconductor spaces, and others have started from scratch.

AMD

AMD is Nvidia's top competitor in the market for AI computing in the data center. Helmed by its formidable CEO, Lisa Su, AMD launched its own data center GPU, the MI300, in 2024, more than a full year after Nvidia's second generation of data center GPUs started shipping. Though experts and analysts have touted the chip's specifications and potential, the company's software is still behind Nvidia's, making its chips somewhat harder to program and to use to their full potential. Analysts estimate that the company has under 15% market share. But AMD executives insist that they are committed to bringing its software up to par and that future shifts in the accelerated computing market will benefit the company — specifically, the spread of AI into so-called edge devices like phones and laptops.

Qualcomm, Broadcom, and custom chips

Also challenging Nvidia are application-specific integrated circuits, or ASICs. These custom-designed chips are less versatile than GPUs, but they can be built for specific AI computing workloads at a much lower cost, which has made them a popular option for hyperscalers. Though multipurpose chips like Nvidia's and AMD's graphics processing units are likely to maintain the largest share of the AI-chip market in the long term, custom chips are growing fast. Morgan Stanley analysts expected the market for ASICs to double in size in 2025. Companies that specialize in ASICs include Broadcom and Marvell, along with the Asia-based players Alchip Technologies and MediaTek. Marvell is in part responsible for Amazon's Trainium chips, while Broadcom builds Google's tensor processing units, among others. OpenAI, Apple, Microsoft, Meta, and TikTok parent company ByteDance have all entered the race for a competitive ASIC as well.

Amazon and Google

While also being prominent customers of Nvidia, the major cloud providers like Amazon Web Services and Google Cloud Platform, often called hyperscalers, have made efforts to design their own chips, often with the help of semiconductor companies. Amazon's Trainium chips and Google's TPUs are the most scaled of these efforts and offer a cheaper alternative to Nvidia chips, mostly for the companies' internal AI workloads.
However, the companies have shown some progress in getting customers and partners to use their chips as well. Anthropic has committed to running some workloads on Amazon's chips, and Apple has done the same with Google's.

Intel

Once the great American name in chipmaking, Intel has fallen far behind its competitors in the age of AI. But the firm does have a line of AI chips, called Gaudi, that some reports say can stand up to Nvidia's in some respects. Intel installed a new CEO, semiconductor veteran Lip-Bu Tan, in the first quarter of 2025, and one of his first actions was to flatten the organization so that the AI chip operations report directly to him.

Huawei

Though Nvidia's hopeful American challengers are many, China's Huawei is perhaps the most worrying competitor of all, both for Nvidia and for those concerned with continued US supremacy in AI. Huang himself has called Huawei the "single most formidable" tech company in China. Reports that Huawei's AI chip innovation is catching up are increasing in frequency. New restrictions from the Biden and Trump administrations on shipping even lower-power GPUs to China have further incentivized the company to catch up and serve the Chinese markets for AI. Analysts say further restrictions being considered by the Trump administration are now unlikely to hamper China's AI progress.

Startups

Also challenging Nvidia are a host of startups offering new chip designs and business models to the AI computing market. These firms start at a disadvantage: they lack the full-sized sales and distribution machines that decades of selling chips into other corners of tech can build. But several are holding their own by finding use cases and distribution methods that attract customers with faster processing speeds or lower costs. These new AI players include Cerebras, Etched, Groq, Positron AI, SambaNova Systems, and Tenstorrent, among others.


Forbes
05-05-2025
Nvidia Builds An AI Superhighway To Practical Quantum Computing
At the GTC 2025 conference, Nvidia announced its plans for a new, Boston-based Nvidia Accelerated Quantum Research Center, or NVAQC, designed to integrate quantum hardware with AI supercomputers. Expected to begin operations later this year, it will focus on accelerating the transition from experimental to practical quantum computing.

'We view this as a long-term opportunity,' says Tim Costa, Senior Director of Computer-Aided Engineering, Quantum and CUDA-X at Nvidia. 'Our vision is that there will come a time when adding a quantum computing element into the complex heterogeneous supercomputers that we already have would allow those systems to solve important problems that can't be solved today.'

Quantum computing, like AI (i.e., deep learning) a decade ago, is yet another emerging technology with an exceptional affinity for Nvidia's core product, the GPU. It is another milestone in Nvidia's successful ride on top of the technological shift re-engineering the computer industry: the massive move from serial data processing (executing instructions one at a time, in a specific order) to parallel data processing (executing multiple operations simultaneously). Over the last twenty years, says Costa, there were several applications where 'the world was sure it was serial and not parallel, and it didn't fit GPUs. And then, a few years later, rethinking the algorithms has allowed it to move on to GPUs.'

Nvidia's ability to 'diversify' from its early focus on graphics processing (initially to speed up the rendering of three-dimensional video games) is due to the development in the mid-2000s of its software platform, the Compute Unified Device Architecture, or CUDA. This parallel programming platform allows developers to leverage the power of GPUs for general-purpose computing. The key to CUDA's rapid adoption by developers and users of a wide variety of scientific and commercial applications was a decision by CEO Jensen Huang to apply CUDA to the entire range of Nvidia's GPUs, not just the high-end ones. This decision—and the required investment—caused Nvidia's gross margin to fall from 45.6% in the 2008 fiscal year to 35.4% in the 2010 fiscal year. 'We were convinced that accelerated computing would solve problems that normal computers couldn't. We had to make that sacrifice. I had a deep belief in [CUDA's] potential,' Huang told Tae Kim, author of the recently published The Nvidia Way.

This belief continues to drive Nvidia's search for opportunities where 'we can do lots of work at once,' says Costa. 'Accelerated computing is synonymous with massively parallel computing. We think accelerated computing will ultimately become the default mode of computing and accelerate all industries. That is the CUDA-X strategy.' Costa has been working on this strategy for the last six years, introducing the CUDA software to new areas of science and engineering. This has included quantum computing, helping developers of quantum computers and their users simulate quantum algorithms. Now, Nvidia is investing further in applying its AI mastery to quantum computing.

Nvidia became one of the world's most valuable companies because the performance of the artificial neural networks at the heart of today's AI depends on the parallelism of the hardware they run on, specifically the GPU's ability to process many matrix multiplications simultaneously.
Similarly, the basic units of information in quantum computing, qubits, interact with other qubits, allowing many different calculations to run simultaneously. Combining quantum computing and AI promises to improve AI processes and practices and, at the same time, accelerate the development of practical applications of quantum computing.

The focus of the new Boston research center is on 'using AI to make quantum computers more useful and more capable,' says Costa. 'Today's quantum computers are fifty to a hundred qubits. It's generally accepted now that truly useful quantum computing will come with a million qubits or more that are error corrected down to tens to hundreds of thousands of error-free or logical qubits. That process of error correction is a big compute problem that has to be done in real time. We believe that the methods that will make that successful at scale will be AI methods.'

Quantum computing is a delicate process, subject to interference from 'noise' in its environment, resulting in at least one failure in every thousand operations. Increasing the number of qubits introduces more opportunities for errors. When Google announced Willow last December, it called it 'the first quantum processor where error-corrected qubits get exponentially better as they get bigger.' Its error correction software includes AI methods such as machine learning, reinforcement learning, and graph-based algorithms, helping identify and correct errors accurately, 'the key element to unlocking large-scale quantum applications,' according to Google.

'Everyone in the quantum industry realizes that the name of the game in the next five years will be quantum error correction,' says Doug Finke, Chief Content Officer at Global Quantum Intelligence. 'The hottest job in quantum these days is probably a quantum error correction scientist, because it's a very complicated thing.' The fleeting nature of qubits—they 'stay alive' for about 300 microseconds—requires speedy decisions and very complex math. A ratio of 1,000 physical qubits to one logical qubit would leave many possible errors to track. AI could help find out 'what are the more common errors and what are the most common ways of reacting to it,' says Finke.

Researchers from the Harvard Quantum Initiative in Science and Engineering and the Engineering Quantum Systems group at MIT will test and refine these error correction AI models at the NVAQC. Other collaborators include the quantum startups Quantinuum, Quantum Machines, and QuEra Computing. They will be joined by Nvidia's quantum error correction research team and Nvidia's most advanced supercomputer. 'Later this year, we will have the center ready, and we'll be training AI models and testing them on integrated devices,' says Costa.
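To give a flavor of the real-time decoding workload Costa describes, here is a deliberately toy sketch in CUDA. It assumes the simplest possible scheme, a repetition code in which each logical bit is recovered by majority vote over 1,000 noisy physical readouts, with one GPU thread per logical qubit. Real decoders for surface codes are far more sophisticated, but the shape of the problem, huge batches of syndrome data crunched in parallel under a hard deadline, is the same.

```cuda
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

// Toy illustration only: decode repetition-coded readouts by majority vote,
// one GPU thread per logical qubit. Production error correction uses far
// more complex decoders, but shares the same structure: massive batches of
// measurement data that must be processed in real time.
#define PHYSICAL_PER_LOGICAL 1000

__global__ void majorityVoteDecode(const unsigned char *measurements,
                                   unsigned char *logical, int numLogical) {
    int q = blockIdx.x * blockDim.x + threadIdx.x;  // which logical qubit
    if (q >= numLogical) return;

    const unsigned char *mine = measurements + (size_t)q * PHYSICAL_PER_LOGICAL;
    int ones = 0;
    for (int i = 0; i < PHYSICAL_PER_LOGICAL; i++) {
        ones += mine[i];  // each readout is 0 or 1
    }
    logical[q] = (ones * 2 > PHYSICAL_PER_LOGICAL) ? 1 : 0;  // majority wins
}

int main(void) {
    const int numLogical = 4096;
    size_t inBytes = (size_t)numLogical * PHYSICAL_PER_LOGICAL;

    // Fabricate noisy data: every logical bit is "really" 1, but roughly
    // 10% of the physical readouts flip to 0.
    unsigned char *h_meas = (unsigned char *)malloc(inBytes);
    for (size_t i = 0; i < inBytes; i++) h_meas[i] = (rand() % 10 == 0) ? 0 : 1;

    unsigned char *d_meas, *d_logical;
    cudaMalloc((void **)&d_meas, inBytes);
    cudaMalloc((void **)&d_logical, numLogical);
    cudaMemcpy(d_meas, h_meas, inBytes, cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks = (numLogical + threads - 1) / threads;
    majorityVoteDecode<<<blocks, threads>>>(d_meas, d_logical, numLogical);

    unsigned char *h_logical = (unsigned char *)malloc(numLogical);
    cudaMemcpy(h_logical, d_logical, numLogical, cudaMemcpyDeviceToHost);
    printf("logical[0] decoded as %d\n", h_logical[0]);  // expect 1

    cudaFree(d_meas); cudaFree(d_logical);
    free(h_meas); free(h_logical);
    return 0;
}
```

Even this toy launch chews through about four million measurement bits at once; scaling that to the million-physical-qubit machines Costa envisions, with decisions due every few hundred microseconds, helps explain why Nvidia expects AI methods running on its own hardware to do the job.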