IBM's Vision For A Large-Scale Fault-Tolerant Quantum Computer By 2029

Forbes, 10-06-2025

IBM's vision for its large-scale fault-tolerant Starling quantum computer
IBM has just made a major announcement about its plans to achieve large-scale quantum fault tolerance before the end of this decade. Based on the company's new quantum roadmap, by 2029 IBM expects to be able to run accurate quantum circuits with hundreds of logical qubits and hundreds of millions of gate operations. If all goes according to plan, this stands to be an accomplishment with sweeping effects across the quantum market — and potentially for computing as a whole.
In advance of this announcement, I received a private briefing from IBM and engaged in detailed correspondence with some of its quantum researchers for more context. (Note: IBM is an advisory client of my firm, Moor Insights & Strategy.) The release of the new roadmap offers a good opportunity to review what IBM has already accomplished in quantum, how it has adapted its technical approach to achieve large-scale fault tolerance and how it intends to implement the milestones of its revised roadmap across the next several years.
Let's dig in.
First, we need some background on why fault tolerance is so important. Today's quantum computers have the potential, but not yet the capability, to solve complex problems beyond the reach of our most powerful classical supercomputers. The current generation of quantum computers is fundamentally limited by high error rates that are difficult to correct and that prevent complex quantum algorithms from running at scale. While quantum researchers around the world are tackling numerous challenges, there is broad agreement that these error rates are a major hurdle to be cleared.
In this context, it is important to understand the difference between fault tolerance and quantum error correction. QEC uses specialized measurements to detect errors in encoded qubits. And although it is also a core mechanism used in fault tolerance, QEC alone can only go so far. Without fault-tolerant circuit designs in place, errors that occur during operations or even in the correction process can spread and accumulate, making it exponentially more difficult for QEC on its own to maintain logical qubit integrity.
Reaching well beyond QEC, fault-tolerant quantum computing is a very large and complex engineering challenge that applies a broad approach to errors. FTQC not only protects individual computational qubits from errors, but also systemically prevents errors from spreading. It achieves this by employing clever fault-tolerant circuit designs, and by making use of a system's noise threshold — that is, the maximum level of errors the system can handle and still function correctly. Achieving the reliability of FTQC also requires more qubits.
FTQC can potentially lower error rates much more efficiently than QEC alone. To achieve an exponential reduction in the logical error rate, fault tolerance requires only a small polynomial increase in the number of qubits and gates. Despite its complexity, this efficiency makes fault tolerance an appealing and important method for improving quantum error rates.
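That trade-off can be sketched with the standard textbook approximation for surface-code scaling. The threshold, prefactor and qubit-count formula below are illustrative assumptions for a generic code, not IBM's figures: each step up in code distance suppresses the logical error rate multiplicatively, while the qubit cost grows only quadratically.

```python
# Illustrative sketch (generic textbook model, not IBM's): for a
# distance-d code, the logical error rate is commonly approximated as
#   p_L ~ A * (p / p_th) ** ((d + 1) // 2)
# so errors shrink exponentially in d while qubits grow only as d**2.

def logical_error_rate(p, d, p_th=1e-2, A=0.1):
    """Rough approximation of the logical error rate at distance d."""
    return A * (p / p_th) ** ((d + 1) // 2)

def physical_qubits(d):
    """Approximate qubit count for one surface-code logical qubit."""
    return 2 * d ** 2

p = 1e-3  # assumed physical error rate, 10x below the assumed threshold
for d in (3, 5, 7, 9, 11):
    print(f"d={d:2d}  qubits~{physical_qubits(d):4d}  "
          f"p_L~{logical_error_rate(p, d):.1e}")
```

Going from d=3 to d=11 multiplies the qubit count by about 13 while dividing the logical error rate by 10,000, which is the polynomial-cost, exponential-benefit scaling described above.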
IBM's first quantum roadmap, released in 2020
Research on fault tolerance goes back several decades. IBM began a serious effort to build a quantum computer in the late 1990s when it collaborated with several leading universities to build a two-qubit quantum computer capable of running a small quantum algorithm. Continuing fundamental research eventually led to the 2016 launch of the IBM Quantum Experience, featuring a five-qubit superconducting quantum computer accessible via the cloud.
IBM's first quantum roadmap, released in 2020 (see the image above), detailed the availability of the company's 27-qubit Falcon processor in 2019 and outlined plans for processors with a growing number of qubits in each of the subsequent years. The roadmap concluded with the projected development in 2023 of a research-focused processor, the 1,121-qubit Condor, that was never made available to the public.
However, as IBM continued to scale its qubit counts and explore error correction and error mitigation, it became clear to its researchers that monolithic processors were insufficient to achieve the long-term goal of fault-tolerant quantum computing. To achieve fault tolerance in the context of quantum low-density parity-check codes (much more on qLDPC below), IBM knew it had to overcome three major engineering issues.
This helps explain why fault tolerance is such a large and complex endeavor, and why monolithic processors were not enough. Achieving fault tolerance at scale would require that modularity be designed into the system.
IBM's shift to modular architecture first appeared in its 2022 roadmap with the introduction for 2024 of multi-chip processors called Crossbill and Flamingo. Crossbill was a 408-qubit processor that demonstrated the first application of short-range coupling. And Flamingo was a 1,386-qubit quantum processor that was the first to use long-range coupling.
For more background on couplers, I previously wrote a detailed Forbes.com article explaining why IBM needed modular processors and tunable couplers. Couplers play an important role in IBM's current and future fault-tolerant quantum computers. They allow qubit counts to be scaled without the difficulty, expense and additional time required to fabricate larger chips. Couplers also provide architectural and design flexibility. Short-range couplers provide chip-to-chip parallelization by extending IBM's heavy-hex lattice across multiple chips, while long-range couplers use cables to connect modules so that quantum information can be shared between processors.
A year later, in 2023, IBM scientists made an important breakthrough by developing a more reliable way to store quantum information using qLDPC codes. These are also called bivariate bicycle codes, and you'll also hear them referred to as the gross code because a gross is 144: the code encodes 12 logical qubits into 144 physical data qubits plus 144 ancilla qubits, for a total of 288 physical qubits dedicated to error correction.
Previously, surface code was the go-to error-correction code for superconducting qubits because it tolerates relatively high physical error rates, scales well, relies only on nearest-neighbor connectivity and protects qubits against both bit-flip and phase-flip errors. IBM has verified that qLDPC code performs error correction just as effectively and efficiently as surface code, yet the two methods do not bring the same level of benefit: qLDPC code has the significant advantage of needing only about one-tenth as many physical qubits. (More details on that below.)
This brings us to today's state of the art for IBM quantum. Currently, IBM has a fleet of quantum computers available over the cloud and at client sites, many of which are equipped with 156-qubit Heron processors. According to IBM, Heron has the highest performance of any IBM quantum processor. Heron is currently being used in the IBM Quantum System Two and it is available in other systems as well.
IBM 2025 quantum innovation roadmap, showing developments from 2016 to 2033 and beyond
IBM's new quantum roadmap shows several major developments on the horizon. In 2029, after so many years of research and experimentation, IBM expects to be the first organization to deliver what has long been the elusive goal of the entire quantum industry: a fault-tolerant quantum computer. By 2033, IBM also believes it will be able to build a quantum-centric supercomputer capable of running thousands of logical qubits and a billion or so gates.
Before we go further into specifics about the milestones that IBM projects for this new roadmap, let's dig a little deeper into the technical breakthroughs enabling this work.
As mentioned earlier, one key breakthrough IBM has made comes in its use of gross code (qLDPC) for error correction, which is much more efficient than surface code.
Comparison of surface code versus qLDPC error rates
The above chart shows the qLDPC physical and logical error rates (diamonds) compared to two different surface code error rates (stars). The qLDPC code uses a total of 288 physical qubits (144 physical code qubits and 144 check qubits) to create 12 logical qubits (red diamond). As illustrated in the chart, one instance of surface code requires 2,892 physical qubits to create 12 logical qubits (green star) and the other version of surface code requires 4,044 physical qubits to create 12 logical qubits (blue star). The chart makes clear that qLDPC code uses far fewer qubits than surface code while producing a comparable error rate.
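Those chart numbers reduce to a quick efficiency comparison. A small sketch using only the figures quoted above:

```python
# Qubits-per-logical-qubit for the three codes in IBM's comparison chart
# (physical-qubit counts as quoted in the article; 12 logical qubits each).
codes = {
    "qLDPC (gross code)": 288,
    "surface code (variant A)": 2892,
    "surface code (variant B)": 4044,
}
LOGICAL_QUBITS = 12

for name, physical in codes.items():
    ratio = physical / LOGICAL_QUBITS
    print(f"{name}: {physical:4d} physical qubits, {ratio:.0f} per logical qubit")
```

The gross code works out to 24 physical qubits per logical qubit versus 241 and 337 for the two surface-code variants, which is where the roughly tenfold saving comes from.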
Connectivity between the gross code and the LPU
Producing a large number of logical and physical qubits with low error rates is impressive; indeed, as explained earlier, large numbers of physical qubits with low error rates are necessary to encode and scale logical qubits. But what really matters is the ability to successfully run gates. Gates are necessary to manipulate qubits and create superpositions, entanglement and operational sequences for quantum algorithms. So, let's take a closer look at that technology.
Running gates with qLDPC codes requires an additional set of physical qubits known as a logical processing unit. The LPU has approximately 100 physical qubits and adds about 35% of ancilla overhead per logical qubit to the overall code. (If you're curious, a similar low to moderate qubit overhead would also be required for surface code to run gates.) LPUs are physically attached to qLDPC quantum memory (gross code) to allow encoded information to be monitored. LPUs can also be used to stabilize logical computations such as Clifford gates (explained below), state preparations and measurements. It is worth mentioning that the LPU itself is fault-tolerant, so it can continue to operate reliably even with component failures or errors.
IBM already understands the detailed connectivity required between the LPU and gross code. For simplification, the drawing of the gross code on the left above has been transformed into a symbolic torus (doughnut) in the drawing on the right; that torus has 12 logical qubits consisting of approximately 288 physical qubits, accompanied by the LPU. (As you look at the drawings, remember that 'gross code' and 'bivariate bicycle code' are two terms for the same thing.) The drawing on the right appears repeatedly in the diagrams below, and it will likely appear in future IBM documents and discussions about fault tolerance.
The narrow rectangle at the top of the right-hand configuration is called a 'bridge' in IBM research papers. Its function is to couple one unit to a neighboring unit with 'L-couplers.' It makes the circuits inside the LPU fault-tolerant, and it acts as a natural connecting point between modules. These long-distance couplers, about a meter in length, are used for Bell pair generation, a method that allows logical qubits in different modules to be entangled.
So what happens when several of these units are coupled together?
IBM fault-tolerant quantum architecture
Above is a generalized configuration of IBM's future fault-tolerant architecture. As mentioned earlier, each torus contains 12 logical qubits created by the gross code through the use of approximately 288 physical qubits. So, for instance, if a quantum computer were designed to run 96 logical qubits, it would be equipped with eight torus code blocks (8 x 12 = 96), which would require a total of approximately 2,304 physical qubits (8 x 288) plus eight LPUs.
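The same arithmetic scales to any target size. Here is a minimal resource sketch built only from the article's round numbers (288 physical qubits and 12 logical qubits per gross-code block, roughly 100 qubits per LPU); treat every output as an approximation.

```python
import math

# Resource arithmetic from the article's round numbers: each gross-code
# block encodes 12 logical qubits in ~288 physical qubits, and each block
# is paired with an LPU of ~100 physical qubits. All figures approximate.
LOGICAL_PER_BLOCK = 12
PHYSICAL_PER_BLOCK = 288
LPU_QUBITS = 100

def estimate(logical_qubits):
    """Blocks and physical-qubit counts for a target logical-qubit count."""
    blocks = math.ceil(logical_qubits / LOGICAL_PER_BLOCK)
    return {
        "blocks": blocks,
        "code_qubits": blocks * PHYSICAL_PER_BLOCK,
        "lpu_qubits": blocks * LPU_QUBITS,
    }

print(estimate(96))   # the article's example: 8 blocks, 2,304 code qubits
print(estimate(200))  # Starling scale: about 17 blocks
```

Running it for 200 logical qubits reproduces the roughly 17 gross-code blocks the article cites for the Starling design.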
Two special categories of quantum operations are needed for quantum computers to run all the necessary algorithms and perform error correction: Clifford gates and non-Clifford gates. Clifford gates (named after the 19th-century British mathematician William Kingdon Clifford) transform simple errors into other simple errors rather than scrambling them, which allows error-correction codes to track and fix mistakes. This ability to limit the spread of errors makes Clifford gates well-suited for FTQC, and because reliability is critical for practical fault-tolerant quantum systems, running Clifford gates helps ensure accurate computations. The other necessary category is non-Clifford gates (particularly T-gates).
A quantum computer needs both categories of gates so it can perform universal tasks such as chemistry simulations, factoring large numbers and other complex algorithms. However, there is a trick to using both of these operations together. Even though T-gates are important, they also break the symmetry that Clifford gates need for error correction. That's where the 'magic state factory' comes in. It implements non-Clifford operations (T-gates) by consuming a steady stream of so-called magic states alongside Clifford gates. In that way, the quantum computer can maintain both its computational power and its fault tolerance.
IBM's research has proven it can run fault-tolerant logic within the stabilizer (Clifford) framework. However, without non-Clifford gates, a quantum computer would not be able to execute the full spectrum of quantum algorithms.
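The stabilizer distinction can be checked numerically: a single-qubit gate is Clifford exactly when conjugating any Pauli matrix by it yields another Pauli matrix, up to a phase. This small sketch verifies that the standard H and S gates pass that test while the T gate fails it.

```python
import numpy as np

# Single-qubit Pauli matrices and three common gates.
I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]])
T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])

def maps_paulis_to_paulis(U):
    """True if U P U^dagger is a Pauli matrix (up to phase) for every Pauli P."""
    paulis = [I, X, Y, Z]
    for P in paulis:
        conj = U @ P @ U.conj().T
        # conj equals some Pauli Q up to phase iff |Tr(conj^dagger Q)| == 2
        ok = any(abs(abs(np.trace(conj.conj().T @ Q)) - 2) < 1e-9 for Q in paulis)
        if not ok:
            return False
    return True

for name, U in [("H", H), ("S", S), ("T", T)]:
    print(name, "is Clifford:", maps_paulis_to_paulis(U))  # True, True, False
```

Because T sends X to a mixture of X and Y rather than to a single Pauli, errors passing through a T gate stop being simple trackable Pauli errors, which is precisely why magic state factories are needed to apply it fault-tolerantly.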
IBM fault-tolerant quantum roadmap
Now let's take a closer look at the specific milestones in IBM's new roadmap that will take advantage of the breakthroughs explained above, and how the company plans to create a large-scale fault-tolerant quantum computer within this decade.
IBM expects to begin fabricating and testing the Loon processor sometime this year. The Loon will use two logical qubits and approximately 100 physical qubits. Although the Loon will not use the gross code, it will be using a smaller code with similar hardware requirements.
IBM has drawn on its past four-way coupler research to develop and test a six-way coupler using a central qubit connected through tunable couplers to six neighboring qubits, a setup that demonstrates low crosstalk and high fidelity between connections. IBM also intends to demonstrate the use of 'c-couplers' to connect Loon qubits to non-local qubits. Couplers up to 16 mm in length have been tested, with a goal of increasing that length to 20 mm. Longer couplers allow connections to be made across more areas of the chip. So far, the longer couplers have also maintained low error rates and acceptable coherence times — in the range of several hundred microseconds.
In this phase of the roadmap, IBM plans to test one full unit of the gross code, long c-couplers and real-time decoding of the gross code. IBM also plans a demonstration of quantum advantage in 2026 using its Nighthawk processor, the successor to Heron, working alongside HPC resources.
The Cockatoo design employs two blocks of gross code connected to LPUs to create 24 logical qubits using approximately 576 physical qubits (two blocks of 288). In that year, IBM aims to test L-couplers and module-to-module communication. IBM also plans to test Clifford gates between the two code blocks, giving it the ability to perform computations, though not yet universal computations.
A year later, the Starling processor should be equipped with approximately 200 logical qubits. Required components, including magic state distillation, will be tested. Although only two blocks of gross code are shown in the illustrative diagram above, the Starling will in fact require about 17 blocks of gross code, with each block connected to an LPU.
The estimated size of IBM's 2029 large-scale fault-tolerant Starling quantum computer in a datacenter setting, with human figures included for size comparison
2029 is the year in which IBM plans to deliver the industry's first large-scale, fault-tolerant quantum computer — equipped with approximately 200 logical qubits and able to execute 100 million gate operations. A processor of this size will have approximately 17 gross-code blocks equipped with LPUs and magic state distillation.
IBM expects that quantum computers during this period will run billions of gates on several thousand circuits to demonstrate the full power and potential of quantum computing.
IBM milestones in its roadmap for large-scale, fault-tolerant quantum computers
Although there have been a number of significant quantum computing advancements in recent years, building practical, fault-tolerant quantum systems has been — and still remains — a significant challenge. Up until now, this has largely been due to a lack of a suitable method for error correction. Traditional methods such as surface code have important benefits, but limitations, too. Surface code, for instance, is still not a practical solution because of the large numbers of qubits required to scale it.
IBM has overcome surface code's scaling limitation through the development of its qLDPC codes, which require only a tenth of the physical qubits needed by surface code. The qLDPC approach has allowed IBM to develop a workable architecture for a near-term, fully fault-tolerant quantum computer. IBM has also achieved other important milestones such as creating additional layers in existing chips to allow qubit connections to be made on different chip planes. Tests have shown that gates using the new layers are able to maintain high quality and low error rates in the range of existing devices.
Still, there are a few areas in need of improvement. Existing error rates are around 3x10^-3, which must improve to accommodate advanced applications. IBM is also working on extending coherence times. Using isolated test devices, IBM has measured coherence times between one and two milliseconds, and up to four milliseconds in some cases. Since it appears to me that future utility-scale algorithms and magic state factories will need between 50,000 and 100,000 gates between resets, further improvement in coherence may be required.
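That concern can be made concrete with simple division. The arithmetic below is my own back-of-the-envelope estimate, not IBM's: dividing the measured coherence times by the gate counts gives the time budget available per gate.

```python
# Per-gate time budget if 50k-100k gates must fit within one coherence
# window. Coherence figures from the article: 1-2 ms typical, up to 4 ms.
for label, t_coh_s in [("typical (1.5 ms)", 1.5e-3), ("best (4 ms)", 4e-3)]:
    for gates in (50_000, 100_000):
        budget_ns = t_coh_s / gates * 1e9
        print(f"{label}, {gates} gates: <= {budget_ns:.0f} ns per gate")
```

The resulting budgets of roughly 15 to 80 nanoseconds per gate are comparable to today's superconducting gate times, leaving little margin, which is why longer coherence times matter so much.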
As stated earlier, IBM's core strategy relies on modularity and scalability. The incremental improvement of its processors through the years has allowed IBM to progressively develop and test its designs to incrementally increase the number of logical qubits and quantum operations — and, ultimately, expand quantum computing's practical utility. Without IBM's extensive prior research and its development of qLDPC for error correction, estimating IBM's chance for success would largely be guesswork. With it, IBM's plan to release a large-scale fault-tolerant quantum computer in 2029 looks aggressive but achievable.

Orange background

Try Our AI Features

Explore what Daily8 AI can do for you:

Comments

No comments yet...

Related Articles

The board decision that sent the MLB, NFL unions into controversy
The board decision that sent the MLB, NFL unions into controversy

Yahoo

time17 minutes ago

  • Yahoo

The board decision that sent the MLB, NFL unions into controversy

Last June, eight members of the board of directors for a licensing group called OneTeam Partners, which is co-owned by the players unions for five major sports leagues, signed a resolution that would have included the member unions in a plan to receive 'profits units.' Those units, like traditional equity, could be turned into cash if the company did well. It was a move that raised alarms within at least one of the unions. Advertisement By late 2024, an official at the National Football League Players Association had repeatedly raised concerns that implementing the plan could mean that labor officials serving on OneTeam's board of directors — including the head of the NFL players union, Lloyd Howell Jr., and the leader of the Major League Baseball players union, Tony Clark — were attempting to make a change that could lead to their own financial gain, potentially at the expense of union members. The resolution, which was obtained by The Athletic, called for any eventual payouts — made through what is known as a senior employee incentive plan (SEIP) — to go to the unions the board members hail from. The resolution also directly acknowledged the possibility that the unions could then grant that money to their board members. 'The explicit goal throughout the process was to financially enrich the individuals who serve on the OTP Board as labor organization representatives,' the NFLPA official wrote to lawyers in a communication criticizing the plan, which was reviewed by The Athletic. '… the idea was to pay the money into the unions, then the individuals.' In a statement to The Athletic, OneTeam said that though the plan was considered, it was ultimately abandoned. Advertisement 'In early 2024, OneTeam initiated an exploratory review to determine whether the company could lawfully offer incentive-based compensation to current and prospective Board members,' OneTeam Partners said. 
'This exploratory effort was part of a broader initiative to assess strategies for attracting high-caliber, independent talent. 'Following the legal advice of a labor law expert, it was determined that the best practice, if implemented, was to make grants to the respective players associations. In so doing, any future payments would be governed by each union's player-approved bylaws, policy, and governance frameworks. It added: 'To be unequivocally clear: no OneTeam board member, nor any union employee, was directly or indirectly granted equity in OneTeam, holds equity in OneTeam or is a participant in its SEIP and any claim to the contrary is simply misinformed and false.' Federal authorities are conducting an investigation related to OneTeam Partners and union officials. The full scope of the probe, which is being run out of the Eastern District of New York, is unclear. The Eastern District of New York declined to comment. Advertisement Five major sports unions hold stakes in OneTeam, the two largest belonging to the NFLPA and the Major League Baseball Players Association, which together own two-thirds of the company, according to people briefed on the business structure who requested anonymity because they were not authorized to speak publicly. The NFLPA has 44 percent, the MLBPA 22 percent. The unions representing players in Major League Soccer, the U.S. Women's National Soccer Team and the Women's National Basketball Association own much smaller shares in OneTeam: 3.3 percent for MLS, .3 percent for the USWNTPA, and .2 percent for the WNBA, according to one of the people briefed on the structure. Early this month, the FBI started calling MLB and NFL players or their representatives. Prosecutor David Berman is heading the federal investigation, said people briefed on its process who were not authorized to speak publicly. 
With a federal investigation underway, the NFLPA has retained outside counsel separate from the outside lawyers retained by its executive director, Howell. Howell's lawyer did not reply to requests for comment. 'We're guided by our responsibility to our members in everything we do and we will continue to fully cooperate with the investigation,' the NFLPA said in a statement to The Athletic. Advertisement The MLBPA declined to comment Friday. That union too has retained outside counsel separate from its leader, Clark. His attorney did not return requests for comment. The NFLPA official who voiced concern about the incentive plan wrote that they were concerned about the potential for various conflicts of interest. The official argued internally that the change to the plan could dilute the players' existing stakes, which they held via their unions. The official also questioned whether the players were informed of how their financial interests might be affected. The NFLPA official's email with lawyers shows talk of changing OneTeam's SEIP dated to 2023, when a new CEO took over. In March 2024, OneTeam asked outside counsel whether there would be any issues granting union officials on its board participation in a SEIP, according to the same email. In response, the official wrote, the law firm flagged concerns regarding the National Labor Relations Act were any units to be granted directly to union board members. Plans like SEIP are common in the business world. Companies use them to reward and lure top leaders, and the programs often grant traditional shares in a company. Private companies in particular will often grant something that operates similarly to shares but is not traditional equity, according to Chris Crawford, managing director for the executive compensation practice at the firm Gallagher. Advertisement 'It's not a publicly traded, readily tradable environment,' Crawford said. 'It gets into these third-party transactions that get a little bit messy. 
The most common is by a generic term called 'phantom stock.'' Hence OneTeam's use of 'profits units.' But ultimately, OneTeam is not a common business because it is largely owned by unions. Union officials have legal obligations to their members and their members' interests, and most unions don't have for-profit arms with the overlay of those governance concerns. 'The labor organizations' representatives on the OTP Board are there as FIDUCIARIES representing their union members' direct ownership interests in the Company — their legal duties are not to the Company generally, but rather their union members' ownership in the company,' the NFLPA official wrote in the email to lawyers. Advertisement The union officials have their positions on OneTeam's board because of their union roles, positions for which they are already compensated. Howell was paid $3.6 million by the NFLPA for the 12 months from March 2024 through February 2025, according to the union's annual disclosure filed with the Department of Labor. Clark was paid $3.5 million for the 2024 calendar year, per the baseball union's filing. The NFLPA has four seats on OneTeam's board, and the MLBPA has three seats. Both Howell's and Clark's signatures appear on the resolution to change OneTeam's senior employee incentive plan. The unions representing players in MLS, the USWNT and the WNBA share one seat on the board that rotates. Only the signature of Becca Roux, the head of the USWNTPA, appears on the resolution from last year. Roux, as well as Bob Foose, head of the MLSPA, and Terri Jackson, head of the WNBPA, have hired Steve McCool of McGuireWoods as outside counsel. Advertisement 'I notified the prosecutor in New York that I represent a number of OTP board members,' McCool said by phone Friday. 'My clients have no cause for concern and they are available to answer any questions the government may have about this matter.' Outside investors own the remaining 30 percent of OneTeam that is not owned by unions. 
The SEIP resolution called for the NFLPA to receive 44 percent of the new plan units available to the board, and the MLBPA 33 percent. The other three unions were in line to receive 3.7 percent each. The outside investors on the board were not going to receive any new incentive units, the resolution said. Such an arrangement has the potential to create at least the appearance of a conflict of interest, according to Lee Adler, a labor lawyer with no involvement in the matter who has long worked as counsel to unions. Advertisement 'Is there something in that set of criteria for the incentive that might have some influence on how or what the union officials who sit on the board actually end up … legislating (at OneTeam)?' asked Adler, a lecturer at the Cornell University School of Industrial and Labor Relations. NFLPA employees said at a meeting in November 2024 that they expected payments via SEIP would be $200,000 to $300,000, the NFLPA official wrote in the email. Sports unions have moved aggressively to capitalize on their players' branding rights. The MLBPA and NFLPA were among the founders of OneTeam in 2019. Both unions already had for-profit arms that handled licensing business, and those arms still exist today. But they were betting that a company with aggregated rights would have greater leverage. The venture has been a boon not only for the unions but also for the private equity investors who partnered with them. RedBird Capital cashed out its 40 percent stake in 2022, when the company had a $1.9 billion valuation. The windfalls from name, image and licensing rights carry a slew of gains for athletes, including bolstering traditional labor objectives like collective bargaining. The NFLPA reported about $101 million in revenue from OneTeam from early 2024 into 2025, and the MLBPA about $45 million for 2024. But both the baseball and football unions have been wrapped up in public controversy this year over, in part, OneTeam. 
Advertisement Late last year, an anonymous complaint filed with the National Labor Relations Board levied allegations at Clark, including concerns over equity from OneTeam. The football union, where internal complaints had already been lodged, then brought on an outside firm, Linklaters, to conduct a review. The NFLPA has not publicized that firm's findings. But in March, in an email reviewed by , Howell notified OneTeam's board of directors that Linklaters found the NFLPA and OneTeam had been in compliance. This article originally appeared in The Athletic. NFL, MLB, MLS, WNBA, Sports Business 2025 The Athletic Media Company

Bill Gates and Linus Torvalds meet for the first time.
Bill Gates and Linus Torvalds meet for the first time.

The Verge

time21 minutes ago

  • The Verge

Bill Gates and Linus Torvalds meet for the first time.

Posted Jun 22, 2025 at 10:45 AM UTC Bill Gates and Linus Torvalds meet for the first time. Microsoft co-founder Bill Gates and Linus Torvalds, the creator of the Linux kernel, have surprisingly never met before. That all changed at a recent dinner hosted by Sysinternals creator Mark Russinovich. The world's of Linux and Windows finally came together in real life, and Dave Cutler, Microsoft technical fellow and Windows NT lead developer, was also there to witness the moment and meet Torvalds for the first time. 'No major kernel decisions were made,' jokes Russinovich in a post on LinkedIn.

Chime versus SoFi: Which Is the Better Fintech Stock Right Now?
Chime versus SoFi: Which Is the Better Fintech Stock Right Now?

Yahoo

time23 minutes ago

  • Yahoo

Chime versus SoFi: Which Is the Better Fintech Stock Right Now?

Chime operates an online banking platform that is similar to SoFi. SoFi is acquiring new members, increasing revenue, and accelerating profit at a pace superior to the competition. 10 stocks we like better than SoFi Technologies › It's been a hot couple of weeks for the fintech sector. Digital banking platform (NASDAQ: CHYM) and stablecoin operator Circle (NYSE: CRCL) both completed initial public offerings (IPOs) in which shares of both companies soared. While artificial intelligence (AI) is still the biggest megatrend fueling the stock market right now, the back-to-back IPOs from Circle and Chime have brought some renewed interest to the financial services arena. Given the overlapping business models of Chime and SoFi Technologies (NASDAQ: SOFI), another budding neobank, investors may be wondering which stock is the better buy right now. Let's assess Chime and SoFi from both an operational and valuation perspective. After doing so, I think smart investors will be able to determine a clear winner between the two digital banking platforms. SoFi offers many of the same financial services products that you may see at traditional banks. By offering lending, insurance, and investment management, SoFi has proven that it can compete with legacy banking providers by offering a similar, diversified portfolio of products. The main differentiator between SoFi and most of its competitors is that the company operates entirely online and lacks physical brick-and-mortar infrastructure. By creating a one-stop shop for financial services, SoFi is offering a level of convenience that is hard to match. In turn, SoFi is not only able to keep its customers loyal to the platform, but also has leveraged its comprehensive ecosystem by cross-selling additional services to existing members. 
SoFi refers to this strategy as its financial services productivity loop -- essentially a model in which the lifetime value of each customer increases over time, ultimately creating a competitive advantage over incumbent providers. At the end of the first quarter, SoFi had 10.9 million customers on its platform using a total of 15.9 million products, implying that each user in SoFi's network holds roughly 1.5 products on average. As the chart above illustrates, SoFi's business model is paying off in spades, underscored by accelerating revenue growth and a transition to consistent profitability.

While SoFi's business is rocking, Chime doesn't appear too far behind. In the table below, I've summarized a number of financial metrics and key performance indicators for SoFi and Chime.

Category | SoFi | Chime
Revenue, trailing 12 months | $2.8 billion | $1.8 billion
Members | 10.9 million | 8.6 million
3-year membership compound annual growth rate (CAGR) | 41.3% | 22.3%
Net income, trailing 12 months | $482 million | ($28.3 million)
Market capitalization (as of June 18) | $17 billion | $10.6 billion

Data sources: SoFi investor relations and Chime S-1 filing.

The obvious takeaway from the figures above is that SoFi is the larger business in terms of revenue. This is not entirely surprising, given that SoFi's platform has more than 2 million more members than Chime's. The subtler point I'd like to highlight is that SoFi is also far more profitable than Chime. Perhaps the biggest contributor to SoFi's profitability is the rate at which it is acquiring new members relative to the competition: per the table above, SoFi's three-year membership CAGR is almost double Chime's. By onboarding more users faster, SoFi has been able to monetize those members more quickly and command superior unit economics compared to its peers. While Chime's growth is impressive, the company lags SoFi on a number of critical metrics.
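For readers who want to sanity-check the table, the two back-of-the-envelope figures cited above -- products per member, and what the membership CAGR gap implies over three years -- can be reproduced with a few lines of arithmetic. This is a minimal sketch: the function names are illustrative, the inputs are simply the figures quoted in the article, and the growth calculation uses the standard compound-annual-growth-rate definition.

```python
def products_per_member(total_products_m: float, members_m: float) -> float:
    """Average number of products held per member (both inputs in millions)."""
    return total_products_m / members_m

def implied_growth_multiple(cagr: float, years: int = 3) -> float:
    """Total growth multiple implied by compounding an annual rate over `years`."""
    return (1 + cagr) ** years

# SoFi: 15.9M products across 10.9M members -> roughly 1.46 products each.
avg = products_per_member(15.9, 10.9)

# A 41.3% three-year CAGR implies SoFi's member base roughly 2.8x'd over
# three years, versus about 1.8x for Chime at a 22.3% CAGR.
sofi_multiple = implied_growth_multiple(0.413)
chime_multiple = implied_growth_multiple(0.223)

print(round(avg, 2), round(sofi_multiple, 2), round(chime_multiple, 2))
# prints: 1.46 2.82 1.83
```

Compounding is why the "almost double" CAGR gap matters more than it looks: over three years it translates into a member base growing roughly 2.8x versus 1.8x.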
While I suspect that Chime may see a brief uptick in its operations thanks to the notoriety that came with its IPO, I question whether the company will ever eclipse SoFi's size. Although SoFi is a bit pricey compared to traditional bank and financial services stocks based on its price-to-earnings (P/E) ratio, I think the shares deserve a premium given the company's technology-first platform, and I see SoFi growing into its valuation through future earnings growth. If I had to choose between the two digital bank stocks explored here, I'd pick SoFi without thinking twice.

Before you buy stock in SoFi Technologies, consider this: The Motley Fool Stock Advisor analyst team just identified what they believe are the 10 best stocks for investors to buy now -- and SoFi Technologies wasn't one of them. The 10 stocks that made the cut could produce monster returns in the coming years. Consider when Netflix made this list on December 17, 2004: if you invested $1,000 at the time of our recommendation, you'd have $664,089!* Or when Nvidia made this list on April 15, 2005: if you invested $1,000 at the time of our recommendation, you'd have $881,731!* It's worth noting that Stock Advisor's total average return is 994% -- a market-crushing outperformance compared to 172% for the S&P 500. Don't miss out on the latest top 10 list, available when you join Stock Advisor. See the 10 stocks »

*Stock Advisor returns as of June 9, 2025

JPMorgan Chase is an advertising partner of Motley Fool Money. Wells Fargo is an advertising partner of Motley Fool Money. Adam Spatacco has positions in SoFi Technologies. The Motley Fool has positions in and recommends JPMorgan Chase and PayPal. The Motley Fool recommends Capital One Financial and recommends the following options: long January 2027 $42.50 calls on PayPal and short June 2025 $77.50 calls on PayPal. The Motley Fool has a disclosure policy.

"Chime versus SoFi: Which Is the Better Fintech Stock Right Now?" was originally published by The Motley Fool.
