2025 CNBC Disruptor 50: How we chose companies on this year's list

CNBC | 10-06-2025

The top five companies on the 2025 CNBC Disruptor 50 list — Anduril, OpenAI, Databricks, Anthropic and Canva — have a combined valuation of just under $500 billion. This is more than the combined total valuation of almost every past Disruptor 50 list of the last 12 years.
OpenAI, the company that sparked a global arms race for new artificial intelligence capabilities, is the biggest contributor with its $300 billion value. But it is a race in which the other four companies in the top five (and more than two-thirds of the entire 2025 Disruptor 50) are very much key participants.
The piles of cash amassed by these startups are characteristic of a new era of the Disruptor 50 list, an era that began with the 2023 list and very much continues, with the Disruptors using those cash piles to fund their own growth organically and, notably, inorganically. Databricks has been especially acquisitive, spending billions of dollars to buy other companies in the past year.
But valuation isn't everything. The eye-popping values attained by the top five companies on this year's list, and many others throughout the top 50, were technically less important factors in our ranking methodology than other measures of the companies' growth, scalability, and their overall promise to keep on disrupting in the years to come.
Here's how we chose the 2025 Disruptor 50:
All private, independently owned startup companies founded after Jan. 1, 2010, were eligible to be nominated for the Disruptor 50 list. Companies nominated were required to submit a detailed analysis, including key quantitative and qualitative information.
Quantitative metrics included company-submitted data on their sales, number of users, employee growth (or lack thereof), and more. Some of this information was kept off the record and used for scoring purposes only. CNBC also brought in data from two outside partners: PitchBook, which provided data on fundraising, implied valuations and investor quality; and IBISWorld, whose database of industry reports we used to compare the companies based on the industries they are attempting to disrupt.
CNBC's Disruptor 50 Advisory Board, a group of leading thinkers in the field of innovation and entrepreneurship from around the world, along with the newer Disruptor 50 VC Advisory Board, then ranked the quantitative criteria by importance and ability to disrupt established industries and public companies. This year, the two advisory boards found that scalability and user growth were the most important criteria, followed by sales growth and access to capital and community.
New for 2025, we can compare how the two advisory boards weighed the importance of the list criteria. While the two boards mostly agreed, the VC group thought the size of the industry being disrupted was much more important than the academics did, while the academics ranked access to capital and community as a more important criterion than did the group that provides said access.
The ranking model is complex enough to be sensitive to these differences of opinion, and perhaps more than ever, it makes good on the concept that companies must score highly on a wide range of criteria to make the final list.
Nominated companies were also asked to submit important qualitative information about themselves, including descriptions of their core business model, ideal customers and recent company milestones. A team of CNBC editorial staff, including TV anchors, reporters and producers, and CNBC.com reporters and editors, along with many members of the Advisory Board, read the submissions and provided holistic qualitative assessments of each company.
In addition, the VC Advisory Board assessed a small group of finalists as an additional component of the qualitative review. Specifically, we asked the VC group to assess some of the companies that would, if selected, be making the list for the first time, as well as to help in the consideration of high-scoring early stage firms, a group with lower valuations but promising business models poised for future growth. Importantly, these VCs were not permitted to provide an assessment of any company in their firm's own portfolios.
In the final stage of the process, total qualitative scores were combined with a weighted quantitative score to determine which 50 companies made the list and in what order.
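The process described above amounts to a two-part scoring model: advisory boards weight the quantitative criteria, and those weighted scores are then blended with holistic qualitative assessments to produce the final ranking. The sketch below, in Python, illustrates how such a weighted combination could be computed; the specific weights, the 70/30 split between quantitative and qualitative scores, and the example company are hypothetical placeholders, not CNBC's actual model.

```python
# A minimal sketch of a weighted ranking model like the one described above.
# All weights, the quantitative/qualitative split, and the sample data are
# hypothetical illustrations, not CNBC's actual methodology.
from dataclasses import dataclass

# Hypothetical criterion weights, e.g. derived from the advisory boards'
# importance rankings (scalability and user growth weighted highest).
WEIGHTS = {
    "scalability": 0.30,
    "user_growth": 0.25,
    "sales_growth": 0.20,
    "access_to_capital": 0.15,
    "industry_size": 0.10,
}

@dataclass
class Company:
    name: str
    quantitative: dict  # normalized 0-1 score per criterion in WEIGHTS
    qualitative: float  # holistic editorial/advisory score, 0-1

def total_score(company: Company, quant_weight: float = 0.7) -> float:
    """Blend the weighted quantitative score with the qualitative score."""
    quant = sum(w * company.quantitative.get(k, 0.0) for k, w in WEIGHTS.items())
    return quant_weight * quant + (1 - quant_weight) * company.qualitative

# Rank the nominees; the 50 highest combined scores would make the list, in order.
nominees = [
    Company(
        name="ExampleAI",  # hypothetical nominee
        quantitative={"scalability": 0.9, "user_growth": 0.8, "sales_growth": 0.7,
                      "access_to_capital": 0.9, "industry_size": 0.6},
        qualitative=0.85,
    ),
]
disruptor_50 = sorted(nominees, key=total_score, reverse=True)[:50]
```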
The new generative AI era that began in 2023 has completely transformed the Disruptor 50 List. Twenty of this year's 50 companies have made the list for the first time, while another 19 were first-timers in either 2023 or 2024. Put another way, only 11 of the 2025 honorees are pre-ChatGPT CNBC Disruptors. But for most of that group (Anduril, Databricks, and Canva chief among them), the embrace of the new era is what has kept them here.


Related Articles

Why OpenAI engineers are turning down $100 million from Meta, according to Sam Altman

Yahoo | an hour ago

OpenAI CEO Sam Altman says competitors, particularly Mark Zuckerberg's Meta, have been trying to poach OpenAI engineers with sky-high compensation packages. 'They started making these, like, giant offers to people on our team. You know, like $100 million signing bonuses and more than that in compensation per year,' Altman said this week on the Uncapped podcast, hosted by his brother, Jack Altman.

Altman said he was glad to see that those enticing offers haven't worked on OpenAI's best people. He assumes this is because they looked at the two paths, Meta and OpenAI, and concluded that the latter has a better shot at delivering on superintelligence and will eventually become the more valuable company.

Amid the digs, Altman said Meta is missing the one thing that truly matters in AI: a culture of real innovation. 'There are many things I respect about Meta as a company, but I don't think they're great at innovation,' said Altman, when discussing Meta's attempts to lure OpenAI engineers. He explained that by trying to recruit OpenAI staff with massive guaranteed compensation packages, Meta is essentially building a culture that prioritizes money over the work and mission. He believes that focusing on money rather than purpose and product is a recipe for the wrong kind of culture.

Altman contrasted this with OpenAI's approach, which he said attracts and retains talent by aligning financial incentives with a shared sense of purpose and innovative work. 'The special thing about OpenAI is we've managed to build a culture that is good at innovation, and I think we understand a lot of things they don't know about what it takes to succeed at that,' he explained.

Drawing a parallel to past tech rivalries, Altman recalled hearing Zuckerberg discuss how Google tried to enter the social media space in the early days of Facebook. However, to those at Facebook, it was clear that it wasn't going to work for Google. Altman said he now feels similarly about Meta's approach to AI, suggesting that Meta is making an error by trying to replicate OpenAI's success directly. He even discussed how he believes many people at Meta simply copy OpenAI. Altman explained this with an example of how many other companies' chat apps resemble ChatGPT, down to the UI mistakes. He drew from his own experience to argue that the copy-and-paste strategy is fundamentally flawed, and that trying to go where your competitor already is, instead of building a culture around innovation, rarely works.

When asked why he thinks Meta sees OpenAI as such a competitor, Altman mentioned how an ex-Meta employee once told him that Meta views ChatGPT as a Facebook replacement. He explained that the user experience with ChatGPT felt different, like one of the few tech products that didn't feel 'somewhat adversarial.' He contrasted this with Google, which he said has started showing worse search results, and with Meta's products, which try to hack users' brains to keep them scrolling. Instead of doing either, ChatGPT simply tries to help users with whatever questions they may have, and even help them feel better.

Beyond discussing Meta, the Altman brothers talked about a wide range of topics related to the future of AI, OpenAI's strategy, and even Sam's personal reflections.
Altman also made a 'crazy claim' that AI will discover new science, and said that humanoid robots are one of his dreams, something he thinks will be achievable within the next 5 to 10 years.

Using AI bots like ChatGPT could be causing cognitive decline, new study shows

Yahoo | 3 hours ago

A new pre-print study from the US-based Massachusetts Institute of Technology (MIT) found that using OpenAI's ChatGPT could lead to cognitive decline. Researchers with the MIT Media Lab broke participants into three groups and asked them to write essays using only ChatGPT, a search engine, or no tools at all. Brain activity was recorded with an electroencephalogram (EEG) during the task. Then, the essays were evaluated by both humans and artificial intelligence (AI) tools.

The study showed that the ChatGPT-only group had the lowest neural activation in parts of the brain and had a hard time recalling or recognising their writing. The brain-only group that used no technology was the most engaged, showing both cognitive engagement and memory retention.

The researchers then ran a second session in which the ChatGPT group was asked to do the task without assistance. In that session, those who had used ChatGPT in the first session performed worse than their peers, producing writing that was 'biased and superficial'. The study found that repeated GPT use can come with 'cognitive debt' that reduces long-term learning performance in independent thinking. In the long run, people with cognitive debt could be more susceptible to 'diminished critical inquiry, increased vulnerability to manipulation and decreased creativity,' as well as a 'likely decrease' in learning skills.

'When participants reproduce suggestions without evaluating their accuracy or relevance, they not only forfeit ownership of the ideas but also risk internalising shallow or biased perspectives,' the study continued.

The study also found higher rates of satisfaction and brain connectivity in the participants who wrote all essays with just their minds compared to the other groups. Those from the other groups felt less connected to their writing and were not able to provide a quote from their essays when asked to by the researchers. The authors recommend that more studies be done on how any AI tool impacts the brain 'before LLMs are recognised as something that is net positive for humans.'

OpenAI supremo Sam Altman says he 'doesn't know how' he would have taken care of his baby without the help of ChatGPT

Yahoo | 3 hours ago

For a chap atop one of the most high-profile tech organisations on the planet, OpenAI CEO Sam Altman's propensity, shall we say, to expatiate but not excogitate is remarkable. Sometimes, he really doesn't seem to think before he speaks. The latest example involves his status as a "new parent," something which he apparently doesn't consider viable without help from his very own chatbot (via TechCrunch).

"Clearly, people have been able to take care of babies without ChatGPT for a long time," Altman initially and astutely observes on the official OpenAI podcast, only to concede, "I don't know how I would've done that." "Those first few weeks it was constantly," he says of his tendency to consult ChatGPT on childcare. Apparently, books, consulting friends and family, even a good old-fashioned Google search would not have occurred to this colossus astride the field of artificial, er, intelligence.

If all that's a touch arch, forgive me. But Altman is in absolute AI evangelism overdrive mode in this interview. "I spend a lot of time thinking about how my kid will use AI in the future," he says. "My kids will never be smarter than AI. But they will grow up vastly more capable than we grew up and able to do things that we cannot imagine, they'll be really good at using AI."

There are countless immediate and obvious objections to that world view. For sure, people will be better at using AI. But will they themselves be more capable? Maybe most people won't be able to write coherent prose if AI does it for them from day one. Will having AI write everything make everyone more capable?

Not that this is a major revelation, but this podcast makes it clear just how signed up Altman is to the AI revolution. "They will look back on this as a very prehistoric time period," he says of today's children. That's a slightly odd claim, given "prehistory" means before human activities and endeavours were recorded for posterity. And, of course, the very existence of the large language models that OpenAI creates relies entirely on the countless gigabytes of pre-AI data on which those LLMs were originally trained.

Indeed, one of the greatest challenges currently facing AI is the notion of chatbot contamination. The idea is that, since the release of ChatGPT into the wild in 2022, the data on which LLMs are now being trained is increasingly polluted with the synthetic output of prior chatbots. As more and more chatbots inject more and more synthetic data into the overall shared pool, subsequent generations of AI models will become ever more polluted and less reliable, eventually leading to a state known as AI model collapse. Indeed, some observers believe this is already happening, as evidenced by the increasing propensity of some of the latest models to hallucinate. Cleaning that problem up is going to be "prohibitively expensive, probably impossible" by some accounts.

Anyway, if there's an issue with Altman's unfailingly optimistic utterances, it's probably a lack of nuance. Everything before AI is hopeless and clunky, to the point where it's hard to imagine how you'd look after a newborn baby without ChatGPT. Everything after AI is bright and clean and perfect. Of course, anyone who's used a current chatbot for more than a few moments will be very familiar with their immediately obvious limitations, let alone the broader problems they may pose even if issues like hallucination are overcome.
At the very least, it would be a lot easier to empathise with the likes of Altman if there was some sense of those challenges to balance his one-sided narrative. Anywho, fire up the podcast and decide for yourself just what you make of Altman's everything-AI attitudes.
