It's OpenAI's Biggest Acquisition To Date – But What Does Windsurf Do?

Forbes, 06-05-2025

The OpenAI logo. NurPhoto via Getty Images
OpenAI's acquisition of Windsurf is making big headlines in the tech media world. At roughly $3 billion, it is the largest acquisition to date by the household-name company behind ChatGPT, and it has significant ramifications for AI development in general.
However, a lot of the reporting stops at a very surface level.
If you're trying to figure out what Windsurf is all about, you might or might not get what you want from the syndicated content that shows up at the top of Google's search results.
So let's break down two elements of what Windsurf does. As an alternative, you can also click through to this video, where I interviewed Windsurf (née Codeium) co-founder Anshul Ramachandran about everything, including how the company decided to change its name. Here's one of Ramachandran's thoughts on the AI revolution, for context:
'The only thing that has ever invariably happened is (that) we've had more developers every single time the nature of what a developer is has changed.'
As for the company's major contributions to AI, read on.

Vibe Coding and Windsurf
First of all, most of those reporting on Windsurf describe it as a company that has its own coding tool.
Windsurf Editor is a standalone IDE that incorporates AI agents to help automate coding. It's an AI-native design that leverages the new capabilities of LLMs to let humans, in a sense, take a backseat.
Not too many months ago, prominent AI researcher Andrej Karpathy described the practice of vibe coding, where you just give the machine some instructions, then sit back and let AI manage the details.
'It's not really coding,' he famously said, 'I just see stuff, say stuff, run stuff, and copy-paste stuff, and it mostly works.'
In other words, you don't have to hand-code anymore – you just catch the vibe of what the machine is doing.
I've written a number of articles about this, exploring whether you need actual coding experience to do this kind of vibe coding, or whether it merely helps.
In any case, Windsurf Editor, along with the Cascade AI agent inside it, helps with debugging, refactoring and other code operations, as an alternative to Cursor, another popular option for vibe coding.
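To make that concrete, here is the flavor of the exchange: a plain-English request and the kind of small, self-contained function an agent might hand back. This is purely illustrative, written for this article; it is not actual output from Windsurf or Cascade, and the function name is my own.

```python
# Illustrative only: a plain-English request to an agentic coding tool,
# and the sort of small function it might produce in response.

# Prompt: "Write me a function that takes a list of order totals and
# returns the average, ignoring any negative values."

def average_valid_totals(totals: list[float]) -> float:
    """Return the mean of the non-negative values in `totals` (0.0 if none)."""
    valid = [t for t in totals if t >= 0]
    return sum(valid) / len(valid) if valid else 0.0

print(average_valid_totals([19.99, -5.00, 42.50]))  # 31.245
```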
For its part, Windsurf Editor is very popular in some quarters, as in this notable quote from Y Combinator's Garry Tan:
'Every single one of these engineers has to spend literally just one day making projects with Windsurf and it will be like they strapped on rocket boosters.'
That's a pretty glowing endorsement.

The Hardware Picture
Now, if you click into some of the major reporting on the OpenAI-Windsurf deal, you're not going to hear anything about strategic hardware investment. As mentioned, quite a few of these articles just say that Windsurf offers the vibe coding tool, and leave it at that.
However, others suggest that OpenAI is interested in Windsurf partly for its hardware approach: according to some sources, the company is also focused on 'custom AI chips and high-performance server clusters.' (See this short.)
How does this work?
Well, if you look at internal documentation, it turns out that the server approach used by Windsurf is built around something called MCP, or Model Context Protocol. The tools send activity to the appropriate servers as part of the overall workflow.
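For readers who want a slightly deeper look, MCP messages are JSON-RPC calls: the editor acts as a client, asks a tool server to run a named tool, and folds the structured result back into the agent's workflow. Here is a minimal sketch of what such a request can look like; the tool name and query are hypothetical examples, not taken from Windsurf's documentation.

```python
import json

# Minimal sketch of an MCP-style request. MCP messages are JSON-RPC 2.0;
# the editor (client) asks a tool server to invoke one of its tools.
# The tool name and arguments below are hypothetical.

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",          # ask the server to run a named tool
    "params": {
        "name": "search_codebase",   # hypothetical tool exposed by a server
        "arguments": {"query": "where is the auth middleware defined?"},
    },
}

print(json.dumps(request, indent=2))  # what the editor would send over the wire
```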
As for the chips, you can catch the rest of that interview or hear about how Windsurf and its new stakeholder are going to pursue microprocessor development.
My point is that Windsurf does both of these things: it offers vibe coding tools, and it builds the hardware context for those systems.

Is OpenAI a Non-Profit?
In related coverage, OpenAI CEO Sam Altman has noted that the company will maintain both its for-profit and non-profit components.
'Our for-profit LLC, which has been under the non-profit (status) since 2019, will transition to a Public Benefit Corporation (PBC) – a purpose-driven company structure that has to consider the interests of both shareholders and the mission,' OpenAI CEO Sam Altman said recently, according to reporting. 'Instead of our current complex capped-profit structure—which made sense when it looked like there might be one dominant AGI effort but doesn't in a world of many great AGI companies—we are moving to a normal capital structure where everyone has stock. This is not a sale, but a change of structure to something simpler.'
When this plan was initially announced, there was some confusion over whether the company would become non-profit or not.
This seems to suggest that both components will be maintained going into the future, and that a non-profit arm of the company will have a substantial impact.
As for OpenAI and Windsurf, this is big news. It will be worth watching how this partnership works out, and what it does for one of the biggest AI environments on the market.

