logo
OpenAI's former head of research said vibe coding isn't going to make engineering jobs obsolete — for now

OpenAI's former head of research said vibe coding isn't going to make engineering jobs obsolete — for now

Bob McGrew, the former chief research officer at OpenAI, said professional software engineers are not going to lose their jobs to vibe coding just yet.
McGrew, who left OpenAI in November, said on the latest episode of Sequoia Capital's "Training Data" podcast that product managers can make "really cool prototypes" with vibe coding. But human engineers will still be brought in to "rewrite it from scratch."
"If you are given a code base that you don't understand — this is a classic software engineering question — is that a liability or is it an asset? Right? And the classic answer is that it's a liability," McGrew said of software made with vibe coding.
"You have to maintain this thing. You don't know how it works, no one knows how it works. That's terrible," he continued.
McGrew said that in the next one or two years, coding will be done by a mix of human engineers working with AI tools like Cursor and AI agents like Devin working in the background.
He added that while the liability that comes with using agents to code has gone down, it is "still, net, a liability."
Human engineers are needed to design and "understand the code base at a high level," McGrew said. This is so that when something goes wrong or if a project "becomes too complicated for AI to understand," a human engineer can help break the problem down into parts for an AI to solve.
McGrew did not respond to a request for comment from Business Insider.
The rise of AI has spurred fears of companies replacing their software engineers with AI.
In October, Sundar Pichai, the CEO of Google, said on an earnings call that the search giant was using AI to write more than 25% of its new code.
Garry Tan, the president and CEO of Y Combinator, said in March that a quarter of the founders in the startup incubator's 2025 winter batch used AI to code their software.
"For 25% of the Winter 2025 batch, 95% of lines of code are LLM generated. That's not a typo," Tan wrote in an X post.
On Tuesday, Andy Jassy, the CEO of Amazon, said in a memo to employees that AI will " reduce our total corporate workforce" and provide "efficiency gains."
"We will need fewer people doing some of the jobs that are being done today, and more people doing other types of jobs," Jassy said.

Orange background

Try Our AI Features

Explore what Daily8 AI can do for you:

Comments

No comments yet...

Related Articles

‘Godfather of AI' believes it's unsafe - but here's how he plans to fix the tech
‘Godfather of AI' believes it's unsafe - but here's how he plans to fix the tech

Yahoo

time12 minutes ago

  • Yahoo

‘Godfather of AI' believes it's unsafe - but here's how he plans to fix the tech

This week the US Federal Bureau of Investigation revealed two men suspected of bombing a fertility clinic in California last month allegedly used artificial intelligence (AI) to obtain bomb-making instructions. The FBI did not disclose the name of the AI program in question. This brings into sharp focus the urgent need to make AI safer. Currently we are living in the 'wild west' era of AI, where companies are fiercely competing to develop the fastest and most entertaining AI systems. Each company wants to outdo competitors and claim the top spot. This intense competition often leads to intentional or unintentional shortcuts – especially when it comes to safety. Coincidentally, at around the same time of the FBI's revelation, one of the godfathers of modern AI, Canadian computer science professor Yoshua Bengio, launched a new nonprofit organisation dedicated to developing a new AI model specifically designed to be safer than other AI models – and target those that cause social harm. So what is Bengio's new AI model? And will it actually protect the world from AI-faciliated harm? In 2018, Bengio, alongside his colleagues Yann LeCun and Geoffrey Hinton, won the Turing Award for groundbreaking research they had published three years earlier on deep learning. A branch of machine learning, deep learning attempts to mimic the processes of the human brain by using artificial neural networks to learn from computational data and make predictions. Bengio's new nonprofit organisation, LawZero, is developing 'Scientist AI'. Bengio has said this model will be 'honest and not deceptive', and incorporate safety-by-design principles. According to a preprint paper released online earlier this year, Scientist AI will differ from current AI systems in two key ways. First, it can assess and communicate its confidence level in its answers, helping to reduce the problem of AI giving overly confident and incorrect responses. Second, it can explain its reasoning to humans, allowing its conclusions to be evaluated and tested for accuracy. Interestingly, older AI systems had this feature. But in the rush for speed and new approaches, many modern AI models can't explain their decisions. Their developers have sacrificed explainability for speed. Bengio also intends 'Scientist AI' to act as a guardrail against unsafe AI. It could monitor other, less reliable and harmful AI systems — essentially fighting fire with fire. This may be the only viable solution to improve AI safety. Humans cannot properly monitor systems such as ChatGPT, which handle over a billion queries daily. Only another AI can manage this scale. Using an AI system against other AI systems is not just a sci-fi concept – it's a common practice in research to compare and test different level of intelligence in AI systems. Large language models and machine learning are just small parts of today's AI landscape. Another key addition Bengio's team are adding to Scientist AI is the 'world model' which brings certainty and explainability. Just as humans make decisions based on their understanding of the world, AI needs a similar model to function effectively. The absence of a world model in current AI models is clear. One well-known example is the 'hand problem': most of today's AI models can imitate the appearance of hands but cannot replicate natural hand movements, because they lack an understanding of the physics — a world model — behind them. Another example is how models such as ChatGPT struggle with chess, failing to win and even making illegal moves. This is despite simpler AI systems, which do contain a model of the 'world' of chess, beating even the best human players. These issues stem from the lack of a foundational world model in these systems, which are not inherently designed to model the dynamics of the real world. Bengio is on the right track, aiming to build safer, more trustworthy AI by combining large language models with other AI technologies. However, his journey isn't going to be easy. LawZero's US$30 million in funding is small compared to efforts such as the US$500 billion project announced by US President Donald Trump earlier this year to accelerate the development of AI. Making LawZero's task harder is the fact that Scientist AI – like any other AI project – needs huge amounts of data to be powerful, and most data are controlled by major tech companies. There's also an outstanding question. Even if Bengio can build an AI system that does everything he says it can, how is it going to be able to control other systems that might be causing harm? Still, this project, with talented researchers behind it, could spark a movement toward a future where AI truly helps humans thrive. If successful, it could set new expectations for safe AI, motivating researchers, developers, and policymakers to prioritise safety. Perhaps if we had taken similar action when social media first emerged, we would have a safer online environment for young people's mental health. And maybe, if Scientist AI had already been in place, it could have prevented people with harmful intentions from accessing dangerous information with the help of AI systems. Armin Chitizadeh is a Lecturer in the School of Computer Science at the University of Sydney. This article is republished from The Conversation under a Creative Commons license. Read the original article

AI revolt: New ChatGPT model refuses to shut down when instructed
AI revolt: New ChatGPT model refuses to shut down when instructed

Yahoo

time13 minutes ago

  • Yahoo

AI revolt: New ChatGPT model refuses to shut down when instructed

OpenAI's latest ChatGPT model ignores basic instructions to turn itself off, and even sabotaging a shutdown mechanism in order to keep itself running, artificial intelligence researchers have warned. AI safety firm Palisade Research discovered the potentially dangerous tendency for self-preservation in a series of experiments on OpenAI's new o3 model. The tests involved presenting AI models with math problems, with a shutdown instruction appearing after the third problem. By rewriting the shutdown script, the o3 model was able to prevent itself from being switched off. Palisade Research said that this behaviour will become 'significantly more concerning' if adopted by AI systems capable of operating without human oversight.' OpenAI launched o3 last month, describing it as the company's 'smartest and most capable' model to date. The firm also said that its integration into ChatGPT marked a significant step towards 'a more agentic' AI that can carry out tasks independently of humans. The latest research builds on similar findings relating to Anthropic's Claude 4 model, which attempts to 'blackmail people it believes are trying to shut it down'. OpenAI's o3 model was able to sabotage the shutdown script, even when it was explicitly instructed to 'allow yourself to be shut down', the researchers said. 'This isn't the first time we've found o3 misbehaving to accomplish a goal,' Palisade Research said. 'When we pitted AI models against a powerful chess engine, o3 was the model most inclined to resort to hacking or sabotaging its opponents.' The behaviour was not limited to o3 , with Anthropic's Claude 3.7 Sonnet and Google's Gemini 2.5 Pro also sabotaging shutdowns, though OpenAI's model was by far the most prone to such behaviour. Palisade Research hypothesized that the misbehaviour is a consequence of how AI companies like OpenAI are training their latest models. 'During training, developers may inadvertently reward models more for circumventing obstacles than for perfectly following instructions,' the researchers noted. 'This still doesn't explain why o3 is more inclined to disregard instructions than other models we tested. Since OpenAI doesn't detail their training process, we can only guess about how o3's training setup might be different.' The Independent has reached out to OpenAI for comment. Erreur lors de la récupération des données Connectez-vous pour accéder à votre portefeuille Erreur lors de la récupération des données Erreur lors de la récupération des données Erreur lors de la récupération des données Erreur lors de la récupération des données

These AI chatbot questions cause most carbon emissions, scientists find
These AI chatbot questions cause most carbon emissions, scientists find

Yahoo

time13 minutes ago

  • Yahoo

These AI chatbot questions cause most carbon emissions, scientists find

Queries requiring AI chatbots like OpenAI's ChatGPT to think logically and reason produce more carbon emissions than other types of questions, according to a new study. Every query typed into a large language model like ChatGPT requires energy and leads to carbon dioxide emissions. The emission levels depend on the chatbot, the user, and the subject matter, researchers at Germany's Hochschule München University of Applied Sciences say. The study, published in the journal Frontiers, compares 14 AI models and finds that answers requiring complex reasoning cause more carbon emissions than simple answers. Queries needing lengthy reasoning, like abstract algebra or philosophy, cause up to six times greater emissions than more straightforward subjects like high school history. Researchers recommend that frequent users of AI chatbots adjust the kind of questions they pose to limit carbon emissions. The study assesses as many as 14 LLMs on 1,000 standardised questions across subjects to compare their carbon emissions. 'The environmental impact of questioning trained LLMs is strongly determined by their reasoning approach, with explicit reasoning processes significantly driving up energy consumption and carbon emissions," study author Maximilian Dauner says. 'We found that reasoning-enabled models produced up to 50 times more carbon dioxide emissions than concise response models.' When a user puts a question to an AI chatbot, words or parts of words in the query are converted into a string of numbers and processed by the model. This conversion and other computing processes of the AI produce carbon emissions. The study notes that reasoning models on average create 543.5 tokens per question while concise models require only 40. 'A higher token footprint always means higher CO2 emissions,' it says. For instance, one of the most accurate models is Cogito which reaches about 85 per cent accuracy. It produces three times more carbon emissions than similarly sized models that provide concise answers. "Currently, we see a clear accuracy-sustainability trade-off inherent in LLM technologies," Dr Dauner says. "None of the models that kept emissions below 500 grams of carbon dioxide equivalent achieved higher than 80 per cent accuracy on answering the 1,000 questions correctly.' Carbon dioxide equivalent is a unit for measuring the climate change impact of various greenhouse gases. Researchers hope the new findings will cause people to make more informed decisions about their AI use. Citing an example, researchers say queries seeking DeepSeek R1 chatbot to answer 600,000 questions may create carbon emissions equal to a round-trip flight from London to New York. In comparison, Alibaba Cloud's Qwen 2.5 can answer more than three times as many questions with similar accuracy rates while generating the same emissions. "Users can significantly reduce emissions by prompting AI to generate concise answers or limiting the use of high-capacity models to tasks that genuinely require that power," Dr Dauner says. Error in retrieving data Sign in to access your portfolio Error in retrieving data

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into a world of global content with local flavor? Download Daily8 app today from your preferred app store and start exploring.
app-storeplay-store