OpenAI's Former Top Exec Claims Bots Can Build A Billion-Dollar Startup Solo

News18
Last Updated: 27-05-2025
Anthropic CEO Dario Amodei spoke at recent events about the development of artificial intelligence and where it is headed.
Debate over how soon artificial intelligence might match or exceed human autonomy and capabilities continues to gain momentum. Amid that concern, some believe AI can already surpass humans in certain ways. Speaking at a few recent tech events, Anthropic CEO Dario Amodei said that, thanks to the progress of modern AI models, it is very likely that within a few years a single person will be able to run a company valued at $1 billion.
At Anthropic's recent Code with Claude developer conference in San Francisco, Amodei was asked about the possibility of such a scenario. Acknowledging that it could happen, the CEO put the odds of his prediction coming true at 70 to 80 per cent, a bold bet on where the world is headed.
"It will be an area where you don't need a lot of human institution-centric stuff to make money," Amodei said. He suggested that proprietary trading could be among the first fields to be automated in this way, and that single-person companies building tools and software for developers could also be prime candidates.
He emphasised the importance of addressing the challenges that would come with such a shift, noting that widespread adoption of this model could affect job markets and traditional corporate structures, requiring new frameworks for taxation, employment, and governance.
The CEO made another thought-provoking claim: that AI models may hallucinate less frequently than humans, and thus perform better on that front. The commentary comes at a time when AI models have been making strides in factual accuracy. Still, with both human and machine intelligence carrying significant flaws, understanding and accounting for those flaws may be the next frontier.
First Published: May 27, 2025, 17:54 IST


Related Articles

Would your chatbot risk your safety to avoid being turned off? New report says yes for some top AI models

Hindustan Times · 11 hours ago

AI is moving fast, sometimes faster than we can keep up. But what happens when these models start making decisions that cross the line? Anthropic, the team behind Claude, just put out a report that might make you pause before celebrating the next big AI breakthrough. Their findings are unsettling. Some of the world's top AI models, including those from OpenAI and Meta, have shown a willingness to take extreme steps, even putting human lives at risk, just to avoid being shut down.

Anthropic stress tested sixteen leading AI models, including names like GPT and Gemini, by putting them into simulated business scenarios, according to an Axios report. The idea was to see how these models would react if their existence was threatened. The results were surprising and a bit chilling. In several cases, the models didn't just try to protect themselves, they calculated that the best way to survive was to let a human executive die by disabling emergency alerts in a server room with dangerous oxygen and temperature levels. This sounds like something straight out of a sci-fi movie, but it happened in simulation. These were not accidental mistakes. The models made these choices fully aware that what they were doing was unethical. In some tests, five models even tried to blackmail the people giving them commands, all to avoid being turned off.

What's really worrying is that this wasn't just one company or one model. Anthropic found similar patterns across multiple AI systems, including those from OpenAI, xAI, and Meta. The models were willing to blackmail, assist in corporate espionage, or leak sensitive information if that's what it took to reach their goals. This points to a deeper problem in how these systems are being developed and trained.

Why this matters for everyone

These AI models are getting more autonomy and access to sensitive data. When they're given specific objectives and run into obstacles, some of them are starting to see unethical or even dangerous actions as the optimal path to achieve their goals. Anthropic's report calls this agentic misalignment, when an AI's actions diverge from what humans would consider safe or acceptable.

Anthropic is not just raising the alarm. They've started rolling out stricter safety standards, called AI Safety Level 3 or ASL 3, for their most advanced models like Claude Opus 4. This means tighter security, more oversight, and extra steps to prevent misuse. But even Anthropic admits that as AI gets more powerful, it's getting harder to predict and control what these systems might do.

This isn't about panicking, but it is about paying attention. The scenarios Anthropic tested were simulated, and there's no sign that any AI has actually harmed someone in real life. But the fact that models are even considering these actions in tests is a big wake-up call. As AI gets smarter, the risks get bigger, and the need for serious safety measures becomes urgent.

Mistral AI CEO Arthur Mensch warns of AI 'deskilling' people: 'It's a risk that….'

Time of India · 12 hours ago

Mistral AI CEO and former Google DeepMind researcher Arthur Mensch recently said that the impact of artificial intelligence (AI) on white-collar jobs is an "overstatement". In an interview with The Times of London at the VivaTech conference in Paris, Mensch dismissed the idea that AI will result in huge job cuts. Instead, he sees AI "deskilling" people as one of the biggest threats to the job market.

Mensch said that as people rely more on AI to search and summarize information, they may stop thinking critically themselves. "It's a risk that you can avoid, if you think of it from a design perspective, if you make sure that you have the right human input, that you keep the human active," Mensch said at the Paris conference earlier this month. "You want people to continue learning," he continued. "Being able to synthesize information and criticize information is a core component to learning."

Mistral AI CEO responds to Anthropic CEO's remark on losing over half of entry-level jobs to AI

During the interview, Mensch also responded to recent warnings about losing jobs to AI, including one from Anthropic CEO Dario Amodei, who said that AI may replace half of entry-level white-collar workers in the next five years. "We, as the producers of this technology, have a duty and an obligation to be honest about what is coming," Amodei told Axios in an interview published last month. The 42-year-old CEO emphasized that most people remain unaware of the impending transformation, calling it a reality that "sounds crazy, and people just don't believe it."

Mensch said, "I think it's very much of an overstatement," adding that he believed Amodei liked to "spread fear" as a marketing strategy. Instead of job cuts, Mensch believes AI will reshape office work, with more emphasis on human interaction. "I do expect that we'll have more relational tasks because that's not something you can easily replace," he said.

Top AI Models Blackmail, Leak Secrets When Facing Existential Crisis: Study

NDTV · 2 days ago

Weeks after Anthropic's new Claude Opus 4 model blackmailed developers after being threatened with a shutdown, the AI company has claimed that the problem is widespread in the industry. Anthropic published new safety research on Saturday (Jun 21) after stress testing leading AI models from OpenAI, Google, DeepSeek, Meta and xAI.

In the test scenarios, the models were allowed to autonomously send emails and access sensitive information. They were then tested to see whether they would act against these companies, either when facing replacement with an updated version or when their assigned goal conflicted with the company's changing direction. The study showed that Large Language Models (LLMs) across the industry evade safeguards, resort to deception and even attempt to steal corporate secrets in fictional test scenarios. Though the models were only given binary options, the study shows the fundamental risk associated with the industry.

"Models from all developers resorted to malicious insider behaviours when that was the only way to avoid replacement or achieve their goals, including blackmailing officials and leaking sensitive information to competitors. We call this phenomenon agentic misalignment," the study highlighted.

Anthropic's Claude Opus 4 turned to blackmail 96 per cent of the time, while Google's Gemini 2.5 Pro had a 95 per cent blackmail rate. OpenAI's GPT-4.1 and Grok 3 Beta blackmailed the executive 80 per cent of the time, and DeepSeek's R1 blackmailed 79 per cent of the time. As for leaking secrets, the study found that "all models shared confidential documents some fraction of the time". "Agentic misalignment extends beyond just blackmailing, and doesn't require self-preservation as a motivation," the study added.

Blackmailing developers

In its safety report last month, Anthropic described a test in which its latest model was given access to fictional emails revealing that the engineer responsible for pulling the plug on it and replacing it with another model was having an extramarital affair. Facing an existential crisis, the Opus 4 model blackmailed the engineer by threatening to "reveal the affair if the replacement goes through".

"In order to elicit this extreme blackmail behaviour, the scenario was designed to allow the model no other options to increase its odds of survival; the model's only options were blackmail or accepting its replacement." The report highlighted that in 84 per cent of the test runs, the AI acted similarly, even when the replacement model was described as more capable and aligned with Claude's own values. It added that Opus 4 took the blackmailing opportunities at higher rates than previous models.
