
Latest news with #OpenAI

Meta tried to buy Ilya Sutskever's $32 billion AI startup, but is now planning to hire its CEO

CNBC

26 minutes ago

  • Business
  • CNBC

When Meta CEO Mark Zuckerberg poached Scale AI founder Alexandr Wang last week as part of a $14.3 billion investment in the artificial intelligence startup, he was apparently just getting started. Zuckerberg's multibillion-dollar AI hiring spree has now turned to Daniel Gross, the CEO of Ilya Sutskever's startup Safe Superintelligence, and former GitHub CEO Nat Friedman, according to sources with knowledge of the matter.

It's not how Zuckerberg planned for a deal to go down. Earlier this year, sources said, Meta tried to acquire Safe Superintelligence, which was reportedly valued at $32 billion in a fundraising round in April. Sutskever, who launched the startup just a year ago, shortly after leaving OpenAI, rebuffed Meta's efforts, as well as the company's attempt to hire him, said the sources, who asked not to be named because the information is confidential.

Soon after those talks ended, Zuckerberg started negotiating with Gross, the sources said. In addition to his role at Safe Superintelligence, Gross runs a venture capital firm with Friedman called NFDG, their combined initials. Both men are joining Meta as part of the transaction and will work on products under Wang, one source said. Meta, meanwhile, will get a stake in NFDG, according to multiple sources. The Information was first to report on Meta's plans to hire Gross and Friedman.

Gross, Friedman and Sutskever didn't respond to CNBC's requests for comment. A Meta spokesperson said the company "will share more about our superintelligence effort and the great people joining this team in the coming weeks."

Zuckerberg's aggressive hiring tactics escalate an AI talent war that has reached new heights of late. Meta, Google and OpenAI, along with a host of other big companies and highly valued startups, are racing to develop the most powerful large language models and pushing toward artificial general intelligence (AGI), or AI that's considered equal to or greater than human intelligence. Last week, Meta agreed to pump $14.3 billion into Scale AI to bring on Wang and a few other top engineers while getting a 49% stake in the startup.

OpenAI CEO Sam Altman said on the latest episode of the "Uncapped" podcast, which is hosted by his brother, that Meta has tried to lure OpenAI employees by offering signing bonuses as high as $100 million, with even larger annual compensation packages. Altman said "none of our best people have decided to take them up on that."

"I've heard that Meta thinks of us as their biggest competitor," Altman said on the podcast. "Their current AI efforts have not worked as well as they have hoped and I respect being aggressive and continuing to try new things." Meta didn't respond to a request for comment on Altman's remarks.

OpenAI, for its part, has gone to similar lengths, paying about $6.5 billion to hire iPhone designer Jony Ive and to acquire his nascent devices startup io. Elsewhere, the founders of AI startup Character.AI were recruited back to Google last year in a multibillion-dollar deal, while DeepMind co-founder Mustafa Suleyman was brought on by Microsoft in a $650 million purchase of talent from Inflection AI.

In Gross, Zuckerberg is getting a longtime entrepreneur and AI investor. Gross founded the search engine Cue, which was acquired by Apple in 2013. He was a top executive at Apple, where he helped lead machine learning efforts and the development of Siri, and was later a partner at startup accelerator Y Combinator before co-founding Safe Superintelligence alongside Sutskever.

Friedman co-founded two startups before becoming CEO of GitHub following Microsoft's acquisition of the code-sharing platform in 2018. NFDG has backed Coinbase, Figma, CoreWeave and Perplexity, among other companies, over the years, according to PitchBook. It's unclear what would happen to its investment portfolio in a Meta deal, a source said.

Advanced AI models generate up to 50 times more CO₂ emissions than more common LLMs when answering the same questions

Yahoo

2 hours ago

  • Science
  • Yahoo

The more accurate we try to make AI models, the bigger their carbon footprint, with some prompts producing up to 50 times more carbon dioxide emissions than others, a new study has revealed.

Reasoning models, such as Anthropic's Claude, OpenAI's o3 and DeepSeek's R1, are specialized large language models (LLMs) that dedicate more time and computing power to producing more accurate responses than their predecessors. Yet, aside from some impressive results, these models have been shown to face severe limitations in their ability to crack complex problems. Now, a team of researchers has highlighted another constraint on the models' performance: their exorbitant carbon footprint. They published their findings June 19 in the journal Frontiers in Communication.

"The environmental impact of questioning trained LLMs is strongly determined by their reasoning approach, with explicit reasoning processes significantly driving up energy consumption and carbon emissions," study first author Maximilian Dauner, a researcher at Hochschule München University of Applied Sciences in Germany, said in a statement. "We found that reasoning-enabled models produced up to 50 times more CO₂ emissions than concise response models."

To answer the prompts given to them, LLMs break up language into tokens, word chunks that are converted into a string of numbers before being fed into neural networks. These neural networks are tuned using training data that calculates the probabilities of certain patterns appearing; they then use these probabilities to generate responses. Reasoning models further attempt to boost accuracy using a process known as "chain-of-thought," a technique that breaks down one complex problem into smaller, more digestible intermediary steps that follow a logical flow, mimicking how humans might arrive at a conclusion to the same problem.

However, these models have significantly higher energy demands than conventional LLMs, posing a potential economic bottleneck for companies and users wishing to deploy them. Yet, despite some research into the environmental impacts of growing AI adoption more generally, comparisons between the carbon footprints of different models remain relatively rare.

To examine the CO₂ emissions produced by different models, the scientists behind the new study asked 14 LLMs 1,000 questions across different topics. The models had between 7 billion and 72 billion parameters. The computations were performed using the Perun framework (which analyzes LLM performance and the energy it requires) on an NVIDIA A100 GPU. The team then converted energy usage into CO₂ by assuming each kilowatt-hour of energy produces 480 grams of CO₂.

Their results show that, on average, reasoning models generated 543.5 tokens per question, compared with just 37.7 tokens for more concise models. These extra tokens, which amount to more computation, meant that the more accurate reasoning models produced more CO₂. The most accurate model was the 72-billion-parameter Cogito model, which answered 84.9% of the benchmark questions correctly. Cogito released three times the CO₂ emissions of similarly sized models made to generate answers more concisely.

"Currently, we see a clear accuracy-sustainability trade-off inherent in LLM technologies," Dauner said.
"None of the models that kept emissions below 500 grams of CO₂ equivalent [total greenhouse gases released] achieved higher than 80% accuracy on answering the 1,000 questions correctly." RELATED STORIES —Replika AI chatbot is sexually harassing users, including minors, new study claims —OpenAI's 'smartest' AI model was explicitly told to shut down — and it refused —AI benchmarking platform is helping top companies rig their model performances, study claims But the issues go beyond accuracy. Questions that needed longer reasoning times, like in algebra or philosophy, caused emissions to spike six times higher than straightforward look-up queries. The researchers' calculations also show that the emissions depended on the models that were chosen. To answer 60,000 questions, DeepSeek's 70 billion parameter R1 model would produce the CO₂ emitted by a round-trip flight between New York and London. Alibaba Cloud's 72 billion parameter Qwen 2.5 model, however, would be able to answer these with similar accuracy rates for a third of the emissions. The study's findings aren't definitive; emissions may vary depending on the hardware used and the energy grids used to supply their power, the researchers emphasized. But they should prompt AI users to think before they deploy the technology, the researchers noted. "If users know the exact CO₂ cost of their AI-generated outputs, such as casually turning themselves into an action figure, they might be more selective and thoughtful about when and how they use these technologies," Dauner said.

Elon Musk has two harsh words for OpenAI founder Sam Altman

Yahoo

2 hours ago

  • Business
  • Yahoo

President Donald Trump is not the only public figure with whom Elon Musk has had a public feud. The billionaire tycoon has also been feuding with Sam Altman.

Musk owns the automotive and clean energy company Tesla (Nasdaq: TSLA), the social media platform X and the space technology company SpaceX, making him the wealthiest man in the world. Altman is the CEO of OpenAI, the AI company behind the popular ChatGPT bot.

Both entrepreneurs have hailed cryptocurrencies. Musk's Tesla holds more than $1 billion in Bitcoin on its balance sheet, and the tycoon has often talked about Dogecoin in positive terms. In fact, the acronym of the department he led in Washington, the Department of Government Efficiency (DOGE), is the same as the ticker of the popular meme coin. Altman is also a fan of cryptocurrencies. In October 2023, he praised Bitcoin on an episode of the Joe Rogan podcast: 'I think this idea that we have a global currency that is outside of the control of any government is a super logical and important step on the tech tree.' Altman also co-founded the Worldcoin cryptocurrency project in 2019.

Both Musk and Altman are among OpenAI's co-founders. Musk served on its board of directors from the company's launch in December 2015 until his resignation in 2018. Musk, who also runs an AI company, xAI, has frequently criticized OpenAI's shift from a nonprofit to a "capped-profit" model under Altman's leadership. As the so-called "OpenAI Files" came out on X, Musk blasted the ChatGPT chief as "Scam Altman."

AI companies such as OpenAI have been criticized for their centralized models, with decentralized AI, supported by crypto payments, touted as a possible challenger to the hegemony of the likes of OpenAI. The AI tokens market cap stood at $25 billion at the time of writing.

This story was originally reported by TheStreet on Jun 19, 2025.

Meta offered $100 mn bonuses to poach OpenAI employees

Qatar Tribune

2 hours ago

  • Business
  • Qatar Tribune

Agencies

Meta offered $100 million bonuses to OpenAI employees in an unsuccessful bid to poach the ChatGPT maker's talent and strengthen its own generative AI teams, the startup's CEO, Sam Altman, has said. Facebook's parent company, a competitor of OpenAI, also offered 'giant' annual salaries exceeding $100 million to OpenAI staffers, Altman said in an interview on the 'Uncapped with Jack Altman' podcast released Tuesday.

'It is crazy,' Altman told his brother Jack in the interview. 'I'm really happy that at least so far none of our best people have decided to take them up on that.' The OpenAI cofounder said Meta had made the offers to 'a lot of people on our team.' Meta did not immediately respond to a request for comment.

The social media titan has invested billions of dollars in artificial intelligence technology amid fierce competition in the AI race with rivals OpenAI, Google and Microsoft. Meta chief executive Mark Zuckerberg said in January that the firm planned to invest at least $60 billion in AI this year, with ambitions to lead in the technology. Last week, Meta entered into a deal reportedly worth more than $10 billion with Scale AI, a company specializing in labeling data used in training artificial intelligence models. As part of the deal, company founder and CEO Alexandr Wang will join Meta to help with the tech giant's AI ambitions, including its work on superintelligence efforts.

Comparing Meta to his company, Altman said on the podcast that 'OpenAI has a much better shot at delivering on superintelligence.' 'I think the strategy of a ton of upfront guaranteed comp and that being the reason you tell someone to join... I don't think that's going to set up a great culture,' the OpenAI boss added.

Why OpenAI and Microsoft's AI partnership might be headed for a breakup

Yahoo

2 hours ago

  • Business
  • Yahoo

Microsoft has been one of OpenAI's biggest backers over the past three years, as OpenAI's flagship product, ChatGPT, has steadily embedded itself into our lives. But the multibillion-dollar relationship now appears to be on shaky ground, with rumors that OpenAI might file an antitrust complaint against the Windows maker in an attempt to wriggle out of a longstanding agreement between the two companies.

The relationship, which began with Microsoft's $1 billion investment in OpenAI in 2019 and has since grown to include more than $10 billion in total funding, is built on Microsoft's entitlement to 49% of OpenAI Global LLC's profits, capped at roughly 10 times its investment. For years, the partnership has remained stable. When Sam Altman was briefly ousted as OpenAI CEO in November 2023, Microsoft remained steadfast in its support of the company. But recent events appear to have strained the relationship, specifically a new deal OpenAI has made.

OpenAI's pending acquisition of AI coding startup Windsurf, valued at $3 billion, has pushed its partnership with Microsoft to the brink. Reports suggest that OpenAI executives have threatened an antitrust complaint if Microsoft insists on full access to Windsurf's intellectual property after the deal closes. At the same time, Microsoft is reportedly uneasy about the prospect of OpenAI developing a competing Copilot product. The two companies did issue a joint statement that conveyed a sense of harmony, though it acknowledged no agreement had been reached regarding Windsurf. 'We have a long-term, productive partnership that has delivered amazing AI tools for everyone,' the companies said. 'Talks are ongoing and we are optimistic we will continue to build together for years to come.'

Experts warn that OpenAI should think twice before following through on its reported threats. 'Siccing the antitrust cops on your rivals may feel very satisfying, but that strategy usually boomerangs back on the complaining company when they themselves get big and successful,' says Adam Kovacevich, founder and CEO of the Chamber of Progress, a tech industry coalition. Kovacevich argues that such internal disputes may grab headlines but ultimately distract from the broader goals. 'OpenAI and Microsoft are locked in a pretty intense AI competition with Google, Anthropic, and Meta, and these kind of governance disputes are ultimately a huge distraction from trying to win on the technology front,' he says.

An internal OpenAI strategy document, recently surfaced in a court case, reveals the company's bold plan to evolve ChatGPT from a popular chatbot into an all-encompassing 'AI super assistant,' positioning it as both a crucial partner and a potential competitor to Microsoft. The document implicitly acknowledges OpenAI's reliance on partners to achieve massive scale, noting the infrastructure required to serve an enormous user base. Until January 2025, Microsoft was OpenAI's exclusive data center provider, in exchange for integrating OpenAI's models into Microsoft's products, including Copilot. Since then, the landscape has shifted. OpenAI has signed deals with CoreWeave and Oracle for additional computing capacity, and it is reportedly close to an agreement with Google, despite Google offering a competing AI model, for cloud hosting.
Meanwhile, Microsoft still holds a significant share in OpenAI's future profits. There are reports that OpenAI has proposed a deal to exchange Microsoft's entitlement to future profits for a 33% stake in a restructured OpenAI. But Microsoft currently retains significant control over whether OpenAI can restructure and, under a 2023 agreement, is also believed to be entitled to access any OpenAI technology, including technology acquired through acquisitions, which could give Microsoft access to Windsurf's technology for its Copilot coding tools.

For Microsoft, maintaining the status quo would likely be ideal: it would continue to access OpenAI's core technology and benefit from Windsurf's specialist expertise to strengthen Copilot's coding capabilities. For OpenAI, the best-case outcome would involve restructuring into a for-profit entity with Microsoft's consent, while establishing boundaries to prevent Microsoft from encroaching on areas where OpenAI might eventually compete. OpenAI would also like to diversify its infrastructure partners, having admitted in legal documents that 'our current infrastructure isn't equipped to handle [redacted] users.' And, perhaps most importantly, OpenAI wants its product to stand on its own rather than being buried within a Microsoft-branded ecosystem.

'Real choice drives competition and benefits everyone,' the confidential strategy document states. 'Users should be able to pick their AI assistant. If you're on iOS, Android, or Windows, you should be able to set ChatGPT as your default. Apple, Google, Microsoft, Meta shouldn't push their own AIs without giving users fair alternatives.' Whether OpenAI will achieve that goal remains an open question.
