Microsoft's Exit from Talks Threatens OpenAI's For-Profit Transition

Tech giant Microsoft (MSFT) is considering withdrawing from heated negotiations with artificial intelligence (AI) startup OpenAI. The two are engaged in high-stakes talks over the size of Microsoft's future stake in the ChatGPT maker, should it convert into a public-benefit entity. OpenAI CEO Sam Altman believes that a for-profit structure would enable the company to raise funds more easily and even pursue an initial public offering (IPO) in the future.
Notably, OpenAI needs to transform into a for-profit structure to access the funds from its most recent funding round, or investors could walk away. In the worst-case scenario, some may convert their equity funding into debt. For instance, one of its largest backers, Japan's SoftBank (SFTBY), might reduce its funding by $10 billion, from the initially planned $30 billion. Microsoft must approve the conversion by the end of this year.
Multibillion-Dollar Partnership at Risk
Microsoft has invested over $13 billion in OpenAI since its initial funding in 2019. The companies are negotiating the size of MSFT's equity stake, which could range from 20% to 49% in the restructured entity. However, the two seem to have reached an impasse regarding their future relationship. A Financial Times report stated that if the parties fail to reach an agreement, Microsoft could withdraw and continue to rely on its existing contract, which grants it access to OpenAI's technology until 2030.
Notably, the tech giant has exclusive rights to sell access to OpenAI's AI models and tools through Azure Cloud and receives a 20% share of OpenAI's revenues. Microsoft is unwilling to give up either benefit, as this exclusivity gives it an edge over rivals Alphabet (GOOGL) and Meta Platforms (META) in the AI race. Meanwhile, this fallout has led OpenAI to consider legal action, threatening to take Microsoft to court and accusing it of anticompetitive behavior.
Moreover, OpenAI, whose ChatGPT boasts 500 million weekly active users worldwide, has complained that Microsoft is unable to deliver the enhanced computing power required to run and train its advanced models.
Their partnership has indeed been productive, delivering advanced AI tools to the masses, and is widely regarded as one of the most important in the technology sector. Both parties are said to be in daily discussions on the subject, and their recent joint statement reads, 'Talks are ongoing and we are optimistic we will continue to build together for years to come.'
Furthermore, once this hurdle is cleared, OpenAI must receive approval from the attorneys general of Delaware and California to convert to a for-profit structure. Additionally, OpenAI must contend with billionaire Elon Musk's lawsuit, which seeks to halt the transformation.
Is Microsoft a Good Stock to Buy?
Analysts remain highly optimistic about Microsoft's long-term stock trajectory. On TipRanks, MSFT stock has a Strong Buy consensus rating based on 31 Buys and five Hold ratings. The average Microsoft price target of $518.77 implies 8% upside potential from current levels. Year-to-date, MSFT stock has gained 14.4%.


Related Articles

The $50 Billion Company That Does Almost Nothing

Gizmodo • 36 minutes ago

Something strange is happening on Wall Street. It isn't Elon Musk, AI, or a late-night post from Donald Trump. It's a crypto company called Circle Internet Group, and it's making the market feel like the glory days of the dot-com bubble are back.

Circle went public on June 5. In just eleven trading sessions, its stock exploded by an almost unprecedented 675%, adding over $42 billion to its market cap. The company now trades at a valuation that puts it in the same league as tech unicorns and AI moonshots, commanding a price that has investors paying, in essence, $295 for every $1 of its earnings.

There's just one problem. Circle doesn't have revolutionary AI. It doesn't build sleek consumer gadgets. Its business model is shockingly simple. Here's how it works: You give Circle a dollar. They give you a digital token, called USDC, worth that same dollar. They then take your actual dollar, invest it in something safe like short-term U.S. Treasury bonds, and collect the interest. You get the token. They get the profit. That's it. That's the entire business. This has led critics to label Circle as little more than a glorified 'money wrapper.'

So why is Wall Street treating it like the next Tesla? The answer is one word: stablecoin. USDC is a stablecoin, a digital token pegged to a stable asset, in this case, the U.S. dollar. The idea is that for every USDC token, there's a real dollar sitting in a reserve account. This makes it incredibly useful for crypto traders who need the speed of digital assets without the wild volatility of Bitcoin.

And now, the bulls are betting that stablecoins are about to go mainstream. The Senate just passed the 'Genius Act,' landmark legislation that paves the way for banks, fintechs like PayPal, and even retailers like Walmart and Amazon to use stablecoins for payments. Suddenly, the dream of crypto becoming a real alternative to Visa or Mastercard seems within reach. Analysts are salivating. Citi predicts the stablecoin market could hit $3.7 trillion by 2030. In that scenario, Circle, as a neutral platform not tied to any single bank, is perfectly positioned to cash in.

But there's a catch. The business model that seems so brilliant in a high-interest-rate environment is also its greatest weakness. 'Circle's whole business is literally glued to Fed policy,' one user wrote in a viral post on Reddit's r/wallstreetbets. 'It's a Treasury ETF in a trench coat.' If the Federal Reserve cuts rates, Circle's main revenue stream shrinks. There's also nothing stopping bigger players from launching their own lookalike stablecoins, erasing Circle's edge overnight. If everyone's offering the same thing, Circle's moat starts looking very shallow.

And yet, Wall Street is piling in like it's the next OpenAI. What if regulators change their tune? The entire model could be at risk. The business is remarkably fragile. When contacted by Gizmodo, a spokesperson said the company was in a post-IPO 'quiet period,' legally restricting it from making promotional statements.

For now, the hype is winning. Circle's stock is on fire, fueled by the promise of a future where we all pay for our coffee with digital dollars. But beneath the surface, this $50 billion company doesn't innovate or disrupt. It just holds your cash, gives you a digital receipt, and pockets the interest. And in the bizarre world of 2025 finance, that's apparently enough to be crowned the new king of Wall Street.
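To make the "glued to Fed policy" point concrete: the revenue engine described above is essentially the reserve base multiplied by short-term Treasury yields. The minimal Python sketch below illustrates that sensitivity. The reserve size and yields are hypothetical placeholders chosen for illustration, not Circle's reported figures.

```python
# Illustrative sketch of the stablecoin reserve model described above.
# All numbers are hypothetical placeholders, not Circle's actual reserves,
# yields, or cost structure.

def reserve_revenue(reserves_usd: float, treasury_yield: float) -> float:
    """Annual interest earned on the fiat reserves backing the stablecoin."""
    return reserves_usd * treasury_yield

reserves = 60e9  # hypothetical: $60B of USDC outstanding, each token backed 1:1 by a dollar

for rate in (0.05, 0.03, 0.01):  # possible Fed-driven short-term Treasury yields
    print(f"Yield {rate:.0%}: ~${reserve_revenue(reserves, rate) / 1e9:.1f}B per year in interest")
```

With those placeholder inputs, a rate cut from 5% to 1% shrinks the hypothetical interest stream from roughly $3 billion to roughly $0.6 billion a year, which is the fragility critics are pointing at.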

Windows parental controls are crashing Chrome — here's the workaround

Tom's Guide • an hour ago

Windows 11's Family Safety feature is supposed to block certain websites from children, but apparently it's also been causing issues with Google's Chrome browser, a (vastly more popular) competitor to Microsoft's own Edge.

The problem first surfaced on Windows on June 3, per The Verge, when several users started noticing they couldn't open Chrome or their browser would crash randomly. Restarting their computer or reinstalling Chrome didn't fix the issue, and other browsers like Firefox and Opera appeared unaffected.

On Monday, a Google spokesperson posted in the company's community forum that it had investigated these reports and found the issues were linked to Microsoft's new Windows Family Safety feature. This optional feature is primarily used by parents and schools to manage children's screen time, filter their web browsing, and monitor their online activity.

Curiously, the bug has been going on for weeks now, and Microsoft still hasn't issued a patch. 'We've not heard anything from Microsoft about a fix being rolled out,' wrote a Chromium engineer in a bug tracking thread on June 10. 'They have provided guidance to users who contact them about how to get Chrome working again, but I wouldn't think that would have a large effect.'

While this issue could be an innocent bug, Microsoft has a history of placing annoying hurdles between Edge and Chrome to entice users to stick with its browser. So anytime a technical snafu makes Chrome run worse on Windows PCs, Microsoft understandably gets some serious side eye.

Thankfully, there seem to be two ways to get around this bug while we wait for Microsoft to issue a fix, and they're both fairly simple. The most straightforward is to turn off the "Filter Inappropriate Websites" setting. Head to the Family Safety mobile app or Family Safety web portal, select a user's account, and choose to disable "Filter inappropriate websites" under the Edge tab. However, that'll remove the guardrails on Chrome and let your child access any website, including the ones you were trying to block in the first place.

If you want to keep the guardrails on and still use Chrome, some users reported that renaming their Chrome folder (to something like Chrome1, for example) got the browser to work again even with the Family Safety feature enabled.
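For readers who prefer to script that second workaround, here is a minimal sketch. The install path, the "Chrome1" name, and the assumption that renaming the folder (rather than the chrome.exe executable) is enough all come from the description above plus a default Chrome install; treat them as assumptions, run it from an elevated prompt, and note that a Chrome update may restore the original name.

```python
# Minimal sketch of the folder-rename workaround described above.
# Assumptions: default per-machine Chrome install path and the "Chrome1"
# name suggested in the article. Run from an elevated (administrator) prompt.
from pathlib import Path

chrome_dir = Path(r"C:\Program Files\Google\Chrome")  # assumed default install location
renamed_dir = chrome_dir.with_name("Chrome1")         # any name Family Safety doesn't block

if chrome_dir.exists() and not renamed_dir.exists():
    chrome_dir.rename(renamed_dir)  # existing shortcuts will need to point at the new path
    print(f"Renamed {chrome_dir} -> {renamed_dir}")
else:
    print("Chrome folder not found (or the target name is taken); nothing was changed.")
```

Renaming the folder back once Microsoft ships a fix reverses the workaround.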

Why is AI hallucinating more frequently, and how can we stop it?

Yahoo • an hour ago

The more advanced artificial intelligence (AI) gets, the more it "hallucinates" and provides incorrect and inaccurate information.

Research conducted by OpenAI found that its latest and most powerful reasoning models, o3 and o4-mini, hallucinated 33% and 48% of the time, respectively, when tested on OpenAI's PersonQA benchmark. That's more than double the rate of the older o1 model. While o3 delivers more accurate information than its predecessor, it appears to come at the cost of more inaccurate hallucinations.

This raises a concern over the accuracy and reliability of large language models (LLMs) such as AI chatbots, said Eleanor Watson, an Institute of Electrical and Electronics Engineers (IEEE) member and AI ethics engineer at Singularity University. "When a system outputs fabricated information — such as invented facts, citations or events — with the same fluency and coherence it uses for accurate content, it risks misleading users in subtle and consequential ways," Watson told Live Science.

The issue of hallucination highlights the need to carefully assess and supervise the information AI systems produce when using LLMs and reasoning models, experts say.

The crux of a reasoning model is that it can handle complex tasks by essentially breaking them down into individual components and coming up with solutions to tackle them. Rather than simply spitting out answers based on statistical probability, reasoning models come up with strategies to solve a problem, much like how humans think. In order to develop creative, and potentially novel, solutions to problems, AI needs to hallucinate — otherwise it's limited to the rigid data its LLM has ingested.

"It's important to note that hallucination is a feature, not a bug, of AI," Sohrob Kazerounian, an AI researcher at Vectra AI, told Live Science. "To paraphrase a colleague of mine, 'Everything an LLM outputs is a hallucination. It's just that some of those hallucinations are true.' If an AI only generated verbatim outputs that it had seen during training, all of AI would reduce to a massive search problem."

"You would only be able to generate computer code that had been written before, find proteins and molecules whose properties had already been studied and described, and answer homework questions that had already previously been asked before. You would not, however, be able to ask the LLM to write the lyrics for a concept album focused on the AI singularity, blending the lyrical stylings of Snoop Dogg and Bob Dylan."

In effect, LLMs and the AI systems they power need to hallucinate in order to create, rather than simply serve up existing information. It is similar, conceptually, to the way that humans dream or imagine scenarios when conjuring new ideas.

However, AI hallucinations present a problem when it comes to delivering accurate and correct information, especially if users take the information at face value without any checks or oversight. "This is especially problematic in domains where decisions depend on factual precision, like medicine, law or finance," Watson said. "While more advanced models may reduce the frequency of obvious factual mistakes, the issue persists in more subtle forms. Over time, confabulation erodes the perception of AI systems as trustworthy instruments and can produce material harms when unverified content is acted upon."

And this problem looks to be exacerbated as AI advances. "As model capabilities improve, errors often become less overt but more difficult to detect," Watson noted. "Fabricated content is increasingly embedded within plausible narratives and coherent reasoning chains. This introduces a particular risk: users may be unaware that errors are present and may treat outputs as definitive when they are not. The problem shifts from filtering out crude errors to identifying subtle distortions that may only reveal themselves under close scrutiny."

Kazerounian backed this viewpoint up. "Despite the general belief that the problem of AI hallucination can and will get better over time, it appears that the most recent generation of advanced reasoning models may have actually begun to hallucinate more than their simpler counterparts — and there are no agreed-upon explanations for why this is," he said.

The situation is further complicated because it can be very difficult to ascertain how LLMs come up with their answers; a parallel could be drawn here with how we still don't really know, comprehensively, how a human brain works. In a recent essay, Dario Amodei, the CEO of AI company Anthropic, highlighted a lack of understanding in how AIs come up with answers and information. "When a generative AI system does something, like summarize a financial document, we have no idea, at a specific or precise level, why it makes the choices it does — why it chooses certain words over others, or why it occasionally makes a mistake despite usually being accurate," he wrote.

The problems caused by AI hallucinating inaccurate information are already very real, Kazerounian noted. "There is no universal, verifiable way to get an LLM to correctly answer questions being asked about some corpus of data it has access to," he said. "The examples of non-existent hallucinated references, customer-facing chatbots making up company policy, and so on, are now all too common."

Both Kazerounian and Watson told Live Science that, ultimately, AI hallucinations may be difficult to eliminate. But there could be ways to mitigate the issue. Watson suggested that "retrieval-augmented generation," which grounds a model's outputs in curated external knowledge sources, could help ensure that AI-produced information is anchored by verifiable data.
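As a rough illustration of what that retrieval-augmented approach can look like in practice, the toy sketch below retrieves the passages most relevant to a question from a small in-memory corpus and builds a prompt that tells the model to answer only from that context or admit it doesn't know. The corpus, the keyword-overlap scoring, and the prompt wording are illustrative assumptions, not any particular vendor's implementation; a production system would use embeddings, a vector store, and a real LLM call in place of the placeholders.

```python
# Minimal, illustrative sketch of retrieval-augmented generation (RAG):
# ground the model's answer in retrieved passages instead of letting it
# free-associate. The corpus and scoring function are toy placeholders.

CORPUS = [
    "USDC is a stablecoin pegged to the U.S. dollar and backed by cash reserves.",
    "OpenAI's o3 and o4-mini are reasoning models evaluated on the PersonQA benchmark.",
    "Windows 11's Family Safety feature can filter web browsing for child accounts.",
]

def retrieve(question: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank passages by naive keyword overlap with the question (a stand-in for embeddings)."""
    q_words = set(question.lower().split())
    scored = sorted(corpus, key=lambda p: len(q_words & set(p.lower().split())), reverse=True)
    return scored[:k]

def build_grounded_prompt(question: str, corpus: list[str]) -> str:
    """Assemble a prompt that instructs the model to answer only from the retrieved context."""
    context = "\n".join(f"- {p}" for p in retrieve(question, corpus))
    return (
        "Answer using ONLY the context below. "
        "If the context does not contain the answer, reply 'I don't know.'\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

if __name__ == "__main__":
    prompt = build_grounded_prompt("What is USDC pegged to?", CORPUS)
    print(prompt)  # a real system would send this prompt to an LLM and return its reply
```

The design point mirrors Watson's argument: by constraining the model to cited context and giving it explicit permission to say "I don't know," the prompt trades some creativity for verifiability.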
"Another approach involves introducing structure into the model's reasoning. By prompting it to check its own outputs, compare different perspectives, or follow logical steps, scaffolded reasoning frameworks reduce the risk of unconstrained speculation and improve consistency," Watson said, noting this could be aided by training that shapes a model to prioritize accuracy, and by reinforcement training from human or AI evaluators to encourage an LLM to deliver more disciplined, grounded responses.

"Finally, systems can be designed to recognise their own uncertainty. Rather than defaulting to confident answers, models can be taught to flag when they're unsure or to defer to human judgement when appropriate," Watson added.
"While these strategies don't eliminate the risk of confabulation entirely, they offer a practical path forward to make AI outputs more reliable." Given that AI hallucination may be nearly impossible to eliminate, especially in advanced models, Kazerounian concluded that ultimately the information that LLMs produce will need to be treated with the "same skepticism we reserve for human counterparts."

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into a world of global content with local flavor? Download Daily8 app today from your preferred app store and start exploring.
app-storeplay-store