Microsoft forecasts strong growth for Azure cloud business, shares surge 8%
Microsoft on Wednesday forecast stronger-than-expected quarterly growth for its Azure cloud-computing business after blowout results in the latest quarter, assuaging investor worries in an uncertain economy and lifting its shares 8 per cent in after-hours trading.
Microsoft's results, which follow similarly strong numbers from Google last week, could ease concerns about a potential slowdown in AI demand after some analysts pointed to canceled data-center leases at Microsoft as a sign of excess capacity. Investors had also been worried about the fallout from sweeping US tariffs, which are prompting businesses to rein in spending.
Microsoft said revenue at its Azure cloud division rose 33 per cent in the third quarter ended March 31, exceeding estimates of 29.7 per cent, according to Visible Alpha. AI contributed 16 percentage points to the growth, up from 13 points in the previous quarter.
The company also forecast cloud-computing revenue growth of 34 per cent to 35 per cent on a constant-currency basis for the fiscal fourth quarter, well above analyst estimates of 31.8 per cent, according to data from Visible Alpha. It forecast revenue for its Intelligent Cloud segment of between $28.75 billion and $29.05 billion, with the entire range above analyst estimates of $28.52 billion, according to LSEG data.
The company said its commercial bookings growth - which reflects new infrastructure and software contracts signed by business customers - was up 18 per cent in the fiscal third quarter, driven in part by a new Azure contract with ChatGPT creator OpenAI. Microsoft declined to comment on the size of the deal or what role it played in overall Azure sales growth.
"In a quarter clouded by tariff fears and AI spending scrutiny, this quarter is a clear win - even if it wasn't fireworks," said Jeremy Goldman, senior director of briefings at Emarketer.
"Azure and other cloud services beat Street expectations - and Microsoft Cloud’s growth shows it continues to turn AI infrastructure into margin-friendly growth. Still, investors will be watching closely as the company continues to pull back on data center expansion."
In the third quarter, Microsoft's capital expenditures rose 52.9 per cent to $21.4 billion, below estimates of $22.39 billion, according to Visible Alpha. However, the proportion spent on longer-lived assets fell to about half of the total.
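As a back-of-the-envelope check (a sketch, not from the article: it assumes the 52.9 per cent figure is a simple year-over-year increase, as is standard in these reports), the numbers imply roughly $14 billion of capital expenditure in the year-ago quarter:

```python
# Implied year-ago capex from the reported growth rate.
# Assumes 52.9% is a straightforward year-over-year increase.

current_capex_bn = 21.4   # fiscal Q3 capex, $ billions (from the article)
growth_rate = 0.529       # reported year-over-year growth

# current = prior * (1 + growth)  =>  prior = current / (1 + growth)
prior_capex_bn = current_capex_bn / (1 + growth_rate)
print(f"Implied year-ago capex: ~${prior_capex_bn:.1f}B")  # ~ $14.0B
```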
Jonathan Neilson, Microsoft's vice president of investor relations, said that reflected a shift in Microsoft's spending from long-lived assets such as data center buildings toward more spending on shorter-lived assets such as chips.
"You plug in CPUs and GPUs, and then you can start recognizing revenue," Neilson said, referring to categories of chips made by Intel, Advanced Micro Devices and Nvidia, among others.
The Intelligent Cloud unit, which houses Azure, posted revenue of $26.8 billion, compared with expectations of $26.17 billion. Overall, revenue rose 13 per cent to $70.1 billion, beating estimates of $68.42 billion, according to data compiled by LSEG.
Redmond, Washington-based Microsoft reported a profit of $3.46 per share in the quarter, beating expectations of $3.22 per share.
The company also benefited from a 6 per cent increase in revenue at its More Personal Computing unit, which includes Xbox and its line of laptops.
Microsoft, which has also repeatedly said it is capacity constrained on AI, has been pouring billions into building its AI infrastructure and expanding its data-center footprint.
A senior Microsoft executive reiterated earlier this month that the company would spend $80 billion on its data center build-out this year, and investors will be watching closely to see if it reaffirms that on its post-earnings call.
A pullback in Big Tech's AI spending will have big implications for suppliers such as chip giant Nvidia, as well as the US economy. J.P. Morgan analysts estimated in January that data-center spending could contribute between 10 and 20 basis points to US economic growth in 2025-2026.
Neilson said inventory levels had already been high during the company's fiscal second quarter as retailers stocked up on computers and gaming consoles on tariff worries. That activity continued into the third quarter, he said.
"We expected in Q3 for them to bring inventory levels down to a more normal level. What we actually saw was inventory levels remained elevated," Neilson said. "There continues to be some uncertainty there."

Related Articles


Time of India
Algebra, philosophy and…: These AI chatbot queries cause most harm to environment, study claims
Queries demanding complex reasoning from AI chatbots, such as those related to abstract algebra or philosophy, generate significantly more carbon emissions than simpler questions, a new study reveals. These high-level computational tasks can produce up to six times more emissions than straightforward inquiries like basic history questions.

The study, conducted by researchers at Germany's Hochschule München University of Applied Sciences and published in the journal Frontiers (seen by The Independent), found that the energy consumption and subsequent carbon dioxide emissions of large language models (LLMs) like OpenAI's ChatGPT vary based on the chatbot, user and subject matter. An analysis of 14 different AI models consistently showed that questions requiring extensive logical thought and reasoning led to higher emissions. To mitigate their environmental impact, the researchers have advised frequent users of AI chatbots to consider adjusting the complexity of their queries.

Why do these queries cause more carbon emissions?

In the study, author Maximilian Dauner wrote: 'The environmental impact of questioning trained LLMs is strongly determined by their reasoning approach, with explicit reasoning processes significantly driving up energy consumption and carbon emissions. We found that reasoning-enabled models produced up to 50 times more carbon dioxide emissions than concise response models.'

The study evaluated the 14 models using 1,000 standardised questions to compare their carbon emissions. It explains that AI chatbots generate emissions through processes like converting user queries into numerical data.
On average, reasoning models produce 543.5 tokens per question, significantly more than concise models, which use only 40 tokens. 'A higher token footprint always means higher CO2 emissions,' the study adds.

The study highlights that Cogito, one of the most accurate models at around 85 per cent accuracy, generates three times more carbon emissions than other similarly sized models that offer concise responses. 'Currently, we see a clear accuracy-sustainability trade-off inherent in LLM technologies. None of the models that kept emissions below 500 grams of carbon dioxide equivalent achieved higher than 80 per cent accuracy on answering the 1,000 questions correctly,' Dauner explained.

Researchers used carbon dioxide equivalent to measure the climate impact of AI models and hope their findings encourage more informed usage. For example, answering 600,000 questions with DeepSeek R1 can emit as much carbon as a round-trip flight from London to New York. In comparison, Alibaba Cloud's Qwen 2.5 can answer over three times more questions with similar accuracy while producing the same emissions.

'Users can significantly reduce emissions by prompting AI to generate concise answers or limiting the use of high-capacity models to tasks that genuinely require that power,' Dauner noted.
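If emissions scale roughly linearly with generated tokens, as the study's 'higher token footprint always means higher CO2 emissions' remark suggests (a simplifying assumption; per-token energy also varies by model and hardware), the average figures quoted above imply a sizeable per-question gap:

```python
# Rough illustration of the study's token-footprint point, assuming CO2
# scales linearly with the number of generated tokens.

reasoning_tokens = 543.5  # avg tokens per answer, reasoning models (study)
concise_tokens = 40.0     # avg tokens per answer, concise models (study)

ratio = reasoning_tokens / concise_tokens
print(f"Token footprint ratio: ~{ratio:.1f}x")  # ~13.6x more tokens per answer
```

Under that assumption, an average reasoning model's per-answer footprint is already more than an order of magnitude larger before accounting for the heavier models the study says drive the up-to-50x extremes.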


ChatGPT might be making you think less: MIT study raises ‘red flags' about AI dependency
As AI tools become part of our daily routines, a question is starting to bubble up: what happens when we rely on them too much? A new study from MIT's Media Lab takes a closer look at how tools like ChatGPT may be affecting our brains, and what the researchers found is worth paying attention to.

The study focused on how people engage mentally when completing tasks with and without AI. It turns out that while ChatGPT can make writing easier, it may also be reducing how much we think. According to the research team, participants who used ChatGPT showed noticeably lower brain activity than those who did the same task using Google or no tech at all. The findings suggest that depending on AI for tasks that require effort, like writing, decision-making or creative thinking, could weaken the very mental muscles we're trying to sharpen.

ChatGPT users show lowest brain activity in MIT's study

The experiment involved 54 participants between the ages of 18 and 39. They were split into three groups and asked to write essays in response to prompts similar to those on standardised tests:

Group 1 used ChatGPT to generate their answers.
Group 2 relied on Google Search to find and compile information.
Group 3 worked without any tools, using only their knowledge and reasoning.

While they worked, each participant wore a headset that tracked electrical activity across 32 areas of the brain. The aim was to see how engaged their minds were during the process. (The research was led by Dr. Nataliya Kosmyna along with a team that included Ashly Vivian Beresnitzky, Ye Tong Yuan, Jessica Situ, Eugene Hauptmann, Xian-Hao Liao, Iris Braunstein, and Pattie Maes.)
ChatGPT may be hurting your creativity, MIT researchers warn

The results were clear: the group that used ChatGPT showed the lowest brain activity of all three groups. In particular, areas linked to memory, creativity and concentration were significantly less active. In contrast, those who wrote without help from AI showed the highest mental engagement. They had to organise their thoughts, build arguments and recall information, all things that activated the brain more deeply. Even the group using Google Search showed more engagement than the AI group, possibly because the process of looking for and evaluating information keeps the brain involved.

There was another telling detail. Many in the ChatGPT group simply pasted the prompts into the tool and copied the output with little to no editing. Teachers who reviewed their essays said they felt impersonal, calling them 'soulless'. Dr. Kosmyna put it bluntly: 'They weren't thinking. They were just typing.'

Short-term efficiency, long-term cost

Later in the study, researchers asked participants to rewrite one of their essays, this time without using any tools. The ChatGPT users struggled: many couldn't remember their original arguments or structure. Since they hadn't processed the material deeply the first time, it hadn't stuck. Kosmyna described this as a red flag: 'It was efficient. But nothing was integrated into their brains.'

That raises a broader concern: if AI is doing the heavy lifting, are we still learning? Or are we just moving text around while our cognitive skills fade in the background?

The growing concern among psychiatrists and educators

Dr. Zishan Khan, a psychiatrist who works with students, says he's already seeing signs of AI overuse in younger people. 'The neural pathways responsible for thinking, remembering, and adapting—they're weakening,' he explained.
The fear is that early and frequent reliance on tools like ChatGPT might lead to long-term cognitive decline, especially in developing brains. MIT's team is now expanding its research to see how AI affects people in other fields. They've already started looking at coders who use tools like GitHub Copilot. So far, Kosmyna says, the early results there are 'even worse' in terms of mental engagement.

A word of warning for classrooms and beyond

Interestingly, the MIT researchers shared their findings before going through the full peer review process, something that's uncommon in academic research, but Kosmyna felt the potential impact was urgent enough to make an exception. 'I'm really concerned someone might say, "Let's introduce ChatGPT into kindergarten classrooms,"' she said. 'That would be a terrible mistake. Young brains are especially vulnerable.'

To show just how easy it is to lose the depth of complex research, the team did something clever: they planted subtle factual 'traps' in the study. When readers ran the paper through ChatGPT to summarise it, many versions came back with key errors, including details the researchers never included.

What does this mean for the future of AI use?

Should we stop using ChatGPT altogether? Not at all. The tool isn't the enemy; it can be incredibly helpful, especially when used wisely. But this study reminds us that how we use AI matters just as much as whether we use it. Here are a few takeaways from the researchers:

Use AI as a partner, not a replacement. Let it offer ideas, but make sure you're still doing the core thinking.
Stay actively involved. Skipping the process of learning or writing just to get a result means you're not absorbing anything.
Be cautious in education. Children need to build foundational skills before leaning on technology.


This Microsoft feature is accidentally ‘blocking' Google Chrome on Windows
Microsoft's Family Safety tool is reportedly preventing Google Chrome from opening on some Windows devices. According to a report by The Verge, the issue was first noticed on June 3, and since then more users have complained about it. It is affecting those who have enabled Family Safety on their devices, causing Chrome either to close immediately or to fail to launch at all. Other web browsers, such as Firefox and Opera, are not affected.

What is Microsoft's Family Safety feature?

The Family Safety feature is commonly used by schools and parents through Microsoft 365 subscriptions to limit online access for children. The bug, as per the report, has now been active for over two weeks, with no update or resolution from Microsoft at the time of publication.

Google Chrome acknowledges the issue

The Verge report quotes Chrome support manager Ellen T, who said: 'Our team has investigated these reports and determined the cause of this behavior. For some users, Chrome is unable to run when Microsoft Family Safety is enabled.' While Chrome has acknowledged the issue, Microsoft is yet to issue a public statement or a timeline for a fix. 'We've not heard anything from Microsoft about a fix being rolled out,' a Chromium engineer wrote in a bug report dated June 10. 'They have provided guidance to users who contact them about how to get Chrome working again, but I wouldn't think that would have a large effect.'

Some users have found that renaming the Chrome executable file allows the browser to function. Disabling the 'filter inappropriate websites' option in Family Safety also resolves the issue, but removes content restrictions for children.
While the issue is believed to be accidental, Microsoft has previously faced criticism for trying to steer users away from Chrome and toward its own Edge browser, using popups, misleading messages and, in some cases, altering search results.