Have too many concurrent requests taken down ChatGPT? All details about the outage and when it will be back online


Time of India | 10-06-2025

ChatGPT went down for users around the world. The issue began slowly at around 3 a.m. ET, then user complaints spiked sharply around 5:30 a.m. ET, with more than 1,000 outage reports filed in that window, according to reports.
Users saw error messages such as "something went wrong" or "error in the message stream" across the website and the apps on Windows, Mac, iOS, and Android. The problem wasn't limited to ChatGPT: OpenAI said it also affected Sora and its APIs. At 9:07 a.m., OpenAI said it had identified the root cause of the issue and was working on a fix, according to the report by ZDNET.
By 10:54 a.m., OpenAI said it was still working on the problem but that some services were recovering, especially the API. It added that full recovery across tools such as ChatGPT and Sora could take a few more hours. An OpenAI spokesperson said the company had no comment beyond what was already on its status page, as stated in the report by ZDNET.
How people are reacting
Many users were upset, especially students preparing for exams. Alice from the UK said she uses ChatGPT for Spanish A-level revision, essay writing, and planning study schedules, and was especially stressed because her final exam was the next day, according to reports.
Another user, Shelby from Missouri, said she uses ChatGPT every day for everything from recipes to health advice. When she and her family caught a stomach bug, ChatGPT helped her keep track of medicines and offered emotional support, and she said she really missed it during the outage, according to the report by ZDNET.
The good news is that some users, including the ZDNET team, are now able to access ChatGPT again.
FAQs
Q1. Why is ChatGPT not working today?
Too many requests caused a system error, and OpenAI is still fixing it.
Q2. When will ChatGPT be back online?
OpenAI said full recovery may take a few more hours.


Related Articles

Algebra, philosophy and…: These AI chatbot queries cause most harm to environment, study claims

Time of India | an hour ago

Queries demanding complex reasoning from AI chatbots, such as those related to abstract algebra or philosophy, generate significantly more carbon emissions than simpler questions, a new study reveals. These high-level computational tasks can produce up to six times more emissions than straightforward inquiries like basic history questions.

A study conducted by researchers at Germany's Hochschule München University of Applied Sciences, published in the journal Frontiers (seen by The Independent), found that the energy consumption and subsequent carbon dioxide emissions of large language models (LLMs) like OpenAI's ChatGPT vary based on the chatbot, user, and subject matter. An analysis of 14 different AI models consistently showed that questions requiring extensive logical thought and reasoning led to higher emissions. To mitigate their environmental impact, the researchers have advised frequent users of AI chatbots to consider adjusting the complexity of their queries.

Why do these queries cause more carbon emissions

In the study, author Maximilian Dauner wrote: 'The environmental impact of questioning trained LLMs is strongly determined by their reasoning approach, with explicit reasoning processes significantly driving up energy consumption and carbon emissions. We found that reasoning-enabled models produced up to 50 times more carbon dioxide emissions than concise response models.'

The study evaluated 14 large language models (LLMs) using 1,000 standardised questions to compare their carbon emissions. It explains that AI chatbots generate emissions through processes like converting user queries into numerical data. On average, reasoning models produce 543.5 tokens per question, significantly more than concise models, which use only 40 tokens. 'A higher token footprint always means higher CO2 emissions,' the study adds.

The study highlights that Cogito, one of the most accurate models with around 85% accuracy, generates three times more carbon emissions than other similarly sized models that offer concise responses. 'Currently, we see a clear accuracy-sustainability trade-off inherent in LLM technologies. None of the models that kept emissions below 500 grams of carbon dioxide equivalent achieved higher than 80 per cent accuracy on answering the 1,000 questions correctly,' Dauner explained.

Researchers used carbon dioxide equivalent to measure the climate impact of AI models and hope that their findings encourage more informed usage. For example, answering 600,000 questions with DeepSeek R1 can emit as much carbon as a round-trip flight from London to New York. In comparison, Alibaba Cloud's Qwen 2.5 can answer over three times more questions with similar accuracy while producing the same emissions.

'Users can significantly reduce emissions by prompting AI to generate concise answers or limiting the use of high-capacity models to tasks that genuinely require that power,' Dauner noted.
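As a rough, back-of-the-envelope illustration of the numbers reported above (not taken from the study itself): if emissions scale with the number of tokens a model generates, as the 'higher token footprint always means higher CO2 emissions' observation suggests, then the reported averages of 543.5 versus 40 tokens per question imply roughly a 13-14x gap from answer length alone, before any per-model differences. The per-token emission factor in the sketch below is a made-up placeholder, not a figure from the study.

    # Back-of-the-envelope sketch, not from the study: it assumes only that
    # emissions scale linearly with the number of tokens a model generates.
    AVG_TOKENS_REASONING = 543.5   # reported average tokens per question (reasoning models)
    AVG_TOKENS_CONCISE = 40.0      # reported average tokens per question (concise models)

    # Hypothetical per-token emission factor (grams CO2e per token); placeholder value.
    GRAMS_CO2E_PER_TOKEN = 0.5

    def estimated_emissions_g(questions: int, avg_tokens_per_question: float) -> float:
        """Estimate grams of CO2e for answering a given number of questions."""
        return questions * avg_tokens_per_question * GRAMS_CO2E_PER_TOKEN

    token_ratio = AVG_TOKENS_REASONING / AVG_TOKENS_CONCISE
    print(f"Token ratio, reasoning vs concise: {token_ratio:.1f}x")  # ~13.6x
    print(f"1,000 reasoning-style answers: {estimated_emissions_g(1000, AVG_TOKENS_REASONING):,.0f} g CO2e (illustrative)")
    print(f"1,000 concise answers: {estimated_emissions_g(1000, AVG_TOKENS_CONCISE):,.0f} g CO2e (illustrative)")

The study's 'up to 50 times' figure is larger than this token ratio alone, which is consistent with its finding that reasoning behaviour and model choice, not just answer length, drive emissions.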

ChatGPT might be making you think less: MIT study raises 'red flags' about AI dependency

Time of India | 2 hours ago

As AI tools become part of our daily routines, a question is starting to bubble up: what happens when we rely on them too much? A new study from MIT's Media Lab takes a closer look at how tools like ChatGPT may be affecting our brains, and what the researchers found is worth paying attention to.

The study focused on how people engage mentally when completing tasks with and without AI. It turns out that while ChatGPT can make writing easier, it may also be reducing how much we think. According to the research team, participants who used ChatGPT showed noticeably lower brain activity than those who did the same task using Google or no technology at all. The findings suggest that depending on AI for tasks that require effort, like writing, decision-making, or creative thinking, could weaken the very mental muscles we're trying to sharpen.

ChatGPT users show lowest brain activity in MIT's study

The experiment involved 54 participants between the ages of 18 and 39. They were split into three groups and asked to write essays in response to prompts similar to those on standardised tests:

Group 1 used ChatGPT to generate their answers.
Group 2 relied on Google Search to find and compile information.
Group 3 worked without any tools, using only their own knowledge and reasoning.

While they worked, each participant wore a headset that tracked electrical activity across 32 areas of the brain. The aim was to see how engaged their minds were during the process. (The research was led by Dr. Nataliya Kosmyna along with a team that included Ashly Vivian Beresnitzky, Ye Tong Yuan, Jessica Situ, Eugene Hauptmann, Xian-Hao Liao, Iris Braunstein, and Pattie Maes.)

ChatGPT may be hurting your creativity, MIT researchers warn

The results were clear: the group that used ChatGPT showed the lowest brain activity of all three groups. In particular, areas linked to memory, creativity, and concentration were significantly less active. In contrast, those who wrote without help from AI showed the highest mental engagement. They had to organise their thoughts, build arguments, and recall information, all things that activated the brain more deeply. Even the group using Google Search showed more engagement than the AI group, possibly because the process of looking for and evaluating information keeps the brain involved.

There was another telling detail. Many in the ChatGPT group simply pasted the prompts into the tool and copied the output with little to no editing. Teachers who reviewed their essays said they felt impersonal, calling them 'soulless.' Dr. Kosmyna put it bluntly: 'They weren't thinking. They were just typing.'

Short-term efficiency, long-term cost

Later in the study, researchers asked participants to rewrite one of their essays, this time without using any tools. The ChatGPT users struggled: many couldn't remember their original arguments or structure. Since they hadn't processed the material deeply the first time, it hadn't stuck. Kosmyna described this as a red flag: 'It was efficient. But nothing was integrated into their brains.' That raises a broader concern: if AI is doing the heavy lifting, are we still learning? Or are we just moving text around while our cognitive skills fade in the background?

The growing concern among psychiatrists and educators

Dr. Zishan Khan, a psychiatrist who works with students, says he's already seeing signs of AI overuse in younger people. 'The neural pathways responsible for thinking, remembering, and adapting—they're weakening,' he explained. The fear is that early and frequent reliance on tools like ChatGPT might lead to long-term cognitive decline, especially in developing brains.

MIT's team is now expanding the research to see how AI affects people in other fields. They have already started looking at coders who use tools like GitHub Copilot, and Kosmyna says the early results there are 'even worse' in terms of mental engagement.

A word of warning for classrooms and beyond

Interestingly, the MIT researchers shared their findings before going through the full peer-review process, something that is uncommon in academic research, but Kosmyna felt the potential impact was urgent enough to make an exception. 'I'm really concerned someone might say, "Let's introduce ChatGPT into kindergarten classrooms,"' she said. 'That would be a terrible mistake. Young brains are especially vulnerable.'

To show just how easy it is to lose the depth of complex research, the team did something clever: they planted subtle factual 'traps' in the study. When readers ran the paper through ChatGPT to summarise it, many versions came back with key errors, including details the researchers never included.

What does this mean for the future of AI use?

Does it mean we should stop using ChatGPT? Not at all. The tool isn't the enemy, and it can be incredibly helpful when used wisely. But this study reminds us that how we use AI matters just as much as whether we use it. Here are a few takeaways from the researchers:

Use AI as a partner, not a replacement. Let it offer ideas, but make sure you're still doing the core thinking.
Stay actively involved. Skipping the process of learning or writing just to get a result means you're not absorbing anything.
Be cautious in education. Children need to build foundational skills before leaning on technology.

Apple SVP Craig Federighi reveals why the iPad won't become a Mac: 'It's a bad idea…'

Time of India | 2 hours ago

Apple's senior vice president of software engineering, Craig Federighi, recently answered one of the questions Apple fans ask most: whether the iPad and the Mac should ever merge into a single device. Federighi answered humorously, with a memorable analogy: "We don't want to build sporks."

Speaking to MacStories' Federico Viticci in an interview, Federighi explained Apple's philosophy behind not merging its two popular products. "Someone said, 'If a spoon's great and a fork's great, then let's combine them into a single utensil, right?' It turns out it's not a good spoon and it's not a good fork. It's a bad idea. And so we don't want to build sporks," Federighi said. The analogy captures Apple's belief that merging the two devices or their operating systems would compromise the strengths of each and make the resulting product less useful.

Federighi further explained that Apple's main aim is to make both the iPad and the Mac excel in their respective areas, and that the company does not want the iPad to displace the Mac. "The Mac lets the iPad be iPad," he stated, highlighting that the existence of a robust Mac ecosystem allows the iPad to remain focused on its touch-first, immersive, and simple interface.

He also said that iPadOS 26 introduces many useful and improved features, such as a new windowing engine and desktop-like capabilities, but that these are designed to cater to a broader range of iPad users. The main focus is to improve what the iPad can do without compromising its identity.

Federighi also talked about the growth of multitasking functionality in iPadOS, describing its evolution as a multi-year journey of experimentation. He added that if Apple had introduced traditional Mac-like menu bars on the iPad, developers might have designed their apps differently, which could have compromised the iPad app ecosystem.

While the two platforms can "be inspired by elements of the Mac" and vice versa, Federighi firmly believes that a full merger would lead to a messy, confusing, and ultimately inferior user experience. Apple remains committed to its strategy of offering distinct devices optimized for different use cases, rather than a "compromise" solution that attempts to be all things to all people.
