Latest news with #Frontiers


Time of India
4 hours ago
- Science
- Time of India
Algebra, philosophy and…: These AI chatbot queries cause most harm to environment, study claims
Queries demanding complex reasoning from AI chatbots, such as those related to abstract algebra or philosophy, generate significantly more carbon emissions than simpler questions, a new study reveals. These high-level computational tasks can produce up to six times more emissions than straightforward inquiries like basic history questions.

A study conducted by researchers at Germany's Hochschule München University of Applied Sciences, published in the journal Frontiers (seen by The Independent), found that the energy consumption and subsequent carbon dioxide emissions of large language models (LLMs) like OpenAI's ChatGPT vary based on the chatbot, user, and subject matter. An analysis of 14 different AI models consistently showed that questions requiring extensive logical thought and reasoning led to higher emissions. To mitigate their environmental impact, the researchers have advised frequent users of AI chatbots to consider adjusting the complexity of their queries.

Why these queries cause more carbon emissions from AI chatbots

In the study, author Maximilian Dauner wrote: 'The environmental impact of questioning trained LLMs is strongly determined by their reasoning approach, with explicit reasoning processes significantly driving up energy consumption and carbon emissions. We found that reasoning-enabled models produced up to 50 times more carbon dioxide emissions than concise response models.'

The study evaluated 14 large language models (LLMs) using 1,000 standardised questions to compare their carbon emissions. It explains that AI chatbots generate emissions through processes like converting user queries into numerical data. On average, reasoning models produce 543.5 tokens per question, significantly more than concise models, which use only 40 tokens. 'A higher token footprint always means higher CO2 emissions,' the study adds.

The study highlights that Cogito, one of the most accurate models with around 85% accuracy, generates three times more carbon emissions than other similarly sized models that offer concise responses. 'Currently, we see a clear accuracy-sustainability trade-off inherent in LLM technologies. None of the models that kept emissions below 500 grams of carbon dioxide equivalent achieved higher than 80 per cent accuracy on answering the 1,000 questions correctly,' Dauner explained.

Researchers used carbon dioxide equivalent to measure the climate impact of AI models and hope that their findings encourage more informed usage. For example, answering 600,000 questions with DeepSeek R1 can emit as much carbon as a round-trip flight from London to New York. In comparison, Alibaba Cloud's Qwen 2.5 can answer over three times more questions with similar accuracy while producing the same emissions.

'Users can significantly reduce emissions by prompting AI to generate concise answers or limiting the use of high-capacity models to tasks that genuinely require that power,' Dauner noted.
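The study's central claim is that emissions scale with the number of tokens a model generates. As a rough illustration of that relationship, the Python sketch below applies an assumed per-token emission factor to the average token counts quoted above; the factor is a hypothetical placeholder, not a figure from the paper, and the ratio between model types depends only on the token counts.

```python
# Minimal sketch of the linear token-to-emissions relationship the study describes.
# The per-token emission factor is an assumed placeholder, NOT a figure from the paper;
# only the average token counts (543.5 vs. 40) come from the reported findings.

GRAMS_CO2E_PER_TOKEN = 0.05  # assumed emission factor, for illustration only

def estimated_emissions_grams(tokens_per_answer: float, questions: int = 1_000) -> float:
    """Estimate grams of CO2 equivalent, assuming emissions scale linearly with tokens."""
    return tokens_per_answer * questions * GRAMS_CO2E_PER_TOKEN

reasoning = estimated_emissions_grams(543.5)  # reasoning models: study's average token count
concise = estimated_emissions_grams(40)       # concise models: study's average token count
print(f"token-driven emission ratio: {reasoning / concise:.1f}x")  # ~13.6x from tokens alone
```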


Forbes
8 hours ago
- Entertainment
- Forbes
Calculating How Many Players ‘Destiny 2: The Edge Of Fate' Will Launch With, Per Math
Destiny 2 is entering a much different new era in a month, one which will launch the next few years of the game. It's switching from an annual expansion and four seasons to two smaller expansions and four 'major updates' that are not seasons, and the entire project is called 'Frontiers.' Its first expansion is The Edge of Fate, which will be out on July 15, just under a month from now.

One open question is just how many players The Edge of Fate will launch with, namely, who will have stuck around since the launch of Destiny 2's The Final Shape a year ago, given the context of how the game will work now. I think we can use math to at least get a pretty good estimate, given the difference between the pre-expansion lows of the playerbase and what they spiked to at actual launch. We will, of course, have to use Steam for this, as we don't have data elsewhere, and we only have data from Beyond Light forward on Steam, so no Forsaken or Shadowkeep.

Pre-Beyond Light Month (Oct 2020) – 94,000 concurrents
Beyond Light (Nov 2020) – 242,000 concurrents
Increase – 2.57x

Pre-Witch Queen Month (Jan 2022) – 78,000 concurrents
Witch Queen (Feb 2022) – 290,000 concurrents
Increase – 3.71x

Pre-Lightfall Month (Jan 2023) – 96,000 concurrents
Lightfall (Feb 2023) – 316,000 concurrents
Increase – 3.29x

Pre-Final Shape Month (May 2024) – 116,000 concurrents
The Final Shape (June 2024) – 314,000 concurrents
Increase – 2.7x

So, what I'll do here is average the increases together, which works out to roughly a 3.07x multiplier. Then we take the current playercount figure for this month, the month before The Edge of Fate's release: 38,000.

3.07 x 38,000 = a potential peak of roughly 117,000.

That would be below half of Beyond Light and Witch Queen and close to a third of Lightfall and The Final Shape. You might even argue this is an over-estimate, since in the post-Light and Darkness era, a brand new, lower-profile, smaller-scale expansion may prove less attractive.

I'm not trying to dunk on the game here, but I do think we have to be realistic about the new normal for Destiny 2 going forward. I've avoided reporting on the 'record lows' the game has hit almost every month since The Final Shape, but now reality is approaching as we try to see what level of surge we're getting for these smaller expansions. Then, of course, we'd have to see how the second expansion, the Star Wars-themed Renegades, does six months later.

On the revenue side, the question is whether the cost of making less content with fewer employees works with a new, lower average playercount in a way that doesn't put the game deeply in the red. This is not a short-term experiment; this is the plan for a few more years at least, with no Destiny 3 on the horizon. It will have to work at least to some degree, as I don't think it's a safe bet that Bungie can rely on the upcoming Marathon to be the huge boost the studio needs (we've talked that to death at this point). Maybe The Edge of Fate will prove surprising, but it's a smaller expansion, offering less content, outside the long-term Light and Darkness saga. Expectations will have to be adjusted accordingly.
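For readers who want to reproduce the back-of-the-envelope estimate, here is a short Python sketch of the same calculation, using only the Steam concurrent figures quoted above.

```python
# Sketch of the projection method above: average the pre-expansion-to-launch
# concurrent-player multipliers from past expansions, then apply that average to the
# current Steam concurrent count. All figures are the ones quoted in the article.

expansions = {
    "Beyond Light": (94_000, 242_000),
    "The Witch Queen": (78_000, 290_000),
    "Lightfall": (96_000, 316_000),
    "The Final Shape": (116_000, 314_000),
}

multipliers = [launch / pre for pre, launch in expansions.values()]
avg_multiplier = sum(multipliers) / len(multipliers)  # ~3.07x across the four launches

current_concurrents = 38_000  # month before The Edge of Fate
projected_peak = avg_multiplier * current_concurrents  # ~117,000 concurrent players

print(f"average multiplier: {avg_multiplier:.2f}x, projected launch peak: {projected_peak:,.0f}")
```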
Yahoo
17 hours ago
- Science
- Yahoo
These AI chatbot questions cause most carbon emissions, scientists find
Queries requiring AI chatbots like OpenAI's ChatGPT to think logically and reason produce more carbon emissions than other types of questions, according to a new study.

Every query typed into a large language model like ChatGPT requires energy and leads to carbon dioxide emissions. The emission levels depend on the chatbot, the user, and the subject matter, researchers at Germany's Hochschule München University of Applied Sciences say.

The study, published in the journal Frontiers, compares 14 AI models and finds that answers requiring complex reasoning cause more carbon emissions than simple answers. Queries needing lengthy reasoning, like abstract algebra or philosophy, cause up to six times greater emissions than more straightforward subjects like high school history. Researchers recommend that frequent users of AI chatbots adjust the kind of questions they pose to limit carbon emissions.

The study assesses the 14 LLMs on 1,000 standardised questions across subjects to compare their carbon emissions. 'The environmental impact of questioning trained LLMs is strongly determined by their reasoning approach, with explicit reasoning processes significantly driving up energy consumption and carbon emissions,' study author Maximilian Dauner says. 'We found that reasoning-enabled models produced up to 50 times more carbon dioxide emissions than concise response models.'

When a user puts a question to an AI chatbot, words or parts of words in the query are converted into a string of numbers and processed by the model. This conversion and other computing processes of the AI produce carbon emissions. The study notes that reasoning models on average create 543.5 tokens per question while concise models require only 40. 'A higher token footprint always means higher CO2 emissions,' it says.

For instance, one of the most accurate models is Cogito, which reaches about 85 per cent accuracy. It produces three times more carbon emissions than similarly sized models that provide concise answers. 'Currently, we see a clear accuracy-sustainability trade-off inherent in LLM technologies,' Dr Dauner says. 'None of the models that kept emissions below 500 grams of carbon dioxide equivalent achieved higher than 80 per cent accuracy on answering the 1,000 questions correctly.' Carbon dioxide equivalent is a unit for measuring the climate change impact of various greenhouse gases.

Researchers hope the new findings will cause people to make more informed decisions about their AI use. Citing an example, researchers say asking the DeepSeek R1 chatbot to answer 600,000 questions may create carbon emissions equal to a round-trip flight from London to New York. In comparison, Alibaba Cloud's Qwen 2.5 can answer more than three times as many questions with similar accuracy rates while generating the same emissions.

'Users can significantly reduce emissions by prompting AI to generate concise answers or limiting the use of high-capacity models to tasks that genuinely require that power,' Dr Dauner says.


Time Magazine
2 days ago
- Science
- Time Magazine
Some AI Prompts Can Cause 50 Times More CO2 Emissions Than Others
Whether it be writing an email or planning a vacation, about a quarter of Americans say they interact with artificial intelligence several times a day, while another 28% say their use is about once a day. But many people might be unaware of the environmental impact of their searches. A request made using ChatGPT, for example, consumes 10 times the electricity of a Google search, according to the International Energy Agency. In addition, data centers, which are essential for powering AI models, represented 4.4% of all the electricity consumed in the U.S. in 2023—and by 2028 they're expected to consume approximately 6.7 to 12% of the country's electricity. It's likely only going to increase from there: the number of data centers worldwide has risen from 500,000 in 2012 to over 8 million as of September 2024.

A new study, published in Frontiers, aims to draw more attention to the issue. Researchers analyzed the number of 'tokens'—the smallest units of data that a language model uses to process and generate text—required to produce responses, and found that certain prompts can release up to 50 times more CO2 emissions than others.

Different AI models use a different number of parameters; those with more parameters often perform better. Parameters are the internal variables that a model learns during training, and then uses to produce results. The study examined 14 large language models (LLMs) ranging from seven to 72 billion parameters, asking them the same 1,000 benchmark questions across a range of subjects.

Reasoning-enabled models, which are able to perform more complex tasks, on average created 543.5 'thinking' tokens per question (these are additional units of data that reasoning LLMs generate before producing an answer). That's compared to more concise models, which required just 37.7 tokens per question. The more tokens used, the higher the emissions—regardless of whether or not the answer was correct.

The subject matter also affected the amount of emissions produced. Questions on straightforward topics, like high school history, produced up to six times fewer emissions than subjects like abstract algebra or philosophy, which required lengthy reasoning processes.

Currently, many models have an inherent 'accuracy-sustainability trade-off,' researchers say. The model researchers deemed the most accurate, the reasoning-enabled Cogito model, produced three times more CO2 emissions than similarly sized models that generated more concise answers. The inherent challenge, then, in the current landscape of AI models is to optimize both energy efficiency and accuracy. 'None of the models that kept emissions below 500 grams of CO₂ equivalent achieved higher than 80% accuracy on answering the 1,000 questions correctly,' first author Maximilian Dauner, a researcher at Hochschule München University of Applied Sciences, said in a press release.

It's not just the types of questions asked or the degree of the answer's accuracy, but the models themselves that can lead to the difference in emissions. Researchers found that some language models produce more emissions than others. For DeepSeek R1 (70 billion parameters) to answer 600,000 questions would create CO2 emissions equal to a round-trip flight from London to New York, while Qwen 2.5 (72 billion parameters) can answer over three times as many questions—about 1.9 million—with similar accuracy rates and the same emissions.
The researchers hope that users might be more mindful of the environmental impact of their AI use. 'If users know the exact CO₂ cost of their AI-generated outputs, such as casually turning themselves into an action figure,' said Dauner, 'they might be more selective and thoughtful about when and how they use these technologies.'


Malaysian Reserve
3 days ago
- Health
- Malaysian Reserve
Peer-Reviewed Study Validates Accuracy of SANSA Home Sleep Apnea Test
ATLANTA, Ga., June 17, 2025 /PRNewswire/ — Huxley Medical, a commercial-stage medical technology firm focused on streamlining detection of sleep and heart disorders, announced that the clinical validation study of its SANSA home sleep apnea test has been published in Frontiers in Neurology (Volume 16, 2025; doi: 10.3389/fneur.2025.1592690). The multicenter, prospective trial confirms that SANSA delivers comparable performance to in-lab polysomnography (PSG), the recognized gold standard for obstructive sleep apnea (OSA) diagnosis.

The study, titled 'Polysomnography validation of SANSA to detect obstructive sleep apnea,' included 340 participants across seven U.S. clinical sites. SANSA's close agreement with PSG, combined with its simple, single-point-of-contact, hands-free design, positions it as an efficient and versatile solution to diagnose and monitor OSA.

Study highlights include:

- Scale and diversity: Investigators highlighted the generalizability of the study results due to its large, diverse group of participants across seven academic and community sleep centers using different PSG protocols, exceeding that of most other home sleep testing validation studies.
- Diagnostic accuracy: High apnea-hypopnea index (AHI) correlation with consensus PSG (91%), along with sensitivity of 88% and specificity of 87% for detecting moderate-to-severe OSA (AHI ≥ 15 events/hour).
- Total sleep time assessment: Correlation of R = 0.82 between SANSA and PSG-derived measures, with sleep epoch classification accuracy of 87%.

'This publication represents an important milestone in the clinical validation of SANSA,' said Dr. Cathy Goldstein, MD, principal investigator on the study, professor of Neurology at the University of Michigan Sleep Disorders Center, and former chair of the American Academy of Sleep Medicine's Artificial Intelligence in Sleep Medicine Committee. 'As providers seek more efficient and scalable ways to diagnose sleep apnea, these findings reinforce that SANSA can deliver reliable, high-quality data without the complexity of traditional home sleep apnea testing devices.'

The SANSA platform uses a wireless chest-worn patch to collect multi-parameter physiological data without the need for phone apps or technician setup. By simplifying testing, SANSA aims to expand access to timely diagnosis and treatment for millions of Americans with undiagnosed sleep apnea. The embedded reference electrocardiogram (ECG) within SANSA's suite of sensors also enables concurrent measurement of cardiac signals—opening new avenues for broader clinical applications.

About Huxley Medical
Huxley Medical, Inc. is a privately held medical technology company on a mission to develop diagnostic solutions that streamline care for any patient anywhere. The company has received funding from the National Science Foundation, National Institutes of Health, Georgia Research Alliance Venture Fund, Invest Georgia, Georgia Tech Foundation Research Impact Fund, and Duke Capital Partners to translate its growing technology portfolio. To learn more, visit or email info@

Research manuscript:

Media Contact
Brennan Torstrick
Chief Scientific Officer
Huxley
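As one way to read the diagnostic accuracy figures above, the Python sketch below converts the reported sensitivity and specificity into predictive values under an assumed prevalence of moderate-to-severe OSA; the prevalence is an illustrative assumption, not a number from the study or press release.

```python
# Sketch: how SANSA's reported sensitivity (88%) and specificity (87%) for
# moderate-to-severe OSA translate into predictive values. The prevalence value is an
# assumed example for illustration only; it is not reported in the press release.

def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    """Return (positive predictive value, negative predictive value) at a given prevalence."""
    tp = sensitivity * prevalence              # true positives per screened patient
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    fn = (1 - sensitivity) * prevalence        # false negatives
    tn = specificity * (1 - prevalence)        # true negatives
    return tp / (tp + fp), tn / (tn + fn)

ppv, npv = predictive_values(0.88, 0.87, prevalence=0.30)  # assumed 30% prevalence
print(f"PPV: {ppv:.0%}, NPV: {npv:.0%}")  # roughly 74% and 94% under this assumption
```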