Latest news with #MaximilianDauner
Yahoo
6 hours ago
- Science
- Yahoo
These AI chatbot questions cause most carbon emissions, scientists find
Queries requiring AI chatbots like OpenAI's ChatGPT to think logically and reason produce more carbon emissions than other types of questions, according to a new study. Every query typed into a large language model like ChatGPT requires energy and leads to carbon dioxide emissions. The emission levels depend on the chatbot, the user, and the subject matter, researchers at Germany's Hochschule München University of Applied Sciences say.

The study, published in the journal Frontiers in Communication, compares 14 AI models and finds that answers requiring complex reasoning cause more carbon emissions than simple answers. Queries needing lengthy reasoning, such as abstract algebra or philosophy, cause up to six times greater emissions than more straightforward subjects like high school history. The researchers recommend that frequent users of AI chatbots adjust the kind of questions they pose to limit carbon emissions.

The study assesses 14 LLMs on 1,000 standardised questions across subjects to compare their carbon emissions. 'The environmental impact of questioning trained LLMs is strongly determined by their reasoning approach, with explicit reasoning processes significantly driving up energy consumption and carbon emissions,' study author Maximilian Dauner says. 'We found that reasoning-enabled models produced up to 50 times more carbon dioxide emissions than concise response models.'

When a user puts a question to an AI chatbot, words or parts of words in the query are converted into a string of numbers and processed by the model. This conversion and the model's other computations consume energy and therefore produce carbon emissions. The study notes that reasoning models create 543.5 tokens per question on average, while concise models require only about 40. 'A higher token footprint always means higher CO2 emissions,' it says.

For instance, Cogito, one of the most accurate models at about 85 per cent accuracy, produces three times more carbon emissions than similarly sized models that provide concise answers. 'Currently, we see a clear accuracy-sustainability trade-off inherent in LLM technologies,' Dr Dauner says. 'None of the models that kept emissions below 500 grams of carbon dioxide equivalent achieved higher than 80 per cent accuracy on answering the 1,000 questions correctly.' Carbon dioxide equivalent is a unit for measuring the climate change impact of various greenhouse gases.

The researchers hope the new findings will prompt people to make more informed decisions about their AI use. As an example, they say that asking the DeepSeek R1 chatbot to answer 600,000 questions may create carbon emissions equal to a round-trip flight from London to New York. In comparison, Alibaba Cloud's Qwen 2.5 can answer more than three times as many questions with similar accuracy rates while generating the same emissions. 'Users can significantly reduce emissions by prompting AI to generate concise answers or limiting the use of high-capacity models to tasks that genuinely require that power,' Dr Dauner says.
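The step described above, in which words or parts of words in a query are turned into a string of numbers (tokens), can be illustrated with a toy example. The sketch below uses an invented vocabulary purely for illustration; real chatbots such as ChatGPT rely on much larger, learned subword vocabularies, so every word piece and ID here is an assumption, not the actual tokenizer of any model in the study.

```python
# Toy illustration of turning a query into a string of numbers (token IDs).
# The vocabulary and IDs are invented for this example; real chatbots use
# learned subword vocabularies with tens of thousands of entries.

TOY_VOCAB = {
    "what": 1, "is": 2, "abstract": 3, "algebra": 4, "?": 5,
    "high": 6, "school": 7, "history": 8,
}

def toy_tokenize(query: str) -> list[int]:
    """Map each known word piece to its ID; unknown pieces get 0."""
    pieces = query.lower().replace("?", " ?").split()
    return [TOY_VOCAB.get(piece, 0) for piece in pieces]

print(toy_tokenize("What is abstract algebra?"))     # [1, 2, 3, 4, 5]
print(toy_tokenize("What is high school history?"))  # [1, 2, 6, 7, 8, 5]
```

Longer answers mean more of these tokens are generated, which, as the article notes, is what drives the higher energy use and emissions of reasoning-style responses.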
Yahoo
13 hours ago
- Science
- Yahoo
Advanced AI models generate up to 50 times more CO₂ emissions than more common LLMs when answering the same questions
The more accurate we try to make AI models, the bigger their carbon footprint, with some prompts producing up to 50 times more carbon dioxide emissions than others, a new study has revealed.

Reasoning models, such as Anthropic's Claude, OpenAI's o3 and DeepSeek's R1, are specialized large language models (LLMs) that dedicate more time and computing power to produce more accurate responses than their predecessors. Yet, aside from some impressive results, these models have been shown to face severe limitations in their ability to crack complex problems. Now, a team of researchers has highlighted another constraint on the models' performance: their exorbitant carbon footprint. They published their findings June 19 in the journal Frontiers in Communication.

"The environmental impact of questioning trained LLMs is strongly determined by their reasoning approach, with explicit reasoning processes significantly driving up energy consumption and carbon emissions," study first author Maximilian Dauner, a researcher at Hochschule München University of Applied Sciences in Germany, said in a statement. "We found that reasoning-enabled models produced up to 50 times more CO₂ emissions than concise response models."

To answer the prompts given to them, LLMs break up language into tokens: word chunks that are converted into a string of numbers before being fed into neural networks. These neural networks are tuned on training data to estimate the probabilities of certain patterns appearing, and they then use these probabilities to generate responses. Reasoning models further attempt to boost accuracy using a process known as "chain-of-thought," a technique that breaks one complex problem down into smaller, more digestible intermediary steps that follow a logical flow, mimicking how humans might arrive at the conclusion to the same problem.

However, these models have significantly higher energy demands than conventional LLMs, posing a potential economic bottleneck for companies and users wishing to deploy them. Yet, despite some research into the environmental impact of growing AI adoption more generally, comparisons between the carbon footprints of different models remain relatively rare.

To examine the CO₂ emissions produced by different models, the scientists behind the new study asked 14 LLMs 1,000 questions across different topics. The models had between 7 and 72 billion parameters. The computations were performed using the Perun framework (which analyzes LLM performance and the energy it requires) on an NVIDIA A100 GPU, and the team then converted energy usage into CO₂ by assuming each kilowatt-hour of energy produces 480 grams of CO₂.

Their results show that, on average, reasoning models generated 543.5 tokens per question, compared with just 37.7 tokens for more concise models. These extra tokens, which amount to more computation, meant that the more accurate reasoning models produced more CO₂. The most accurate model was the 72-billion-parameter Cogito model, which answered 84.9% of the benchmark questions correctly, yet Cogito released three times the CO₂ emissions of similarly sized models made to generate answers more concisely. "Currently, we see a clear accuracy-sustainability trade-off inherent in LLM technologies," said Dauner.
"None of the models that kept emissions below 500 grams of CO₂ equivalent [total greenhouse gases released] achieved higher than 80% accuracy on answering the 1,000 questions correctly." RELATED STORIES —Replika AI chatbot is sexually harassing users, including minors, new study claims —OpenAI's 'smartest' AI model was explicitly told to shut down — and it refused —AI benchmarking platform is helping top companies rig their model performances, study claims But the issues go beyond accuracy. Questions that needed longer reasoning times, like in algebra or philosophy, caused emissions to spike six times higher than straightforward look-up queries. The researchers' calculations also show that the emissions depended on the models that were chosen. To answer 60,000 questions, DeepSeek's 70 billion parameter R1 model would produce the CO₂ emitted by a round-trip flight between New York and London. Alibaba Cloud's 72 billion parameter Qwen 2.5 model, however, would be able to answer these with similar accuracy rates for a third of the emissions. The study's findings aren't definitive; emissions may vary depending on the hardware used and the energy grids used to supply their power, the researchers emphasized. But they should prompt AI users to think before they deploy the technology, the researchers noted. "If users know the exact CO₂ cost of their AI-generated outputs, such as casually turning themselves into an action figure, they might be more selective and thoughtful about when and how they use these technologies," Dauner said.


Economic Times
a day ago
- Science
- Economic Times
AI chatbots using reason emit more carbon than those responding concisely, study finds
A study found that carbon emissions from chat-based generative AI can be six times higher when responding to complex prompts, like abstract algebra or philosophy, compared to simpler prompts, such as high school history.

"The environmental impact of questioning trained (large-language models) is strongly determined by their reasoning approach, with explicit reasoning processes significantly driving up energy consumption and carbon emissions," first author Maximilian Dauner, a researcher at Hochschule München University of Applied Sciences, Germany, said. "We found that reasoning-enabled models produced up to 50 times more (carbon dioxide) emissions than concise response models," Dauner added.

The study, published in the journal Frontiers in Communication, evaluated how 14 large-language models (which power chatbots), including DeepSeek and Cogito, process information before responding to 1,000 benchmark questions: 500 multiple-choice and 500 subjective. Each model responded to 100 questions on each of the five subjects chosen for the analysis: philosophy, high school world history, international law, abstract algebra, and high school mathematics.

"Zero-token reasoning traces appear when no intermediate text is needed (e.g. Cogito 70B reasoning on certain history items), whereas the maximum reasoning burden (6.716 tokens) is observed for the Deepseek R1 7B model on an abstract algebra prompt," the authors wrote. Tokens are virtual objects created by conversational AI when processing a user's prompt in natural language. More tokens lead to increased carbon dioxide emissions.

Chatbots equipped with an ability to reason, or 'reasoning models', produced 543.5 'thinking' tokens per question, whereas concise models, which produce one-word answers, required just 37.7 tokens per question, the researchers found. Thinking tokens are additional ones that reasoning models generate before producing an answer, they explained. However, more thinking tokens do not necessarily guarantee correct responses; as the team said, elaborate detail is not always essential for correctness.

Dauner said, "None of the models that kept emissions below 500 grams of CO₂ equivalent achieved higher than 80 per cent accuracy on answering the 1,000 questions correctly." "Currently, we see a clear accuracy-sustainability trade-off inherent in (large-language model) technologies," the author added.

The most accurate performance was seen in the reasoning model Cogito, with nearly 85 per cent accuracy in responses, whilst producing three times more carbon dioxide emissions than similar-sized models generating concise answers. "In conclusion, while larger and reasoning-enhanced models significantly outperform smaller counterparts in terms of accuracy, this improvement comes with steep increases in emissions and computational demand," the authors wrote. "Optimising reasoning efficiency and response brevity, particularly for challenging subjects like abstract algebra, is crucial for advancing more sustainable and environmentally conscious artificial intelligence technologies," they wrote.
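The benchmark layout described above (1,000 questions split between multiple-choice and subjective items across five subjects, with "thinking" tokens counted per answer) can be sketched as a small bookkeeping loop. This is only an illustration of the setup as reported: `ask_model` is a hypothetical stand-in, not the study's actual harness, which measured energy with the Perun framework on an NVIDIA A100 GPU.

```python
# Sketch of the benchmark bookkeeping described above: five subjects,
# with per-answer "thinking" tokens tallied so that subjects such as
# abstract algebra can be compared against high school history.
# `ask_model` is a hypothetical stand-in for a real inference call.

from collections import defaultdict
from typing import Callable

SUBJECTS = [
    "philosophy",
    "high school world history",
    "international law",
    "abstract algebra",
    "high school mathematics",
]

def run_benchmark(ask_model: Callable[[str, str], tuple[str, int]],
                  questions: dict[str, list[str]]) -> dict[str, float]:
    """Return the average thinking-token count per subject.

    `ask_model(subject, question)` is assumed to return the answer text
    and the number of intermediate "thinking" tokens it generated.
    """
    token_totals: dict[str, int] = defaultdict(int)
    counts: dict[str, int] = defaultdict(int)

    for subject in SUBJECTS:
        for question in questions.get(subject, []):
            _answer, thinking_tokens = ask_model(subject, question)
            token_totals[subject] += thinking_tokens
            counts[subject] += 1

    return {s: token_totals[s] / counts[s] for s in SUBJECTS if counts[s]}
```

Per the study, averaging these counts per subject is what exposes the gap between reasoning-heavy prompts like abstract algebra and look-up prompts like high school history.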

Business Standard
a day ago
- Science
- Business Standard
AI chatbots that reason emit more carbon than ones with simple reply: Study
A study found that carbon emissions from chat-based generative AI can be six times higher when responding to complex prompts, like abstract algebra or philosophy, compared to simpler prompts, such as high school history.

"The environmental impact of questioning trained (large-language models) is strongly determined by their reasoning approach, with explicit reasoning processes significantly driving up energy consumption and carbon emissions," first author Maximilian Dauner, a researcher at Hochschule München University of Applied Sciences, Germany, said. "We found that reasoning-enabled models produced up to 50 times more (carbon dioxide) emissions than concise response models," Dauner added.

The study, published in the journal Frontiers in Communication, evaluated how 14 large-language models (which power chatbots), including DeepSeek and Cogito, process information before responding to 1,000 benchmark questions: 500 multiple-choice and 500 subjective. Each model responded to 100 questions on each of the five subjects chosen for the analysis: philosophy, high school world history, international law, abstract algebra, and high school mathematics.

"Zero-token reasoning traces appear when no intermediate text is needed (e.g. Cogito 70B reasoning on certain history items), whereas the maximum reasoning burden (6.716 tokens) is observed for the Deepseek R1 7B model on an abstract algebra prompt," the authors wrote. Tokens are virtual objects created by conversational AI when processing a user's prompt in natural language. More tokens lead to increased carbon dioxide emissions.

Chatbots equipped with an ability to reason, or 'reasoning models', produced 543.5 'thinking' tokens per question, whereas concise models, which produce one-word answers, required just 37.7 tokens per question, the researchers found. Thinking tokens are additional ones that reasoning models generate before producing an answer, they explained. However, more thinking tokens do not necessarily guarantee correct responses; as the team said, elaborate detail is not always essential for correctness.

Dauner said, "None of the models that kept emissions below 500 grams of CO2 equivalent achieved higher than 80 per cent accuracy on answering the 1,000 questions correctly." "Currently, we see a clear accuracy-sustainability trade-off inherent in (large-language model) technologies," the author added.

The most accurate performance was seen in the reasoning model Cogito, with nearly 85 per cent accuracy in responses, whilst producing three times more carbon dioxide emissions than similar-sized models generating concise answers. "In conclusion, while larger and reasoning-enhanced models significantly outperform smaller counterparts in terms of accuracy, this improvement comes with steep increases in emissions and computational demand," the authors wrote. "Optimising reasoning efficiency and response brevity, particularly for challenging subjects like abstract algebra, is crucial for advancing more sustainable and environmentally conscious artificial intelligence technologies," they wrote.


Indian Express
a day ago
- Science
- Indian Express
AI chatbots using reason emit more carbon than those responding concisely, study finds
A study found that carbon emissions from chat-based generative AI can be six times higher when responding to complex prompts, like abstract algebra or philosophy, compared to simpler prompts, such as high school history.

'The environmental impact of questioning trained (large-language models) is strongly determined by their reasoning approach, with explicit reasoning processes significantly driving up energy consumption and carbon emissions,' first author Maximilian Dauner, a researcher at Hochschule München University of Applied Sciences, Germany, said. 'We found that reasoning-enabled models produced up to 50 times more (carbon dioxide) emissions than concise response models,' Dauner added.

The study, published in the journal Frontiers in Communication, evaluated how 14 large-language models (which power chatbots), including DeepSeek and Cogito, process information before responding to 1,000 benchmark questions: 500 multiple-choice and 500 subjective. Each model responded to 100 questions on each of the five subjects chosen for the analysis: philosophy, high school world history, international law, abstract algebra, and high school mathematics.

'Zero-token reasoning traces appear when no intermediate text is needed (e.g. Cogito 70B reasoning on certain history items), whereas the maximum reasoning burden (6.716 tokens) is observed for the Deepseek R1 7B model on an abstract algebra prompt,' the authors wrote. Tokens are virtual objects created by conversational AI when processing a user's prompt in natural language. More tokens lead to increased carbon dioxide emissions.

Chatbots equipped with an ability to reason, or 'reasoning models', produced 543.5 'thinking' tokens per question, whereas concise models, which produce one-word answers, required just 37.7 tokens per question, the researchers found. Thinking tokens are additional ones that reasoning models generate before producing an answer, they explained. However, more thinking tokens do not necessarily guarantee correct responses; as the team said, elaborate detail is not always essential for correctness.

Dauner said, 'None of the models that kept emissions below 500 grams of CO2 equivalent achieved higher than 80 per cent accuracy on answering the 1,000 questions correctly.' 'Currently, we see a clear accuracy-sustainability trade-off inherent in (large-language model) technologies,' the author added.

The most accurate performance was seen in the reasoning model Cogito, with nearly 85 per cent accuracy in responses, whilst producing three times more carbon dioxide emissions than similar-sized models generating concise answers. 'In conclusion, while larger and reasoning-enhanced models significantly outperform smaller counterparts in terms of accuracy, this improvement comes with steep increases in emissions and computational demand,' the authors wrote. 'Optimising reasoning efficiency and response brevity, particularly for challenging subjects like abstract algebra, is crucial for advancing more sustainable and environmentally conscious artificial intelligence technologies,' they wrote.