Your AI use could have a hidden environmental cost
Whether it's answering work emails or drafting wedding vows, generative artificial intelligence tools have become a trusty copilot in many people's lives. But a growing body of research shows that for every problem AI solves, hidden environmental costs are racking up.
Each word in an AI prompt is broken down into smaller units called tokens, each represented by a numeric ID, and sent to massive data centers — some larger than football fields — powered by coal or natural gas plants. There, stacks of large computers generate responses through dozens of rapid calculations.
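To make that tokenization step concrete, here is a minimal sketch using OpenAI's open-source tiktoken library; the encoding name and prompt are illustrative, and other companies' models use their own tokenizers:

```python
# Turn a prompt into the numeric token IDs a model actually processes.
# "cl100k_base" is one published encoding; other models differ.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
prompt = "Draft a short toast for my sister's wedding."
token_ids = enc.encode(prompt)

print(token_ids)       # a list of integers, one per token
print(len(token_ids))  # how many tokens the data center must process
```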
The whole process can take up to 10 times more energy to complete than a regular Google search, according to a frequently cited estimate from the Electric Power Research Institute.
So, for each prompt you give AI, what's the damage? To find out, researchers in Germany tested 14 large language model (LLM) AI systems by asking them both free-response and multiple-choice questions. Complex questions produced up to six times more carbon dioxide emissions than questions with concise answers.
In addition, 'smarter' LLMs with more reasoning abilities produced up to 50 times more carbon emissions than simpler systems to answer the same question, the study reported.
'This shows us the tradeoff between energy consumption and the accuracy of model performance,' said Maximilian Dauner, a doctoral student at Hochschule München University of Applied Sciences and first author of the Frontiers in Communication study published Wednesday.
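Measurements like these rely on software energy meters rather than guesswork. Below is a minimal sketch of gauging one query's emissions with the open-source codecarbon package and a small, locally run model; it mirrors the kind of per-question measurement the study describes, not its exact setup:

```python
# Estimate the CO2-equivalent emissions of a single local model query.
# The model choice and prompt are illustrative assumptions.
from codecarbon import EmissionsTracker
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # small illustrative model

tracker = EmissionsTracker()
tracker.start()
result = generator("Explain photosynthesis briefly.", max_new_tokens=100)
emissions_kg = tracker.stop()  # estimated kg of CO2-equivalent

print(result[0]["generated_text"])
print(f"Estimated emissions: {emissions_kg * 1000:.4f} g CO2e")
```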
Typically, these smarter, more energy-intensive LLMs have tens of billions more parameters — the learned weights and biases a model uses to process token IDs — than smaller, more concise models.
'You can think of it like a neural network in the brain. The more neuron connections, the more thinking you can do to answer a question,' Dauner said.
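For a sense of scale, parameter counts are easy to inspect directly. A minimal sketch assuming the Hugging Face transformers library, with two illustrative public checkpoints (not models from the study):

```python
# Compare the parameter counts of a tiny model and a larger one.
from transformers import AutoModel

for name in ["prajjwal1/bert-tiny", "bert-base-uncased"]:
    model = AutoModel.from_pretrained(name)
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{name}: {n_params / 1e6:.1f}M parameters")
```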
Complex questions require more energy in part because of the lengthy explanations many AI models are trained to provide, Dauner said. If you ask an AI chatbot to solve an algebra question for you, it may take you through the steps it took to find the answer, he said.
'AI expends a lot of energy being polite, especially if the user is polite, saying "please" and "thank you,"' Dauner explained. 'But this just makes their responses even longer, expending more energy to generate each word.'
For this reason, Dauner suggests users be more direct when communicating with AI models: specify the length of answer you want, limit it to one or two sentences, or say you don't need an explanation at all.
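In practice, that advice amounts to constraining the response up front. A minimal sketch assuming the official openai Python client; the model name and instruction wording are illustrative, not drawn from the study:

```python
# Ask for a short answer and cap the response length explicitly.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": "Answer in one or two sentences. Skip the step-by-step explanation."},
        {"role": "user", "content": "Solve 3x + 5 = 20 for x."},
    ],
    max_tokens=60,  # hard cap on generated tokens, and thus on energy spent
)
print(response.choices[0].message.content)
```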
Most important, Dauner's study highlights that not all AI models are created equal, said Sasha Luccioni, the climate lead at AI company Hugging Face, in an email. Users looking to reduce their carbon footprint can be more intentional about which model they choose for which task.
'Task-specific models are often much smaller and more efficient, and just as good at any context-specific task,' Luccioni explained.
If you are a software engineer who solves complex coding problems every day, an AI model suited for coding may be necessary. But for the average high school student who wants help with homework, relying on powerful AI tools is like using a nuclear-powered digital calculator.
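A minimal sketch of what reaching for a task-specific model can look like, assuming the Hugging Face transformers library; the distilled summarization checkpoint is an illustrative example, not one Luccioni named:

```python
# A small distilled model handles a narrow task like summarization well,
# at a fraction of the parameter count of a general-purpose chatbot.
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

text = (
    "Researchers in Germany tested 14 large language models and found that "
    "complex, open-ended questions produced up to six times more carbon "
    "dioxide emissions than questions with concise answers."
)
print(summarizer(text, max_length=30, min_length=10)[0]["summary_text"])
```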
Even within the same AI company, different model offerings can vary in their reasoning power, so research what capabilities best suit your needs, Dauner said.
When possible, Luccioni recommends going back to basic sources — online encyclopedias and phone calculators — to accomplish simple tasks.
Putting a number on the environmental impact of AI has proved challenging.
The study noted that energy consumption can vary based on the user's proximity to local energy grids and the hardware used to run AI models. That's partly why the researchers chose to represent carbon emissions within a range, Dauner said.
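The grid effect is simple arithmetic: the same watt-hours translate into very different emissions depending on what powers the local grid. A minimal sketch, with all figures illustrative assumptions rather than numbers from the study:

```python
# Convert an assumed per-query energy cost into grams of CO2
# under different grid carbon intensities.
ENERGY_PER_QUERY_WH = 2.9  # assumed watt-hours per AI query

grid_intensity_g_per_kwh = {  # illustrative grid carbon intensities
    "coal-heavy grid": 800,
    "natural gas grid": 450,
    "low-carbon grid": 50,
}

for grid, intensity in grid_intensity_g_per_kwh.items():
    grams_co2 = (ENERGY_PER_QUERY_WH / 1000) * intensity
    print(f"{grid}: {grams_co2:.2f} g CO2 per query")
```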
Furthermore, many AI companies don't share information about their energy consumption — or details like server size or optimization techniques that could help researchers estimate it — said Shaolei Ren, an associate professor of electrical and computer engineering at the University of California, Riverside, who studies AI's water consumption.
'You can't really say AI consumes this much energy or water on average — that's just not meaningful. We need to look at each individual model and then (examine what it uses) for each task,' Ren said.
One way AI companies could be more transparent is by disclosing the amount of carbon emissions associated with each prompt, Dauner suggested.
'Generally, if people were more informed about the average (environmental) cost of generating a response, people would maybe start thinking, "Is it really necessary to turn myself into an action figure just because I'm bored?" or "Do I have to tell ChatGPT jokes because I have nothing to do?"' Dauner said.
Additionally, as more companies push to add generative AI tools to their systems, people may not have much choice in how or when they use the technology, Luccioni said.
'We don't need generative AI in web search. Nobody asked for AI chatbots in (messaging apps) or on social media,' Luccioni said. 'This race to stuff them into every single existing technology is truly infuriating, since it comes with real consequences to our planet.'
With less available information about AI's resource usage, consumers have less choice, Ren said, adding that regulatory pressure for more transparency is unlikely to come to the United States anytime soon. Instead, the best hope for more energy-efficient AI may lie in the cost savings of using less energy.
'Overall, I'm still positive about (the future). There are many software engineers working hard to improve resource efficiency,' Ren said. 'Other industries consume a lot of energy too, but it's not a reason to suggest AI's environmental impact is not a problem. We should definitely pay attention.'