Researchers raise red flags after studying samples of popular food item: 'Humans are directly ingesting these'
Researchers in India have published what The Statesman described as a "landmark scientific study" with troubling findings about microplastics in a commonly consumed type of shellfish.
Researchers at two Indian universities obtained nearly 400 samples of Lamellidens marginalis, a species of freshwater mollusk commonly harvested and consumed in Southeast Asia.
Also known as freshwater mussels, the mollusks are considered an "indicator species" because they shed light on levels of water pollution in their natural habitats.
For this study, researchers examined the samples of Lamellidens marginalis to gauge the extent of plastic pollution in local rivers.
In what The Statesman called the "first concrete evidence of how deeply plastic pollution has penetrated local food systems," scientists determined that over 80% of freshwater mussel samples obtained from "six key market hubs" contained microplastic particulate matter.
"Every mussel you eat might be delivering more than just protein. It could be a vehicle for microscopic plastic particles that are now infiltrating human bodies through daily diets," lead study author Dr. Sujoy Midya explained.
"With mussels acting as natural water filters — and now [as] unwilling microplastic reservoirs — the study paints a grim picture of environmental degradation," The Statesman concluded.
According to the U.S. Fish & Wildlife Service, "a single freshwater mussel can pump and filter between 8 and 15 gallons of water per day."
Consequently, freshwater mussels "drastically improve the water quality in their environments." Like oysters and clams, these "filter feeders" perform an essential function in aquatic ecosystems, keeping waters clean and heralding dangerous conditions.
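The Fish & Wildlife figure above scales up quickly. As a rough illustration (the 8-15 gallon range is from the article; the bed size of 1,000 mussels is a hypothetical assumption), a back-of-the-envelope sketch:

```python
# Rough daily filtration by a mussel bed, using the USFWS
# per-mussel range of 8-15 gallons/day cited above.
# The bed size of 1,000 mussels is a hypothetical illustration.
low_gal, high_gal = 8, 15   # gallons filtered per mussel per day
bed_size = 1_000            # hypothetical number of mussels in a bed

daily_low = low_gal * bed_size
daily_high = high_gal * bed_size
print(f"{daily_low:,}-{daily_high:,} gallons filtered per day")
# prints "8,000-15,000 gallons filtered per day"
```

That throughput is exactly why contaminated mussels double as pollution records: whatever is suspended in the water passes through them.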
"These mussels are not just seafood — they're sentinels," Midya said. Yet freshwater mussels remain popular food in Southeast Asia, and the levels of contamination documented in the study are deeply concerning.
"Their contamination levels reflect the scale of pollution in our freshwater ecosystems. And because they are consumed whole, humans are directly ingesting these microplastics," explained Midya.
"Research has already shown that [microplastic] particles can accumulate in human tissues, potentially leading to oxidative stress, inflammation, and even genetic damage," he added, referencing a growing number of studies linking plastic pollution to adverse human health outcomes.
Per The Statesman, the study's authors recommended "immediate action — calling for stringent environmental policies, increased public awareness, and expanded scientific monitoring" to limit the risk to humans and mollusks alike.
Researchers in Korea pioneered a method to filter microplastic particles from water, although that technology is not in widespread use yet.
At an individual level, the most effective approach is to use less plastic whenever possible — while it's difficult to completely avoid plastic, incremental changes make a big difference.
Related Articles

Your AI use could have a hidden environmental cost (Yahoo)
Whether it's answering work emails or drafting wedding vows, generative artificial intelligence tools have become a trusty copilot in many people's lives. But a growing body of research shows that for every problem AI solves, hidden environmental costs are racking up.

Each word in an AI prompt is broken down into clusters of numbers called "token IDs" and sent to massive data centers — some larger than football fields — powered by coal or natural gas plants. There, stacks of large computers generate responses through dozens of rapid calculations. The whole process can take up to 10 times more energy to complete than a regular Google search, according to a frequently cited estimation by the Electric Power Research Institute.

So, for each prompt you give AI, what's the damage? To find out, researchers in Germany tested 14 large language model (LLM) AI systems by asking them both free-response and multiple-choice questions. Complex questions produced up to six times more carbon dioxide emissions than questions with concise answers. In addition, "smarter" LLMs with more reasoning abilities produced up to 50 times more carbon emissions than simpler systems to answer the same question, the study reported.

"This shows us the tradeoff between energy consumption and the accuracy of model performance," said Maximilian Dauner, a doctoral student at Hochschule München University of Applied Sciences and first author of the Frontiers in Communication study published Wednesday.

Typically, these smarter, more energy-intensive LLMs have tens of billions more parameters — the biases used for processing token IDs — than smaller, more concise models. "You can think of it like a neural network in the brain. The more neuron connections, the more thinking you can do to answer a question," Dauner said.
Complex questions require more energy in part because of the lengthy explanations many AI models are trained to provide, Dauner said. If you ask an AI chatbot to solve an algebra question for you, it may take you through the steps it took to find the answer, he said. "AI expends a lot of energy being polite, especially if the user is polite, saying 'please' and 'thank you,'" Dauner explained. "But this just makes their responses even longer, expending more energy to generate each word."

For this reason, Dauner suggests users be more straightforward when communicating with AI models. Specify the length of the answer you want and limit it to one or two sentences, or say you don't need an explanation at all.

Most important, Dauner's study highlights that not all AI models are created equal, said Sasha Luccioni, the climate lead at AI company Hugging Face, in an email. Users looking to reduce their carbon footprint can be more intentional about which model they choose for which task. "Task-specific models are often much smaller and more efficient, and just as good at any context-specific task," Luccioni explained.

If you are a software engineer who solves complex coding problems every day, an AI model suited for coding may be necessary. But for the average high school student who wants help with homework, relying on powerful AI tools is like using a nuclear-powered digital calculator. Even within the same AI company, different model offerings can vary in their reasoning power, so research what capabilities best suit your needs, Dauner said. When possible, Luccioni recommends going back to basic sources — online encyclopedias and phone calculators — to accomplish simple tasks.

Putting a number on the environmental impact of AI has proved challenging.
The study noted that energy consumption can vary based on the user's proximity to local energy grids and the hardware used to run the AI, which is partly why the researchers chose to represent carbon emissions within a range, Dauner said. Furthermore, many AI companies don't share information about their energy consumption — or details like server size or optimization techniques that could help researchers estimate it — said Shaolei Ren, an associate professor of electrical and computer engineering at the University of California, Riverside, who studies AI's water consumption.

"You can't really say AI consumes this much energy or water on average — that's just not meaningful. We need to look at each individual model and then (examine what it uses) for each task," Ren said.

One way AI companies could be more transparent is by disclosing the amount of carbon emissions associated with each prompt, Dauner suggested. "Generally, if people were more informed about the average (environmental) cost of generating a response, people would maybe start thinking, 'Is it really necessary to turn myself into an action figure just because I'm bored?' Or 'do I have to tell ChatGPT jokes because I have nothing to do?'" Dauner said.

Additionally, as more companies push to add generative AI tools to their systems, people may not have much choice in how or when they use the technology, Luccioni said. "We don't need generative AI in web search. Nobody asked for AI chatbots in (messaging apps) or on social media," Luccioni said. "This race to stuff them into every single existing technology is truly infuriating, since it comes with real consequences to our planet."

With less available information about AI's resource usage, consumers have less choice, Ren said, adding that regulatory pressure for more transparency is unlikely to come to the United States anytime soon. Instead, the best hope for more energy-efficient AI may lie in the cost savings of using less energy.
"Overall, I'm still positive about (the future). There are many software engineers working hard to improve resource efficiency," Ren said. "Other industries consume a lot of energy too, but it's not a reason to suggest AI's environmental impact is not a problem. We should definitely pay attention."
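The two multipliers the study reports (up to six times more CO2 for complex questions, up to 50 times more for high-reasoning models) can be combined into a rough upper bound. Whether the two effects fully compound is not stated in the article, so treating them as multiplicative is an assumption of this sketch, as is the arbitrary one-unit baseline:

```python
# Relative CO2 cost of a prompt, using the study's upper-bound
# multipliers quoted above. Assumptions: the two effects compound
# multiplicatively, and the baseline unit is arbitrary (the article
# gives no absolute per-prompt emissions figure).
BASELINE = 1.0          # concise answer from a simple model (arbitrary unit)
COMPLEX_QUESTION = 6.0  # up to 6x more CO2 for complex, free-response questions
REASONING_MODEL = 50.0  # up to 50x more CO2 for "smarter" LLMs

worst_case = BASELINE * COMPLEX_QUESTION * REASONING_MODEL
print(f"worst case: up to {worst_case:.0f}x baseline emissions")
# prints "worst case: up to 300x baseline emissions"
```

Even as a crude bound, the spread illustrates Dauner's point: model choice and prompt style, not just usage volume, drive the footprint.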


Cannabis Linked to 2x Risk of Heart Disease Death, Scientists Discover (Yahoo)
New research suggests that cannabis use is linked to twice the risk of death from cardiovascular disease, and is also associated with increased risk of other major adverse cardiovascular events (MACE).

Cannabis is being legalized in more and more places worldwide, increasing medicinal and recreational use – but it may warrant closer monitoring by health professionals. The researchers, led by a team from the University of Toulouse in France, wanted to look more closely at potential health risks that had previously been flagged. They analyzed 24 previous studies published between 2016 and 2023, involving around 200 million people.

Overall, the increased risk linked to cannabis use was 29 percent for acute coronary syndrome (reduced blood flow to the heart), 20 percent for strokes, and 100 percent for cardiovascular disease mortality. "The findings reveal positive associations between cannabis use and MACE," write the researchers in their published paper. "These findings should encourage investigating cannabis use in all patients presenting with serious cardiovascular disorders."

There are some limitations worth bearing in mind here. The studies included in the research differed in how they defined cannabis use, relied on self-reporting, and didn't measure use of the drug over time, which makes them less statistically robust. The research also notes a high risk of bias in the majority of the studies investigated, due to the way they were structured. And it's important to say the research doesn't show direct cause and effect, only an association. It's possible that other factors not considered here are driving both cannabis use and heart health issues in certain groups of people.

Nevertheless, the large number of people surveyed on their real-world use of cannabis counts in the study's favor. It also draws on more recent data than many other studies, and we know that cannabis use and composition are changing over time.
That's enough to warrant deeper investigation into the possible health risks. "Legalizing the drug and expanding its medical use worldwide have likely contributed to profound changes in the general perception of cannabis and to the overall rise in cannabis consumption," write the researchers. "Consequently, users' profiles and consumption habits profoundly differ from those in the 2010s, especially as cannabis products show an increasing trend in potency, with rising concentrations of delta-9-tetrahydrocannabinol (THC)."

Further research is absolutely needed here, not least to determine whether the chemicals and compounds in cannabis – of which there are hundreds – could be leading to these health risks, and how taking cannabis in different forms, such as inhalables or edibles, might have an influence. Previous studies have already shown how the drug can increase the risk of cancer and significantly alter our DNA, for example. Cannabis could also trigger psychosis through the impact it has on the brain.

The new study is accompanied by an editorial written by epidemiologists Stanton Glantz and Lynn Silver, from the University of California, San Francisco. In it, Glantz and Silver argue that as cannabis use rises, more should be done to educate people about the risks – as has been done with cigarettes. "Specifically, cannabis should be treated like tobacco: not criminalized but discouraged, with protection of bystanders from secondhand exposure," they write.

The research has been published in the journal Heart.
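The percentage increases reported above correspond to risk ratios of 1.29, 1.20, and 2.00. A minimal sketch converting them to absolute terms, where the 5 percent baseline risk is a hypothetical assumption (the summary reports only relative, not absolute, risk):

```python
# Convert the reported relative increases into risk ratios and,
# for illustration only, into absolute risks. The 5% baseline is
# a hypothetical assumption; the study gives no absolute figures.
increases = {
    "acute coronary syndrome": 0.29,
    "stroke": 0.20,
    "cardiovascular mortality": 1.00,  # "twice the risk"
}
baseline = 0.05  # hypothetical baseline risk

for outcome, inc in increases.items():
    risk_ratio = 1.0 + inc
    absolute = baseline * risk_ratio
    print(f"{outcome}: risk ratio {risk_ratio:.2f}, "
          f"illustrative absolute risk {absolute:.2%}")
```

The point of the conversion is that a doubled relative risk can mean very different things in absolute terms depending on the baseline, which is why the editorial's call for public education emphasizes context.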