
Forever chemicals found in American beer — the worst is brewed in this part of the country
Crack open a cold one this Memorial Day and you might be in for a hoppy hazard.
A recent study published in the journal Environmental Science & Technology has found that 95% of the 23 beers it tested across the US contain per- and polyfluoroalkyl substances (PFAS) — commonly known as 'forever chemicals' due to their lingering presence in the environment and the human body.
And the amount of forever chemicals in your brew can vary dramatically depending on where in the country it's made.
A recent study discovered 95% of beers across the US contain forever chemicals, which have been linked to cancer and other health issues.
DN6 – stock.adobe.com
These synthetic compounds, which have been linked to cancer and other health issues, are believed to enter beer primarily through contaminated tap water used in brewing.
The study found a strong correlation between PFAS concentrations in municipal drinking water and the levels in locally brewed beer — a link that had not previously been researched.
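For the statistically curious, here is a minimal Python sketch of the kind of water-versus-beer comparison the researchers describe. The figures below are hypothetical placeholders for illustration, not data or methods from the study itself:

from scipy.stats import pearsonr

# Hypothetical paired PFAS measurements (in ng/L) for five brewing locations.
# These are placeholder values for illustration, not figures from the study.
water_pfas = [4.2, 11.8, 35.0, 7.5, 22.1]   # municipal tap water used for brewing
beer_pfas = [3.1, 9.4, 28.7, 6.0, 17.9]     # finished beer brewed with that water

# A simple Pearson correlation shows how strongly the two sets of readings track each other.
r, p = pearsonr(water_pfas, beer_pfas)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")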
While the study did not disclose specific beer brands, it identified that beers brewed near the Cape Fear River Basin in North Carolina exhibited the highest levels and most diverse mix of PFAS.
Beers from St. Louis County, Missouri, also showed significant PFAS presence.
The findings suggest that standard water filtration systems used in breweries may not effectively remove forever chemicals, highlighting the need for improved water treatment strategies at both brewing facilities and municipal treatment plants.
As beer is composed of about 90% water — and nearly two gallons of water can be used to produce just one quart of beer — the quality of water used in brewing is crucial.
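For readers who want the math, a quick back-of-the-envelope check of that ratio, restating the article's figures rather than study data, looks like this:

# Rough arithmetic behind the figures quoted above (the article's claim, not study data).
QUARTS_PER_GALLON = 4

water_gallons = 2.0                     # about two gallons of water used...
beer_gallons = 1 / QUARTS_PER_GALLON    # ...to make one quart of beer

ratio = water_gallons / beer_gallons
print(f"Roughly {ratio:.0f} gallons of water per gallon of beer")   # prints about 8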
With PFAS contamination affecting an estimated 200 million people in the US, the presence of these chemicals in beer underscores the broader issue of environmental pollutants infiltrating everyday products.
Cape Fear Memorial Bridge crossing the Cape Fear River at sunset in Wilmington, North Carolina, United States.
Zenstratus – stock.adobe.com
'As an occasional beer drinker myself, I wondered whether PFAS in water supplies was making its way into our pints,' lead author Jennifer Hoponick Redmon said in a press release.
'I hope these findings inspire water treatment strategies and policies that help reduce the likelihood of PFAS in future pours.'
Last year, after testing just one-third of public water supplies in the US, the Environmental Protection Agency determined that more than 70 million residents are being exposed to 'forever chemicals.'
The findings suggest that standard water filtration systems used in breweries may not effectively remove forever chemicals.
luchschenF – stock.adobe.com
The most contaminated supplies were found in densely populated regions like New York, New Jersey, and parts of California and Texas. However, an accompanying interactive map shows that Manhattan has zero reports of forever chemicals in its water.
'The full scale of PFAS contamination is likely much more widespread,' an Environmental Working Group (EWG) spokesperson said at the time, noting that the EPA's report only offered a snapshot of the problem.
In fact, in November, researchers at Florida International University in Miami found forever chemicals in rainwater.
They've also been found in everything from contact lenses to dental floss to toilet paper and even shampoo.
When it comes to reducing exposure through drinking water, there is something consumers can do about it: while boiling water doesn't remove PFAS, some water filters can.
A report from FoodPrint outlines how filters with activated carbon adsorption, ion exchange resins and high-pressure membranes can help.
'To remove a specific contaminant like PFAS from drinking water, consumers should choose a water filtration device that is independently certified to remove that contaminant by a recognized lab,' said Jim Nanni of Consumer Reports.