
Banning Plastic Bags Works to Limit Shoreline Litter, Study Finds
At tens of thousands of shoreline cleanups across the United States in recent years, volunteers logged each piece of litter they pulled from the edges of lakes, rivers and beaches into a global database.
One of the most common entries? Plastic bags.
But in places throughout the United States where plastic bags require a fee or have been banned, fewer bags end up at the water's edge, according to research published Thursday in Science.
Lightweight and abundant, thin plastic bags often slip out of trash cans and recycling bins, travel in the wind and end up in bodies of water, where they pose serious risks to wildlife, which can become entangled or ingest them. They also break down into harmful microplastics, which have been found nearly everywhere on Earth.
Using data compiled by the nonprofit Ocean Conservancy, researchers analyzed results from 45,067 shoreline cleanups between 2016 and 2023, along with a sample of 182 local and state policies enacted to regulate plastic shopping bags between 2017 and 2023.
They found that areas that adopted plastic bag policies saw a 25 to 47 percent reduction in the share of plastic bag litter on shorelines compared with areas without such policies. The longer a policy was in place, the greater the reduction.
'These policies are effective, especially in areas with high concentrations of plastic litter,' said Anna Papp, one of the authors and an environmental economist and postdoctoral associate at the Massachusetts Institute of Technology.
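To make the study's outcome measure concrete, here is a minimal Python sketch of what a "share of plastic bag litter" comparison might look like; the cleanup records, field layout and numbers below are invented for illustration and are not the study's data or code, which drew on tens of thousands of cleanups and controlled comparisons.

```python
# Toy illustration of the outcome measure described above: the share of
# logged litter items that are plastic bags, compared between cleanups in
# areas with and without a bag policy. All records are invented for
# illustration; they are not Ocean Conservancy data.

# Each record: (has_bag_policy, plastic_bag_items, total_items_logged)
cleanups = [
    (False, 120, 1000),
    (False, 90, 800),
    (True, 70, 900),
    (True, 85, 1100),
]

def bag_share(records):
    """Pooled share of plastic-bag items across a group of cleanups."""
    bags = sum(r[1] for r in records)
    total = sum(r[2] for r in records)
    return bags / total

policy = bag_share([r for r in cleanups if r[0]])
no_policy = bag_share([r for r in cleanups if not r[0]])

# Relative reduction in the bag share between the two groups. The study's
# 25 to 47 percent figure is a relative difference of this kind, though it
# was estimated with controls for time and location rather than a raw
# pooled comparison like this one.
reduction = (no_policy - policy) / no_policy
print(f"no policy: {no_policy:.1%}, policy: {policy:.1%}, reduction: {reduction:.0%}")
```

Run as written, this toy comparison prints a reduction of about 34 percent, inside the range the researchers report, though only because the invented numbers were chosen that way.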
Related Articles


Medscape
Stem Cell-Derived Islets Still Producing Insulin at 1 Year
CHICAGO — Ten people with type 1 diabetes who had recurrent severe hypoglycemia and hypoglycemic unawareness have remained insulin-independent for over a year following allogeneic stem cell-derived islet-cell therapy with immunosuppression, according to new phase 1/2 data from the multicenter FORWARD study sponsored by Vertex Pharmaceuticals.

The insulin-producing therapy VX-880, now named zimislecel, is delivered by infusion into the hepatic portal vein. A steroid-free immunosuppressive regimen is used, involving induction with antithymocyte globulin followed by maintenance with tacrolimus plus sirolimus.

"It's really exciting to have a consistent, scalable source of insulin-producing tissue," study investigator Michael R. Rickels, MD, of the University of Pennsylvania School of Medicine, Philadelphia, told Medscape Medical News.

Even with the need for immunosuppression, there are many patients who could benefit from cell therapy, including those experiencing severe hypoglycemia or having challenges with glycemic control, or those already immunosuppressed for an organ transplant, he said. "Having a product with reproducible efficacy and an established safety record will be important in testing new immunomodulatory approaches, and ultimately other approaches for immune evasion, whether that's through engineering or gene-editing types of approaches in the future," added Rickels.

The findings were presented on June 20 here at the American Diabetes Association (ADA) 85th Scientific Sessions and simultaneously published in the New England Journal of Medicine.

Asked for comment, Jeffrey R. Millman, PhD, a professor of medicine and biomedical engineering at Washington University School of Medicine, St. Louis, Missouri, who helped develop the technique for deriving islets from stem cells, told Medscape Medical News: "It's what we hoped, but seeing it actually happen is just amazing. No stem cell-based therapy has come close to what they've been able to accomplish."

But, Millman added, "It's still only going to be for a small portion [of people] with type 1 diabetes, which is why we need to have things like encapsulation or genetic engineering to avoid the immunosuppression part, to make it a therapy that's much more applicable to most or all people living with type 1 diabetes."

1-Year Data

The new data extend the findings reported at last year's ADA meeting and continue to demonstrate the feasibility of the therapy for people with type 1 diabetes in whom the risks of immunosuppression outweigh the benefits.

The 14 participants (5 men, 9 women) included in the analysis who completed 1 year of follow-up had a mean age of 43.6 years and a mean type 1 diabetes duration of 22.8 years. All had undetectable C-peptide at baseline, a mean A1c of 7.8%, and a mean total daily insulin dose of 39.3 units. All used continuous glucose monitors, 9 used insulin pumps, and 6 used automated insulin delivery systems. Despite the technology, study participants had had an average of 2.7 severe hypoglycemic episodes in the year prior to screening.

All participants showed engraftment after infusion, as detected by the appearance of C-peptide. Two patients received a half dose of zimislecel, and 12 received a full dose (0.8 × 10⁹ cells) in a single infusion.

At 1 year, none of the 14 patients had experienced severe hypoglycemia. All 12 who received the full dose were free of severe hypoglycemic events and had an A1c level below 7%. They also spent more than 70% of the time in the target glucose range (70-180 mg/dL), and 10 patients were insulin-independent at 365 days.

There were 14 adverse events, including diarrhea, headache, and nausea. Most were mild to moderate and attributed to the immune suppression. Neutropenia occurred in six participants. Two patients died, one from cryptococcal meningitis attributed to the immune suppression and one from severe dementia with agitation owing to the progression of preexisting neurocognitive impairment. The deaths resulted in a temporary pause of the research in early 2024.

Overcoming the Need for Immune Suppression

Millman said he is not optimistic about the potential of islet encapsulation techniques — several of which were discussed at the ADA meeting — to overcome the need for immune suppression. "Encapsulation is promising in the sense that it is relatively simple in concept and execution, but historically it's been very challenging," he said.

"The problem is that you need a certain amount of islets creating a certain amount of insulin to control blood sugars in an adult human. These cells have certain metabolic needs for glucose, for the oxygen that they breathe, and if you are encapsulating them, these cells are not able to rely on blood vessels to provide the nutrients and oxygen that they need," Millman explained.

He added, "There can potentially be more advanced ways of doing that that can overcome those barriers, but so far there hasn't been conclusive proof that that can be done in a way that translates to patients." Indeed, in March 2025 Vertex discontinued a phase 1/2 trial of an encapsulated islet product, VX-264, because of lack of efficacy.

Instead, Millman and others in the field are more optimistic about hypoimmune gene editing of the islets to avoid the need for immunosuppressant drugs. "There's been a lot of interesting scientific work coming out from both companies and academic labs with different ways of engineering cells to avoid immune destruction," he noted.

Although this research is still in its early stages, Millman pointed to upcoming programs, such as one announced by Sana Biotechnology, for which a 6-month update will be presented here at the ADA meeting on Monday. "I'm hoping that we can learn from that, similar to what we just learned here today from Vertex Pharmaceuticals, about the challenges and the promises of genetic engineering to avoid the need for immunosuppression."

Zimislecel will now be studied in a phase 3 trial, with a planned enrollment of 50 patients, to be completed by the end of summer 2025.

Rickels has reported being a consultant for Vertex Pharmaceuticals, Sernova, and Novo Nordisk, and receiving research support from Dompé and Tandem Diabetes Care. Millman has reported holding stock in and receiving research support from Sana Biotechnology.
Yahoo
Why is AI hallucinating more frequently, and how can we stop it?
The more advanced artificial intelligence (AI) gets, the more it "hallucinates," producing incorrect or fabricated information.

Research conducted by OpenAI found that its latest and most powerful reasoning models, o3 and o4-mini, hallucinated 33% and 48% of the time, respectively, when tested on OpenAI's PersonQA benchmark. That's more than double the rate of the older o1 model. While o3 delivers more accurate information than its predecessor, it appears to come at the cost of more inaccurate hallucinations.

This raises a concern over the accuracy and reliability of large language models (LLMs) such as AI chatbots, said Eleanor Watson, an Institute of Electrical and Electronics Engineers (IEEE) member and AI ethics engineer at Singularity University. "When a system outputs fabricated information — such as invented facts, citations or events — with the same fluency and coherence it uses for accurate content, it risks misleading users in subtle and consequential ways," Watson told Live Science.

The issue of hallucination highlights the need to carefully assess and supervise the information AI systems produce when using LLMs and reasoning models, experts say.

The crux of a reasoning model is that it can handle complex tasks by essentially breaking them down into individual components and coming up with solutions to tackle them. Rather than spitting out answers based on statistical probability, reasoning models come up with strategies to solve a problem, much like how humans think. In order to develop creative, and potentially novel, solutions to problems, AI needs to hallucinate — otherwise it's limited by the rigid data its LLM ingests.

"It's important to note that hallucination is a feature, not a bug, of AI," Sohrob Kazerounian, an AI researcher at Vectra AI, told Live Science. "To paraphrase a colleague of mine, 'Everything an LLM outputs is a hallucination. It's just that some of those hallucinations are true.' If an AI only generated verbatim outputs that it had seen during training, all of AI would reduce to a massive search problem."

"You would only be able to generate computer code that had been written before, find proteins and molecules whose properties had already been studied and described, and answer homework questions that had already been asked before. You would not, however, be able to ask the LLM to write the lyrics for a concept album focused on the AI singularity, blending the lyrical stylings of Snoop Dogg and Bob Dylan."

In effect, LLMs and the AI systems they power need to hallucinate in order to create, rather than simply serve up existing information. It is similar, conceptually, to the way that humans dream or imagine scenarios when conjuring new ideas.

However, AI hallucinations present a problem when it comes to delivering accurate and correct information, especially if users take the information at face value without any checks or oversight. "This is especially problematic in domains where decisions depend on factual precision, like medicine, law or finance," Watson said. "While more advanced models may reduce the frequency of obvious factual mistakes, the issue persists in more subtle forms.
Over time, confabulation erodes the perception of AI systems as trustworthy instruments and can produce material harms when unverified content is acted upon."

And this problem looks likely to worsen as AI advances. "As model capabilities improve, errors often become less overt but more difficult to detect," Watson noted. "Fabricated content is increasingly embedded within plausible narratives and coherent reasoning chains. This introduces a particular risk: users may be unaware that errors are present and may treat outputs as definitive when they are not. The problem shifts from filtering out crude errors to identifying subtle distortions that may only reveal themselves under close scrutiny."

Kazerounian backed this viewpoint up. "Despite the general belief that the problem of AI hallucination can and will get better over time, it appears that the most recent generation of advanced reasoning models may have actually begun to hallucinate more than their simpler counterparts — and there are no agreed-upon explanations for why this is," he said.

The situation is further complicated because it can be very difficult to ascertain how LLMs come up with their answers; a parallel could be drawn here with how we still don't really know, comprehensively, how a human brain works.

In a recent essay, Dario Amodei, the CEO of AI company Anthropic, highlighted a lack of understanding of how AIs come up with answers and information. "When a generative AI system does something, like summarize a financial document, we have no idea, at a specific or precise level, why it makes the choices it does — why it chooses certain words over others, or why it occasionally makes a mistake despite usually being accurate," he wrote.

The problems caused by AI hallucinating inaccurate information are already very real, Kazerounian noted. "There is no universal, verifiable way to get an LLM to correctly answer questions being asked about some corpus of data it has access to," he said. "The examples of non-existent hallucinated references, customer-facing chatbots making up company policy, and so on, are now all too common."

Both Kazerounian and Watson told Live Science that, ultimately, AI hallucinations may be difficult to eliminate. But there could be ways to mitigate the issue.

Watson suggested that "retrieval-augmented generation," which grounds a model's outputs in curated external knowledge sources, could help ensure that AI-produced information is anchored by verifiable data.

"Another approach involves introducing structure into the model's reasoning. By prompting it to check its own outputs, compare different perspectives, or follow logical steps, scaffolded reasoning frameworks reduce the risk of unconstrained speculation and improve consistency," Watson said, noting this could be aided by training that shapes a model to prioritize accuracy, and by reinforcement training from human or AI evaluators to encourage an LLM to deliver more disciplined, grounded responses.

"Finally, systems can be designed to recognise their own uncertainty. Rather than defaulting to confident answers, models can be taught to flag when they're unsure or to defer to human judgement when appropriate," Watson added.
"While these strategies don't eliminate the risk of confabulation entirely, they offer a practical path forward to make AI outputs more reliable." Given that AI hallucination may be nearly impossible to eliminate, especially in advanced models, Kazerounian concluded that ultimately the information that LLMs produce will need to be treated with the "same skepticism we reserve for human counterparts."
Yahoo
An explosion of sea urchins threatens to push coral reefs in Hawaii ‘past the point of recovery'
The turquoise water of Hōnaunau Bay in Hawaii, an area popular with snorkelers and divers, is teeming with spiny creatures that threaten to push the coral reef 'past the point of recovery,' new research has found.

Sea urchin numbers here are exploding as the fish species that typically keep their populations in check decline due to overfishing, according to the study, published last month in the journal PLOS ONE. It's yet another blow to a reef already suffering damage from pollution as well as climate change-driven ocean heat waves and sea level rise.

Kelly J. van Woesik, a researcher at the North Carolina State University Center for Geospatial Analytics and a study author, first noticed unusually high numbers of sea urchins on snorkeling trips. 'I knew there was a story to be told,' she said.

She and her fellow researchers used data from scuba surveys and images taken from the air to track the health of the reef. 'We found on average 51 urchins per square meter, which is among the highest recorded densities on coral reefs anywhere in the world,' van Woesik said.

Sea urchins are small marine invertebrates, characterized by their spiny bodies and found in oceans around the world. They play a useful role in preventing algae overgrowth, which can choke off oxygen to coral. However, they also eat the reef, and too many of them can cause damaging erosion.

In Hōnaunau Bay, the coral is already struggling to reproduce and grow due to ocean heat and water pollution, leaving it even more vulnerable to the erosion inflicted by sea urchins. Its rate of growth has plummeted, according to the study.

Reef growth is typically measured by the amount of calcium carbonate — the substance that forms coral skeletons — the reef produces per square meter each year. By that measure, the reef in Hōnaunau Bay is growing 30 times more slowly than it did four decades ago. Production levels of around 15 kilograms (33 pounds) per square meter in parts of Hawaii signaled a healthy reef, according to research in the 1980s. Today, the reef in Hōnaunau Bay produces just 0.5 kg (1.1 pounds) per square meter. To offset erosion from urchins, at least 26% of the reef surface must be covered by living corals – and even more coral cover is necessary for it to grow.

Gregory Asner, an ecologist at Arizona State University and study author, said what was happening in this part of Hawaii was emblematic of the mounting pressures facing reefs throughout the region. 'For 27 years I have worked in Hōnaunau Bay and other bays like it across Hawaii, but Hōnaunau stood out early on as an iconic example of a reef threatened by a combination of pressures,' he said, citing warming ocean temperatures, pollution from tourism and heavy fishing.

The implications of coral decline are far-reaching. Coral reefs are sometimes dubbed the 'rainforests of the sea' because they support so much ocean life. They also play a vital role protecting coastlines from storm surges and erosion. 'If the reef can't keep up with sea-level rise, it loses its ability to limit incoming wave energy,' said van Woesik. 'That increases erosion and flooding risk of coastal communities.'

Kiho Kim, an environmental science professor at American University, who was not involved in the study, said the findings highlight the fragility of reef ecosystems under stress. 'Dramatic increases in any species indicate an unusual condition that has allowed them to proliferate,' Kim said.
That imbalance can undermine diversity and reduce the reef's ability to provide essential ecosystem services, including food security and carbon storage, he told CNN.

Despite the challenges, researchers emphasize that the reef's future is not sealed. Local groups in Hōnaunau are working to reduce fishing pressure, improve water quality and support coral restoration. 'These reefs are essential to protecting the islands they surround,' van Woesik said. 'Without action taken now, we risk allowing these reefs to erode past the point of no return.'