Lab-grown meat goes on sale in UK dog food


Yahoo · 09-02-2025

Dog food made from meat that was grown in factory vats has gone on sale in the UK.
Supplier Meatly said the "chick bites" were the first pet food products made from cultivated meat to be sold commercially anywhere in the world.
It said the technology could eventually "eliminate farm animals from the pet food industry" and reduce carbon emissions as well as the use of land and water in meat production.
A trial of the dog treats began at a pet store in Brentford, London, on Friday.
Owen Ensor, who founded London-based Meatly in 2022, said the manufacturing process was similar to brewing beer.
He said: "You take cells from a single chicken egg. From that we can create an infinite amount of meat for evermore.
"We put it in large, steel fermenters... and after a week we're able to harvest healthy, delicious chicken for our pets."
Lab-grown meat, which is genetically indistinguishable from traditionally produced meat, has proved a divisive topic in some countries.
In 2020, Singapore became the first country to authorise the sale of cell-cultivated meat for human consumption, followed by the United States three years later.
However, Italy and the US states of Alabama and Florida have instituted bans.
Advocates point to environmental benefits, while critics say cultivated meat is expensive and could harm farming.
Prof Guy Poppy, from the University of Bristol, said it addressed concerns over animal welfare.
The former chief scientific adviser at the government's Food Standards Agency added: "This is an opportunity to offer the advantages of meat but without the carbon and environmental footprint."
Prof Andrew Knight, from the University of Winchester, said: "About 20% of all the meat that is consumed by high pet-owning nations - and that would include the United Kingdom - is actually consumed by pets not people."
At a Bristol pet store and cafe, dog owners interviewed by the BBC had mixed views.
Charlotte Bloyce said her pet's carbon footprint was worth considering, while Allie Betts said she would not eat lab-grown meat herself and was reluctant to feed it to her dog.
The British Veterinary Association told the BBC it wanted more research on the safety and sustainability of cultivated meat.
Mr Ensor said he could understand people being "a little bit squeamish" about the product.
However, he said it was approved by food regulators and did not contain hormones, steroids or other chemicals sometimes found in meat.
Meatly's chief executive said the product had become much more commercially viable.
He acknowledged: "Currently it is expensive but we've made great strides bringing down the cost dramatically over the last two years and are going to continue to do so."


Related Articles

Why nightmares could make you age faster and die sooner

Yahoo · 2 hours ago

Frequent nightmares are linked to premature ageing and an increased risk of an early death, according to a new study. Adults who report weekly nightmares are more than three times as likely to die before the age of 70 as those who rarely or never experience them, researchers found. The study found nightmares to be a 'stronger predictor of premature death' than smoking, obesity, poor diet, and low physical activity. The scientists warned the findings should be treated as a 'public health concern', but said people can reduce nightmares by managing stress.

The team, led by Dr Abidemi Otaiku of the UK Dementia Research Institute and Imperial College London, analysed data from 2,429 children aged eight to 10 and 183,012 adults aged 26 to 86 over a period of 19 years. The research, presented at the European Academy of Neurology (EAN) Congress this month, found that nightmares disrupt both sleep quality and duration, impairing the body's overnight cellular restoration and repair. The combined impact of chronic stress and disrupted sleep is likely to contribute to the accelerated ageing of our cells and bodies.

Dr Otaiku said: 'Our sleeping brains cannot distinguish dreams from reality. That's why nightmares often wake us up sweating, gasping for breath, and with our hearts pounding – because our fight-or-flight response has been triggered. This stress reaction can be even more intense than anything we experience while awake.'

He said: 'Nightmares lead to prolonged elevations of cortisol, a stress hormone closely linked to faster cellular ageing. For those who frequently experience nightmares, this cumulative stress may significantly impact the ageing process.' He added: 'Given how common and modifiable nightmares are, they should be taken far more seriously as a public health concern.'

Researchers found that children and adults who had frequent nightmares also exhibited faster ageing, which accounted for approximately 40 per cent of their increased risk of early death. Dr Otaiku said this was the first study to show that nightmares can predict faster biological ageing and earlier mortality, even after accounting for other health issues. Even monthly nightmares were linked to faster ageing and increased mortality compared with having no nightmares, and the links were consistent across all ages, sexes, ethnicities, and mental health statuses.

'The good news is that nightmares can be prevented and treated,' said Dr Otaiku. Simple measures, such as maintaining good sleep hygiene, managing stress, seeking treatment for anxiety or depression, and not watching scary films, can be effective in reducing nightmares, he said.

Nationwide recall issued for popular chocolate brand that contains potentially 'life-threatening' ingredient

New York Post · 20 hours ago

A popular chocolate treat is being pulled from shelves nationwide over an ingredient that may trigger severe, and potentially deadly, allergic reactions, federal officials warned. An urgent recall was issued after Lipari Foods discovered that its 14-ounce packages of JLM Branded Dark Chocolate Nonpareils may contain undeclared milk, the US Food and Drug Administration (FDA) announced Friday. Those with milk allergies are urged to avoid consuming the candy.

'People who have allergies to milk run the risk of serious or life-threatening allergic reactions if they consume these products,' the bulletin stated.

The Michigan-based company initiated the recall after its distributor, Weave Nut Company, alerted it that the candy may contain the dairy allergen, which was not disclosed on the packaging. But the sprinkle-topped chocolate discs, sold in clear plastic tubs, had already made their way to retailers across the country. The recall targets packaging with lot codes 28202501A, 29202501A, 23202504A, 14202505A, 15202505A, and 03202506A on the bottom label.

No illnesses or adverse reactions have been reported in connection with the recall. The FDA advised customers to return the product to the place of purchase for a full refund.

Encountered a problematic response from an AI model? More standards and tests are needed, say researchers

CNBC · 20 hours ago

As the use of artificial intelligence, both benign and adversarial, increases at breakneck speed, more cases of potentially harmful responses are being uncovered. These include hate speech, copyright infringement and sexual content. The emergence of these undesirable behaviors is compounded by a lack of regulation and insufficient testing of AI models, researchers told CNBC.

Getting machine learning models to behave the way they are intended to is also a tall order, said Javier Rando, a researcher in AI. "The answer, after almost 15 years of research, is, no, we don't know how to do this, and it doesn't look like we are getting better," Rando, who focuses on adversarial machine learning, told CNBC.

However, there are some ways to evaluate risks in AI, such as red teaming. The practice involves individuals testing and probing artificial intelligence systems to uncover and identify potential harms, a modus operandi common in cybersecurity circles. Shayne Longpre, a researcher in AI and policy and lead of the Data Provenance Initiative, noted that there are currently too few people working in red teams.

While AI startups now use first-party evaluators or contracted second parties to test their models, opening testing to third parties such as everyday users, journalists, researchers and ethical hackers would lead to more robust evaluation, according to a paper published by Longpre and fellow researchers. "Some of the flaws in the systems that people were finding required lawyers, medical doctors to actually vet, actual scientists who are specialized subject matter experts to figure out if this was a flaw or not, because the common person probably couldn't or wouldn't have sufficient expertise," Longpre said.

Adopting standardized 'AI flaw' reports, incentives and ways to disseminate information on these flaws in AI systems are among the recommendations put forth in the paper. With this practice having been successfully adopted in other sectors such as software security, "we need that in AI now," Longpre added. Marrying this user-centred practice with governance, policy and other tools would ensure a better understanding of the risks posed by AI tools and their users, said Rando.

Project Moonshot is one such approach, combining technical solutions with policy mechanisms. Launched by Singapore's Infocomm Media Development Authority, Project Moonshot is a large language model evaluation toolkit developed with industry players such as IBM and Boston-based DataRobot. The toolkit integrates benchmarking, red teaming and testing baselines. There is also an evaluation mechanism that allows AI startups to ensure that their models can be trusted and do no harm to users, Anup Kumar, head of client engineering for data and AI at IBM Asia Pacific, told CNBC.

Evaluation is a continuous process that should be carried out both before and after the deployment of models, said Kumar, who noted that the response to the toolkit has been mixed. "A lot of startups took this as a platform because it was open source, and they started leveraging that. But I think, you know, we can do a lot more." Moving forward, Project Moonshot aims to include customization for specific industry use cases and enable multilingual and multicultural red teaming.

Pierre Alquier, professor of statistics at ESSEC Business School, Asia-Pacific, said that tech companies are currently rushing to release their latest AI models without proper evaluation. "When a pharmaceutical company designs a new drug, they need months of tests and very serious proof that it is useful and not harmful before they get approved by the government," he noted, adding that a similar process is in place in the aviation sector. AI models need to meet a strict set of conditions before they are approved, Alquier added.

A shift away from broad AI tools towards ones designed for more specific tasks would make it easier to anticipate and control their misuse, said Alquier. "LLMs can do too many things, but they are not targeted at tasks that are specific enough," he said. As a result, "the number of possible misuses is too big for the developers to anticipate all of them." Such broad models make defining what counts as safe and secure difficult, according to research Rando was involved in. Tech companies should therefore avoid overclaiming that "their defenses are better than they are," said Rando.
