Science news this week: 'Dragon Man's' identity and the universe's 'missing matter'


Yahoo · 7 hours ago

When you buy through links on our articles, Future and its syndication partners may earn a commission.
This week's science news reveals the identity of the mysterious "Dragon Man," while also finding clues to the universe's "missing matter."
In 1933, a Chinese laborer in Harbin City discovered a human-like skull with a huge cranium, broad nose and big eyes. Just under 90 years later, experts gave this curious specimen a new species name — Homo longi, or "Dragon Man" — due to its unusual shape and size. But this classification has not gone unchallenged, with many scientists saying this skull belongs not to a new species, but instead to an ancient group of humans called Denisovans. Now, a pair of new studies claim to have finally put the mystery to bed.
Another mystery that we came one step closer to solving this week is where the universe's "missing" matter is hiding. Ordinary or "baryonic" matter, which is composed of particles like protons and neutrons, makes up just 5% of the universe, but scientists have been able to observe only about half as much of it as they expected. To find the missing matter, researchers search for clues by studying short, extragalactic flashes known as fast radio bursts, which light up the intergalactic space between them and Earth — and they may have just found some.
Although very few long-term studies of psilocybin — the main psychoactive ingredient in magic mushrooms — as a treatment for depression have been conducted to date, new research presented this week at the Psychedelic Science 2025 conference suggests it can alleviate depression for at least five years after a single dose.
The researchers found that 67% of study participants who had suffered from depression half a decade earlier remained in remission after a single psychedelic therapy session, while also reporting less anxiety and less difficulty functioning on a daily basis.
Discover more health news
—Iron deficiency in pregnancy can cause 'male' mice to develop female organs
—The brain might have a hidden 'off switch' for binge drinking
—Ketamine may treat depression by 'flattening the brain's hierarchies,' small study suggests
The world is awash with the color purple — lavender flowers, amethyst gemstones, plums, eggplants and purple emperor butterflies. But if you look closely at the visible-light portion of the electromagnetic spectrum, you'll notice that purple is absent. So does that mean the color doesn't really exist? Not necessarily.
—If you enjoyed this, sign up for our Life's Little Mysteries newsletter
Asking artificial intelligence reasoning models questions on topics like algebra or philosophy caused carbon dioxide emissions to spike significantly.
Specialized large language models (LLMs), such as Anthropic's Claude, OpenAI's o3 and DeepSeek's R1, dedicate more time and computing power to producing more accurate responses than their predecessors, but a new study finds the cost could come at up to 50 times more carbon emissions than their more basic equivalents.
While the study's findings aren't definitive — emissions may vary depending on the hardware used and the energy grids used to supply their power — the researchers hope their work should prompt AI users to think before deploying the more advanced technology.
Read more planet technology news
—This EV battery fully recharges in just 18 seconds — and it just got the green light for mass production
—Hurricanes and sandstorms can be forecast 5,000 times faster thanks to new Microsoft AI model
—China pits rival humanoids against each other in world's first 'robot boxing tournament'
—14,000-year-old ice age 'puppies' were actually wolf sisters that dined on woolly rhino for last meal
—Nobel laureate raises questions about AI-generated image of black hole spinning at the heart of our galaxy
—Enslaved Africans led a decade-long rebellion 1,200 years ago in Iraq, new evidence suggests
—Covering poop lagoons with a tarp could cut 80% of methane emissions from dairy farms
—Satellite coated in ultra-dark 'Vantablack' paint will launch into space next year to help combat major issue
The Colorado River snakes through seven U.S. and two Mexican states, and supplies some 40 million people, including those in Phoenix and Las Vegas, with their water needs. But as supplies of this surface water reach record lows, more and more people have been pumping groundwater from far below the surface.
Stark new satellite data reveal that the Colorado River basin has lost huge amounts of groundwater over the last few decades, with some research suggesting that this groundwater could run out by the end of the century. But is that really the case? And if so, what could be done to prevent that happening?
—How to see the groundbreaking space photos from the world's largest camera [Astronomy]
—Instead of 'de-extincting' dire wolves, scientists should use gene editing to protect living, endangered species [Opinion]
—Crows: Facts about the clever birds that live all over the world [Fact file]
—Best thermal binoculars: Observe nocturnal wildlife after dark [Buying guide]
—Watch David Attenborough's 'Ocean' from anywhere in the world with this NordVPN deal — and grab an Amazon voucher just in time for Prime Day [Deal]
A massive eruption at Indonesia's Mount Lewotobi Laki-laki volcano sent giant plumes of ash spewing more than 6 miles (10 kilometers) into the skies on Tuesday (June 17), followed by a second eruption just a day later.
This incredible mushroom-shaped cloud could be seen over 95 miles (150 km) away, and was accompanied by rumbling, lightning and thunder, typical of explosive eruptions that spew enormous amounts of material — much of which showered over nearby villages.
Warning signs at Lewotobi Laki-laki prompted officials to raise the eruption alert to the highest level on Tuesday, according to a statement. Fortunately, at the time of writing, there have been no reports of casualties.
Want more science news? Follow our Live Science WhatsApp Channel for the latest discoveries as they happen. It's the best way to get our expert reporting on the go, but if you don't use WhatsApp we're also on Facebook, X (formerly Twitter), Flipboard, Instagram, TikTok, Bluesky and LinkedIn.



Related Articles

Why is AI hallucinating more frequently, and how can we stop it?

Yahoo · an hour ago

The more advanced artificial intelligence (AI) gets, the more it "hallucinates" and provides incorrect and inaccurate information.

Research conducted by OpenAI found that its latest and most powerful reasoning models, o3 and o4-mini, hallucinated 33% and 48% of the time, respectively, when tested by OpenAI's PersonQA benchmark. That's more than double the rate of the older o1 model. While o3 delivers more accurate information than its predecessor, that accuracy appears to come at the cost of more frequent hallucinations.

This raises a concern over the accuracy and reliability of large language models (LLMs) such as AI chatbots, said Eleanor Watson, an Institute of Electrical and Electronics Engineers (IEEE) member and AI ethics engineer at Singularity University. "When a system outputs fabricated information — such as invented facts, citations or events — with the same fluency and coherence it uses for accurate content, it risks misleading users in subtle and consequential ways," Watson told Live Science.

Related: Cutting-edge AI models from OpenAI and DeepSeek undergo 'complete collapse' when problems get too difficult, study reveals

The issue of hallucination highlights the need to carefully assess and supervise the information AI systems produce when using LLMs and reasoning models, experts say.

The crux of a reasoning model is that it can handle complex tasks by essentially breaking them down into individual components and coming up with solutions to tackle them. Rather than seeking to kick out answers based on statistical probability, reasoning models come up with strategies to solve a problem, much like how humans think. In order to develop creative, and potentially novel, solutions to problems, AI needs to hallucinate — otherwise it's limited by the rigid data its LLM ingests.
"It's important to note that hallucination is a feature, not a bug, of AI," Sohrob Kazerounian, an AI researcher at Vectra AI, told Live Science. "To paraphrase a colleague of mine, 'Everything an LLM outputs is a hallucination. It's just that some of those hallucinations are true.' If an AI only generated verbatim outputs that it had seen during training, all of AI would reduce to a massive search problem."

"You would only be able to generate computer code that had been written before, find proteins and molecules whose properties had already been studied and described, and answer homework questions that had already been asked before. You would not, however, be able to ask the LLM to write the lyrics for a concept album focused on the AI singularity, blending the lyrical stylings of Snoop Dogg and Bob Dylan."

In effect, LLMs and the AI systems they power need to hallucinate in order to create, rather than simply serve up existing information. It is similar, conceptually, to the way that humans dream or imagine scenarios when conjuring new ideas.

However, AI hallucinations present a problem when it comes to delivering accurate and correct information, especially if users take the information at face value without any checks or oversight.

"This is especially problematic in domains where decisions depend on factual precision, like medicine, law or finance," Watson said. "While more advanced models may reduce the frequency of obvious factual mistakes, the issue persists in more subtle forms. Over time, confabulation erodes the perception of AI systems as trustworthy instruments and can produce material harms when unverified content is acted upon."

And this problem looks set to be exacerbated as AI advances. "As model capabilities improve, errors often become less overt but more difficult to detect," Watson noted. "Fabricated content is increasingly embedded within plausible narratives and coherent reasoning chains.
This introduces a particular risk: users may be unaware that errors are present and may treat outputs as definitive when they are not. The problem shifts from filtering out crude errors to identifying subtle distortions that may only reveal themselves under close scrutiny."

Kazerounian backed this viewpoint up. "Despite the general belief that the problem of AI hallucination can and will get better over time, it appears that the most recent generation of advanced reasoning models may have actually begun to hallucinate more than their simpler counterparts — and there are no agreed-upon explanations for why this is," he said.

The situation is further complicated because it can be very difficult to ascertain how LLMs come up with their answers; a parallel could be drawn here with how we still don't really know, comprehensively, how a human brain works.

In a recent essay, Dario Amodei, the CEO of AI company Anthropic, highlighted a lack of understanding of how AIs come up with answers and information. "When a generative AI system does something, like summarize a financial document, we have no idea, at a specific or precise level, why it makes the choices it does — why it chooses certain words over others, or why it occasionally makes a mistake despite usually being accurate," he wrote.

The problems caused by AI hallucinating inaccurate information are already very real, Kazerounian noted. "There is no universal, verifiable way to get an LLM to correctly answer questions being asked about some corpus of data it has access to," he said. "The examples of non-existent hallucinated references, customer-facing chatbots making up company policy, and so on, are now all too common."

Both Kazerounian and Watson told Live Science that, ultimately, AI hallucinations may be difficult to eliminate. But there could be ways to mitigate the issue.
Watson suggested that "retrieval-augmented generation," which grounds a model's outputs in curated external knowledge sources, could help ensure that AI-produced information is anchored in verifiable data.

"Another approach involves introducing structure into the model's reasoning. By prompting it to check its own outputs, compare different perspectives, or follow logical steps, scaffolded reasoning frameworks reduce the risk of unconstrained speculation and improve consistency," Watson said, noting this could be aided by training that shapes a model to prioritize accuracy, and by reinforcement training from human or AI evaluators to encourage an LLM to deliver more disciplined, grounded responses.

RELATED STORIES
—AI benchmarking platform is helping top companies rig their model performances, study claims
—AI can handle tasks twice as complex every few months. What does this exponential growth mean for how we use it?
—What is the Turing test? How the rise of generative AI may have broken the famous imitation game

"Finally, systems can be designed to recognise their own uncertainty. Rather than defaulting to confident answers, models can be taught to flag when they're unsure or to defer to human judgement when appropriate," Watson added. "While these strategies don't eliminate the risk of confabulation entirely, they offer a practical path forward to make AI outputs more reliable."

Given that AI hallucination may be nearly impossible to eliminate, especially in advanced models, Kazerounian concluded that the information LLMs produce will ultimately need to be treated with the "same skepticism we reserve for human counterparts."

My Chinese mom's timeless health tips — including a 'magic' go-to and a plant that helps with cramps, colds, and upset stomachs

New York Post · 3 hours ago

Modern health trends come and go, but the real secret to feeling your best may lie in ancient practices you can do right at home.

'So many of my wellness roots trace back to my mom's kitchen. She always believed that food is medicine and that healing starts long before you're sick,' Lulu Ge, founder of Elix, a wellness brand inspired by Traditional Chinese Medicine (TCM), told The Post.

Ge shared three of her mother's time-tested food tips, plus two bonus remedies, to help you spend less time in the doctor's office and more time enjoying your life.

[Photo: Lulu Ge notes that traditional Chinese medicine focuses on whole-body wellness. Courtesy of Elix Healing]

Keep it warm to beat the cramps

Ladies, listen up: 'Eat warming foods for a warm, pain-free womb,' Ge said. That means ditching iced drinks, especially during your period.

In TCM, the menstrual cycle is believed to be closely linked to the flow of 'Qi,' or vital energy, and blood. Ge's mother warned that getting chilled can cause stagnation, leading to painful cramps, irregular periods and trouble shedding the uterine lining.

The remedy? Load up on warming spices and cooked foods, while avoiding cold and raw dishes, to keep your blood flowing and stay pain-free.

Ginger to the rescue

Used in Chinese and Indian medicine for thousands of years, ginger comes from the root of the Zingiber officinale plant.

[Photo: Ginger can reduce bloating and support digestion. Luis Echeverri Urrea]

'It's a warming spice shown to help with digestive issues, nausea, and bloating,' Ge said.

Here's how it works: ginger speeds up the rate at which food leaves the stomach, which helps those with delayed stomach emptying — a common cause of nausea. It also reduces fermentation, constipation and other causes of bloating and gas. Additionally, ginger contains gingerol, a compound with powerful antioxidant and anti-inflammatory effects. These help reduce inflammation in the digestive tract, easing stomach pain and cramps.
Whenever Ge had cramps, a cold, or an upset stomach, her mother would simmer fresh ginger into tea and insist she sip it slowly. 'Now, I keep Elix's Ginger Aide with me everywhere as a nod to that tradition,' she said. The product is a pure concentrate of decocted organic ginger slices that allows users 'to feel the effects of plant medicine in a gentle, daily ritual.'

Broth that heals

Bone broth isn't just soup — it's liquid gold. Made by simmering animal bones for hours, it's packed with collagen, minerals and amino acids.

[Photo: Bone broth is made from animal bones and connective tissue, typically cattle, chicken or fish. qwartm]

'Bone broth = magic,' Ge said. 'It was the go-to for recovery — after illness or just when life felt depleting.'

In TCM, bone broth is a powerhouse elixir that boosts Qi, blood and 'Yin' energy, which is responsible for providing the body with the moisture it needs to function properly. When yin is low, you may experience symptoms such as dry skin, night sweats, constipation and anxiety. Qi deficiency often manifests as fatigue, weakness, shortness of breath and loss of appetite.

In TCM, bone broth is also often used to nourish postpartum mothers, giving them the nutrients and energy needed to recover from childbirth and support lactation. Ge said her mother always paired bone broth with goji berries, red dates and, of course, a dash of love.

The power of rest

In TCM, wellness isn't just about what's on your plate. 'My mom made sure I prioritized sleep and restorative 'Yin' time for rest and relaxation,' Ge said, noting this is especially crucial during your period, when you're wiped out or feeling on the verge of getting sick.

'She saw rest as a form of healing — not a luxury, but a necessity. It helps you bounce back with fresh energy and focus,' Ge added.

[Photo: About 84 million Americans don't consistently get the recommended amount of sleep for optimal health. Syda Productions]

In the US, about 1 in 3 adults regularly miss out on the sleep they need to stay healthy, according to a 2022 Gallup poll. Adequate sleep is essential for physical and mental well-being, enabling the brain and body to undergo critical repair and restoration processes. These include muscle recovery, tissue growth and hormone regulation, as well as the removal of toxins, support of immune function and memory consolidation.

Skip the pills — try this first

'Anytime I had pain, her instinct was to reach for a warming balm, acupressure point or herbal patch first,' Ge said. 'She trusted the body could heal — with the right support.'

[Photo: Ge recommends blending Eastern and Western approaches for optimal health. Courtesy of Elix Healing]

TCM treats the whole person, aiming to fix root problems, not just mask symptoms like Western medicine often does. Herbal patches and balms work by delivering healing directly through the skin to sore spots, cutting down on systemic side effects and targeting the pain where it hurts most.

In TCM, your skin is also thought to mirror your inner health. These external remedies help strengthen and repair it, making you tougher against the daily grind. Plus, balms and patches team up with internal herbs for a one-two punch, tackling symptoms and restoring balance throughout the body's energy system.

All these tips can boost your health — but don't ditch Western medicine just yet. 'The real power lies in blending Eastern and Western medicine,' Ge said. 'Western tools for acute issues and diagnostics, TCM for long-term support, prevention and personalized care. Together, they offer something truly holistic,' she explained.

Science recap: This week's discoveries include new clues from the fossil skull of a mysterious human species

Yahoo · 3 hours ago

Editor's note: A version of this story appeared in CNN's Wonder Theory science newsletter. To get it in your inbox, sign up for free here.

Tens of thousands of years ago, our species — Homo sapiens — mingled and interbred with other prehistoric humans: our distant cousins, the Neanderthals and Denisovans.

Hundreds of Neanderthal fossils give us a good idea of their appearance, lives and relationships, but so little is known about Denisovans that they still don't have an official scientific name. Evidence of their existence has surfaced in faint traces, mapped by DNA markers that lurk in our own genetic makeup and confirmed by only a few fossil fragments.

This week, however, a 146,000-year-old skull dredged out of a well in China in 2018 may just be a key missing piece of this cryptic evolutionary puzzle. The nearly complete skull did not match any previously known species of prehistoric human. But two new studies — which researchers say are among the biggest paleoanthropology papers of the year — detail how scientists were able to extract genetic material from the fossil and help unravel this biological mystery.

The DNA sample taken from 'Dragon Man,' as the specimen is called, revealed that he was in fact related to Denisovans, early humans who are thought to have lived between roughly 500,000 and 30,000 years ago. The finding could be monumental, helping to paint a fuller picture of a time when our own species coexisted with other prehistoric humans.

Astronomers have long grappled with the quandary of 'dark matter,' but plenty of enigmas surround regular matter as well. The proton-and-neutron-based atoms that we're familiar with are called baryonic matter. This material is strewn between galaxies like intergalactic fog, making it extremely difficult to measure. Perhaps, that is, until now. A new study explains how scientists were able to observe the baryonic matter using the flashing of fast radio bursts.
In a rare encounter, scientists have captured the first-ever footage of an elusive 3-foot-long squid alive in its deep-sea habitat.

Fruit, flowers, birds and musical instruments decorated the walls of a luxury villa — part of a site the excavation team dubbed the 'Beverly Hills' of Roman Britain — before the building was razed roughly 1,800 years ago. The frescoes were painstakingly pieced together by experts from the Museum of London Archaeology. Han Li, senior building material specialist at MOLA, described the effort as a 'once in a lifetime' opportunity. Romans invaded modern-day Britain in AD 43 and established Londinium, the precursor to modern London. The occupation lasted for almost 400 years.

Under the life-affirming glow of the sun, methane is a dangerous gas to be avoided. A heat-trapping chemical pollutant in Earth's atmosphere, methane exacerbates the climate crisis. But within the planet's deep recesses — thousands of feet below the ocean's surface off the US West Coast — the gas can be transformed into a nutritious meal. At least for spiders.

Scientists say they've discovered three previously unknown species of sea spider living around methane seeps. In these marine habitats where sunlight can't reach, gas escapes through cracks in the seafloor and feeds bacteria that latch on to the spiders' exoskeletons. The bacteria convert carbon-rich methane and oxygen into sugars and fats the spiders can eat, according to a new study. The newfound Sericosura sea spiders may pass methane-fueled bacteria to their hatchlings as an easy source of food, the researchers suggest.

Check out these other must-read science stories from the week:

— A SpaceX Starship rocket exploded during a routine ground test on Wednesday. Explore how this and other recent setbacks may affect the company's Mars ambitions.

— A tiny brown moth in Australia migrates some 600 miles at night using the stars for navigation — something only humans and birds were known to do before.
— A hunt for ghostly cosmic particles found anomalous signals coming from Antarctic ice. A new detector could help scientists explain what they are.

— Researchers used DNA to reconstruct the face of a prehistoric woman who lived around 10,500 years ago in what's now Belgium, suggesting that skin color already varied considerably among different populations.

Like what you've read? Oh, but there's more. Sign up here to receive in your inbox the next edition of Wonder Theory, brought to you by CNN Space and Science writers Ashley Strickland, Katie Hunt and Jackie Wattles. They find wonder in planets beyond our solar system and discoveries from the ancient world.
