'Artificial intelligence is not a miracle cure': Nobel laureate raises questions about AI-generated image of black hole spinning at the heart of our galaxy
The monster black hole at the center of our galaxy is spinning at near "top speed," according to a new artificial intelligence (AI) model. The model, trained partially on complex telescope data that was previously considered too noisy to be useful, aims to create the most detailed black hole images ever. However, given the questionable quality of that data, not all experts are convinced the AI model's results are accurate.
"I'm very sympathetic and interested in what they're doing," Reinhard Genzel, an astrophysicist at the Max Planck Institute for Extraterrestrial Physics in Germany and one of the winners of the 2020 Nobel Prize in physics, told Live Science. "But artificial intelligence is not a miracle cure."
For decades, scientists have been trying to observe and characterize Sagittarius A*, the supermassive black hole at the heart of our galaxy. In May 2022, they unveiled the first-ever image of this enormous object, but there were still a number of questions, such as how it behaves.
Now, an international team of scientists has attempted to harness the power of AI to glean more information about Sagittarius A* from data collected by the Event Horizon Telescope (EHT). Unlike some telescopes, the EHT doesn't reside in a single location. Rather, it is composed of several linked instruments scattered across the globe that work in tandem. The EHT uses long electromagnetic waves — with wavelengths of about a millimeter — to measure the radius of the ring of photons surrounding a black hole.
However, this technique, known as very long baseline interferometry, is very susceptible to interference from water vapor in Earth's atmosphere. This means it can be tough for researchers to make sense of the information the instruments collect.
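For a rough sense of why linking dishes across the planet gives such sharp vision, an interferometer's finest resolvable angle is roughly the observing wavelength divided by its longest baseline. The sketch below is a back-of-the-envelope illustration, not a calculation from the study; the ~1.3 mm wavelength and Earth-diameter baseline are typical EHT figures assumed here.

```python
# Back-of-the-envelope estimate: angular resolution ~ wavelength / baseline.
import math

wavelength_m = 1.3e-3        # ~1.3 mm, a typical EHT observing wavelength (assumption)
baseline_m = 12_742_000      # Earth's diameter: the longest possible ground-based baseline

resolution_rad = wavelength_m / baseline_m
resolution_uas = math.degrees(resolution_rad) * 3600 * 1e6   # radians -> microarcseconds

print(f"finest resolvable angle: ~{resolution_uas:.0f} microarcseconds")
```

That works out to roughly 20 microarcseconds, fine enough to resolve the roughly 50-microarcsecond ring of light around Sagittarius A*.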
"It is very difficult to deal with data from the Event Horizon Telescope," Michael Janssen, an astrophysicist at Radboud University in the Netherlands and co-author of the study, told Live Science. "A neural network is ideally suited to solve this problem."
Janssen and his team trained an AI model on EHT data that had been previously discarded for being too noisy. In other words, there was too much atmospheric static to decipher information using classical techniques.
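The team's actual pipeline is far more elaborate, but the general idea of letting a neural network learn to pull a physical parameter out of measurements too noisy for conventional fitting can be sketched with a toy example. Everything below is invented for illustration: the fake "measurement" features, the noise level, and the small network are assumptions, not the study's setup.

```python
# Toy illustration only: teach a small neural network to recover a "spin"
# value from noisy synthetic measurements. This is NOT the EHT pipeline.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

n_mocks, n_features = 4000, 32                 # hypothetical mock observations
spin = rng.uniform(0.0, 1.0, n_mocks)          # dimensionless spin in [0, 1]

# Each mock "measurement" varies smoothly with spin, then gets heavy noise
# added as a stand-in for atmospheric corruption of the real signal.
clean = np.sin(np.outer(spin, np.linspace(1.0, 5.0, n_features)))
noisy = clean + rng.normal(scale=0.5, size=clean.shape)

X_train, X_test, y_train, y_test = train_test_split(
    noisy, spin, test_size=0.2, random_state=0
)

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0)
net.fit(X_train, y_train)

error = np.mean(np.abs(net.predict(X_test) - y_test))
print(f"mean absolute spin error on held-out mocks: {error:.3f}")
```

The point of the toy is only that a network trained on examples with known answers can learn a mapping that still holds up when individual measurements are badly corrupted, which is the kind of problem Janssen describes.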
Through this AI technique, they generated a new image of Sagittarius A*'s structure, and their picture revealed some new features. For example, the black hole appears to be spinning at "almost top speed," the researchers said in a statement, and its rotational axis also seems to be pointing toward Earth. Their results were published this month in the journal Astronomy & Astrophysics.
Pinpointing the rotational speed of Sagittarius A* would give scientists clues about how radiation behaves around supermassive black holes and offer insight into the stability of the disk of matter around it.
However, not everyone is convinced that the new AI is totally accurate. According to Genzel, the relatively low quality of the data going into the model could have biased it in unexpected ways. As a result, the new image may be somewhat distorted, he said, and shouldn't be taken at face value.
In the future, Janssen and his team plan to apply their technique to the latest EHT data and measure it against real-world results. They hope this analysis will help to refine the model and improve future simulations.
Related Articles
Yahoo
Why is AI hallucinating more frequently, and how can we stop it?
The more advanced artificial intelligence (AI) gets, the more it "hallucinates" and provides incorrect and inaccurate information.

Research conducted by OpenAI found that its latest and most powerful reasoning models, o3 and o4-mini, hallucinated 33% and 48% of the time, respectively, when tested on OpenAI's PersonQA benchmark. That's more than double the rate of the older o1 model. While o3 delivers more accurate information than its predecessor, it appears to come at the cost of more inaccurate hallucinations.

This raises a concern over the accuracy and reliability of large language models (LLMs) such as AI chatbots, said Eleanor Watson, an Institute of Electrical and Electronics Engineers (IEEE) member and AI ethics engineer at Singularity University. "When a system outputs fabricated information — such as invented facts, citations or events — with the same fluency and coherence it uses for accurate content, it risks misleading users in subtle and consequential ways," Watson told Live Science.

The issue of hallucination highlights the need to carefully assess and supervise the information AI systems produce when using LLMs and reasoning models, experts say.

The crux of a reasoning model is that it can handle complex tasks by essentially breaking them down into individual components and coming up with solutions to tackle them. Rather than seeking to kick out answers based on statistical probability, reasoning models come up with strategies to solve a problem, much like how humans think. In order to develop creative, and potentially novel, solutions to problems, AI needs to hallucinate — otherwise it's limited by the rigid data its LLM ingests.

"It's important to note that hallucination is a feature, not a bug, of AI," Sohrob Kazerounian, an AI researcher at Vectra AI, told Live Science. "To paraphrase a colleague of mine, 'Everything an LLM outputs is a hallucination. It's just that some of those hallucinations are true.' If an AI only generated verbatim outputs that it had seen during training, all of AI would reduce to a massive search problem."

"You would only be able to generate computer code that had been written before, find proteins and molecules whose properties had already been studied and described, and answer homework questions that had already been asked before. You would not, however, be able to ask the LLM to write the lyrics for a concept album focused on the AI singularity, blending the lyrical stylings of Snoop Dogg and Bob Dylan."

In effect, LLMs and the AI systems they power need to hallucinate in order to create, rather than simply serve up existing information. It is similar, conceptually, to the way that humans dream or imagine scenarios when conjuring new ideas. However, AI hallucinations present a problem when it comes to delivering accurate and correct information, especially if users take the information at face value without any checks or oversight.

"This is especially problematic in domains where decisions depend on factual precision, like medicine, law or finance," Watson said.
"While more advanced models may reduce the frequency of obvious factual mistakes, the issue persists in more subtle forms. Over time, confabulation erodes the perception of AI systems as trustworthy instruments and can produce material harms when unverified content is acted upon."

And this problem looks set to be exacerbated as AI advances. "As model capabilities improve, errors often become less overt but more difficult to detect," Watson noted. "Fabricated content is increasingly embedded within plausible narratives and coherent reasoning chains. This introduces a particular risk: users may be unaware that errors are present and may treat outputs as definitive when they are not. The problem shifts from filtering out crude errors to identifying subtle distortions that may only reveal themselves under close scrutiny."

Kazerounian backed this viewpoint up. "Despite the general belief that the problem of AI hallucination can and will get better over time, it appears that the most recent generation of advanced reasoning models may have actually begun to hallucinate more than their simpler counterparts — and there are no agreed-upon explanations for why this is," he said.

The situation is further complicated because it can be very difficult to ascertain how LLMs come up with their answers; a parallel could be drawn here with how we still don't really know, comprehensively, how a human brain works. In a recent essay, Dario Amodei, the CEO of AI company Anthropic, highlighted a lack of understanding of how AIs come up with answers and information. "When a generative AI system does something, like summarize a financial document, we have no idea, at a specific or precise level, why it makes the choices it does — why it chooses certain words over others, or why it occasionally makes a mistake despite usually being accurate," he wrote.

The problems caused by AI hallucinating inaccurate information are already very real, Kazerounian noted. "There is no universal, verifiable way to get an LLM to correctly answer questions being asked about some corpus of data it has access to," he said. "The examples of non-existent hallucinated references, customer-facing chatbots making up company policy, and so on, are now all too common."

Both Kazerounian and Watson told Live Science that, ultimately, AI hallucinations may be difficult to eliminate. But there could be ways to mitigate the issue. Watson suggested that "retrieval-augmented generation," which grounds a model's outputs in curated external knowledge sources, could help ensure that AI-produced information is anchored by verifiable data (see the bare-bones sketch below).

"Another approach involves introducing structure into the model's reasoning. By prompting it to check its own outputs, compare different perspectives, or follow logical steps, scaffolded reasoning frameworks reduce the risk of unconstrained speculation and improve consistency," Watson said, noting that this could be aided by training that shapes a model to prioritize accuracy, and by reinforcement training from human or AI evaluators to encourage an LLM to deliver more disciplined, grounded responses.

"Finally, systems can be designed to recognise their own uncertainty. Rather than defaulting to confident answers, models can be taught to flag when they're unsure or to defer to human judgement when appropriate," Watson added.
"While these strategies don't eliminate the risk of confabulation entirely, they offer a practical path forward to make AI outputs more reliable." Given that AI hallucination may be nearly impossible to eliminate, especially in advanced models, Kazerounian concluded that ultimately the information that LLMs produce will need to be treated with the "same skepticism we reserve for human counterparts."


Washington Post
Judge blocks the Trump administration's National Science Foundation research funding cuts
BOSTON — A federal judge has blocked President Donald Trump's administration from making drastic cuts to research funding provided by the National Science Foundation.

U.S. District Judge Indira Talwani in Boston struck down on Friday a policy change that could have stripped universities of tens of millions of dollars in research funding. The universities argued the move threatened critical work in artificial intelligence, cybersecurity, semiconductors and other technology fields. Talwani said the change, announced by the NSF in May, was arbitrary and capricious and contrary to law. An email Saturday to the NSF was not immediately returned.

At issue are 'indirect' costs, expenses such as building maintenance and computer systems that aren't linked directly to a specific project. Currently, the NSF determines each grant recipient's indirect costs individually and is supposed to cover actual expenses. The Trump administration has dismissed indirect expenses as 'overhead' and capped them for future NSF awards to universities at 15% of the funding for direct research costs.

The University of California, one of the plaintiffs, estimated the change would cost it just under $100 million a year. Judges have blocked similar caps that the Trump administration placed on grants by the Energy Department and the National Institutes of Health.
Yahoo
Using the Ocean to Suck Up CO2 Could Come With the Small, Unintended Side Effect of Wiping Out Marine Life
As global temperatures soar and emissions remain higher than ever, scientists are exploring the dramatic, planet-wide interventions we could take to stave off the climate crisis. One of the most intriguing possibilities involves using the ocean, already the world's largest carbon sink, to suck up even more of the greenhouse gas by removing some of the carbon that it already stores.

Dozens of startups are already experimenting with this form of climate intervention, which is sometimes referred to as marine carbon dioxide removal. What makes it so appealing is that the ocean, in theory, would essentially do the work for us: all we'd have to do is set it into motion and store — or even repurpose — the extracted gases so they don't reenter the atmosphere.

But it may be too good to be true. In a new study published in the journal Environmental Research Letters, a team of international researchers warns that this could have dire unintended consequences — like accelerating the decline of the ocean's already plunging oxygen levels. "What helps the climate is not automatically good for the ocean," lead author Andreas Oschlies, from the GEOMAR Helmholtz Center for Ocean Research Kiel in Germany, said in a statement about the work.

The warmer that water becomes, the less oxygen it can dissolve. In the past fifty years, as global temperatures steadily climbed, the ocean has lost nearly 2 percent of its total dissolved oxygen, a proportion roughly equal to a staggering 77 billion metric tons, according to a 2018 study. At its worst, this phenomenon, known as ocean deoxygenation, creates entire "dead zones" where there's so little oxygen available that the waters become virtually uninhabitable. Sometimes stretching across thousands of square miles, whatever marine life was once living in the afflicted area either flees or, more grimly, suffocates to death. Climate change has accelerated the eerie aquatic trend, increasing both the size and number of these dead zones.

Clearly, halting global warming would help stymie this — but not if the solution we employ requires putting additional strain on the ocean. In particular, it appears that biotic forms of marine carbon removal could precipitate devastating losses of dissolved oxygen, the researchers caution. One leading method, called ocean fertilization, proposes seeding the seas with nutrients to boost the growth of oxygen-producing algae. The problem is that when the phytoplankton perish, their tiny corpses sink to the ocean floor, where the bacteria that feed on them end up consuming even more oxygen.

"Methods that increase biomass production in the ocean, and subsequently lead to oxygen-consuming decomposition, cannot be considered harmless climate solutions," Oschlies said in the statement. "Our model simulations show that such approaches could cause a decrease in dissolved oxygen that is 4 to 40 times greater than the oxygen gain expected from reduced global warming."

But the researchers aren't advocating against using the ocean as a carbon sink entirely. Encouragingly, they found that abiotic methods, including one that involves flushing the waters with minerals like limestone to convert CO2 into a molecule that stays trapped underwater, have minimal effects on oxygen levels. Instead, the researchers want to stress that, going forward, anyone pursuing this research should put assessing the potential oxygen toll of their technique front and center. "The ocean is a complex system which is already heavily under pressure," Oschlies said.
"If we intervene with large-scale measures, we must ensure that, no matter how good our intentions are, we are not further threatening marine environmental conditions that marine life depends on." More on the ocean: A Strange Darkness Is Spreading Throughout the Oceans