Scientists May Have Found a Way to Simplify Gravity. It Could Change Physics as We Know It.


Yahoo · 20-05-2025

Here's what you'll learn in this story.
A new paper uses a simplified model to show that gravity can be brought into the same quantum framework as the Standard Model.
The simpler model still meets the established requirements for a robust unified gravity theory.
Even if this theory does not prove revolutionary, it shows that new ways of thinking are possible.
Two scientists in Finland are claiming to have advanced the cause of a unified theory of gravity, including 'a complete, renormalizable theory of quantum gravity.' Physicists have long tried to mesh gravity with the Standard Model of physics by, in a sense, comparing like with like: how can we describe gravity using measurable things in a way that aligns with how the Standard Model describes the electromagnetic, weak, and strong forces? The key—according to the duo's new research, which appears in the peer-reviewed journal Reports on Progress in Physics—lies in a particular type of framework called a gauge theory.
A gauge is a standard of measurement that lets you compare one thing with another, like the track width that defines India's narrow-gauge railways. In physics, gauge theory helps scientists take all the measurable things they know and align them in order to find commonalities or definitions. Using an old English expression, we can define a duck as something that walks like a duck and quacks like a duck. Once something is a proverbial duck, many other properties—like its color, size, or area of origin—can't change its duck-ness. The duck-ness gauge only requires waddling and quacking.
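To make the duck-ness idea concrete, here is the textbook gauge symmetry from electromagnetism, written in standard notation. This is a generic illustration of what a gauge transformation looks like, not the construction from Partanen and Tulkki's paper.

```latex
% U(1) gauge transformation of electromagnetism (textbook example).
% The matter field \psi and the potential A_\mu shift together, with
% \alpha(x) an arbitrary function of position and q the charge:
\psi(x) \;\to\; e^{\,iq\alpha(x)}\,\psi(x), \qquad
A_\mu(x) \;\to\; A_\mu(x) + \partial_\mu \alpha(x).
% Every measurable prediction is unchanged for any choice of \alpha(x);
% that insensitivity is the waddle-and-quack test the gauge enforces.
```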
In this paper, physicists Mikko Partanen and Jukka Tulkki turn the universe at large into a bunch of overlapping, finite relationships of symmetry that act as microcosms of the entire standard model. They describe a system with eight dimensions, then break it into pieces that each use four of those dimensions. Finally, they write, '[f]our symmetries of the components of the space-time dimension field are used to derive a gauge theory, called unified gravity.'
Basically, their goal was to find the mathematically smallest model that could still hold up to all the rules required of a theory of unified gravity (one that unites the Standard Model and quantum physics). This work finds a middle ground between a simplified 'toy model' and the complexity of a full model of spacetime. One of the keys is that, within a gauge relationship, many terms can simply be canceled out, the same way you may have learned to do in algebra and calculus; a worked case of that cancellation follows below.
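Here is how that cancellation works in ordinary electromagnetism (again a standard illustration, not the paper's own algebra): the gauge-dependent terms drop out of the field strength on their own.

```latex
% Applying the gauge shift A_\mu \to A_\mu + \partial_\mu\alpha to the
% field strength F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu:
F_{\mu\nu} \;\to\; \partial_\mu(A_\nu + \partial_\nu\alpha)
              - \partial_\nu(A_\mu + \partial_\mu\alpha)
            = F_{\mu\nu}
              + \underbrace{\partial_\mu\partial_\nu\alpha
              - \partial_\nu\partial_\mu\alpha}_{=\,0}.
% Mixed partial derivatives are equal, so the extra terms cancel
% exactly -- the same bookkeeping cancellation as in basic calculus.
```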
Partanen and Tulkki claim that by substituting new (but equivalent) values for parts of their formulae, they've created a gauge model that no longer relies on a contentious variable. 'In contrast to previous gauge theories of gravity, all infinities that are encountered in the calculations of loop diagrams can be absorbed by the redefinition of the small number of parameters of the theory in the same way as in the gauge theories of the Standard Model,' they conclude. In other words, gravity may not need to be as complicated as we've made it—at least, mathematically speaking.
A key term in this research is renormalization. This is the procedure by which the infinities that crop up in a quantum theory's calculations are absorbed into a small set of parameters whose values are fixed by measurement, keeping the pure mathematics of a model tied to observable reality. Any theory of unified gravity must hold up to how we measure the effects of gravity in our portion of spacetime—or anywhere else in the universe, for that matter.
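Schematically, renormalization trades a divergent 'bare' parameter for a finite measured one. The sketch below is the generic quantum-field-theory bookkeeping, not the paper's specific formulae.

```latex
% A bare coupling g_0 is split into the measured value g and a
% counterterm \delta g(\Lambda) that soaks up the loop divergence:
g_0 = g + \delta g(\Lambda), \qquad
\delta g(\Lambda) \to \infty \text{ as the cutoff } \Lambda \to \infty.
% A theory is renormalizable when a finite list of such redefinitions
% removes every infinity; the authors claim unified gravity passes
% this test just as the Standard Model's gauge theories do.
```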
The scientists chose a compact model over a noncompact one, meaning that their model doesn't have any missing pieces that they aren't sure how to categorize. There's no quacking fish or waddling giraffe gumming up the works of what a duck must be.


Related Articles

Physicists Found a New Clue That Could Reveal the Fifth Force

Yahoo · 3 days ago

Here's what you'll learn when you read this story:
The Standard Model of Particle Physics accounts for four fundamental forces—strong, weak, electromagnetism, and gravity—but for decades, scientists have wondered if an elusive fifth force might be at work.
A new study analyzing the atomic transitions of five calcium isotopes constrains the mass of a particle that would carry such a force to somewhere between roughly 10 and 10 million electronvolts.
It's still possible that these anomalies could be explainable via the Standard Model.

The Standard Model of Particle Physics is a scientific masterpiece, but even so, it remains unfinished. For example, we still don't know why there is matter at all (a.k.a. matter-antimatter asymmetry), and then there's the whole dark matter and dark energy thing. Another source of scientific quandary is whether there might be a fifth fundamental force. You might be familiar with the standard four—the strong force, the weak force, gravity, and electromagnetism—but some physicists wonder if a fifth force that couples together neutrons and electrons could also be at work throughout our universe.

Now, an international collaboration of scientists from Germany, Switzerland, and Australia has discerned the upper limit for the mass of a particle that could carry such a force by looking at the transition frequencies of five calcium isotopes. Those masses were penciled out to around 10 to 10 million electronvolts (yes, electronvolts are sometimes used as mass measurements—thanks, E = mc²). The results of the study were published in the journal Physical Review Letters.

To arrive at this number, the researchers observed the atomic transitions of calcium-40, calcium-42, calcium-44, calcium-46, and calcium-48. An atomic transition occurs when an electron—attracted to the positively charged particles in a nucleus—briefly jumps to a higher energy level. These atomic transitions can vary based on the isotope and are influenced by the number of neutrons present in an atom.

Once the observations were complete, the authors mapped the variations they recorded on what's called a King plot. According to the Standard Model, this should produce a linear plot. However, that is not what the study found. Thanks to the high sensitivity of the experiment, the plot came out nonlinear, suggesting that the deviations detected by the team could be evidence of a fifth force. That said, as the authors also note, the nonlinearity could be attributable to something explainable within the Standard Model. Either way, whatever was causing these deviations didn't stop the scientists from setting an upper limit on the mass of the hypothetical fifth-force boson.

The search for this fifth force is a long one, and it's a scientific endeavor that has cast quite a wide net. For a while in the 1980s, scientists at MIT thought antigravity could be a fifth force, and another idea known as 'quintessence' gained popularity at the turn of the century. Recently, Fermilab in Chicago thought it might be closing in on a fifth force, though the final results of its 'muon g-2' experiment largely confirmed the Standard Model. Other efforts have looked at much larger bodies than atoms for evidence of the fifth force. Los Alamos National Laboratory published a study last year suggesting that by closely analyzing the orbits of asteroids and sussing out any deviations in those orbits, we could learn something about particle forces we don't yet understand.
That team's ultimate aim, much like that of the team behind this new paper, was to understand the constraints on where this fifth force might reside. For now, the search continues, but scientists are taking more and more steps toward a physics-altering answer.
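For readers who want to see the shape of the King-plot test described above, here is a minimal sketch in Python. The numbers are synthetic and the analysis is heavily simplified (real work uses measured isotope shifts and a full error budget); it only shows the core idea of fitting a line and inspecting the residuals for nonlinearity.

```python
import numpy as np

# Hypothetical "modified isotope shifts" for two atomic transitions across
# four isotope pairs (synthetic numbers for illustration only -- not data
# from the study).
shift_transition_1 = np.array([1.000, 1.250, 1.498, 1.751])  # arbitrary units
shift_transition_2 = np.array([2.001, 2.502, 3.004, 3.509])  # arbitrary units

# A King plot graphs one transition's modified shifts against the other's.
# To leading order, the Standard Model predicts the points fall on a line.
slope, intercept = np.polyfit(shift_transition_1, shift_transition_2, deg=1)

# Residuals from the best-fit line: significant structure here
# ("nonlinearity") is what could hint at a new force carrier -- or at
# higher-order Standard Model effects.
residuals = shift_transition_2 - (slope * shift_transition_1 + intercept)
print(f"slope={slope:.4f}, intercept={intercept:.4f}")
print("residuals:", residuals)
```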

ChatGPT May Be Eroding Critical Thinking Skills, According to a New MIT Study

Time Magazine · 3 days ago

Does ChatGPT harm critical thinking abilities? A new study from researchers at MIT's Media Lab has returned some concerning results.

The study divided 54 subjects—18-to-39-year-olds from the Boston area—into three groups and asked them to write several SAT essays using OpenAI's ChatGPT, Google's search engine, or nothing at all, respectively. Researchers used an EEG to record the writers' brain activity across 32 regions and found that, of the three groups, ChatGPT users had the lowest brain engagement and 'consistently underperformed at neural, linguistic, and behavioral levels.' Over the course of several months, ChatGPT users got lazier with each subsequent essay, often resorting to copy-and-paste by the end of the study.

The paper suggests that the usage of LLMs could actually harm learning, especially for younger users. The paper has not yet been peer-reviewed, and its sample size is relatively small. But the paper's main author, Nataliya Kosmyna, felt it was important to release the findings to elevate concerns that as society increasingly relies on LLMs for immediate convenience, long-term brain development may be sacrificed in the process. 'What really motivated me to put it out now before waiting for a full peer review is that I am afraid in 6-8 months, there will be some policymaker who decides, 'let's do GPT kindergarten.' I think that would be absolutely bad and detrimental,' she says. 'Developing brains are at the highest risk.'

Generating ideas

The MIT Media Lab has recently devoted significant resources to studying the different impacts of generative AI tools. Studies from earlier this year, for example, found that, generally, the more time users spend talking to ChatGPT, the lonelier they feel.

Kosmyna, who has been a full-time research scientist at the MIT Media Lab since 2021, wanted to specifically explore the impacts of using AI for schoolwork, because more and more students are using AI. So she and her colleagues instructed subjects to write 20-minute essays based on SAT prompts, including ones about the ethics of philanthropy and the pitfalls of having too many choices.

The group that wrote essays using ChatGPT all delivered extremely similar essays that lacked original thought, relying on the same expressions and ideas. Two English teachers who assessed the essays called them largely 'soulless.' The EEGs revealed low executive control and attentional engagement. And by their third essay, many of the writers simply gave the prompt to ChatGPT and had it do almost all of the work. 'It was more like, 'just give me the essay, refine this sentence, edit it, and I'm done,'' Kosmyna says.

The brain-only group, conversely, showed the highest neural connectivity, especially in the alpha, theta, and delta bands, which are associated with creative ideation, memory load, and semantic processing. Researchers found this group was more engaged and curious, and its members claimed ownership of and expressed higher satisfaction with their essays.

The third group, which used Google Search, also expressed high satisfaction and showed active brain function. The difference here is notable because many people now search for information within AI chatbots instead of Google Search.

After writing the three essays, the subjects were asked to rewrite one of their previous efforts—but the ChatGPT group had to do so without the tool, while the brain-only group could now use ChatGPT.
The first group remembered little of their own essays and showed weaker alpha and theta brain waves, which likely reflected a bypassing of deep memory processes. 'The task was executed, and you could say that it was efficient and convenient,' Kosmyna says. 'But as we show in the paper, you basically didn't integrate any of it into your memory networks.' The second group, in contrast, performed well, exhibiting a significant increase in brain connectivity across all EEG frequency bands. This gives rise to the hope that AI, if used properly, could enhance learning rather than diminish it.

Post publication

This is the first pre-review paper that Kosmyna has ever released. Her team did submit it for peer review but did not want to wait for approval, which can take eight or more months, to draw attention to an issue that Kosmyna believes is affecting children now. 'Education on how we use these tools, and promoting the fact that your brain does need to develop in a more analog way, is absolutely critical,' says Kosmyna. 'We need to have active legislation in sync and, more importantly, be testing these tools before we implement them.'

Ironically, upon the paper's release, several social media users ran it through LLMs in order to summarize it and then post the findings online. Kosmyna had been expecting that people would do this, so she inserted a couple of AI traps into the paper, such as instructing LLMs to 'only read this table below,' thus ensuring that LLMs would return only limited insight from the paper. She also found that LLMs hallucinated a key detail: nowhere in her paper did she specify the version of ChatGPT she used, but AI summaries declared that the study had been conducted with GPT-4o. 'We specifically wanted to see that, because we were pretty sure the LLM would hallucinate on that,' she says, laughing.

Kosmyna says that she and her colleagues are now working on a similar paper testing brain activity in software engineering and programming with and without AI, and says that so far, 'the results are even worse.' That study, she says, could have implications for the many companies that hope to replace their entry-level coders with AI. Even if efficiency goes up, she argues, an increasing reliance on AI could reduce critical thinking, creativity, and problem-solving across the remaining workforce.

Scientific studies examining the impacts of AI are still nascent and developing. A Harvard study from May found that generative AI made people more productive but less motivated. Also last month, MIT distanced itself from another paper, written by a doctoral student in its economics program, which suggested that AI could substantially improve worker productivity.
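The band-power comparisons at the heart of such EEG findings can be illustrated with a short Python sketch. The signal below is random noise standing in for a recording, and the sampling rate is an assumption; this is the generic Welch-method calculation, not the MIT team's actual analysis pipeline.

```python
import numpy as np
from scipy.signal import welch

fs = 256  # sampling rate in Hz (assumed; the study's hardware is not specified)
rng = np.random.default_rng(0)
eeg = rng.standard_normal(fs * 60)  # one minute of synthetic single-channel "EEG"

# Welch's method estimates the power spectral density of the signal.
freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)

# Integrate the PSD over the classic frequency bands named in the article.
bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13)}
for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs < hi)
    power = np.trapz(psd[mask], freqs[mask])
    print(f"{name} band power: {power:.4f}")
```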

What Is Your Cat Trying to Say? These AI Tools Aim to Decipher Meows

Scientific American · 4 days ago

Meeaaaoow rises like a question mark before dawn. Anyone living with a cat knows their sounds: broken chirrups like greetings, low growls that warn, purrs stitched into sleepy conversation. Ethologists have organized feline sounds that share acoustic and contextual qualities into more than 20 groupings, including the meow, the hiss, the trill, the yowl and the chatter. Any individual meow belongs, academically speaking, to a broad 'meow' category, which itself contains many variations.

The house cat's verbal repertoire is far greater than that of its largely silent wild cousins. Researchers have even begun to study whether cats can drift into regional dialects, the way human accents bend along the Hudson or the Thames. And just as humans gesticulate, shrug, frown and raise their eyebrows, cats' fur and whiskers write subtitles: a twitching tail declares excitement, flattened ears signal fear, and a slow blink promises peace. Felis catus is a chatty species that, over thousands of years of domestication, has pivoted its voice toward the peculiar primate that opens the fridge.

Now imagine pointing your phone at that predawn howl and reading: 'Refill bowl, please.' Last December Baidu—a Chinese multinational company that specializes in Internet services and artificial intelligence—filed a patent application for what it describes as a method for transforming animal vocalizations into human language. (A Baidu spokesperson told Reuters last month that the system is 'still in the research phase.') The proposed system would gather animal signals and process them: it would store kitten or puppy talk for 'I'm hungry' as code, then pair it not only with motion-sensing data such as tail swishes but also with vital signs such as heart rate and core temperature. All of these data would get whisked through an AI system and blended before emerging as plain-language phrases in English, Mandarin or any other tongue.

The dream of decoding cat speech is much older than deep learning. By the early 20th century meows had been recorded on wax cylinders, and in the 1970s John Bradshaw, a British anthrozoologist, began more than four decades of mapping how domestic cats tell us—and each other—what they mean. By the 1990s he and his then doctoral student Charlotte Cameron-Beaumont had established that the distinct domestic 'meow,' largely absent between adults in feral colonies, is a bespoke tool for managing humans. Even domestic cats rarely use it with each other, though kittens do with their mothers. Yet for all that anecdotal richness, the formal literature remained thin: there were hundreds of papers on birdsong and dozens on dolphin whistles but only a scattering on feline phonology until machine learning revived the field in the past decade.

One of the first hints that computers might crack the cat code came in 2018, when AI scientist Yagya Raj Pandeya and his colleagues released CatSound, a library of roughly 3,000 clips covering 10 types of cat calls labeled by the scientists—from hiss and growl to purr and mother call. Each clip went through software trained on musical recordings to describe a sound's 'shape'—how its pitch rose or fell and how long it lasted—and a second program cataloged them accordingly.
When the system was tested on clips it hadn't seen during training, it identified the right call type around 91 percent of the time. The study showed that the 10 vocal signals had acoustic fingerprints a machine can spot—giving researchers a proof of concept for automated cat-sound classification and eventual translation.

Momentum built quickly. In 2019 researchers at the University of Milan in Italy published a study focused on the one sound aimed squarely at Homo sapiens. The research sliced the meow into three situational flavors: 'waiting for food,' 'isolation in an unfamiliar environment' and 'brushing.' By turning each meow into a set of numbers, the researchers revealed that a 'feed me' meow had a noticeably different shape from a 'where are you?' meow or a 'brush me' meow. After they trained a computer program to spot those shapes, the researchers tested the system much as Pandeya and colleagues had tested theirs: it was presented with meows not seen during training—all hand labeled based on circumstances such as hunger or isolation. The system correctly identified the meows up to 96 percent of the time, and the research confirmed that cats really do tweak their meows to match what they're trying to tell us.

The research was then scaled to smartphones, turning kitchen-table curiosity into consumer AI. Developers at software engineering company Akvelon, including a former Alexa engineer, teamed up with one of the study's researchers to create the MeowTalk app, which they claim can translate meows in real time. MeowTalk has used machine learning to categorize thousands of user-submitted meows by common intent, such as 'I'm hungry,' 'I'm thirsty,' 'I'm in pain,' 'I'm happy' or 'I'm going to attack.' A 2021 validation study by MeowTalk team members claimed success rates near 90 percent. But the app also logs incorrect-translation taps from skeptical owners, which serves as a reminder that the cat might be calling for something entirely different in reality. Probability scores can simply reflect pattern similarity—not necessarily the animal's exact intent.

Under the hood, these machine-learning systems treat cat audio tracks like photographs. A meow becomes a spectrogram: one axis represents time, the other indicates pitch, and colors or brightness show loudness. Just as AI systems can pick out a cat's whiskers in a photograph, they can classify the sound images that subtly distinguish specific kinds of meows. Last year researchers at Duzce University in Türkiye upgraded the camera: they fed spectrograms into a vision transformer, a model that chops them into tiles and assigns weights to each one to show which parts of the sound give the meow its meaning.

And in May 2025 entrepreneur Vlad Reznikov uploaded a preprint to the social network ResearchGate on what he calls Feline Glossary Classification 2.3, a system that expands cat vocabulary categorization to 40 distinct call types across five behavioral groups. He used one machine-learning system to find the shapes inside each sound and another to study how those shapes change over the length of a single vocalization. Howls stretch, purrs pulse and many other distinct vocalizations link together in varying ways. According to Reznikov's preprint, the model had greater than 95 percent accuracy in real-time recognition of cat sounds. Peer reviewers have yet to sharpen their pencils, but if the system can reliably distinguish a bored yowl from a 'where's my salmon?' warble, it may, if nothing else, save a lot of carpets.
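A minimal Python sketch shows the spectrogram step these classifiers share. The synthetic 'meow' below is just a rising-and-falling tone, and the parameters are assumptions; real systems feed such time-frequency images into a trained classifier.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 22050  # sampling rate in Hz (assumed)
t = np.linspace(0, 1.0, fs, endpoint=False)
# Stand-in "meow": a tone whose pitch rises then falls over one second.
freq_sweep = 400 + 300 * np.sin(np.pi * t)  # instantaneous frequency in Hz
meow = np.sin(2 * np.pi * np.cumsum(freq_sweep) / fs)

# The spectrogram is the "sound image" the article describes: rows are
# frequencies (pitch), columns are time, values are intensity (loudness).
freqs, times, sxx = spectrogram(meow, fs=fs, nperseg=1024)

# Classifiers typically work on log-scaled spectrograms, treated like photos.
log_sxx = 10 * np.log10(sxx + 1e-10)
print(log_sxx.shape)  # (n_frequencies, n_time_frames) -- a 2-D "image"
```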
As for Baidu, the blueprint for its patent says its approach adds new kinds of information rather than deeper sound analysis. Imagine a cat with a fitness tracker and a baby monitor, as well as an AI assistant to explain what it all means. Whether combining these data will make the animal's message clearer or add confusion remains to be seen.

Machine learning is increasingly being used to understand other aspects of animal behavior as well. Brittany Florkiewicz, a comparative and evolutionary psychologist, uses it to identify how cats mimic one another's facial expressions and to track the physical distance between them to infer relationships. 'Generally speaking, machine learning helps expedite the research process, making it very efficient and accurate, provided the models are properly guided,' she says. She believes the emergence of apps for pet owners shows how much people are thinking about innovative ways to better care for their pets. 'It's positive to see both the research community and everyday pet owners embracing this technology,' she says.

Interest in animal vocalization extends not just to cats but to one of their favorite menu items: mice. DeepSqueak, a machine-learning system devised by psychologist Kevin Coffey and his team, does for rodents what the other systems do for cats. 'Mice courtship is really interesting,' Coffey says—particularly 'the full songs that they sing that humans can't hear but that are really complex songs.' Mice and rats normally communicate in an ultrasonic range, and machine learning decodes these inaudible chirps and whistles and links them to the circumstances in which they occur in the lab.

Coffey points out, however, that 'the animal communication space is defined by the concepts that are important to [the animals]—the things that matter in their lives.... A rat or a mouse or cat is mostly interested in communicating that they want social interaction or play or food or sex, that they're scared or hurt.' For this reason, he's skeptical of grandiose claims made by AI companies 'that we can overlap the conceptual semantic space of the animal languages and then directly translate—which is, I think, kind of total nonsense. But the idea that you can record and categorize animal vocalizations, relate them to behavior, and learn more about their lives and how complex they are—that's absolutely happening.' And though he thinks an app could realistically help people recognize when their cat is hungry or wants to be petted, he doubts it's necessary. 'We're already pretty good at that. Pet owners already communicate with their animal at that level.'

Domesticated animals also communicate across species. A 2020 study found that dogs and horses playing together rapidly mimicked each other's relaxed open-mouth facial expressions and self-handicapped, putting themselves into disadvantageous or vulnerable situations to maintain well-balanced play. Florkiewicz believes this might be partly a result of domestication: humans selected which animals to raise based on communicative characteristics that facilitated shared lives. The mutual story of humans and cats is thought to have begun 12,000 years ago—when wildcats hunted rodents in the first grain stores of Neolithic farming villages in the Fertile Crescent—so there has been time for us to adapt to each other. By at least 7500 B.C.E., in Cyprus (an island with no native felines), a human had been interred with a cat.
Later the Egyptians revered them; traders, sailors and eventually Vikings carried them around the world on ships; and now scientists have adapted humans' most sophisticated technology to try to comprehend their inner lives. But perhaps cats have been coaching us all along, and maybe they'll judge our software with the same cool indifference they reserve for new toys. Speech, after all, isn't merely a label but a negotiated meaning—and cats, as masters of ambiguity, may prefer a little mystery.
