
What Is Your Cat Trying to Say? These AI Tools Aim to Decipher Meows
Meeaaaoow rises like a question mark before dawn. Anyone living with a cat knows their sounds: broken chirrups like greetings, low growls that warn, purrs stitched into sleepy conversation. Ethologists have organized feline sounds that share acoustic and contextual qualities into more than 20 groupings, including the meow, the hiss, the trill, the yowl and the chatter. Any individual meow belongs, academically speaking, to a broad 'meow' category, which itself contains many variations. The house cat's verbal repertoire is far greater than that of its largely silent wild cousins. Researchers have even begun to study whether cats can drift into regional dialects, the way human accents bend along the Hudson or the Thames. And just as humans gesticulate, shrug, frown and raise their eyebrows, cats' fur and whiskers write subtitles: a twitching tail declares excitement, flattened ears signal fear, and a slow blink promises peace. Felis catus is a chatty species that, over thousands of years of domestication, has pivoted its voice toward the peculiar primate that opens the fridge.
Now imagine pointing your phone at that predawn howl and reading: 'Refill bowl, please.' Last December Baidu—a Chinese multinational company that specializes in Internet services and artificial intelligence—filed a patent application for what it describes as a method for transforming animal vocalizations into human language. (A Baidu spokesperson told Reuters last month that the system is 'still in the research phase.') The proposed system would gather animal signals and process them: it would store kitten or puppy talk for 'I'm hungry' as code, then pair it not only with motion-sensing data such as tail swishes but also with vital signs such as heart rate and core temperature. All of these data would get whisked through an AI system and blended before emerging as plain-language phrases in English, Mandarin or any other tongue.
The dream of decoding cat speech is much older than deep learning. By the early 20th century meows had been recorded on wax cylinders, and in the 1970s John Bradshaw, a British anthrozoologist, began more than four decades of mapping how domestic cats tell us—and each other—what they mean. By the 1990s he and his then doctoral student Charlotte Cameron-Beaumont had established that the distinct domestic 'meow,' largely absent between adults in feral colonies, is a bespoke tool for managing humans. Even domestic cats rarely use it with each other, though kittens do with their mothers. Yet for all that anecdotal richness, the formal literature remained thin: there were hundreds of papers on bird song and dozens on dolphin whistles but only a scattering on feline phonology until machine learning revived the field in the past decade.
One of the first hints that computers might crack the cat code came in 2018, when AI scientist Yagya Raj Pandeya and his colleagues released CatSound, a library of roughly 3,000 clips covering 10 types of cat calls labeled by the scientists—from hiss and growl to purr and mother call. Each clip went through software trained on musical recordings to describe a sound's 'shape'—how its pitch rose or fell and how long it lasted—and a second program cataloged them accordingly. When the system was tested on clips it hadn't seen during training, it identified the right call type around 91 percent of the time. The study showed that the 10 vocal signals had acoustic fingerprints a machine can spot—giving researchers a proof of concept for automated cat-sound classification and eventual translation.
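The two-stage idea described above—first turn each clip into a small "shape" descriptor, then let a second program sort those descriptors into call types—can be sketched in a few lines. This is an illustrative toy, not the CatSound team's actual pipeline: the feature set (mean pitch, pitch slope, duration) and the synthetic data are assumptions made for the example.

```python
# Toy version of a two-stage call classifier: stage 1 extracts a "shape"
# vector per clip; stage 2 learns to map shapes to call types.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def fake_clip_features(call_type: str) -> list:
    """Stand-in for real acoustic analysis.

    Returns [mean pitch (Hz), pitch slope (Hz/s), duration (s)] with a
    little noise; the base values are invented for illustration.
    """
    base = {"hiss": (1500, -5, 0.8),
            "purr": (25, 0, 3.0),
            "meow": (600, 40, 0.6)}[call_type]
    return [b + rng.normal(0, abs(b) * 0.05 + 0.01) for b in base]

labels = ["hiss", "purr", "meow"] * 200
X = np.array([fake_clip_features(t) for t in labels])

# Hold out clips the model never saw during training, as the study did
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

Because the synthetic call types are cleanly separated in pitch, the toy classifier scores far higher than any real system would; the point is only the structure—features in, labels out, accuracy measured on unseen clips.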
Momentum built quickly. In 2019 researchers at the University of Milan in Italy published a study focused on the one sound aimed squarely at Homo sapiens. The research sliced the meow into three situational flavors: 'waiting for food,' 'isolation in an unfamiliar environment' and 'brushing.' By turning each meow into a set of numbers, the researchers revealed that a 'feed me' meow had a noticeably different shape from a 'where are you?' meow or a 'brush me' meow. After they trained a computer program to spot those shapes, the researchers tested the system much as Pandeya and colleagues had tested theirs: it was presented with meows not seen during training—all hand labeled based on circumstances such as hunger or isolation. The system correctly identified the meows up to 96 percent of the time, and the research confirmed that cats really do tweak their meows to match what they're trying to tell us.
The research was then scaled to smartphones, turning kitchen-table curiosity into consumer AI. Developers at software engineering company Akvelon, including a former Alexa engineer, teamed up with one of the study's researchers to create the MeowTalk app, which they claim can translate meows in real time. MeowTalk has used machine learning to categorize thousands of user-submitted meows by common intent, such as 'I'm hungry,' 'I'm thirsty,' 'I'm in pain,' 'I'm happy' or 'I'm going to attack.' A 2021 validation study by MeowTalk team members claimed success rates near 90 percent. But the app also lets skeptical owners flag incorrect translations, a reminder that the cat may mean something else entirely: a model's probability score reflects pattern similarity, not necessarily the animal's actual intent.
Under the hood, these machine-learning systems treat cat audio tracks like photographs. A meow becomes a spectrogram: one axis represents time, the other indicates pitch, and colors or brightness show loudness. Just as AI systems can pick out a cat's whiskers in a photograph, they can classify sound images that subtly distinguish specific kinds of meows. Last year researchers at Duzce University in Türkiye upgraded the camera: they fed spectrograms into a vision transformer, a model that chops them into tiles and assigns weights to each one to show which parts of the sound give the meow its meaning.
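The spectrogram step itself is standard signal processing. A minimal sketch with SciPy, using a synthetic rising tone in place of a recorded meow (the sweep parameters are invented for illustration):

```python
# A spectrogram turns audio into an image: time on one axis, frequency on
# the other, intensity as "brightness." A synthetic rising tone stands in
# for a meow here; a real pipeline would load a recorded clip instead.
import numpy as np
from scipy.signal import spectrogram

sr = 16_000                                # sample rate (Hz)
dur = 0.6                                  # clip length (s)
t = np.linspace(0, dur, int(sr * dur), endpoint=False)
# Linear chirp from ~400 Hz up to ~900 Hz, roughly a rising meow contour
sweep = np.sin(2 * np.pi * (400 * t + (500 / (2 * dur)) * t**2))

freqs, times, power = spectrogram(sweep, fs=sr, nperseg=512)
# power is a 2-D array (frequency bins x time frames) — exactly the kind
# of "image" a CNN or vision transformer can then classify
peak_bins = power.argmax(axis=0)           # strongest frequency per frame
print(freqs[peak_bins[0]], freqs[peak_bins[-1]])  # pitch rises across the clip
```

A vision transformer then chops this two-dimensional array into tiles and weights them, just as it would patches of a photograph.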
And in May 2025 entrepreneur Vlad Reznikov uploaded a preprint to the social network ResearchGate on what he calls Feline Glossary Classification 2.3, a system that explodes cat vocabulary categorizations to 40 distinct call types across five behavioral groups. He used one machine-learning system to find the shapes inside each sound and another to study how those shapes change over the length of a single vocalization. Howls stretch, purrs pulse and many other distinct vocalizations link together in varying ways. According to Reznikov's preprint, the model had a greater than 95 percent accuracy in real-time recognition of cat sounds. Peer reviewers have yet to sharpen their pencils, but if the system can reliably distinguish a bored yowl from a 'where's my salmon?' warble, it may, if nothing else, save a lot of carpets.
As for Baidu, its patent filing suggests that its approach hinges less on deeper sound analysis than on adding new kinds of information. Imagine a cat with a fitness tracker and a baby monitor, as well as an AI assistant to explain what it all means. Whether combining these data will make the animal's message clearer or merely add confusion remains to be seen.
Machine learning is increasingly being used to understand other aspects of animal behavior as well. Brittany Florkiewicz, a comparative and evolutionary psychologist, uses it to identify how cats mimic one another's facial expressions and to track the physical distance between them to infer relationships. 'Generally speaking, machine learning helps expedite the research process, making it very efficient and accurate, provided the models are properly guided,' she says. She believes the emergence of apps for pet owners shows how much people are thinking about innovative ways to better care for their pets. 'It's positive to see both the research community and everyday pet owners embracing this technology,' she says.
Interest in animal vocalization extends not just to cats but to one of their favorite menu items: mice. DeepSqueak, a machine-learning system devised by psychologist Kevin Coffey and his team, does for rodents what the other systems do for cats. 'Mice courtship is really interesting,' Coffey says—particularly 'the full songs that they sing that humans can't hear but that are really complex songs.' Mice and rats normally communicate in an ultrasonic range, and machine learning decodes these inaudible chirps and whistles and links them to circumstances in which they occur in the lab.
Coffey points out, however, that 'the animal communication space is defined by the concepts that are important to [the animals]—the things that matter in their lives.... A rat or a mouse or cat is mostly interested in communicating that they want social interaction or play or food or sex, that they're scared or hurt.' For this reason, he's skeptical of grandiose claims made by AI companies 'that we can overlap the conceptual semantic space of the animal languages and then directly translate—which is, I think, kind of total nonsense. But the idea that you can record and categorize animal vocalizations, relate them to behavior, and learn more about their lives and how complex they are—that's absolutely happening.' And though he thinks an app could realistically help people recognize when their cat is hungry or wants to be petted, he doubts it's necessary. 'We're already pretty good at that. Pet owners already communicate with their animal at that level.'
Domesticated animals also communicate across species. A 2020 study found that dogs and horses playing together rapidly mimicked each other's relaxed open-mouth facial expressions and self-handicapped, putting themselves into disadvantageous or vulnerable situations to maintain well-balanced play. Florkiewicz believes this might be partly a result of domestication: humans selected which animals to raise based on communicative characteristics that facilitated shared lives.
The mutual story of humans and cats is thought to have begun 12,000 years ago—when wildcats hunted rodents in the first grain stores of Neolithic farming villages in the Fertile Crescent—so there has been time for us to adapt to each other. By at least 7500 B.C.E., in Cyprus (an island with no native felines), a human had been interred with a cat. Later the Egyptians revered them; traders, sailors and eventually Vikings carried them around the world on ships; and now scientists have adapted humans' most sophisticated technology to try to comprehend their inner lives. But perhaps cats have been coaching us all along, and maybe they'll judge our software with the same cool indifference they reserve for new toys. Speech, after all, isn't merely a label but a negotiated meaning—and cats, as masters of ambiguity, may prefer a little mystery.