
This life-changing piece of health tech is getting cheaper — and more advanced
Adam Clark Estes is a senior technology correspondent at Vox and author of the User Friendly newsletter. He's spent 15 years covering the intersection of technology, culture, and politics at places like The Atlantic, Gizmodo, and Vice.
You can imagine a future where you wear earbuds that are the interface for your voice assistant as well as your lifeline on a loud plane. Vox/Getty Images
Hearing aids, like canes or orthopedic shoes, are something you don't think about a lot when you're young. But maybe you should.
You probably either know someone who needs hearing aids, or you'll need them some day yourself. About 30 million people in the United States aged 12 and older have hearing loss in both ears, and roughly two-thirds of people end up with some degree of hearing loss, from mild to severe, by their 70s.
But talking to your parents or grandparents about getting hearing aids can be tough — I've done it. They might not like the idea of sticking things in their ear canals or confronting the difficult realities of aging and health. And they'll surely balk at the price tag of hearing aids, which can cost thousands of dollars and often aren't covered by insurance or traditional Medicare.
But plugging tiny and exorbitantly expensive speakers into your ears isn't the only way. Your mom might already own hearing aids without even knowing it.
Hearing aids have never been more accessible — or futuristic. In April, a company called Nuance started selling glasses that double as hearing aids, thanks to beam-forming microphones and speakers built into the frames. At $1,200, they're not cheap, but they cost far less than a pair of prescription hearing aids, which tend to range from $2,000 to $7,000.
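(For the technically curious: beam-forming is, at its core, a signal-processing trick. Several microphones' signals are combined with small time offsets so that sound arriving from one direction adds up while sound from everywhere else partially cancels. Below is a minimal, purely illustrative sketch of the classic delay-and-sum version; the function name, array geometry, sample rate, and steering angle are assumptions for demonstration, not details of the Nuance glasses.)

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # meters per second, at roughly room temperature

def delay_and_sum(mic_signals, mic_spacing, sample_rate, steer_angle_deg):
    """Point a linear microphone array toward steer_angle_deg and sum the channels.

    mic_signals: NumPy array of shape (num_mics, num_samples)
    mic_spacing: distance between adjacent microphones, in meters
    steer_angle_deg: direction to "listen" toward, 0 = straight ahead
    """
    num_mics, num_samples = mic_signals.shape
    angle = np.deg2rad(steer_angle_deg)
    output = np.zeros(num_samples)
    for m in range(num_mics):
        # Extra time the wavefront from the steering direction takes to reach
        # mic m relative to mic 0, rounded to whole samples for simplicity.
        delay_sec = m * mic_spacing * np.sin(angle) / SPEED_OF_SOUND
        delay_samples = int(round(delay_sec * sample_rate))
        # Advance each channel by its delay so sound from the target direction
        # adds up in phase; sound from other directions partially cancels.
        # (np.roll wraps around at the edges, which is fine for a toy example.)
        output += np.roll(mic_signals[m], -delay_samples)
    return output / num_mics
```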
You can also buy something that's legally considered a personal sound amplification product (PSAP), which isn't designed to treat hearing loss but does make things louder. Some of them can play music and handle phone calls too. In an age when earbuds are ubiquitous, these devices appeal to all ages.
'It's good that we're seeing people in their 30s, 40s, and 50s, talking about it, because it's totally changing the paradigm for them of engaging in hearing care earlier,' Nicholas Reed, a faculty member at the NYU Grossman School of Medicine, told me.
I'm a millennial, but I've also dealt with hearing loss my entire life. A bad stretch of childhood ear infections left me mostly deaf in one ear and pretty spotty in the other. I learned to read lips as a teenager and avoid conversations at loud parties in college. Some surgery in my 20s brought me closer to normal, but I could still use a little help.
I've spent the past few weeks trying out the Nuance glasses in various settings. They're remarkable, not only because they feel almost indistinguishable from my regular glasses but also because I forget they're hearing aids. Made by EssilorLuxottica, the company behind Ray-Ban and dozens of other glasses brands, the Nuance glasses employ some of the same technology that the Ray-Ban Meta glasses use to play music and help you talk to AI. And while the Nuance glasses don't currently offer the option to stream audio, they do help you hear what your friend is saying in a loud bar.
The AirPods Pro 2, which retail for $250, work just as well. After Apple announced last fall that a software update would unlock an accessibility setting — it's appropriately called Hearing Aid — I started using it all the time, toggling between listening to podcasts and ordering cold brew in a crowded coffee shop. Where I once might have had to ask people to repeat themselves, I now hear them fine the first time. I just have to wear AirPods all the time, which makes the glasses solution even more appealing.
For most people, hearing loss starts in their 50s and gains momentum in the early retirement years. If you've ever been to a busy restaurant with your parents or grandparents, you know this can be alienating for the person left out and frustrating for the hearing person, too. The social isolation can lead to loneliness and anxiety, which can hasten cognitive decline and lower life expectancy.
Nevertheless, neither traditional clinical hearing aids nor the newer category of devices is an easy fix. Once you start wearing any sort of hearing aid, it takes time to adjust, and you might need help tweaking the sound as you get used to it. That's one reason so many people avoid them — only one in five people who need hearing aids actually have them. You can't put them in your ears and immediately have perfect hearing. Your brain adjusts over time, so it may take weeks or months to adapt to the new frequencies hearing aids help you hear.
Still, it's a worthwhile project.
'Sensory input is so key to our existence, but we just sort of overlooked it for so long,' Reed said. 'It's something that's vital to your existence and how you connect with other people.'
It's not clear how the latest hearing aid innovation will move the needle on adoption. Even though over-the-counter hearing aids have been available since 2022, when the FDA implemented new regulations for the devices, it's still an uphill battle to get people to wear them.
'We are not seeing large increases in hearing aid uptake since over-the-counter hearing aids have become available,' said Tricia Ashby, senior director of audiology practices at the American Speech-Language-Hearing Association (ASHA). 'And I have to say that mimics other countries who had over-the-counter hearing aids before the US did.'
Given that the older people who need them most may be less likely to try the latest technology, it might still take a few years for over-the-counter hearing aids to go mainstream. With companies like Apple and Nuance setting a precedent, though, it's possible that more devices will build hearing-assistance features into products people already own.
You can imagine a future where you wear earbuds that are the interface for your voice assistant as well as your lifeline on a loud plane. You might have glasses that project walking directions onto your field of view and help you hear which direction traffic's coming from when you have to cross the street. These kinds of features only get more important as you get older and need a little more help.
'We are in an age now where you're thinking about optimizing aging, and how do you do it?' Reed said. 'And it's things like this.'