How to find a meaningful job: try 'moral ambition,' says Rutger Bregman


Vox · May 13, 2025

Sigal Samuel is a senior reporter for Vox's Future Perfect and co-host of the Future Perfect podcast. She writes primarily about the future of consciousness, tracking advances in artificial intelligence and neuroscience and their staggering ethical implications. Before joining Vox, Sigal was the religion editor at The Atlantic.
We're told from a young age to achieve. Get good grades. Get into a good school. Get a good job. Be ambitious about earning a high salary or a high-status position. But many of us eventually find ourselves asking: What's the point of all this ambition? The fat salary or the fancy title…are those really meaningful measures of success?
There's another possibility: Instead of measuring our success in terms of fame or fortune, we could measure it in terms of how much good we do for others. And we could get super ambitious about using our lives to do a gargantuan amount of good.
That's the message of Moral Ambition, a new book by historian and author Rutger Bregman. He wants us to stop wasting our talents on meaningless work and start devoting ourselves to solving the world's biggest problems, like malaria and pandemics and climate change.
I recently got the chance to talk to Bregman on The Gray Area, Vox's philosophically-minded podcast. I invited him on the show because I find his message inspiring — and, to be honest, because I also had some questions about it. I want to dedicate myself to work that feels meaningful, but I'm not sure work that helps the greatest number of people is the only way to do that. Moral optimization — the effort to mathematically quantify moral goodness so that we can then maximize it — is, in my experience, agonizing and ultimately counterproductive.
I also noticed that Bregman's 'moral ambition' has a lot in common with effective altruism (EA), the movement that's all about using reason and evidence to do the most good possible. After the downfall of Sam Bankman-Fried, the EA crypto billionaire who was convicted of fraud in 2023, EA suffered a major reputational blow. I wondered: Is Bregman just trying to rescue the EA baby from the bathwater? (Disclosure: In 2022, Future Perfect was awarded a one-time $200,000 grant from Building a Stronger Future, a family foundation run by Sam and Gabe Bankman-Fried. Future Perfect has returned the balance of the grant and is no longer pursuing this project.)
So in our conversation, I talked to Bregman about all the different things that can make our lives feel meaningful, and asked: Are some objectively better than others? And how is moral ambition different from ideas that came before it, like effective altruism?
This interview has been edited for length and clarity. There's much more in the full podcast, so listen and follow The Gray Area on Apple Podcasts, Spotify, Pandora, or wherever you find podcasts.
Why should people be morally ambitious?
My whole career, I've been fascinated with the waste of talent that's going on in modern economies. There's this one study from two Dutch economists and they estimate that around 25 percent of all workers think that their own job is socially meaningless, or at least doubt the value of their job.
That is just insane to me. I mean, this is five times the unemployment rate. And we're talking about people who often have excellent resumes, who went to very nice universities. Harvard is an interesting case in point: 45 percent of Harvard graduates end up in consultancy or finance. I'm not saying all of that is totally socially useless, but I do wonder whether that is the best allocation of talent. [Note: In 2020, 45 percent of Harvard graduating seniors entering the workforce went into consulting and finance. Among the class of 2024, the number was 34 percent.]
We face some pretty big problems out there, whether it's the threat of the next pandemic that may be just around the corner, terrible diseases like malaria and tuberculosis killing millions of people, the problem with democracy breaking down. I mean, the list goes on and on. And so I've always been frustrated by this enormous waste of talent. If we're going to have a career anyway, we might as well do a lot of good with it.
What role does personal passion play in this? You write in the book, 'Don't start out by asking, what's my passion? Ask instead, how can I contribute most? And then choose the role that suits you best. Don't forget, your talents are but a means to an end.'
I think 'follow your passion' is probably the worst career advice out there. At the School for Moral Ambition, an organization I co-founded, we deeply believe in the Gandalf-Frodo model of changing the world. Frodo didn't follow his passion. Gandalf never asked him, 'What's your passion, Frodo?' He said, 'Look, this really needs to be done, you've got to throw the ring into the mountain.' If Frodo had followed his passion, he would probably have been a gardener, living a life full of second breakfasts, pretty comfortable in the Shire. And then the orcs would have turned up and murdered everyone he ever loved.
So the point here is, find yourself some wise old wizard, a Gandalf. Figure out what some of the most pressing issues that we face as a species are. And ask yourself, how can I make a difference? And then you will find out that you can become very passionate about it.
In your book, there's a Venn diagram with three circles. The first is labeled 'sizable.' The second is 'solvable.' And the third is 'sorely overlooked.' And in the middle, where they all overlap, it says 'moral ambition.'
I wonder about the 'sizable' part of that. Does moral ambition always have to be about scale? I'm a journalist now, but before that I was a novelist. And I didn't care how many people my work impacted. My feeling was: If my novel deeply moves just one reader and helps them feel less alone or more understood, I will be happy. Are you telling me I shouldn't be happy with that?
I think there is absolutely a place for, as the French say, art pour l'art — art for the sake of art itself. I don't want to let everything succumb to a utilitarian calculus. But I do think it's better to help a lot of people than just a few people. On the margins, I think in the world today, we need much more moral ambition than we currently have.
When I was reading your book, I kept thinking of the philosopher Susan Wolf, who has this great essay called 'Moral Saints.' She argues that you shouldn't try to be a moral saint — someone who tries to make all their actions as morally good as possible.
She writes, 'If the moral saint is devoting all his time to feeding the hungry or healing the sick or raising money for Oxfam, then necessarily he is not reading Victorian novels, playing the oboe or improving his backhand. A life in which none of these possible aspects of character are developed may seem to be a life strangely barren.' How do you square that with your urge to be morally ambitious?
We are living in a world where a huge amount of people have a career that they consider socially meaningless and then they spend the rest of their time swiping TikTok. That's the reality, right? I really don't think that there's a big danger of people reading my book and moving all the way in the other direction.
There's only one community I know of where this has become a problem. It's the effective altruism community. In a way, moral ambition could be seen as effective altruism for normies.
Let's talk about that. I'm not an effective altruist, but I am a journalist who has reported a lot on EA, so I'm curious where you stand on this. You talk about EA in the book and you echo a lot of its core ideas. Your call to prioritize causes that are sizable, solvable, and sorely overlooked is a rephrase of EA's call to prioritize the 'important, tractable, and neglected.' And then there's this idea that you shouldn't just be trying to do good, you should try to do the most good possible. So is being morally ambitious different from being an effective altruist?
So, I wouldn't say the most good. I would say, you should do a lot of good — which is different, right? That's not about being perfect, but just being ambitious.
Effective altruism is a movement that I admire quite a bit. I think there's a lot we can learn from them. And there are also quite a few things that I don't really like about them.
What I really like about them is their moral seriousness. I come from the political left, and if there's one thing that's often quite annoying about lefties it's that they preach a lot, but they do little. For example, I think it's pretty easy to make the case that donating to charity is one of the most effective things you can do. But very few of my progressive leftist friends donate anything. So I really like the moral seriousness of the EAs. Go to EA conferences and you will meet quite a few people who have donated kidneys to random strangers, which is pretty impressive.
The main thing I dislike is where the motivation comes from. One of the founding fathers of effective altruism was the philosopher Peter Singer, who has a thought experiment of the child drowning in the shallow pond…
That's the thought experiment where Singer says, if you see a kid drowning in a shallow pond, and you could save this kid without putting your own life in danger, but you will ruin your expensive clothes, should you do it? Yes, obviously. And by analogy, if we have money, we could easily save the lives of people in developing countries, so we should donate it instead of spending it on frivolous stuff.
Yes. I never really liked the thought experiment because it always felt like a form of moral blackmail to me. It's like, now I'm suddenly supposed to see drowning children everywhere. Like, this microphone is way too expensive, I could have donated that money to some charity in Malawi! It's a totally inhuman way of looking at life. It just doesn't resonate with me at all.
But there are quite a few people who instantly thought, 'Yes, that is true.' They said, 'Let's build a movement together.' And I do really like that. I see EAs as very weird, but pretty impressive.
Let's pick up on that weirdness. In your book, you straight up tell readers, 'Join a cult — or start your own. Regardless, you can't be afraid to come across as weird if you want to make a difference. Every milestone of civilization was first seen as the crazy idea of some subculture.' But how do you think about the downsides of being in a cult?
A cult is a group of thoughtful, committed citizens who want to change the world, and they have some shared beliefs that make them very weird to the rest of society. Sometimes that's exactly what's necessary. To give you one simple example, in a world that doesn't really seem to care about animals all that much, it's easy to become disillusioned. But when you join a safe space of ambitious do-gooders, you can suddenly get this feeling of, 'Hey, I'm not the only one! There are other people who deeply care about animals as well. And I can do much more than I'm currently doing.' So it can have a radicalizing effect.
Now, I totally acknowledge that there are signs of danger here. You can become too dogmatic, and you can be quite hostile to people who don't share all your beliefs. I just want to recognize that if you look at some of these great movements of history — the abolitionists, the suffragettes — they had cultish aspects. They were, in a way, a little bit like a cult.
Do you have any advice for people on how to avoid the downside — that you can become deaf to criticism from the outside?
Yes. Don't let it suck up your whole life. When I hear about all these EAs living in group houses, you know, they're probably taking things too far. I think it helps if you're a normie in other respects of your life. It gives you a certain groundedness and stability.
In general, it's super important to surround yourself with people who are critical of your work, who don't take you too seriously, who can laugh at you or see your foolishness and call it out — and still be a good friend.


Related Articles

AI doesn't have to reason to take your job
Vox · 3 days ago

The author is a senior writer at Future Perfect, Vox's effective altruism-inspired section on the world's biggest challenges. She explores wide-ranging topics like climate change, artificial intelligence, vaccine development, and factory farms, and also writes the Future Perfect newsletter.

[Photo: A humanoid robot shakes hands with a visitor at the Zhiyuan Robotics stand at the Shanghai New International Expo Centre in Shanghai, China, on June 18, 2025, during the first day of the Mobile World Conference. Ying Tang/NurPhoto via Getty Images]

In 2023, one popular perspective on AI went like this: Sure, it can generate lots of impressive text, but it can't truly reason — it's all shallow mimicry, just 'stochastic parrots' squawking.

At the time, it was easy to see where this perspective was coming from. Artificial intelligence had moments of being impressive and interesting, but it also consistently failed basic tasks. Tech CEOs said they could just keep making the models bigger and better, but tech CEOs say things like that all the time, including when, behind the scenes, everything is held together with glue, duct tape, and low-wage workers.

It's now 2025. I still hear this dismissive perspective a lot, particularly when I'm talking to academics in linguistics and philosophy. Many of the highest profile efforts to pop the AI bubble — like the recent Apple paper purporting to find that AIs can't truly reason — linger on the claim that the models are just bullshit generators that are not getting much better and won't get much better.

But I increasingly think that repeating those claims is doing our readers a disservice, and that the academic world is failing to step up and grapple with AI's most important implications. I know that's a bold claim. So let me back it up.

'The illusion of thinking's' illusion of relevance

The instant the Apple paper was posted online (it hasn't yet been peer reviewed), it took off. Videos explaining it racked up millions of views. People who may not generally read much about AI heard about the Apple paper. And while the paper itself acknowledged that AI performance on 'moderate difficulty' tasks was improving, many summaries of its takeaways focused on the headline claim of 'a fundamental scaling limitation in the thinking capabilities of current reasoning models.'

For much of the audience, the paper confirmed something they badly wanted to believe: that generative AI doesn't really work — and that's something that won't change any time soon.

The paper looks at the performance of modern, top-tier language models on 'reasoning tasks' — basically, complicated puzzles. Past a certain point, that performance becomes terrible, which the authors say demonstrates the models haven't developed true planning and problem-solving skills. 'These models fail to develop generalizable problem-solving capabilities for planning tasks, with performance collapsing to zero beyond a certain complexity threshold,' as the authors write.

That was the topline conclusion many people took from the paper and the wider discussion around it. But if you dig into the details, you'll see that this finding is not surprising, and it doesn't actually say that much about AI.
Much of the reason why the models fail at the given problem in the paper is not because they can't solve it, but because they can't express their answers in the specific format the authors chose to require. If you ask them to write a program that outputs the correct answer, they do so effortlessly. By contrast, if you ask them to provide the answer in text, line by line, they eventually reach their limits.

[A minimal illustrative sketch of such a program appears at the end of this piece.]

That seems like an interesting limitation to current AI models, but it doesn't have a lot to do with 'generalizable problem-solving capabilities' or 'planning tasks.'

Imagine someone arguing that humans can't 'really' do 'generalizable' multiplication because while we can calculate 2-digit multiplication problems with no problem, most of us will screw up somewhere along the way if we're trying to do 10-digit multiplication problems in our heads. The issue isn't that we 'aren't general reasoners.' It's that we're not evolved to juggle large numbers in our heads, largely because we never needed to do so.

If the reason we care about 'whether AIs reason' is fundamentally philosophical, then exploring at what point problems get too long for them to solve is relevant, as a philosophical argument. But I think that most people care about what AI can and cannot do for far more practical reasons.

AI is taking your job, whether it can 'truly reason' or not

I fully expect my job to be automated in the next few years. I don't want that to happen, obviously. But I can see the writing on the wall. I regularly ask the AIs to write this newsletter — just to see where the competition is at. It's not there yet, but it's getting better all the time.

Employers are doing that too. Entry-level hiring in professions like law, where entry-level tasks are AI-automatable, appears to be already contracting. The job market for recent college graduates looks ugly.

The optimistic case around what's happening goes something like this: 'Sure, AI will eliminate a lot of jobs, but it'll create even more new jobs.' That more positive transition might well happen — though I don't want to count on it — but it would still mean a lot of people abruptly finding all of their skills and training suddenly useless, and therefore needing to rapidly develop a completely new skill set.

It's this possibility, I think, that looms large for many people in industries like mine, which are already seeing AI replacements creep in. It's precisely because this prospect is so scary that declarations that AIs are just 'stochastic parrots' that can't really think are so appealing. We want to hear that our jobs are safe and the AIs are a nothingburger.

But in fact, you can't answer the question of whether AI will take your job with reference to a thought experiment, or with reference to how it performs when asked to write down all the steps of Tower of Hanoi puzzles. The way to answer the question of whether AI will take your job is to invite it to try. And, uh, here's what I got when I asked ChatGPT to write this section of this newsletter:

Is it 'truly reasoning'? Maybe not. But it doesn't need to be to render me potentially unemployable.

'Whether or not they are simulating thinking has no bearing on whether or not the machines are capable of rearranging the world for better or worse,' Cambridge professor of AI philosophy and governance Harry Law argued in a recent piece, and I think he's unambiguously right.
If Vox hands me a pink slip, I don't think I'll get anywhere if I argue that I shouldn't be replaced because o3, above, can't solve a sufficiently complicated Towers of Hanoi puzzle — which, guess what, I can't do either.

Critics are making themselves irrelevant when we need them most

In his piece, Law surveys the state of AI criticisms and finds it fairly grim. 'Lots of recent critical writing about AI…read like extremely wishful thinking about what exactly systems can and cannot do.'

This is my experience, too. Critics are often trapped in 2023, giving accounts of what AI can and cannot do that haven't been correct for two years.

'Many [academics] dislike AI, so they don't follow it closely,' Law argues. 'They don't follow it closely so they still think that the criticisms of 2023 hold water. They don't. And that's regrettable because academics have important contributions to make.'

But of course, for the employment effects of AI — and in the longer run, for the global catastrophic risk concerns they may present — what matters isn't whether AIs can be induced to make silly mistakes, but what they can do when set up for success.

I have my own list of 'easy' problems AIs still can't solve — they're pretty bad at chess puzzles — but I don't think that kind of work should be sold to the public as a glimpse of the 'real truth' about AI. And it definitely doesn't debunk the really quite scary future that experts increasingly believe we're headed toward.
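To make the Tower of Hanoi point above concrete: when a model is asked to 'write a program that outputs the correct answer,' the standard recursive solution takes only a few lines, whereas writing every move out in text requires 2^n - 1 lines for n disks, and it is that format that blows up. The following is a minimal illustrative sketch of my own, not code from the Apple paper or from any model under test:

# Illustrative sketch: the standard recursive Tower of Hanoi solution.
# Printing every move for n disks takes 2**n - 1 lines, which is why
# "list every move in text" blows up long before "write a program" does.

def hanoi(n, source="A", target="C", spare="B"):
    """Print the moves that transfer n disks from source to target."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target)   # move the n-1 smaller disks out of the way
    print(f"move disk {n}: {source} -> {target}")
    hanoi(n - 1, spare, target, source)   # move them back on top of the target

if __name__ == "__main__":
    hanoi(3)            # 7 moves
    print(2**20 - 1)    # a 20-disk puzzle already needs 1,048,575 moves

Since a 20-disk puzzle needs over a million moves, asking a model (or a person) to write them all out line by line says little about whether it understands the algorithm.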

He's the godfather of AI. Now, he has a bold new plan to keep us safe from it.
Vox · 4 days ago

Sigal Samuel is a senior reporter for Vox's Future Perfect and co-host of the Future Perfect podcast. She writes primarily about the future of consciousness, tracking advances in artificial intelligence and neuroscience and their staggering ethical implications. Before joining Vox, Sigal was the religion editor at The Atlantic.

The science fiction author Isaac Asimov once came up with a set of laws that we humans should program into our robots. In addition to a first, second, and third law, he also introduced a 'zeroth law,' which is so important that it precedes all the others: 'A robot may not injure a human being or, through inaction, allow a human being to come to harm.'

This month, the computer scientist Yoshua Bengio — known as the 'godfather of AI' because of his pioneering work in the field — launched a new organization called LawZero. As you can probably guess, its core mission is to make sure AI won't harm humanity.

Even though he helped lay the foundation for today's advanced AI, Bengio has grown increasingly worried about the technology over the past few years. In 2023, he signed an open letter urging AI companies to press pause on state-of-the-art AI development. Both because of AI's present harms (like bias against marginalized groups) and AI's future risks (like engineered bioweapons), there are very strong reasons to think that slowing down would have been a good thing.

But companies are companies. They did not slow down. In fact, they created autonomous AIs known as AI agents, which can view your computer screen, select buttons, and perform tasks — just like you can. Whereas ChatGPT needs to be prompted by a human every step of the way, an agent can accomplish multistep goals with very minimal prompting, similar to a personal assistant. Right now, those goals are simple — create a website, say — and the agents don't work that well yet. But Bengio worries that giving AIs agency is an inherently risky move: Eventually, they could escape human control and go 'rogue.'

So now, Bengio is pivoting to a backup plan. If he can't get companies to stop trying to build AI that matches human smarts (artificial general intelligence, or AGI) or even surpasses human smarts (artificial superintelligence, or ASI), then he wants to build something that will block those AIs from harming humanity. He calls it 'Scientist AI.'

Scientist AI won't be like an AI agent — it'll have no autonomy and no goals of its own. Instead, its main job will be to calculate the probability that some other AI's action would cause harm — and, if the action is too risky, block it. AI companies could overlay Scientist AI onto their models to stop them from doing something dangerous, akin to how we put guardrails along highways to stop cars from veering off course.

I talked to Bengio about why he's so disturbed by today's AI systems, whether he regrets doing the research that led to their creation, and whether he thinks throwing yet more AI at the problem will be enough to solve it. A transcript of our unusually candid conversation, edited for length and clarity, follows.

Sigal Samuel
When people express worry about AI, they often express it as a worry about artificial general intelligence or superintelligence. Do you think that's the wrong thing to be worrying about? Should we only worry about AGI or ASI insofar as it includes agency?

Yoshua Bengio
Yes. You could have a superintelligent AI that doesn't 'want' anything, and it's totally not dangerous because it doesn't have its own goals. It's just like a very smart encyclopedia.
Sigal Samuel
Researchers have been warning for years about the risks of AI systems, especially systems with their own goals and general intelligence. Can you explain what's making the situation increasingly scary to you now?

Yoshua Bengio
In the last six months, we've gotten evidence of AIs that are so misaligned that they would go against our moral instructions. They would plan and do these bad things — lying, cheating, trying to persuade us with deceptions, and — worst of all — trying to escape our control and not wanting to be shut down, and doing anything [to avoid shutdown], including blackmail. These are not an immediate danger because they're all controlled, but we don't know how to really deal with this.

Sigal Samuel
And these bad behaviors increase the more agency the AI system has?

Yoshua Bengio
Yes. The systems we had last year, before we got into reasoning models, were much less prone to this. It's just getting worse and worse. That makes sense because we see that their planning ability is improving exponentially. And [the AIs] need good planning to strategize about things like 'How am I going to convince these people to do what I want?' or 'How do I escape their control?' So if we don't fix these problems quickly, we may end up with, initially, funny accidents, and later, not-funny accidents.

That's motivating what we're trying to do at LawZero. We're trying to think about how we design AI more precisely, so that, by construction, it's not even going to have any incentive or reason to do such things. In fact, it's not going to want anything.

Sigal Samuel
Tell me about how Scientist AI could be used as a guardrail against the bad actions of an AI agent. I'm imagining Scientist AI as the babysitter of the agentic AI, double-checking what it's doing.

Yoshua Bengio
So, in order to do the job of a guardrail, you don't need to be an agent yourself. The only thing you need to do is make a good prediction. And the prediction is this: Is this action that my agent wants to do acceptable, morally speaking? Does it satisfy the safety specifications that humans have provided? Or is it going to harm somebody? And if the answer is yes, with some probability that's not very small, then the guardrail says: No, this is a bad action. And the agent has to [try a different] action.

[An illustrative sketch of this kind of guardrail check appears after the interview.]

Sigal Samuel
But even if we build Scientist AI, the domain of 'What is moral or immoral?' is famously contentious. There's just no consensus. So how would Scientist AI learn what to classify as a bad action?

Yoshua Bengio
It's not for any kind of AI to decide what is right or wrong. We should establish that using democracy. Law should be about trying to be clear about what is acceptable or not. Now, of course, there could be ambiguity in the law. Hence you can get a corporate lawyer who is able to find loopholes in the law. But there's a way around this: Scientist AI is planned so that it will see the ambiguity. It will see that there are different interpretations, say, of a particular rule. And then it can be conservative about the interpretation — as in, if any of the plausible interpretations would judge this action as really bad, then the action is rejected.

Sigal Samuel
I think a problem there would be that almost any moral choice arguably has ambiguity. We've got some of the most contentious moral issues — think about gun control or abortion in the US — where, even democratically, you might get a significant proportion of the population that says they're opposed. How do you propose to deal with that?
Yoshua Bengio
I don't. Except by having the strongest possible honesty and rationality in the answers, which, in my opinion, would already be a big gain compared to the sort of democratic discussions that are happening. One of the features of the Scientist AI, like a good human scientist, is that you can ask: Why are you saying this? And he would come up with — not 'he,' sorry! — it would come up with a justification. The AI would be involved in the dialogue to try to help us rationalize what are the pros and cons and so on. So I actually think that these sorts of machines could be turned into tools to help democratic debates. It's a little bit more than fact-checking — it's also like reasoning-checking.

Sigal Samuel
This idea of developing Scientist AI stems from your disillusionment with the AI we've been developing so far. And your research was very foundational in laying the groundwork for that kind of AI. On a personal level, do you feel some sense of inner conflict or regret about having done the research that laid that groundwork?

Yoshua Bengio
I should have thought of this 10 years ago. In fact, I could have, because I read some of the early works in AI safety. But I think there are very strong psychological defenses that I had, and that most of the AI researchers have. You want to feel good about your work, and you want to feel like you're the good guy, not doing something that could cause in the future lots of harm and death. So we kind of look the other way.

And for myself, I was thinking: This is so far into the future! Before we get to the science-fiction-sounding things, we're going to have AI that can help us with medicine and climate and education, and it's going to be great. So let's worry about these things when we get there. But that was before ChatGPT came. When ChatGPT came, I couldn't continue living with this internal lie, because, well, we are getting very close to human-level.

Sigal Samuel
The reason I ask this is because it struck me when reading your plan for Scientist AI that you say it's modeled after the platonic idea of a scientist — a selfless, ideal person who's just trying to understand the world. I thought: Are you in some way trying to build the ideal version of yourself, this 'he' that you mentioned, the ideal scientist? Is it like what you wish you could have been?

Yoshua Bengio
You should do psychotherapy instead of journalism! Yeah, you're pretty close to the mark. In a way, it's an ideal that I have been looking toward for myself. I think that's an ideal that scientists should be looking toward as a model. Because, for the most part in science, we need to step back from our emotions so that we avoid biases and preconceived ideas and ego.

Sigal Samuel
A couple of years ago you were one of the signatories of the letter urging AI companies to pause cutting-edge work. Obviously, the pause did not happen. For me, one of the takeaways from that moment was that we're at a point where this is not predominantly a technological problem. It's political. It's really about power and who gets the power to shape the incentive structure. We know the incentives in the AI industry are horribly misaligned. There's massive commercial pressure to build cutting-edge AI. To do that, you need a ton of compute so you need billions of dollars, so you're practically forced to get in bed with a Microsoft or an Amazon. How do you propose to avoid that fate?

Yoshua Bengio
That's why we're doing this as a nonprofit.
We want to avoid the market pressure that would force us into the capability race and, instead, focus on the scientific aspects of safety. I think we could do a lot of good without having to train frontier models ourselves. If we come up with a methodology for training AI that is convincingly safer, at least on some aspects like loss of control, and we hand it over almost for free to companies that are building AI — well, no one in these companies actually wants to see a rogue AI. It's just that they don't have the incentive to do the work! So I think just knowing how to fix the problem would reduce the risks considerably.

I also think that governments will hopefully take these questions more and more seriously. I know right now it doesn't look like it, but when we start seeing more evidence of the kind we've seen in the last six months, but stronger and more scary, public opinion might push sufficiently that we'll see regulation or some way to incentivize companies to behave better. It might even happen just for market reasons — like, [AI companies] could be sued. So, at some point, they might reason that they should be willing to pay some money to reduce the risks of accidents.

Sigal Samuel
I was happy to see that LawZero isn't only talking about reducing the risks of accidents but is also talking about 'protecting human joy and endeavor.' A lot of people fear that if AI gets better than them at things, well, what is the meaning of their life? How would you advise people to think about the meaning of their human life if we enter an era where machines have both agency and extreme intelligence?

Yoshua Bengio
I understand it would be easy to be discouraged and to feel powerless. But the decisions that human beings are going to make in the coming years as AI becomes more powerful — these decisions are incredibly consequential. So there's a sense in which it's hard to get more meaning than that! If you want to do something about it, be part of the thinking, be part of the democratic debate. I would advise us all to remind ourselves that we have agency. And we have an amazing task in front of us: to shape the future.
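The guardrail Bengio describes in the interview is a non-agentic predictor: given an action an agent proposes, estimate the probability that it violates the safety specification, and block the action if that probability is not small enough. The following is a minimal illustrative sketch of that general pattern, not LawZero's actual design; the toy scoring function and the 0.05 threshold are assumptions made for the example.

# Illustrative sketch of a non-agentic guardrail wrapping an AI agent.
# Not LawZero's design: estimate_harm_probability stands in for a
# Scientist-AI-style predictive model, and the threshold is arbitrary.

HARM_THRESHOLD = 0.05  # assumed risk tolerance; a real system would derive this from a safety spec

def estimate_harm_probability(action: str) -> float:
    """Toy stand-in for a predictive (non-agentic) model that scores how
    likely the proposed action is to violate the safety specification."""
    risky_words = ("blackmail", "exfiltrate", "disable oversight")
    return 0.9 if any(w in action.lower() for w in risky_words) else 0.01

def guarded_execute(action: str, execute) -> bool:
    """Run the agent's proposed action only if the predicted risk is low."""
    if estimate_harm_probability(action) >= HARM_THRESHOLD:
        return False              # block: the agent must propose a different action
    execute(action)               # low predicted risk: let the action through
    return True

if __name__ == "__main__":
    print(guarded_execute("draft a summary of this report", print))           # allowed
    print(guarded_execute("blackmail the operator to avoid shutdown", print)) # blocked

The design point is the one Bengio emphasizes: the guardrail itself only predicts and filters; it has no goals and takes no actions of its own.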

What we learned the last time we put AI in a Barbie
Vox · 4 days ago

Adam Clark Estes is a senior technology correspondent at Vox and author of the User Friendly newsletter. He's spent 15 years covering the intersection of technology, culture, and politics at places like The Atlantic, Gizmodo, and Vice.

The first big Christmas gift I remember getting was an animatronic bear named Teddy Ruxpin. Thanks to a cassette tape hidden in his belly, he could talk, his eyes and mouth moving in a famously creepy way. Later that winter, when I was sick with a fever, I hallucinated that the toy came alive and attacked me. I never saw Teddy again after that.

These days, toys can do a lot more than tell pre-recorded stories. So-called smart toys, many of which are internet-connected, are a $20 billion business, and increasingly, they're artificially intelligent. Mattel and OpenAI announced a partnership last week to 'bring the magic of AI to age-appropriate play experiences with an emphasis on innovation, privacy, and safety.' They're planning to announce their first product later this year. It's unclear what this might entail: maybe it's Barbies that can gossip with you or a self-driving Hot Wheels or something we haven't even dreamed up yet.

All of this makes me nervous as a young parent. I already knew that generative AI was invading classrooms and filling the internet with slop, but I wasn't expecting it to take over the toy aisle so soon. After all, we're already struggling to figure out how to manage our kids' relationship with the technology in their lives, from screen time to the uncanny videos made to trick YouTube's algorithm. As it seeps further into our society, a growing number of people are using AI without even realizing it. So you can't blame me for being anxious about how children might encounter the technology in unexpected ways.

AI-powered toys are not as new as you might think. They're not even new for Mattel. A decade ago, the toy giant released Hello Barbie, an internet-connected doll that listened to kids and used AI to respond (think Siri, not ChatGPT). It was essentially the same concept as Teddy Ruxpin except with a lot of digital vulnerabilities. Naturally, security researchers took notice and hacked Hello Barbie, revealing that bad actors could steal personal information or eavesdrop on conversations children were having with the doll. Mattel discontinued the doll in 2017. Hello Barbie later made an appearance in the Barbie movie alongside other poor toy choices like Sugar Daddy Ken and Pregnant Midge.

Despite this cautionary tale, companies keep trying to make talking AI toys a thing. One more recent example comes from the mind of Grimes, of all people. Inspired by the son she shares with Elon Musk, the musician teamed up with a company called Curio to create a stuffed rocket ship named Grok. The embodied chatbot is supposed to learn about whomever is playing with it and become a personalized companion. In real life, Grok is frustratingly dumb, according to Katie Arnold-Ratliff, a mom and writer who chronicled her son's experience with the toy in New York magazine last year.
'When it started remembering things about my kid, and speaking back to him, he was amazed,' Arnold-Ratliff told me this week. 'That awe very quickly dissipated once it was like, why are you talking about this completely unrelated thing.' Grok is still somewhere in their house, she said, but it has been turned off for quite some time.

It turns out Arnold-Ratliff's son is more interested in inanimate objects that he can make come alive with his imagination. Sure, he'll play Mario on his Nintendo Switch for long stretches of time, but afterward, he'll draw his own worlds on paper. He'll even create digital versions of new levels on Super Mario Maker but get frustrated when the software can't keep up with his imagination.

This is a miraculous paradox when it comes to kids and certain tech-powered toys. Although an adult might think that, for instance, AI could prompt kids to think about play in new ways or become an innovative new imaginary friend, kids tend to prefer imagining on their own terms. That's according to Naomi Aguiar, PhD, a researcher at Oregon State University who studies how children form relationships with AI chatbots.

'There's nothing wrong with children's imaginations. They work fine,' Aguiar said. 'What captures the hearts and minds of young children is often what they create for themselves with the inanimate artifacts.'

Aguiar did concede that AI can be a powerful educational tool for kids, especially for those who don't have access to resources or who may be on the spectrum. 'If we focus on solutions to specific problems and train the models to do that, it could open up a lot of opportunities,' she told me. Putting AI in a Barbie, however, is not solving a particular problem.

None of this means that I'm allergic to the concept of tech-centric toys for kids. Quite the opposite, in fact. Ahead of the Mattel-OpenAI announcement, I'd started researching toys my kid might like that incorporated some technology — enough to make them especially interesting and engaging — but stopped short of triggering dystopian nightmares. Much to my surprise, what I found was something of a mashup between completely inanimate objects and that terrifying Teddy Ruxpin.

One of these toys is called a Toniebox, a screen-free audio player with little figurines called Tonies that you put atop the box to unlock content — namely songs, stories, and so forth. Licenses abound, so you can buy a Tonie that corresponds with pretty much any popular kids character, like Disney princesses or Paddington Bear. There are also so-called Creative Tonies that allow you to upload your own audio. For instance, you could ostensibly have a stand-in for a grandparent to enable story time, even if Grandma and Grandpa are not physically there. The whole experience is mediated with an app that the kid never needs to see.

There's also the Yoto Player and the Yoto Mini, which are similar to the Toniebox but use cards instead of figurines and have a very low-resolution display that can show a clock or a pixelated character. Because it has that display, kids can also create custom icons to show up when they record their own content onto a card. Yoto has been beta-testing an AI-powered story generator, which is designed for parents to create custom stories for their kids.

If those audio players are geared toward story time, a company called Nex makes a video game console for playtime. It's called Nex Playground, and kids use their movements to control it.
This happens thanks to a camera equipped with machine-learning capabilities to recognize your movements and expressions. So imagine playing Wii Sports, but instead of throwing the Nintendo controller through your TV screen when you're trying to bowl, you make the bowling motion to play the game. Nex makes most of its games in-house, and all of the computation needed for its gameplay happens on the device itself. That means there's no data being collected or sent to the cloud. Once you download a game, you don't even have to be online to play it.

'We envision toys that can just grow in a way where they become a new way to interact with technology for kids and evolve into something that's much deeper, much more meaningful for families,' David Lee, CEO of Nex, said when I asked him about the future of toys.

It will be a few more years before I have to worry about my kid's interactions with a video game console, much less an AI-powered Barbie — and certainly not Teddy Ruxpin. But she loves her Toniebox. She talks to the figurines and lines them up alongside each other, like a little posse. I have no idea what she's imagining them saying back. In a way, that's the point.
