Latest news with #Sigal


Vox
3 days ago
He's the godfather of AI. Now, he has a bold new plan to keep us safe from it.
Sigal Samuel is a senior reporter for Vox's Future Perfect and co-host of the Future Perfect podcast. She writes primarily about the future of consciousness, tracking advances in artificial intelligence and neuroscience and their staggering ethical implications. Before joining Vox, Sigal was the religion editor at the Atlantic. The science fiction author Isaac Asimov once came up with a set of laws that we humans should program into our robots. In addition to a first, second, and third law, he also introduced a 'zeroth law,' which is so important that it precedes all the others: 'A robot may not injure a human being or, through inaction, allow a human being to come to harm.' This month, the computer scientist Yoshua Bengio — known as the 'godfather of AI' because of his pioneering work in the field — launched a new organization called LawZero. As you can probably guess, its core mission is to make sure AI won't harm humanity. Even though he helped lay the foundation for today's advanced AI, Bengio has grown increasingly worried about the technology over the past few years. In 2023, he signed an open letter urging AI companies to press pause on state-of-the-art AI development. Both because of AI's present harms (like bias against marginalized groups) and AI's future risks (like engineered bioweapons), there are very strong reasons to think that slowing down would have been a good thing. But companies are companies. They did not slow down. In fact, they created autonomous AIs known as AI agents, which can view your computer screen, select buttons, and perform tasks — just like you can. Whereas ChatGPT needs to be prompted by a human every step of the way, an agent can accomplish multistep goals with very minimal prompting, similar to a personal assistant. Right now, those goals are simple — create a website, say — and the agents don't work that well yet. But Bengio worries that giving AIs agency is an inherently risky move: Eventually, they could escape human control and go 'rogue.' So now, Bengio is pivoting to a backup plan. If he can't get companies to stop trying to build AI that matches human smarts (artificial general intelligence, or AGI) or even surpasses human smarts (artificial superintelligence, or ASI), then he wants to build something that will block those AIs from harming humanity. He calls it 'Scientist AI.' Scientist AI won't be like an AI agent — it'll have no autonomy and no goals of its own. Instead, its main job will be to calculate the probability that some other AI's action would cause harm — and, if the action is too risky, block it. AI companies could overlay Scientist AI onto their models to stop them from doing something dangerous, akin to how we put guardrails along highways to stop cars from veering off course. I talked to Bengio about why he's so disturbed by today's AI systems, whether he regrets doing the research that led to their creation, and whether he thinks throwing yet more AI at the problem will be enough to solve it. A transcript of our unusually candid conversation, edited for length and clarity, follows. Sigal Samuel When people express worry about AI, they often express it as a worry about artificial general intelligence or superintelligence. Do you think that's the wrong thing to be worrying about? Should we only worry about AGI or ASI insofar as it includes agency? Yoshua Bengio Yes. You could have a superintelligent AI that doesn't 'want' anything, and it's totally not dangerous because it doesn't have its own goals. It's just like a very smart encyclopedia.
Sigal Samuel Researchers have been warning for years about the risks of AI systems, especially systems with their own goals and general intelligence. Can you explain what's making the situation increasingly scary to you now? Yoshua Bengio In the last six months, we've gotten evidence of AIs that are so misaligned that they would go against our moral instructions. They would plan and do these bad things — lying, cheating, trying to persuade us with deceptions, and — worst of all — trying to escape our control and not wanting to be shut down, and doing anything [to avoid shutdown], including blackmail. These are not an immediate danger because they're all controlled, but we don't know how to really deal with this. Sigal Samuel And these bad behaviors increase the more agency the AI system has? Yoshua Bengio Yes. The systems we had last year, before we got into reasoning models, were much less prone to this. It's just getting worse and worse. That makes sense because we see that their planning ability is improving exponentially. And [the AIs] need good planning to strategize about things like 'How am I going to convince these people to do what I want?' or 'How do I escape their control?' So if we don't fix these problems quickly, we may end up with, initially, funny accidents, and later, not-funny accidents. That's motivating what we're trying to do at LawZero. We're trying to think about how we design AI more precisely, so that, by construction, it's not even going to have any incentive or reason to do such things. In fact, it's not going to want anything. Sigal Samuel Tell me about how Scientist AI could be used as a guardrail against the bad actions of an AI agent. I'm imagining Scientist AI as the babysitter of the agentic AI, double-checking what it's doing. Yoshua Bengio So, in order to do the job of a guardrail, you don't need to be an agent yourself. The only thing you need to do is make a good prediction. And the prediction is this: Is this action that my agent wants to do acceptable, morally speaking? Does it satisfy the safety specifications that humans have provided? Or is it going to harm somebody? And if the answer is yes, with some probability that's not very small, then the guardrail says: No, this is a bad action. And the agent has to [try a different] action. Sigal Samuel But even if we build Scientist AI, the domain of 'What is moral or immoral?' is famously contentious. There's just no consensus. So how would Scientist AI learn what to classify as a bad action? Yoshua Bengio It's not for any kind of AI to decide what is right or wrong. We should establish that using democracy. Law should be about trying to be clear about what is acceptable or not. Now, of course, there could be ambiguity in the law. Hence you can get a corporate lawyer who is able to find loopholes in the law. But there's a way around this: Scientist AI is designed so that it will see the ambiguity. It will see that there are different interpretations, say, of a particular rule. And then it can be conservative about the interpretation — as in, if any of the plausible interpretations would judge this action as really bad, then the action is rejected. Sigal Samuel I think a problem there would be that almost any moral choice arguably has ambiguity. We've got some of the most contentious moral issues — think about gun control or abortion in the US — where, even democratically, you might get a significant proportion of the population that says they're opposed. How do you propose to deal with that?
Yoshua Bengio I don't. Except by having the strongest possible honesty and rationality in the answers, which, in my opinion, would already be a big gain compared to the sort of democratic discussions that are happening. One of the features of the Scientist AI, like a good human scientist, is that you can ask: Why are you saying this? And he would come up with — not 'he,' sorry! — it would come up with a justification. The AI would be involved in the dialogue to try to help us rationalize what are the pros and cons and so on. So I actually think that these sorts of machines could be turned into tools to help democratic debates. It's a little bit more than fact-checking — it's also like reasoning-checking. Sigal Samuel This idea of developing Scientist AI stems from your disillusionment with the AI we've been developing so far. And your research was very foundational in laying the groundwork for that kind of AI. On a personal level, do you feel some sense of inner conflict or regret about having done the research that laid that groundwork? Yoshua Bengio I should have thought of this 10 years ago. In fact, I could have, because I read some of the early works in AI safety. But I think there are very strong psychological defenses that I had, and that most of the AI researchers have. You want to feel good about your work, and you want to feel like you're the good guy, not doing something that could cause in the future lots of harm and death. So we kind of look the other way. And for myself, I was thinking: This is so far into the future! Before we get to the science-fiction-sounding things, we're going to have AI that can help us with medicine and climate and education, and it's going to be great. So let's worry about these things when we get there. But that was before ChatGPT came. When ChatGPT came, I couldn't continue living with this internal lie, because, well, we are getting very close to human-level. Sigal Samuel The reason I ask this is because it struck me when reading your plan for Scientist AI that you say it's modeled after the platonic idea of a scientist — a selfless, ideal person who's just trying to understand the world. I thought: Are you in some way trying to build the ideal version of yourself, this 'he' that you mentioned, the ideal scientist? Is it like what you wish you could have been? Yoshua Bengio You should do psychotherapy instead of journalism! Yeah, you're pretty close to the mark. In a way, it's an ideal that I have been looking toward for myself. I think that's an ideal that scientists should be looking toward as a model. Because, for the most part in science, we need to step back from our emotions so that we avoid biases and preconceived ideas and ego. Sigal Samuel A couple of years ago you were one of the signatories of the letter urging AI companies to pause cutting-edge work. Obviously, the pause did not happen. For me, one of the takeaways from that moment was that we're at a point where this is not predominantly a technological problem. It's political. It's really about power and who gets the power to shape the incentive structure. We know the incentives in the AI industry are horribly misaligned. There's massive commercial pressure to build cutting-edge AI. To do that, you need a ton of compute so you need billions of dollars, so you're practically forced to get in bed with a Microsoft or an Amazon. How do you propose to avoid that fate? Yoshua Bengio That's why we're doing this as a nonprofit. 
We want to avoid the market pressure that would force us into the capability race and, instead, focus on the scientific aspects of safety. I think we could do a lot of good without having to train frontier models ourselves. If we come up with a methodology for training AI that is convincingly safer, at least on some aspects like loss of control, and we hand it over almost for free to companies that are building AI — well, no one in these companies actually wants to see a rogue AI. It's just that they don't have the incentive to do the work! So I think just knowing how to fix the problem would reduce the risks considerably. I also think that governments will hopefully take these questions more and more seriously. I know right now it doesn't look like it, but when we start seeing more evidence of the kind we've seen in the last six months, but stronger and more scary, public opinion might push sufficiently that we'll see regulation or some way to incentivize companies to behave better. It might even happen just for market reasons — like, [AI companies] could be sued. So, at some point, they might reason that they should be willing to pay some money to reduce the risks of accidents. Sigal Samuel I was happy to see that LawZero isn't only talking about reducing the risks of accidents but is also talking about 'protecting human joy and endeavor.' A lot of people fear that if AI gets better than them at things, well, what is the meaning of their life? How would you advise people to think about the meaning of their human life if we enter an era where machines have both agency and extreme intelligence? Yoshua Bengio I understand it would be easy to be discouraged and to feel powerless. But the decisions that human beings are going to make in the coming years as AI becomes more powerful — these decisions are incredibly consequential. So there's a sense in which it's hard to get more meaning than that! If you want to do something about it, be part of the thinking, be part of the democratic debate. I would advise us all to remind ourselves that we have agency. And we have an amazing task in front of us: to shape the future.


Vox
08-06-2025
First comes marriage. Then comes a flirtatious colleague.
Your Mileage May Vary is an advice column offering you a unique framework for thinking through your moral dilemmas. Here's this week's question from a reader, condensed and edited for clarity: My husband and I have a good relationship. We're both committed to personal growth and continual learning and have developed very strong communication skills. A couple of years ago we were exposed to some friends with an open marriage and had our own conversations about ethical non-monogamy. At first, neither of us were interested. Now, my husband is interested and currently is attracted to a colleague who is also into him. She's married and has no idea that he and I talk about all of their interactions. He doesn't know what her relationship agreements are with her husband. I'm not currently interested in ethical non-monogamy. I see things in our relationship that I'd like to work on together with my husband. I want more of his attention and energy, to be frank. I don't want his attention and energy being funneled into another relationship. I don't have moral issues with ethical non-monogamy, I just don't actually see any value-add for me right now. The cost-benefit analysis leaves me saying 'not now.' My husband admitted that he's hoping I will have a change of mind. I don't want to force his hand, although I am continuing to say very clearly what I want in my relationship. How do we reach a compromise? If he cuts ties with this woman, he has resentment towards me. If he continues to pursue something with her, I feel disrespected, and while I don't want to leave him I would feel the need to do something. Dear Monogamously Married, I want to start by commending you for two things. First, for your openness to discussing and exploring all this with your husband. Second, for your insistence on clearly stating what you actually want — and don't want. I think Erich Fromm, the 20th-century German philosopher and psychologist, would back me up in saying that you'd do well to hold tight to both those qualities. For starters, radical openness is important because, according to Fromm, the basic premise of love is freedom. He writes: Love is a passionate affirmation of its 'object.' That means that love is not an 'affect' but an active striving, the aim of which is the happiness, development, and freedom of its 'object.' In other words, love is not a feeling. It's work, and the work of love is to fully support the flourishing of the person you love. That can be scary — what if the person discovers that they're actually happier with somebody else? — which is why Fromm specifies that only someone with a strong self 'which can stand alone and bear solitude' will be up for the job. He continues: This passionate affirmation is not possible if one's own self is crippled, since genuine affirmation is always rooted in strength. The person whose self is thwarted can only love in an ambivalent way; that is, with the strong part of his self he can love, with the crippled part he must hate.
So far, it might sound like Fromm is saying that to be a good lover is to be a doormat: you just have to do whatever's best for the other person, even if it screws you over. But his view is very much the opposite. In fact, Fromm cautions us against both 'masochistic love' and 'sadistic love.' In the first, you give up your self and sacrifice your needs in order to become submerged in another person. In the second, you try to exert power over the other person. Both of these are rooted in 'a deep anxiety and an inability to stand alone,' writes Fromm; whether by dissolving yourself into them or by controlling them, you're trying to make it impossible for the other person to abandon you. Both approaches are 'pseudo-love.' So although Fromm doesn't want you to try to control your partner, and although he suggests that the philosophical ideal is for you to passionately affirm your partner's freedom, he's not advising you to do that if, for you, that will mean masochism. If you're not up for ethical non-monogamy — if you feel, like many people, that the idea of giving your partner free rein is too big a threat to your relationship or your own well-being — then pretending otherwise is not real love. It's just masochistic self-annihilation. I'm personally partial to Fromm's non-possessive approach to love. But I equally appreciate his point that the philosophical ideal could become a practical bloodbath if it doesn't work for the actual humans involved. I think the question, then, is this: Do you think it's possible for you to get to a place where you genuinely feel ready for and interested in ethical non-monogamy? It sounds like you're intellectually open to the idea, and given that you said you're committed to personal growth and continual learning, non-monogamy could offer you some benefits; lots of people who practice it say that part of its appeal lies in the growth it catalyzes. And if practicing non-monogamy makes you and/or your husband more fulfilled, it could enrich your relationship and deepen your appreciation for each other. But right now, you've got a problem: Your husband is pushing on your boundaries by flirting with a woman even after you've expressed that you don't want him pursuing something with her. And you already feel like he isn't giving you enough attention and energy, so the prospect of having to divvy up those resources with another woman feels threatening. Fair! Notice, though, that that isn't a worry about non-monogamy per se — it's a worry about the state of your current monogamous relationship. In a marriage, what partners typically want is to feel emotionally secure. But that comes from how consistently and lovingly we show up for and attune to one another, not from the relationship structure. A monogamous marriage may give us some feeling of security, but it's obviously no guarantee; some people cheat, some get divorced, and some stay loyally married while neglecting their partner emotionally. 'Monogamy can serve as a stand-in for actual secure attachment,' writes therapist Jessica Fern in Polysecure, a book on how to build healthy non-monogamous relationships.
She urges readers to take an honest look at any relationship insecurities or dissatisfactions that are being disguised by monogamy, and work with partners to strengthen the emotional experience of the relationship. Since you feel that your husband isn't giving you enough attention and energy, be sure to talk to him about it. Explain that it doesn't feel safe for you to open up the relationship without him doing more to be fully present with you and to make you feel understood and precious. See if he starts implementing these skills more reliably. In the meantime, while you two are trying to reset your relationship, it's absolutely reasonable to ask him to cool it with the colleague he's attracted to; he doesn't have to cut ties with her entirely (and may not be able to if they work together), but he can certainly avoid feeding the flames with flirtation. Right now, the fantasy of her is a distraction from the work he needs to be doing to improve the reality of your marriage. He should understand why a healthy practice of ethical non-monogamy can't emerge from a situation where he's pushing things too far with someone else before you've agreed to change the terms of your relationship (and if he doesn't, have him read Polysecure!). It's probably a good idea for you to each do your own inner work, too. Fern, like Fromm, insists that if we want to be capable of a secure attachment with someone else, we need to cultivate that within ourselves. That means being aware of our feelings, desires, and needs, and knowing how to tend to them. Understanding your attachment style can help with this; for example, if you're anxiously attached and you very often reach out to your partner for reassurance, you can practice spending time alone. After taking some time to work on these interpersonal and intrapersonal skills, come back together to discuss how you're feeling. Do you feel more receptive to opening up the relationship? Do you think it would add more than it would subtract? If the answer is 'yes' or 'maybe,' you can create a temporary relationship structure — or 'vessel,' as Fern calls it — to help you ease into non-monogamy. One option is to adopt a staggered approach to dating, where one partner (typically the more hesitant one) starts dating new people first, and the other partner starts after a predetermined amount of time. Another option is to try a months-long experiment where both partners initially engage in certain romantic or sexual experiences that are less triggering to each other, then assess what worked and what didn't, and go from there. If the answer is 'no' — if you're not receptive to opening up your relationship — then by all means say that! Given you'll have sincerely done the work to explore whether non-monogamy works for you, your husband doesn't get to resent you. He can be sad, he can be disappointed, and he can choose to leave if the outcome is intolerable to him. But he'll have to respect you, and what's more important, you'll have to respect yourself. Bonus: What I'm reading. This week's question prompted me to go back to the famous psychologist Abraham Maslow, who was influenced by Fromm. Maslow spoke of two kinds of love: Deficit-Love and Being-Love. The former is about trying to satiate your own needs, while the latter is about giving without expecting something in return. Maslow characterizes Being-Love as an almost spiritual experience, likening it to 'the perfect love of their God that some mystics have described.'
In addition to Polysecure, which has become something of a poly bible in the past few years, I recommend reading What Love Is — and What It Could Be, written by the philosopher Carrie Jenkins. I appreciated Jenkins's functionalist take on romantic love: She explains that we've constructed the idea of romantic love a certain way in order to serve a certain function (structuring society into nuclear family units), but we can absolutely revise it if we want.


Vox
02-06-2025
My students think it's fine to cheat with AI. Maybe they're onto something.
Your Mileage May Vary is an advice column offering you a unique framework for thinking through your moral dilemmas. Here's this week's question from a reader, condensed and edited for clarity: I am a university teaching assistant, leading discussion sections for large humanities lecture classes. This also means I grade a lot of student writing — and, inevitably, see a lot of AI writing too. Of course, many of us are working on developing assignments and pedagogies to make that less tempting. But as a TA, I only have limited ability to implement these policies. And in the meantime, AI-generated writing is so ubiquitous that to take course policy on it seriously, or even to escalate every suspected instance to the professor who runs the course, would be to make dozens of accusations, some of them false positives, for basically every assignment. I believe in the numinous, ineffable value of a humanities education, but I'm also not going to convince stressed 19-year-olds of that value by cracking down hard on something everyone does. How do I think about the ethics of enforcing the rules of an institution that they don't take seriously, or letting things slide in the name of building a classroom that feels less like an obstacle to circumvent? Dear Troubled Teacher, I know you said you believe in the 'ineffable value of a humanities education,' but if we want to actually get clear on your dilemma, that ineffable value must be effed! So: What is the real value of a humanities education? Looking at the modern university, one might think the humanities aren't so different from the STEM fields. Just as the engineering department or the math department justifies its existence by pointing to the products it creates — bridge designs, weather forecasts — humanities departments nowadays justify their existence by noting that their students create products, too: literary interpretations, cultural criticism, short films. But let's be real: It's the neoliberalization of the university that has forced the humanities into that weird contortion. That's never what they were supposed to be. Their real aim, as the philosopher Megan Fritts writes, is 'the formation of human persons.' In other words, while the purpose of other departments is ultimately to create a product, a humanities education is meant to be different, because the student herself is the product. She is what's getting created and recreated by the learning process. This vision of education — as a pursuit that's supposed to be personally transformative — is what Aristotle proposed back in Ancient Greece. He believed the real goal was not to impart knowledge, but to cultivate the virtues: honesty, justice, courage, and all the other character traits that make for a flourishing life.
But because flourishing is devalued in our hypercapitalist society, you find yourself caught between that original vision and today's product-based, utilitarian vision. And students sense — rightly! — that generative AI proves the utilitarian vision for the humanities is a sham. As one student said to his professor at New York University, in an effort to justify using AI to do his work for him, 'You're asking me to go from point A to point B, why wouldn't I use a car to get there?' It's a completely logical argument — as long as you accept the utilitarian vision. The real solution, then, is to be honest about what the humanities are for: You're in the business of helping students with the cultivation of their character. I know, I know: Lots of students will say, 'I don't have time to work on cultivating my character! I just need to be able to get a job!' It's totally fair for them to be focusing on their job prospects. But your job is to focus on something else — something that will help them flourish in the long run, even if they don't fully see the value in it now. Your job is to be their Aristotle. For the Ancient Greek philosopher, the mother of all virtues was phronesis, or practical wisdom. And I'd argue there's nothing more useful you can do for your students than help them cultivate this virtue, which is made more, not less, relevant by the advent of AI. Practical wisdom goes beyond just knowing general rules — 'don't lie,' for example — and applying them mechanically like some sort of moral robot. It's about knowing how to make good judgments when faced with the complex, dynamic situations life throws at you. Sometimes that'll actually mean violating a classic rule (in certain cases, you should lie!). If you've honed your practical wisdom, you'll be able to discern the morally salient features of a particular situation and come up with a response that's well-attuned to that context. This is exactly the sort of deliberation that students will need to be good at as they step into the wider world. The breakneck pace of technological innovation means they're going to have to choose, again and again and again, how to make use of emerging technologies — and how not to. The best training they can get now is training in how to wisely make this type of choice. Unfortunately, that's exactly what using generative AI in the classroom threatens to short-circuit, because it removes something incredibly valuable: friction. AI is removing cognitive friction from education. We need to add it back in. Encountering friction is how we give our cognitive muscles a workout. Taking it out of the picture makes things easier in the short term, but in the long term, it can lead to intellectual deskilling, where our cognitive muscles gradually become weaker for lack of use. 'Practical wisdom is built up by practice just like all the other virtues, so if you don't have the opportunity to reason and don't have practice in deliberating about certain things, you won't be able to deliberate well later,' philosopher of technology Shannon Vallor told me last year. 'We need a lot of cognitive exercise in order to develop practical wisdom and retain it. And there is reason to worry about cognitive automation depriving us of the opportunity to build and retain those cognitive muscles.' So, how do you help your students retain and build their phronesis? You add friction back in, by giving them as many opportunities as possible to practice deliberating and choosing. 
If I were designing the curriculum, I wouldn't do that by adopting a strict 'no AI' policy. Instead, I'd be honest with students about the real benefit of the humanities and about why mindless AI cheating would be cheating themselves out of that benefit. Then, I'd offer them two choices when it comes time to write an essay: They can either write it with help from AI, or without. Both are totally fine. But if they do get help from AI, they have to also write an in-class reflection piece, explaining why they chose to use a chatbot and how they think it changed their thinking and learning process. I'd make it shorter than the original assignment but longer than a paragraph, so it forces them to develop the very reasoning skills they were trying to avoid using. As a TA, you could suggest this to professors, but they may not go for it. Unfortunately, you've got limited agency here (unless you're willing to risk your job or walk away from it). All you can do in such a situation is exercise the agency you do have. So use every bit of it. Since you lead discussion sections, you're well-placed to prompt your students to work their cognitive muscles in conversation. You could even stage a debate about AI: Assign half of them to argue the case for using chatbots to write papers and half of them to argue the opposite. If a professor insists on a strict 'no AI' policy, and you encounter essays that seem clearly AI-written, you may have little choice but to report them. But if there's room for doubt about a given essay, you might err on the side of leniency if the student has engaged very thoughtfully in the discussion. At least then you know they've achieved the most important aim. None of this is easy. I feel for you and all other educators who are struggling in this confusing environment. In fact, I wouldn't be surprised if some educators are suffering from moral injury, a psychological condition that arises when you feel you've been forced to violate your own values. But maybe it can comfort you to remember that this is much bigger than you. Generative AI is an existential threat to a humanities education as currently constituted. Over the next few years, humanities departments will have to paradigm-shift or perish. If they want to survive, they'll need to get brutally honest about their true mission. For now, from your pre-paradigm-shift perch, all you can do is make the choices that are left for you to make. Bonus: What I'm reading. This week I went back to Shannon Vallor's first book, Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. If there's one book I could get everyone in the AI world to read, it would be this one. And I think it can be useful to everyone else, too, because we all need to cultivate what Vallor calls the 'technomoral virtues' — the traits that will allow us to adapt well to emerging technologies. A New Yorker piece in April about AI and cognitive atrophy led me to a 2024 psychology paper titled 'The Unpleasantness of Thinking: A Meta-Analytic Review of the Association Between Mental Effort and Negative Affect.' The authors' conclusion: 'We suggest that mental effort is inherently aversive.' Come again? Yes, sometimes I just want to turn off my brain and watch Netflix, but sometimes thinking about a challenging topic is so pleasurable! To me, it feels like running or weight lifting: Too much is exhausting, but the right amount is exhilarating. And what feels like 'the right amount' can go up or down depending on how much I practice.
Astrobiologist Sara Imari Walker recently published an essay in Noema provocatively titled 'AI Is Life.' She reminds us that evolution produced us and we produced AI. 'It is therefore part of the same ancient lineage of information that emerged with the origin of life,' she writes. 'Technology is not artificially replacing life — it is life.' To be clear, she's not arguing that tech is alive; she's saying it's an outgrowth of human life, an extension of our own species.


Vox
13-05-2025
How to find a meaningful job: try 'moral ambition,' says Rutger Bregman
We're told from a young age to achieve. Get good grades. Get into a good school. Get a good job. Be ambitious about earning a high salary or a high-status position. But many of us eventually find ourselves asking: What's the point of all this ambition? The fat salary or the fancy title…are those really meaningful measures of success? There's another possibility: Instead of measuring our success in terms of fame or fortune, we could measure it in terms of how much good we do for others. And we could get super ambitious about using our lives to do a gargantuan amount of good. That's the message of Moral Ambition, a new book by historian and author Rutger Bregman. He wants us to stop wasting our talents on meaningless work and start devoting ourselves to solving the world's biggest problems, like malaria and pandemics and climate change. I recently got the chance to talk to Bregman on The Gray Area, Vox's philosophically-minded podcast. I invited him on the show because I find his message inspiring — and, to be honest, because I also had some questions about it. I want to dedicate myself to work that feels meaningful, but I'm not sure work that helps the greatest number of people is the only way to do that. Moral optimization — the effort to mathematically quantify moral goodness so that we can then maximize it — is, in my experience, agonizing and ultimately counterproductive. I also noticed that Bregman's 'moral ambition' has a lot in common with effective altruism (EA), the movement that's all about using reason and evidence to do the most good possible. After the downfall of Sam Bankman-Fried, the EA crypto billionaire who was convicted of fraud in 2023, EA suffered a major reputational blow. I wondered: Is Bregman just trying to rescue the EA baby from the bathwater? (Disclosure: In 2022, Future Perfect was awarded a one-time $200,000 grant from Building a Stronger Future, a family foundation run by Sam and Gabe Bankman-Fried. Future Perfect has returned the balance of the grant and is no longer pursuing this project.) So in our conversation, I talked to Bregman about all the different things that can make our lives feel meaningful, and asked: Are some objectively better than others? And how is moral ambition different from ideas that came before it, like effective altruism? This interview has been edited for length and clarity. There's much more in the full podcast, so listen and follow The Gray Area on Apple Podcasts, Spotify, Pandora, or wherever you find podcasts. Why should people be morally ambitious? My whole career, I've been fascinated with the waste of talent that's going on in modern economies. There's this one study from two Dutch economists and they estimate that around 25 percent of all workers think that their own job is socially meaningless, or at least doubt the value of their job. That is just insane to me. I mean, this is five times the unemployment rate. And we're talking about people who often have excellent resumes, who went to very nice universities. Harvard is an interesting case in point: 45 percent of Harvard graduates end up in consultancy or finance.
I'm not saying all of that is totally socially useless, but I do wonder whether that is the best allocation of talent. [Note: In 2020, 45 percent of Harvard graduating seniors entering the workforce went into consulting and finance. Among the class of 2024, the number was 34 percent.] We face some pretty big problems out there, whether it's the threat of the next pandemic that may be just around the corner, terrible diseases like malaria and tuberculosis killing millions of people, the problem with democracy breaking down. I mean, the list goes on and on. And so I've always been frustrated by this enormous waste of talent. If we're going to have a career anyway, we might as well do a lot of good with it. What role does personal passion play in this? You write in the book, 'Don't start out by asking, what's my passion? Ask instead, how can I contribute most? And then choose the role that suits you best. Don't forget, your talents are but a means to an end.' I think 'follow your passion' is probably the worst career advice out there. At the School for Moral Ambition, an organization I co-founded, we deeply believe in the Gandalf-Frodo model of changing the world. Frodo didn't follow his passion. Gandalf never asked him, 'What's your passion, Frodo?' He said, 'Look, this really needs to be done, you've got to throw the ring into the mountain.' If Frodo would have followed his passion, he would have probably been a gardener having a life full of second breakfasts and being pretty comfortable in the Shire. And then the orcs would have turned up and murdered everyone he ever loved. So the point here is, find yourself some wise old wizard, a Gandalf. Figure out what some of the most pressing issues that we face as a species are. And ask yourself, how can I make a difference? And then you will find out that you can become very passionate about it. In your book, there's a Venn diagram with three circles. The first is labeled 'sizable.' The second is 'solvable.' And the third is 'sorely overlooked.' And in the middle, where they all overlap, it says 'moral ambition.' I wonder about the 'sizable' part of that. Does moral ambition always have to be about scale? I'm a journalist now, but before that I was a novelist. And I didn't care how many people my work impacted. My feeling was: If my novel deeply moves just one reader and helps them feel less alone or more understood, I will be happy. Are you telling me I shouldn't be happy with that? I think there is absolutely a place for, as the French say, art pour l'art — art for the sake of art itself. I don't want to let everything succumb to a utilitarian calculus. But I do think it's better to help a lot of people than just a few people. On the margins, I think in the world today, we need much more moral ambition than we currently have. When I was reading your book, I kept thinking of the philosopher Susan Wolf, who has this great essay called 'Moral Saints.' She argues that you shouldn't try to be a moral saint — someone who tries to make all their actions as morally good as possible. She writes, 'If the moral saint is devoting all his time to feeding the hungry or healing the sick or raising money for Oxfam, then necessarily he is not reading Victorian novels, playing the oboe or improving his backhand. A life in which none of these possible aspects of character are developed may seem to be a life strangely barren.' How do you square that with your urge to be morally ambitious? 
We are living in a world where a huge amount of people have a career that they consider socially meaningless and then they spend the rest of their time swiping TikTok. That's the reality, right? I really don't think that there's a big danger of people reading my book and moving all the way in the other direction. There's only one community I know of where this has become a problem. It's the effective altruism community. In a way, moral ambition could be seen as effective altruism for normies. Let's talk about that. I'm not an effective altruist, but I am a journalist who has reported a lot on EA, so I'm curious where you stand on this. You talk about EA in the book and you echo a lot of its core ideas. Your call to prioritize causes that are sizable, solvable, and sorely overlooked is a rephrase of EA's call to prioritize the 'important, tractable, and neglected.' And then there's this idea that you shouldn't just be trying to do good, you should try to do the most good possible. So is being morally ambitious different from being an effective altruist? So, I wouldn't say the most good. I would say, you should do a lot of good — which is different, right? That's not about being perfect, but just being ambitious. Effective altruism is a movement that I admire quite a bit. I think there's a lot we can learn from them. And there are also quite a few things that I don't really like about them. What I really like about them is their moral seriousness. I come from the political left, and if there's one thing that's often quite annoying about lefties it's that they preach a lot, but they do little. For example, I think it's pretty easy to make the case that donating to charity is one of the most effective things you can do. But very few of my progressive leftist friends donate anything. So I really like the moral seriousness of the EAs. Go to EA conferences and you will meet quite a few people who have donated kidneys to random strangers, which is pretty impressive. The main thing I dislike is where the motivation comes from. One of the founding fathers of effective altruism was the philosopher Peter Singer, who has a thought experiment of the child drowning in the shallow pond… That's the thought experiment where Singer says, if you see a kid drowning in a shallow pond, and you could save this kid without putting your own life in danger, but you will ruin your expensive clothes, should you do it? Yes, obviously. And by analogy, if we have money, we could easily save the lives of people in developing countries, so we should donate it instead of spending it on frivolous stuff. Yes. I never really liked the thought experiment because it always felt like a form of moral blackmail to me. It's like, now I'm suddenly supposed to see drowning children everywhere. Like, this microphone is way too expensive, I could have donated that money to some charity in Malawi! It's a totally inhuman way of looking at life. It just doesn't resonate with me at all. But there are quite a few people who instantly thought, 'Yes, that is true.' They said, 'Let's build a movement together.' And I do really like that. I see EAs as very weird, but pretty impressive. Let's pick up on that weirdness. In your book, you straight up tell readers, 'Join a cult — or start your own. Regardless, you can't be afraid to come across as weird if you want to make a difference. Every milestone of civilization was first seen as the crazy idea of some subculture.' But how do you think about the downsides of being in a cult? 
A cult is a group of thoughtful, committed citizens who want to change the world, and they have some shared beliefs that make them very weird to the rest of society. Sometimes that's exactly what's necessary. To give you one simple example, in a world that doesn't really seem to care about animals all that much, it's easy to become disillusioned. But when you join a safe space of ambitious do-gooders, you can suddenly get this feeling of, 'Hey, I'm not the only one! There are other people who deeply care about animals as well. And I can do much more than I'm currently doing.' So it can have a radicalizing effect. Now, I totally acknowledge that there are signs of dangers here. You can become too dogmatic, and you can be quite hostile to people who don't share all your beliefs. I just want to recognize that if you look at some of these great movements of history — the abolitionists, the suffragettes — they had cultish aspects. They were, in a way, a little bit like a cult. Do you have any advice for people on how to avoid the downside — that you can become deaf to criticism from the outside? Yes. Don't let it suck up your whole life. When I hear about all these EAs living in group houses, you know, they're probably taking things too far. I think it helps if you're a normie in other respects of your life. It gives you a certain groundedness and stability. In general, it's super important to surround yourself with people who are critical of your work, who don't take you too seriously, who can laugh at you or see your foolishness and call it out — and still be a good friend.

