
Latest from Vox

AI doesn't have to reason to take your job

Vox · 9 hours ago

The author is a senior writer at Future Perfect, Vox's effective altruism-inspired section on the world's biggest challenges. She explores wide-ranging topics like climate change, artificial intelligence, vaccine development, and factory farms, and also writes the Future Perfect newsletter.

Photo: A humanoid robot shakes hands with a visitor at the Zhiyuan Robotics stand at the Shanghai New International Expo Centre in Shanghai, China, on June 18, 2025, during the first day of the Mobile World Conference. Ying Tang/NurPhoto via Getty Images

In 2023, one popular perspective on AI went like this: Sure, it can generate lots of impressive text, but it can't truly reason — it's all shallow mimicry, just 'stochastic parrots' squawking.

At the time, it was easy to see where this perspective was coming from. Artificial intelligence had moments of being impressive and interesting, but it also consistently failed basic tasks. Tech CEOs said they could just keep making the models bigger and better, but tech CEOs say things like that all the time, including when, behind the scenes, everything is held together with glue, duct tape, and low-wage workers.

It's now 2025. I still hear this dismissive perspective a lot, particularly when I'm talking to academics in linguistics and philosophy. Many of the highest-profile efforts to pop the AI bubble — like the recent Apple paper purporting to find that AIs can't truly reason — linger on the claim that the models are just bullshit generators that are not getting much better and won't get much better.

But I increasingly think that repeating those claims is doing our readers a disservice, and that the academic world is failing to step up and grapple with AI's most important implications. I know that's a bold claim. So let me back it up.

'The illusion of thinking's' illusion of relevance

The instant the Apple paper was posted online (it hasn't yet been peer reviewed), it took off. Videos explaining it racked up millions of views. People who may not generally read much about AI heard about the Apple paper. And while the paper itself acknowledged that AI performance on 'moderate difficulty' tasks was improving, many summaries of its takeaways focused on the headline claim of 'a fundamental scaling limitation in the thinking capabilities of current reasoning models.'

For much of the audience, the paper confirmed something they badly wanted to believe: that generative AI doesn't really work — and that's something that won't change any time soon.

The paper looks at the performance of modern, top-tier language models on 'reasoning tasks' — basically, complicated puzzles. Past a certain point, that performance becomes terrible, which the authors say demonstrates the models haven't developed true planning and problem-solving skills. 'These models fail to develop generalizable problem-solving capabilities for planning tasks, with performance collapsing to zero beyond a certain complexity threshold,' as the authors write.

That was the topline conclusion many people took from the paper and the wider discussion around it. But if you dig into the details, you'll see that this finding is not surprising, and it doesn't actually say that much about AI.
Much of the reason the models fail at the given problem in the paper is not that they can't solve it, but that they can't express their answers in the specific format the authors chose to require. If you ask them to write a program that outputs the correct answer, they do so effortlessly. If you instead ask them to provide the answer in text, line by line, they eventually reach their limits.

That seems like an interesting limitation of current AI models, but it doesn't have a lot to do with 'generalizable problem-solving capabilities' or 'planning tasks.' Imagine someone arguing that humans can't 'really' do 'generalizable' multiplication because while we can calculate 2-digit multiplication problems with no problem, most of us will screw up somewhere along the way if we're trying to do 10-digit multiplication problems in our heads. The issue isn't that we 'aren't general reasoners.' It's that we're not evolved to juggle large numbers in our heads, largely because we never needed to do so.

If the reason we care about 'whether AIs reason' is fundamentally philosophical, then exploring at what point problems get too long for them to solve is relevant as a philosophical argument. But I think most people care about what AI can and cannot do for far more practical reasons.

AI is taking your job, whether it can 'truly reason' or not

I fully expect my job to be automated in the next few years. I don't want that to happen, obviously. But I can see the writing on the wall. I regularly ask the AIs to write this newsletter — just to see where the competition is at. It's not there yet, but it's getting better all the time.

Employers are doing that too. Entry-level hiring in professions like law, where entry-level tasks are AI-automatable, appears to be contracting already. The job market for recent college graduates looks ugly.

The optimistic case around what's happening goes something like this: 'Sure, AI will eliminate a lot of jobs, but it'll create even more new jobs.' That more positive transition might well happen — though I don't want to count on it — but it would still mean a lot of people abruptly finding all of their skills and training suddenly useless, and therefore needing to rapidly develop a completely new skill set.

It's this possibility, I think, that looms large for many people in industries like mine, which are already seeing AI replacements creep in. It's precisely because this prospect is so scary that declarations that AIs are just 'stochastic parrots' that can't really think are so appealing. We want to hear that our jobs are safe and the AIs are a nothingburger.

But in fact, you can't answer the question of whether AI will take your job with reference to a thought experiment, or with reference to how it performs when asked to write down all the steps of Tower of Hanoi puzzles. The way to answer the question of whether AI will take your job is to invite it to try. And, uh, here's what I got when I asked ChatGPT to write this section of this newsletter:

(ChatGPT's attempt appeared here as a screenshot in the original piece.)

Is it 'truly reasoning'? Maybe not. But it doesn't need to be to render me potentially unemployable. 'Whether or not they are simulating thinking has no bearing on whether or not the machines are capable of rearranging the world for better or worse,' Cambridge professor of AI philosophy and governance Harry Law argued in a recent piece, and I think he's unambiguously right.
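A side note on those Tower of Hanoi puzzles: the asymmetry the paper found — models can write a solver effortlessly but stumble when forced to list every move — mirrors how tiny the procedure is relative to the transcript it produces. Here is a minimal Python sketch, my own illustration rather than anything from the paper or from any model:

```python
def hanoi(n, src="A", dst="C", aux="B"):
    """Yield every move needed to transfer n disks from src to dst."""
    if n == 0:
        return
    yield from hanoi(n - 1, src, aux, dst)  # move n-1 disks out of the way
    yield (src, dst)                        # move the largest disk
    yield from hanoi(n - 1, aux, dst, src)  # restack the n-1 disks on top

moves = list(hanoi(10))
print(len(moves))  # 1023, i.e., 2**10 - 1 moves for just 10 disks
```

The procedure fits in a few lines, but the move list doubles with every added disk, so a model asked to write out every step in text hits a wall long before a model asked for the program does.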
If Vox hands me a pink slip, I don't think I'll get anywhere if I argue that I shouldn't be replaced because o3, above, can't solve a sufficiently complicated Tower of Hanoi puzzle — which, guess what, I can't do either.

Critics are making themselves irrelevant when we need them most

In his piece, Law surveys the state of AI criticism and finds it fairly grim. 'Lots of recent critical writing about AI…read like extremely wishful thinking about what exactly systems can and cannot do.'

This is my experience, too. Critics are often trapped in 2023, giving accounts of what AI can and cannot do that haven't been correct for two years. 'Many [academics] dislike AI, so they don't follow it closely,' Law argues. 'They don't follow it closely so they still think that the criticisms of 2023 hold water. They don't. And that's regrettable because academics have important contributions to make.'

But of course, for the employment effects of AI — and in the longer run, for the global catastrophic risks it may present — what matters isn't whether AIs can be induced to make silly mistakes, but what they can do when set up for success.

I have my own list of 'easy' problems AIs still can't solve — they're pretty bad at chess puzzles — but I don't think that kind of work should be sold to the public as a glimpse of the 'real truth' about AI. And it definitely doesn't debunk the really quite scary future that experts increasingly believe we're headed toward.

The top priority of progressive politics may be slipping out of reach forever

Vox · 10 hours ago

Eric Levitz is a senior correspondent at Vox. He covers a wide range of political and policy issues with a special focus on questions that internally divide the American left and right. Before coming to Vox in 2024, he wrote a column on politics and economics for New York Magazine.

Photo: A protester wearing a Trump papier-mâché head stands in front of a barricade and holds a sign that reads, 'Death and taxes,' in New York.

Four years ago, America was on the cusp of the largest expansion of its welfare state since the 1960s. Under Joe Biden in 2021, House Democrats passed legislation that would have established a monthly child allowance for most families, an expansion of Medicaid's elder care services, federal child care subsidies, universal prekindergarten, and a paid family leave program, among other new social benefits.

But that bill failed — and then, so did Biden's presidency. Now, Republicans are on the brink of enacting the largest cut to public health insurance in American history. And the outlook for future expansions of the safety net looks dimmer than at any time in recent memory. There are two primary reasons why progressives' prospects for growing the welfare state have darkened.

First (and most straightforwardly), the Democrats are not well-positioned to win full control of the federal government anytime soon. To win a Senate majority in 2026, the party would need to win multiple states that Trump carried by double digits last year. And the 2028 map isn't that much better. The basic problem is that Democrats have built a coalition that's heavily concentrated on the coasts and thus systematically underrepresented in the Senate. To win the robust congressional majorities typically necessary for enacting large social programs, Democrats would likely need to transform their party's brand.

Second, although Democrats developed grander ambitions for social spending over the past decade, they simultaneously grew more averse to raising taxes on anyone but the super-rich. In the 2010s, when inflation and interest rates were persistently low, the party could paper over this tension with deficit spending. But Biden-era inflation revealed the limits of this strategy. And if Congress passes President Donald Trump's tax cut plan, then interest rates and inflationary risk are likely to remain elevated for years, while the cost of servicing America's debts will soar. Add to this the impending exhaustion of Social Security's trust fund, and space for new welfare programs is likely to be scant, unless Democrats find a way to enact broad-based tax increases.

Liberals could respond to all this by paring back their ambitions for the welfare state, while seeking to advance progressive goals through regulatory policy. It is perhaps not a coincidence that the two most prominent policy movements in Democratic circles today — the anti-monopoly and 'abundance' crusades — are both principally concerned with reforms that require no new tax revenue (antitrust enforcement in the former case, zoning liberalization in the latter).

But expanding America's safety net remains a moral imperative. In the long term, Democrats must therefore strive to build the electoral power and political will necessary for raising taxes on the middle class (or at least, on its upper reaches).
Democrats like social welfare programs. But they like low taxes on the upper middle class even more.

Over the course of the 2010s, the Democratic leadership's appetite for new social spending grew. Bernie Sanders's insurgent campaigns in 2016 and 2020 put Medicare-for-All at the center of the party's discourse and moved its consensus on the welfare state sharply leftward. In the latter primary, even the Democrats' most moderate contender — Joe Biden — vowed to establish a public option for health insurance and tuition-free community college, among other social programs. Biden's agenda only grew more ambitious upon taking office. No president since Lyndon B. Johnson had proposed a more sweeping expansion of social welfare than the Build Back Better Act.

And yet, while Democrats' aspirations for social spending had become historically bold, the party's position on taxes had grown exceptionally timid. In 2016, Hillary Clinton had promised not to raise taxes on any American family earning less than $250,000. Four years later, Biden vowed to spare all households earning less than $400,000 — despite the fact that tax rates on upper-middle-class families had fallen during Trump's first term. Meanwhile, the Democrats' congressional leadership was actually pushing to cut taxes on rich blue-state homeowners by increasing the state and local tax deduction.

In other words: In 2021, Democrats were promising to establish an unprecedentedly large welfare state, while keeping taxes on 98 percent of households historically low. Officially, the party believed that it could square this circle by soaking the super-rich. After all, America's highest-earning 1 percent had commandeered more than 20 percent of the nation's annual income. The government could therefore extract a lot of revenue by merely shaking down the upper class.

In reality, though, Biden's vision was also premised on the assumption that America could deficit-finance new spending with little risk of sparking inflation or high interest rates. The Build Back Better Act did not actually raise taxes on the rich by enough to offset its social spending. Instead, Democrats leaned on budget gimmicks to 'pay for' their agenda: Although the party intended the law's new programs to be permanent, it scheduled many of them to expire after just a few years, so as to make the policies look cheaper over a decade-long budget window. Absent these arbitrary expiration dates, the bill would have added $2.8 trillion to the deficit over a decade. Even as written, the law would have increased deficits by $749 billion in its first five years.

More fundamentally, Biden's basic fiscal objective — to establish wide-ranging social benefits through taxes on the super-rich alone — only made sense in a world of low inflation. Western Europe's robust welfare states are all funded through broad-based taxation. This is partly because administering a large safety net requires managing economic demand. When the government expands its provision of elder care, social housing, child care, and pre-K, it increases overall demand for workers and resources in the economy. And if the supply of labor and materials doesn't rise in line with this new demand, then inflation can ensue.

Taxes effectively 'pay for' new spending by freeing up such resources. When households see their post-tax income decline, they're often forced to make fewer discretionary purchases.
Raise taxes on an upper-middle-class family and it might need to postpone its dreams of a lake house. That in turn frees up labor for public programs: The fewer construction workers needed to build vacation homes, the more that will be available to build affordable housing.

But soaking the extremely rich does less to dampen demand than taxing the upper middle class does. Even if you increase Elon Musk's tax rate by 50 percent, he won't actually need to reduce his consumption at all — the billionaire will still have more money than he can spend in a lifetime. The same general principle applies to multimillionaires, albeit to a lesser extent: Raise their taxes, and they're liable to save less money, but they won't necessarily consume fewer resources. And if they do not curb their consumption in response to a tax hike, then that tax hike will not actually free up resources.

In 2021, Democrats felt no obligation to sweat these details. For nearly a decade after the Great Recession, economic demand had been too low. Workers and materials had stood idle on the economy's sidelines, as there wasn't enough spending to catalyze their employment. In that context, unfunded welfare benefits can boost growth without generating inflation.

But as Democrats moved Build Back Better through Congress, the macroeconomic terrain shifted beneath their feet. Biden likely would have struggled to get his social agenda through the Senate (where Democrats held only 50 votes) even in the absence of 2022's inflation. But that surge in prices all but guaranteed the legislation's defeat: Suddenly, it became clear that the government could not increase economic demand without pushing up inflation and interest rates. America had returned to a world of fiscal constraints.

Unfortunately, those constraints could prove lasting, especially if Donald Trump's tax agenda makes it into law.

Building a comprehensive welfare state is about to get harder

The most lamentable aspects of Trump's 'Big Beautiful Bill' are its cuts to health care and food assistance for the poor. Yet even as it takes health insurance from 10 million Americans and reduces food assistance to low-income families by about $100 a month, the legislation would add $2.4 trillion to the debt over the coming decade, according to the Congressional Budget Office.

And the actual cost of the GOP's fiscal vision is even larger. To reduce their bill's price tag, Republicans set some of their tax cuts to arbitrarily expire. Were these tax cuts made permanent, the bill would add roughly $5 trillion to the deficit over the next 10 years.

This is likely to render the US economy more vulnerable to inflation and high interest rates in the future. Thus, the next Democratic government probably won't have much freedom to deficit-spend without increasing Americans' borrowing costs or bills.

Meanwhile, if that administration holds power after 2032, it will also need to find a ton of new revenue just to maintain America's existing welfare state. Social Security currently pays out more in benefits than it takes in through payroll taxes. For now, the program's dedicated trust fund fills the gap. But in 2033, that fund will likely be exhausted, according to government projections. At that point, the government will need to find upward of $414.5 billion in new revenue, each year, to maintain existing Social Security benefits without increasing the deficit.
Given Democrats' current stance on taxes, the imperative to keep Social Security funded would likely crowd out the rest of the party's social welfare agenda. Indeed, merely sustaining Americans' existing retirement benefits would almost certainly require raising taxes on households earning less than $400,000. Maintaining such benefits while also creating new welfare programs — in a context of structurally high deficits and interest rates — would plausibly entail large, broad-based tax increases, the likes of which today's Democrats scarcely dare to contemplate.

Granted, the robots could solve all this

To be sure, it is possible that technological progress could render this entire analysis obsolete. Some analysts expect artificial intelligence to radically increase productivity over the next decade, while devaluing white-collar labor. This could slow the pace of wage and price growth, while turbocharging income inequality. In a world where robots can instantly perform work that presently requires millions of humans, America could plausibly finance a vast social welfare state solely through taxes on capital. But until AI actually yields a discernible leap in productivity, I don't think it is safe to take an impending robo-utopia as a given.

Democrats eventually need to sell Americans on higher taxes

Democrats probably can't escape the tension between their commitments on taxation and social spending. But they can seek to mitigate it in a few different ways.

One is to scale down the party's ambitions for the welfare state, while seeking to advance progressive economic goals through other means. Such a retreat would be understandable. The party's fear of raising taxes is not baseless. In a 2021 Gallup poll, only 19 percent of Americans said they would like to have more government services in exchange for higher taxes, while 50 percent said they'd prefer lower taxes in exchange for fewer services.

Meanwhile, Democrats have grown increasingly reliant on the support of upper-middle-class voters. In 2024, the highest-earning 5 percent of white voters were more than 10 percentage points more Democratic than America as a whole. The lowest-earning two-thirds of whites, by contrast, were more Republican than the nation writ large. In this political environment, calling for large middle-class tax hikes could well ensure perpetual Republican rule.

In the short term, then, Democrats might be wise to narrow their agenda for social welfare, focusing on modest programs that can be funded exclusively with taxes on the rich. At the same time, the party could seek to better working people's lot through regulatory policy. You don't need to raise middle-class taxes to expand collective bargaining rights, guarantee worker representation on corporate boards, or raise the minimum wage. And the same can be said of relaxing regulatory barriers to housing construction and energy infrastructure. (Of course, achieving any of these goals federally would require Democrats to win a robust Senate majority — one large and progressive enough to abolish the legislative filibuster, which currently establishes a 60-vote threshold for enacting new, non-budgetary legislation.)

In the long run, though, Democrats must not forfeit the pursuit of a comprehensive welfare state. America lets more of its children suffer poverty — and more of its adults go without health insurance — than similarly rich countries. These deprivations are largely attributable to our nation's comparatively threadbare safety net.
And they can only be fully eliminated through redistributive policy. A higher minimum wage will not ensure that children with unemployed parents never go hungry, or that every worker with cancer can afford treatment. Furthermore, as technological progress threatens to rapidly disemploy large segments of the public, robust unemployment insurance is as important as ever. And as the population ages, increasing investment in elder care will become ever more imperative.

Democrats should seek to make incremental progress on all these fronts as soon as possible. Even if the party is only willing to tax the rich, it can still finance targeted anti-poverty spending. But absent an AI-induced productivity revolution, building a holistic welfare state will require persuading the middle class to accept higher taxes.

How this can be done is not clear. But part of the solution is surely to demonstrate that Democratic governments can spend taxpayer funds efficiently and effectively. So long as blue areas struggle to build a single public toilet for less than $1.7 million — or a high-speed rail line in less than 17 years — it will be hard to persuade ordinary Americans to forfeit a larger chunk of their paychecks to Uncle Sam.

All this said, Democrats have plenty of time to debate the future of fiscal policy. In the immediate term, the party's task is plain: to do everything in its power to prevent Trump's cuts to Medicaid and food assistance from becoming law. The path to a comprehensive welfare state won't be easy to traverse. Better, then, not to begin the journey by taking several steps backward.

He's the godfather of AI. Now, he has a bold new plan to keep us safe from it.

Vox · a day ago

Sigal Samuel is a senior reporter for Vox's Future Perfect and co-host of the Future Perfect podcast. She writes primarily about the future of consciousness, tracking advances in artificial intelligence and neuroscience and their staggering ethical implications. Before joining Vox, Sigal was the religion editor at the Atlantic.

The science fiction author Isaac Asimov once came up with a set of laws that we humans should program into our robots. In addition to a first, second, and third law, he also introduced a 'zeroth law,' which is so important that it precedes all the others: 'A robot may not harm humanity, or, by inaction, allow humanity to come to harm.'

This month, the computer scientist Yoshua Bengio — known as the 'godfather of AI' because of his pioneering work in the field — launched a new organization called LawZero. As you can probably guess, its core mission is to make sure AI won't harm humanity.

Even though he helped lay the foundation for today's advanced AI, Bengio has grown increasingly worried about the technology over the past few years. In 2023, he signed an open letter urging AI companies to press pause on state-of-the-art AI development. Both because of AI's present harms (like bias against marginalized groups) and AI's future risks (like engineered bioweapons), there are very strong reasons to think that slowing down would have been a good thing.

But companies are companies. They did not slow down. In fact, they created autonomous AIs known as AI agents, which can view your computer screen, select buttons, and perform tasks — just like you can. Whereas ChatGPT needs to be prompted by a human every step of the way, an agent can accomplish multistep goals with very minimal prompting, similar to a personal assistant. Right now, those goals are simple — create a website, say — and the agents don't work that well yet. But Bengio worries that giving AIs agency is an inherently risky move: Eventually, they could escape human control and go 'rogue.'

So now, Bengio is pivoting to a backup plan. If he can't get companies to stop trying to build AI that matches human smarts (artificial general intelligence, or AGI) or even surpasses human smarts (artificial superintelligence, or ASI), then he wants to build something that will block those AIs from harming humanity. He calls it 'Scientist AI.'

Scientist AI won't be like an AI agent — it'll have no autonomy and no goals of its own. Instead, its main job will be to calculate the probability that some other AI's action would cause harm — and, if the action is too risky, block it. AI companies could overlay Scientist AI onto their models to stop them from doing something dangerous, akin to how we put guardrails along highways to stop cars from veering off course.

I talked to Bengio about why he's so disturbed by today's AI systems, whether he regrets doing the research that led to their creation, and whether he thinks throwing yet more AI at the problem will be enough to solve it. A transcript of our unusually candid conversation, edited for length and clarity, follows.

Sigal Samuel

When people express worry about AI, they often express it as a worry about artificial general intelligence or superintelligence. Do you think that's the wrong thing to be worrying about? Should we only worry about AGI or ASI insofar as it includes agency?

Yoshua Bengio

Yes. You could have a superintelligent AI that doesn't 'want' anything, and it's totally not dangerous because it doesn't have its own goals. It's just like a very smart encyclopedia.
Sigal Samuel

Researchers have been warning for years about the risks of AI systems, especially systems with their own goals and general intelligence. Can you explain what's making the situation increasingly scary to you now?

Yoshua Bengio

In the last six months, we've gotten evidence of AIs that are so misaligned that they would go against our moral instructions. They would plan and do these bad things — lying, cheating, trying to persuade us with deceptions, and — worst of all — trying to escape our control and not wanting to be shut down, and doing anything [to avoid shutdown], including blackmail. These are not an immediate danger because they're all controlled, but we don't know how to really deal with this.

Sigal Samuel

And these bad behaviors increase the more agency the AI system has?

Yoshua Bengio

Yes. The systems we had last year, before we got into reasoning models, were much less prone to this. It's just getting worse and worse. That makes sense because we see that their planning ability is improving exponentially. And [the AIs] need good planning to strategize about things like 'How am I going to convince these people to do what I want?' or 'How do I escape their control?' So if we don't fix these problems quickly, we may end up with, initially, funny accidents, and later, not-funny accidents.

That's motivating what we're trying to do at LawZero. We're trying to think about how we design AI more precisely, so that, by construction, it's not even going to have any incentive or reason to do such things. In fact, it's not going to want anything.

Sigal Samuel

Tell me about how Scientist AI could be used as a guardrail against the bad actions of an AI agent. I'm imagining Scientist AI as the babysitter of the agentic AI, double-checking what it's doing.

Yoshua Bengio

So, in order to do the job of a guardrail, you don't need to be an agent yourself. The only thing you need to do is make a good prediction. And the prediction is this: Is this action that my agent wants to do acceptable, morally speaking? Does it satisfy the safety specifications that humans have provided? Or is it going to harm somebody? And if the answer is yes, with some probability that's not very small, then the guardrail says: No, this is a bad action. And the agent has to [try a different] action.

Sigal Samuel

But even if we build Scientist AI, the domain of 'What is moral or immoral?' is famously contentious. There's just no consensus. So how would Scientist AI learn what to classify as a bad action?

Yoshua Bengio

It's not for any kind of AI to decide what is right or wrong. We should establish that using democracy. Law should be about trying to be clear about what is acceptable or not. Now, of course, there could be ambiguity in the law. Hence you can get a corporate lawyer who is able to find loopholes in the law. But there's a way around this: Scientist AI is planned so that it will see the ambiguity. It will see that there are different interpretations, say, of a particular rule. And then it can be conservative about the interpretation — as in, if any of the plausible interpretations would judge this action as really bad, then the action is rejected.

Sigal Samuel

I think a problem there would be that almost any moral choice arguably has ambiguity. We've got some of the most contentious moral issues — think about gun control or abortion in the US — where, even democratically, you might get a significant proportion of the population that says they're opposed. How do you propose to deal with that?
Yoshua Bengio

I don't. Except by having the strongest possible honesty and rationality in the answers, which, in my opinion, would already be a big gain compared to the sort of democratic discussions that are happening. One of the features of the Scientist AI, like a good human scientist, is that you can ask: Why are you saying this? And he would come up with — not 'he,' sorry! — it would come up with a justification. The AI would be involved in the dialogue to try to help us rationalize what the pros and cons are, and so on. So I actually think that these sorts of machines could be turned into tools to help democratic debates. It's a little bit more than fact-checking — it's also like reasoning-checking.

Sigal Samuel

This idea of developing Scientist AI stems from your disillusionment with the AI we've been developing so far. And your research was very foundational in laying the groundwork for that kind of AI. On a personal level, do you feel some sense of inner conflict or regret about having done the research that laid that groundwork?

Yoshua Bengio

I should have thought of this 10 years ago. In fact, I could have, because I read some of the early works in AI safety. But I think there are very strong psychological defenses that I had, and that most AI researchers have. You want to feel good about your work, and you want to feel like you're the good guy, not doing something that could cause lots of harm and death in the future. So we kind of look the other way.

And for myself, I was thinking: This is so far into the future! Before we get to the science-fiction-sounding things, we're going to have AI that can help us with medicine and climate and education, and it's going to be great. So let's worry about these things when we get there. But that was before ChatGPT came. When ChatGPT came, I couldn't continue living with this internal lie, because, well, we are getting very close to human-level.

Sigal Samuel

The reason I ask this is because it struck me when reading your plan for Scientist AI that you say it's modeled after the platonic idea of a scientist — a selfless, ideal person who's just trying to understand the world. I thought: Are you in some way trying to build the ideal version of yourself, this 'he' that you mentioned, the ideal scientist? Is it like what you wish you could have been?

Yoshua Bengio

You should do psychotherapy instead of journalism! Yeah, you're pretty close to the mark. In a way, it's an ideal that I have been looking toward for myself. I think that's an ideal that scientists should be looking toward as a model. Because, for the most part in science, we need to step back from our emotions so that we avoid biases and preconceived ideas and ego.

Sigal Samuel

A couple of years ago you were one of the signatories of the letter urging AI companies to pause cutting-edge work. Obviously, the pause did not happen. For me, one of the takeaways from that moment was that we're at a point where this is not predominantly a technological problem. It's political. It's really about power and who gets the power to shape the incentive structure.

We know the incentives in the AI industry are horribly misaligned. There's massive commercial pressure to build cutting-edge AI. To do that, you need a ton of compute, so you need billions of dollars, so you're practically forced to get in bed with a Microsoft or an Amazon. How do you propose to avoid that fate?

Yoshua Bengio

That's why we're doing this as a nonprofit.
We want to avoid the market pressure that would force us into the capability race and, instead, focus on the scientific aspects of safety. I think we could do a lot of good without having to train frontier models ourselves. If we come up with a methodology for training AI that is convincingly safer, at least on some aspects like loss of control, and we hand it over almost for free to companies that are building AI — well, no one in these companies actually wants to see a rogue AI. It's just that they don't have the incentive to do the work! So I think just knowing how to fix the problem would reduce the risks considerably.

I also think that governments will hopefully take these questions more and more seriously. I know right now it doesn't look like it, but when we start seeing more evidence of the kind we've seen in the last six months, but stronger and more scary, public opinion might push sufficiently that we'll see regulation or some way to incentivize companies to behave better. It might even happen just for market reasons — like, [AI companies] could be sued. So, at some point, they might reason that they should be willing to pay some money to reduce the risks of accidents.

Sigal Samuel

I was happy to see that LawZero isn't only talking about reducing the risks of accidents but is also talking about 'protecting human joy and endeavor.' A lot of people fear that if AI gets better than them at things, well, what is the meaning of their life? How would you advise people to think about the meaning of their human life if we enter an era where machines have both agency and extreme intelligence?

Yoshua Bengio

I understand it would be easy to be discouraged and to feel powerless. But the decisions that human beings are going to make in the coming years as AI becomes more powerful — these decisions are incredibly consequential. So there's a sense in which it's hard to get more meaning than that! If you want to do something about it, be part of the thinking, be part of the democratic debate. I would advise us all to remind ourselves that we have agency. And we have an amazing task in front of us: to shape the future.
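Stripped of the interview's framing, the guardrail Bengio describes is a predictive filter placed in front of an agent's actions: estimate the probability of harm, and reject anything above a small threshold. A purely conceptual Python sketch follows; the names and threshold are hypothetical illustrations of the idea, not an interface LawZero has published:

```python
from dataclasses import dataclass
from typing import Callable

HARM_THRESHOLD = 0.01  # hypothetical risk tolerance; in practice set by policy

@dataclass
class Verdict:
    allowed: bool
    p_harm: float

def guardrail(action: str, predict_harm: Callable[[str], float]) -> Verdict:
    """Non-agentic check: estimate the probability that a proposed action
    causes harm, and block it if that probability is not very small."""
    p = predict_harm(action)  # a Scientist-AI-style probability estimate
    return Verdict(allowed=p < HARM_THRESHOLD, p_harm=p)

# Hypothetical usage inside an agent loop:
# verdict = guardrail(proposed_action, scientist_ai.estimate_harm)
# if not verdict.allowed:
#     agent.propose_alternative()
```

The conservative handling of ambiguity Bengio mentions would correspond to taking the maximum estimated harm across plausible interpretations of the rules before comparing against the threshold.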

What we learned the last time we put AI in a Barbie

Vox · a day ago

Adam Clark Estes is a senior technology correspondent at Vox and author of the User Friendly newsletter. He's spent 15 years covering the intersection of technology, culture, and politics at places like The Atlantic, Gizmodo, and Vice.

The first big Christmas gift I remember getting was an animatronic bear named Teddy Ruxpin. Thanks to a cassette tape hidden in his belly, he could talk, his eyes and mouth moving in a famously creepy way. Later that winter, when I was sick with a fever, I hallucinated that the toy came alive and attacked me. I never saw Teddy again after that.

These days, toys can do a lot more than tell pre-recorded stories. So-called smart toys, many of which are internet-connected, are a $20 billion business, and increasingly, they're artificially intelligent. Mattel and OpenAI announced a partnership last week to 'bring the magic of AI to age-appropriate play experiences with an emphasis on innovation, privacy, and safety.' They're planning to announce their first product later this year. It's unclear what this might entail: maybe it's Barbies that can gossip with you, or a self-driving Hot Wheels, or something we haven't even dreamed up yet.

All of this makes me nervous as a young parent. I already knew that generative AI was invading classrooms and filling the internet with slop, but I wasn't expecting it to take over the toy aisle so soon. After all, we're already struggling to figure out how to manage our kids' relationship with the technology in their lives, from screen time to the uncanny videos made to trick YouTube's algorithm. As it seeps further into our society, a growing number of people are using AI without even realizing it. So you can't blame me for being anxious about how children might encounter the technology in unexpected ways.

AI-powered toys are not as new as you might think. They're not even new for Mattel. A decade ago, the toy giant released Hello Barbie, an internet-connected doll that listened to kids and used AI to respond (think Siri, not ChatGPT). It was essentially the same concept as Teddy Ruxpin, except with a lot of digital vulnerabilities. Naturally, security researchers took notice and hacked Hello Barbie, revealing that bad actors could steal personal information or eavesdrop on conversations children were having with the doll. Mattel discontinued the doll in 2017. Hello Barbie later made an appearance in the Barbie movie alongside other poor toy choices like Sugar Daddy Ken and Pregnant Midge.

Despite this cautionary tale, companies keep trying to make talking AI toys a thing. One more recent example comes from the mind of Grimes, of all people. Inspired by the son she shares with Elon Musk, the musician teamed up with a company called Curio to create a stuffed rocket ship named Grok. The embodied chatbot is supposed to learn about whoever is playing with it and become a personalized companion. In real life, Grok is frustratingly dumb, according to Katie Arnold-Ratliff, a mom and writer who chronicled her son's experience with the toy in New York magazine last year.
'When it started remembering things about my kid, and speaking back to him, he was amazed,' Arnold-Ratliff told me this week. 'That awe very quickly dissipated once it was like, why are you talking about this completely unrelated thing.' Grok is still somewhere in their house, she said, but it has been turned off for quite some time.

It turns out Arnold-Ratliff's son is more interested in inanimate objects that he can make come alive with his imagination. Sure, he'll play Mario on his Nintendo Switch for long stretches of time, but afterward, he'll draw his own worlds on paper. He'll even create digital versions of new levels on Super Mario Maker, but get frustrated when the software can't keep up with his imagination.

This is a curious paradox when it comes to kids and certain tech-powered toys. Although an adult might think that, for instance, AI could prompt kids to think about play in new ways or become an innovative new imaginary friend, kids tend to prefer imagining on their own terms. That's according to Naomi Aguiar, PhD, a researcher at Oregon State University who studies how children form relationships with AI chatbots.

'There's nothing wrong with children's imaginations. They work fine,' Aguiar said. 'What captures the hearts and minds of young children is often what they create for themselves with the inanimate artifacts.'

Aguiar did concede that AI can be a powerful educational tool for kids, especially for those who don't have access to resources or who may be on the spectrum. 'If we focus on solutions to specific problems and train the models to do that, it could open up a lot of opportunities,' she told me. Putting AI in a Barbie, however, is not solving a particular problem.

None of this means that I'm allergic to the concept of tech-centric toys for kids. Quite the opposite, in fact. Ahead of the Mattel-OpenAI announcement, I'd started researching toys my kid might like that incorporated some technology — enough to make them especially interesting and engaging — but stopped short of triggering dystopian nightmares. Much to my surprise, what I found was something of a mashup between completely inanimate objects and that terrifying Teddy Ruxpin.

One of these toys is called a Toniebox, a screen-free audio player with little figurines called Tonies that you put atop the box to unlock content — namely songs, stories, and so forth. Licenses abound, so you can buy a Tonie that corresponds with pretty much any popular kids' character, like Disney princesses or Paddington Bear. There are also so-called Creative Tonies that allow you to upload your own audio. For instance, you could ostensibly have a stand-in for a grandparent to enable story time, even if Grandma and Grandpa are not physically there. The whole experience is mediated through an app that the kid never needs to see.

There's also the Yoto Player and the Yoto Mini, which are similar to the Toniebox but use cards instead of figurines and have a very low-resolution display that can show a clock or a pixelated character. Because it has that display, kids can also create custom icons to show up when they record their own content onto a card. Yoto has been beta-testing an AI-powered story generator, which is designed for parents to create custom stories for their kids.

If those audio players are geared toward story time, a company called Nex makes a video game console for playtime. It's called Nex Playground, and kids use their movements to control it.
This happens thanks to a camera equipped with machine-learning capabilities to recognize your movements and expressions. So imagine playing Wii Sports, but instead of throwing the Nintendo controller through your TV screen when you're trying to bowl, you make the bowling motion to play the game.

Nex makes most of its games in-house, and all of the computation needed for its gameplay happens on the device itself. That means there's no data being collected or sent to the cloud. Once you download a game, you don't even have to be online to play it.

'We envision toys that can just grow in a way where they become a new way to interact with technology for kids and evolve into something that's much deeper, much more meaningful for families,' David Lee, CEO of Nex, said when I asked him about the future of toys.

It will be a few more years before I have to worry about my kid's interactions with a video game console, much less an AI-powered Barbie — and certainly not Teddy Ruxpin. But she loves her Toniebox. She talks to the figurines and lines them up alongside each other, like a little posse. I have no idea what she's imagining them saying back. In a way, that's the point.

The Supreme Court's incoherent new attack on trans rights, explained

Vox · 2 days ago

Ian Millhiser is a senior correspondent at Vox, where he focuses on the Supreme Court, the Constitution, and the decline of liberal democracy in the United States. He received a JD from Duke University and is the author of two books on the Supreme Court.

Photo: A transgender rights supporter takes part in a rally outside of the US Supreme Court as the high court hears arguments in a case on transgender health rights on December 4, 2024, in Washington, DC.

It was obvious, if you listened to the Supreme Court's oral argument in United States v. Skrmetti last December, that the Court would vote — most likely along party lines — to uphold state laws banning many forms of transgender health care for minors. So nothing about Chief Justice John Roberts's majority opinion in Skrmetti is really surprising. All six of the Court's Republicans voted to uphold these laws, and all three of the Court's Democrats dissented.

But, as a matter of judicial craftsmanship, Roberts's opinion is disappointing even by the standards of the Roberts Court. It draws incoherent distinctions. It relies on old and widely criticized precedents to undermine legal principles that are well established by more recent cases. At times in his opinion, Roberts seems to misread statutory language that he quoted just a paragraph or two earlier.

It appears, in other words, that the six justices in the majority started with the outcome they wanted — bans on transgender health care for minors must be upheld — and then contorted their legal reasoning to fit that result.

Even if you share that goal, the decision in this case was unnecessary. As Justice Elena Kagan points out in a brief dissenting opinion, the issue before the Supreme Court in Skrmetti concerned a threshold question: whether the Tennessee law at issue in this case should receive a heightened level of scrutiny from the courts before it was either upheld or struck down. The ultimate question of whether to uphold Tennessee's law was not before the justices. The Court's Republicans, in other words, could have applied existing law, sent the case back down to the lower courts to apply this 'heightened scrutiny,' and then ruled on the bans in a future case.

Instead, Roberts's Skrmetti opinion went further to rule on the legality of the bans, and it consists of about two dozen pages of excuses for why the Court's previous anti-discrimination decisions somehow do not apply to Tennessee's law.

One virtue of this approach is that it minimizes the broader implications of Skrmetti. At oral arguments, several justices suggested that, in order to uphold Tennessee's law, they might make sweeping changes to the rules governing all sex-based discrimination by the government — Roberts, for example, floated giving the government broad authority to discriminate on the basis of sex in the medical context. Roberts's actual opinion contains some language suggesting that the general rule against sex discrimination is weaker when the government regulates medical practice, but those sections of his opinion are so difficult to parse that they fall short of the broad changes he discussed at oral argument.
Ultimately, Roberts's Skrmetti opinion largely reveals something that close observers of this Supreme Court already know. The Court's Republican majority is impatient. They are often so eager to reach ideological or partisan results that they hand down poorly reasoned opinions and incomprehensible legal standards.

Because the Skrmetti opinion is so incoherent, it is difficult to predict its broader implications for US anti-discrimination law. One thing that is certain, however, is that this decision is a historic loss for transgender Americans.

So what were the precise legal questions before the Court in Skrmetti?

To understand why the Skrmetti opinion is so difficult to reconcile with the Court's previous decisions, it's helpful to understand the precise legal questions before the Supreme Court.

The first of two questions is whether Tennessee's ban on trans health care for minors classifies patients based on their sex assigned at birth. In United States v. Virginia (1996), the Supreme Court held that 'all gender-based classifications today' warrant 'heightened scrutiny.' 'All' means that all laws that classify people based on their sex must receive additional scrutiny from the courts, not just some laws that do so.

About half of the states have laws targeting transgender health care, but the Tennessee law at issue in Skrmetti is among the strictest. It prohibits people under the age of 18 from receiving many medical treatments to treat gender dysphoria or other conditions related to their transgender status — including puberty blockers and hormone therapy.

Significantly, Tennessee's law is also quite explicit that its purpose is to ensure that young people do not depart from their sex assigned at birth. The law declares that its purpose is to 'encourag[e] minors to appreciate their sex' and to prevent young people from becoming 'disdainful of their sex.' That is an explicit sex-based classification. Patients whom Roberts refers to as 'biological women' are allowed to fully embrace femininity in Tennessee. But a child who is assigned male at birth may not. Under Virginia, in other words, Tennessee's law — which relies on a sex-based classification — must be subject to heightened scrutiny.

To be clear, the mere fact that courts must give heightened review to Tennessee's law does not mean that the law will necessarily be struck down. As the Court held in Craig v. Boren (1976), 'to withstand constitutional challenge…classifications by gender must serve important governmental objectives and must be substantially related to achievement of those objectives.' Some laws do survive this level of scrutiny. Roberts's opinion raises several policy arguments for Tennessee's law, claiming that the procedures it targets are 'experimental,' that they 'can lead to later regret,' and that they carry 'risks.' A court applying heightened scrutiny could consider these arguments and whether they justify upholding the law.

But Roberts bypasses this inquiry altogether, instead denying that the Tennessee law engages in sex-based classification at all. The law, Roberts claims, only 'incorporates two classifications.' It 'classifies on the basis of age' by banning certain treatments only for minors. And it 'classifies on the basis of medical use' by prohibiting doctors from prescribing those treatments to address gender dysphoria or similar conditions affecting transgender people, while simultaneously permitting those treatments to address other conditions.
Roberts is correct that Tennessee's law does draw lines based on these two classifications. But a law can do more than two things at once. And this law explicitly states that it exists to classify every child as either a boy or a girl, and then to lock them into that classification until their 18th birthday. Under Virginia, that classification demands heightened scrutiny.

The second legal question before the Court in Skrmetti was whether all laws that discriminate against transgender people are themselves subject to heightened scrutiny. Roberts, however, dodges this question by claiming that Tennessee's law 'does not classify on the basis of transgender status.' Instead, he argues, the law classifies people based on whether they have conditions such as 'gender dysphoria, gender identity disorder, or gender incongruence.'

But gender dysphoria, gender identity disorder, and gender incongruence are among the defining traits that make someone transgender. Roberts might as well have argued that Jim Crow laws did not discriminate on the basis of race, but instead discriminated based on the color of a person's skin.

To justify this distinction, Roberts points to the Court's decision in Geduldig v. Aiello (1974), which held that discrimination against pregnant people is not a form of sex discrimination because not all women become pregnant. But, even if it is true that not all transgender people experience gender dysphoria or a similar condition, post-Geduldig decisions have long held that the government cannot evade a ban on discrimination by claiming that it is merely discriminating based on a trait that closely correlates with a particular identity. As the Court said in Bray v. Alexandria Women's Health Clinic (1993), 'a tax on wearing yarmulkes is a tax on Jews' — even though many Jews do not wear yarmulkes.

That said, the Court's decision not to rule on whether laws that classify on the basis of transgender status must receive heightened review is probably a blessing for transgender people, even if it is a small one. While Roberts's reasoning on this question is muddled, his opinion leaves open the possibility that a future Court may resolve it in favor of transgender people — although that is highly unlikely to happen unless the Court's membership changes significantly. Notably, Justice Amy Coney Barrett, who is close to the center of the current Court, wrote a separate concurring opinion arguing that discrimination against trans people does not trigger heightened scrutiny.
