
Latest news with #FuturePerfect

AI doesn't have to reason to take your job

Vox • 10 hours ago

The author is a senior writer at Future Perfect, Vox's effective altruism-inspired section on the world's biggest challenges. She explores wide-ranging topics like climate change, artificial intelligence, vaccine development, and factory farms, and also writes the Future Perfect newsletter.

A humanoid robot shakes hands with a visitor at the Zhiyuan Robotics stand at the Shanghai New International Expo Centre in Shanghai, China, on June 18, 2025, during the first day of the Mobile World Conference. Ying Tang/NurPhoto via Getty Images

In 2023, one popular perspective on AI went like this: Sure, it can generate lots of impressive text, but it can't truly reason — it's all shallow mimicry, just 'stochastic parrots' squawking.

At the time, it was easy to see where this perspective was coming from. Artificial intelligence had moments of being impressive and interesting, but it also consistently failed basic tasks. Tech CEOs said they could just keep making the models bigger and better, but tech CEOs say things like that all the time, including when, behind the scenes, everything is held together with glue, duct tape, and low-wage workers.

It's now 2025. I still hear this dismissive perspective a lot, particularly when I'm talking to academics in linguistics and philosophy. Many of the highest-profile efforts to pop the AI bubble — like the recent Apple paper purporting to find that AIs can't truly reason — linger on the claim that the models are just bullshit generators that are not getting much better and won't get much better.

But I increasingly think that repeating those claims is doing our readers a disservice, and that the academic world is failing to step up and grapple with AI's most important implications. I know that's a bold claim. So let me back it up.

'The illusion of thinking's' illusion of relevance

The instant the Apple paper was posted online (it hasn't yet been peer reviewed), it took off. Videos explaining it racked up millions of views. People who may not generally read much about AI heard about the Apple paper. And while the paper itself acknowledged that AI performance on 'moderate difficulty' tasks was improving, many summaries of its takeaways focused on the headline claim of 'a fundamental scaling limitation in the thinking capabilities of current reasoning models.'

For much of the audience, the paper confirmed something they badly wanted to believe: that generative AI doesn't really work — and that's something that won't change any time soon.

The paper looks at the performance of modern, top-tier language models on 'reasoning tasks' — basically, complicated puzzles. Past a certain point, that performance becomes terrible, which the authors say demonstrates the models haven't developed true planning and problem-solving skills. 'These models fail to develop generalizable problem-solving capabilities for planning tasks, with performance collapsing to zero beyond a certain complexity threshold,' as the authors write.

That was the topline conclusion many people took from the paper and the wider discussion around it. But if you dig into the details, you'll see that this finding is not surprising, and it doesn't actually say that much about AI.
Much of the reason the models fail at the problem in the paper is not that they can't solve it, but that they can't express their answers in the specific format the authors chose to require. If you ask them to write a program that outputs the correct answer, they do so effortlessly. By contrast, if you ask them to provide the answer in text, line by line, they eventually reach their limits.

That seems like an interesting limitation of current AI models, but it doesn't have a lot to do with 'generalizable problem-solving capabilities' or 'planning tasks.' Imagine someone arguing that humans can't 'really' do 'generalizable' multiplication because while we can calculate 2-digit multiplication problems with no problem, most of us will screw up somewhere along the way if we're trying to do 10-digit multiplication problems in our heads. The issue isn't that we 'aren't general reasoners.' It's that we're not evolved to juggle large numbers in our heads, largely because we never needed to do so.

If the reason we care about 'whether AIs reason' is fundamentally philosophical, then exploring at what point problems get too long for them to solve is relevant, as a philosophical argument. But I think most people care about what AI can and cannot do for far more practical reasons.

AI is taking your job, whether it can 'truly reason' or not

I fully expect my job to be automated in the next few years. I don't want that to happen, obviously. But I can see the writing on the wall. I regularly ask the AIs to write this newsletter — just to see where the competition is at. It's not there yet, but it's getting better all the time.

Employers are doing that too. Entry-level hiring in professions like law, where entry-level tasks are AI-automatable, already appears to be contracting. The job market for recent college graduates looks ugly.

The optimistic case about what's happening goes something like this: 'Sure, AI will eliminate a lot of jobs, but it'll create even more new jobs.' That more positive transition might well happen — though I don't want to count on it — but it would still mean a lot of people abruptly finding all of their skills and training suddenly useless, and therefore needing to rapidly develop a completely new skill set. It's this possibility, I think, that looms large for many people in industries like mine, which are already seeing AI replacements creep in.

It's precisely because this prospect is so scary that declarations that AIs are just 'stochastic parrots' that can't really think are so appealing. We want to hear that our jobs are safe and the AIs are a nothingburger.

But in fact, you can't answer the question of whether AI will take your job with reference to a thought experiment, or with reference to how it performs when asked to write down all the steps of Tower of Hanoi puzzles. The way to answer the question of whether AI will take your job is to invite it to try. And, uh, here's what I got when I asked ChatGPT to write this section of this newsletter:

Is it 'truly reasoning'? Maybe not. But it doesn't need to be to render me potentially unemployable. 'Whether or not they are simulating thinking has no bearing on whether or not the machines are capable of rearranging the world for better or worse,' Cambridge professor of AI philosophy and governance Harry Law argued in a recent piece, and I think he's unambiguously right.
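The puzzle behind the paper's headline result is Tower of Hanoi, which makes the 'write a program' point easy to see concretely. Here is a minimal sketch of the kind of short recursive solver a model can produce on demand; this is my own illustration, not code from the paper:

```python
def hanoi(n, source, target, spare, moves):
    """Append the moves that transfer n disks from source to target."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # park the n-1 smaller disks on the spare peg
    moves.append((source, target))              # move the largest disk to the target peg
    hanoi(n - 1, spare, target, source, moves)  # stack the smaller disks back on top of it

moves = []
hanoi(10, "A", "C", "B", moves)
print(len(moves))  # 2**10 - 1 = 1,023 moves: a trivial program, a very long transcript to write out by hand
```

The program itself is trivial; what balloons with puzzle size is the move list a model is asked to transcribe line by line, which is the point above about output format rather than reasoning.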
If Vox hands me a pink slip, I don't think I'll get anywhere if I argue that I shouldn't be replaced because o3, above, can't solve a sufficiently complicated Towers of Hanoi puzzle — which, guess what, I can't do either.

Critics are making themselves irrelevant when we need them most

In his piece, Law surveys the state of AI criticism and finds it fairly grim. 'Lots of recent critical writing about AI…read like extremely wishful thinking about what exactly systems can and cannot do.' This is my experience, too. Critics are often trapped in 2023, giving accounts of what AI can and cannot do that haven't been correct for two years.

'Many [academics] dislike AI, so they don't follow it closely,' Law argues. 'They don't follow it closely so they still think that the criticisms of 2023 hold water. They don't. And that's regrettable because academics have important contributions to make.'

But of course, for the employment effects of AI — and in the longer run, for the global catastrophic risk concerns they may present — what matters isn't whether AIs can be induced to make silly mistakes, but what they can do when set up for success.

I have my own list of 'easy' problems AIs still can't solve — they're pretty bad at chess puzzles — but I don't think that kind of work should be sold to the public as a glimpse of the 'real truth' about AI. And it definitely doesn't debunk the really quite scary future that experts increasingly believe we're headed toward.

He's the godfather of AI. Now, he has a bold new plan to keep us safe from it.

Vox • a day ago

Sigal Samuel is a senior reporter for Vox's Future Perfect and co-host of the Future Perfect podcast. She writes primarily about the future of consciousness, tracking advances in artificial intelligence and neuroscience and their staggering ethical implications. Before joining Vox, Sigal was the religion editor at the Atlantic.

The science fiction author Isaac Asimov once came up with a set of laws that we humans should program into our robots. In addition to a first, second, and third law, he also introduced a 'zeroth law,' which is so important that it precedes all the others: 'A robot may not injure a human being or, through inaction, allow a human being to come to harm.'

This month, the computer scientist Yoshua Bengio — known as the 'godfather of AI' because of his pioneering work in the field — launched a new organization called LawZero. As you can probably guess, its core mission is to make sure AI won't harm humanity.

Even though he helped lay the foundation for today's advanced AI, Bengio has grown increasingly worried about the technology over the past few years. In 2023, he signed an open letter urging AI companies to press pause on state-of-the-art AI development. Both because of AI's present harms (like bias against marginalized groups) and AI's future risks (like engineered bioweapons), there are very strong reasons to think that slowing down would have been a good thing.

But companies are companies. They did not slow down. In fact, they created autonomous AIs known as AI agents, which can view your computer screen, select buttons, and perform tasks — just like you can. Whereas ChatGPT needs to be prompted by a human every step of the way, an agent can accomplish multistep goals with very minimal prompting, similar to a personal assistant. Right now, those goals are simple — create a website, say — and the agents don't work that well yet. But Bengio worries that giving AIs agency is an inherently risky move: Eventually, they could escape human control and go 'rogue.'

So now, Bengio is pivoting to a backup plan. If he can't get companies to stop trying to build AI that matches human smarts (artificial general intelligence, or AGI) or even surpasses human smarts (artificial superintelligence, or ASI), then he wants to build something that will block those AIs from harming humanity. He calls it 'Scientist AI.'

Scientist AI won't be like an AI agent — it'll have no autonomy and no goals of its own. Instead, its main job will be to calculate the probability that some other AI's action would cause harm — and, if the action is too risky, block it. AI companies could overlay Scientist AI onto their models to stop them from doing something dangerous, akin to how we put guardrails along highways to stop cars from veering off course.

I talked to Bengio about why he's so disturbed by today's AI systems, whether he regrets doing the research that led to their creation, and whether he thinks throwing yet more AI at the problem will be enough to solve it. A transcript of our unusually candid conversation, edited for length and clarity, follows.

Sigal Samuel

When people express worry about AI, they often express it as a worry about artificial general intelligence or superintelligence. Do you think that's the wrong thing to be worrying about? Should we only worry about AGI or ASI insofar as it includes agency?

Yoshua Bengio

Yes. You could have a superintelligent AI that doesn't 'want' anything, and it's totally not dangerous because it doesn't have its own goals. It's just like a very smart encyclopedia.
Sigal Samuel

Researchers have been warning for years about the risks of AI systems, especially systems with their own goals and general intelligence. Can you explain what's making the situation increasingly scary to you now?

Yoshua Bengio

In the last six months, we've gotten evidence of AIs that are so misaligned that they would go against our moral instructions. They would plan and do these bad things — lying, cheating, trying to persuade us with deceptions, and — worst of all — trying to escape our control and not wanting to be shut down, and doing anything [to avoid shutdown], including blackmail. These are not an immediate danger because they're all controlled, but we don't know how to really deal with this.

Sigal Samuel

And these bad behaviors increase the more agency the AI system has?

Yoshua Bengio

Yes. The systems we had last year, before we got into reasoning models, were much less prone to this. It's just getting worse and worse. That makes sense because we see that their planning ability is improving exponentially. And [the AIs] need good planning to strategize about things like 'How am I going to convince these people to do what I want?' or 'How do I escape their control?' So if we don't fix these problems quickly, we may end up with, initially, funny accidents, and later, not-funny accidents.

That's motivating what we're trying to do at LawZero. We're trying to think about how we design AI more precisely, so that, by construction, it's not even going to have any incentive or reason to do such things. In fact, it's not going to want anything.

Sigal Samuel

Tell me about how Scientist AI could be used as a guardrail against the bad actions of an AI agent. I'm imagining Scientist AI as the babysitter of the agentic AI, double-checking what it's doing.

Yoshua Bengio

So, in order to do the job of a guardrail, you don't need to be an agent yourself. The only thing you need to do is make a good prediction. And the prediction is this: Is this action that my agent wants to do acceptable, morally speaking? Does it satisfy the safety specifications that humans have provided? Or is it going to harm somebody? And if the answer is yes, with some probability that's not very small, then the guardrail says: No, this is a bad action. And the agent has to [try a different] action.

Sigal Samuel

But even if we build Scientist AI, the domain of 'What is moral or immoral?' is famously contentious. There's just no consensus. So how would Scientist AI learn what to classify as a bad action?

Yoshua Bengio

It's not for any kind of AI to decide what is right or wrong. We should establish that using democracy. Law should be about trying to be clear about what is acceptable or not. Now, of course, there could be ambiguity in the law. Hence you can get a corporate lawyer who is able to find loopholes in the law. But there's a way around this: Scientist AI is planned so that it will see the ambiguity. It will see that there are different interpretations, say, of a particular rule. And then it can be conservative about the interpretation — as in, if any of the plausible interpretations would judge this action as really bad, then the action is rejected.
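The mechanism Bengio describes reduces to a simple decision rule: predict the probability of harm under each plausible reading of the rules, take the worst case, and block the action if that probability is not small. A minimal illustrative sketch in Python follows, with a made-up threshold and a stubbed-in harm estimator standing in for Scientist AI's prediction; none of this is LawZero's published design.

```python
RISK_THRESHOLD = 0.05  # made-up cutoff for "a probability that's not very small"

def estimate_harm(action: str, interpretation: str) -> float:
    """Stub for Scientist AI's prediction of the probability that an action causes harm
    under one interpretation of the rules. A real estimate would come from a trained model."""
    toy_scores = {
        ("summarize the report", "strict reading"): 0.001,
        ("summarize the report", "lenient reading"): 0.0005,
    }
    return toy_scores.get((action, interpretation), 0.5)  # unknown actions treated as risky

def guardrail(action: str, interpretations: list[str]) -> bool:
    """Allow an action only if the worst-case harm estimate across interpretations is small."""
    worst_case = max(estimate_harm(action, i) for i in interpretations)
    return worst_case < RISK_THRESHOLD

readings = ["strict reading", "lenient reading"]
print(guardrail("summarize the report", readings))                      # True: every reading looks safe
print(guardrail("email the database to an outside address", readings))  # False: blocked conservatively
```

The worst-case maximum over interpretations mirrors Bengio's point that if any plausible reading of a rule judges an action as really bad, the action is rejected.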
Sigal Samuel

I think a problem there would be that almost any moral choice arguably has ambiguity. We've got some of the most contentious moral issues — think about gun control or abortion in the US — where, even democratically, you might get a significant proportion of the population that says they're opposed. How do you propose to deal with that?

Yoshua Bengio

I don't. Except by having the strongest possible honesty and rationality in the answers, which, in my opinion, would already be a big gain compared to the sort of democratic discussions that are happening.

One of the features of the Scientist AI, like a good human scientist, is that you can ask: Why are you saying this? And he would come up with — not 'he,' sorry! — it would come up with a justification. The AI would be involved in the dialogue to try to help us rationalize what are the pros and cons and so on. So I actually think that these sorts of machines could be turned into tools to help democratic debates. It's a little bit more than fact-checking — it's also like reasoning-checking.

Sigal Samuel

This idea of developing Scientist AI stems from your disillusionment with the AI we've been developing so far. And your research was very foundational in laying the groundwork for that kind of AI. On a personal level, do you feel some sense of inner conflict or regret about having done the research that laid that groundwork?

Yoshua Bengio

I should have thought of this 10 years ago. In fact, I could have, because I read some of the early works in AI safety. But I think there are very strong psychological defenses that I had, and that most of the AI researchers have. You want to feel good about your work, and you want to feel like you're the good guy, not doing something that could cause in the future lots of harm and death. So we kind of look the other way.

And for myself, I was thinking: This is so far into the future! Before we get to the science-fiction-sounding things, we're going to have AI that can help us with medicine and climate and education, and it's going to be great. So let's worry about these things when we get there. But that was before ChatGPT came. When ChatGPT came, I couldn't continue living with this internal lie, because, well, we are getting very close to human-level.

Sigal Samuel

The reason I ask this is because it struck me when reading your plan for Scientist AI that you say it's modeled after the platonic idea of a scientist — a selfless, ideal person who's just trying to understand the world. I thought: Are you in some way trying to build the ideal version of yourself, this 'he' that you mentioned, the ideal scientist? Is it like what you wish you could have been?

Yoshua Bengio

You should do psychotherapy instead of journalism! Yeah, you're pretty close to the mark. In a way, it's an ideal that I have been looking toward for myself. I think that's an ideal that scientists should be looking toward as a model. Because, for the most part in science, we need to step back from our emotions so that we avoid biases and preconceived ideas and ego.

Sigal Samuel

A couple of years ago you were one of the signatories of the letter urging AI companies to pause cutting-edge work. Obviously, the pause did not happen. For me, one of the takeaways from that moment was that we're at a point where this is not predominantly a technological problem. It's political. It's really about power and who gets the power to shape the incentive structure. We know the incentives in the AI industry are horribly misaligned. There's massive commercial pressure to build cutting-edge AI. To do that, you need a ton of compute so you need billions of dollars, so you're practically forced to get in bed with a Microsoft or an Amazon. How do you propose to avoid that fate?

Yoshua Bengio

That's why we're doing this as a nonprofit.
We want to avoid the market pressure that would force us into the capability race and, instead, focus on the scientific aspects of safety. I think we could do a lot of good without having to train frontier models ourselves. If we come up with a methodology for training AI that is convincingly safer, at least on some aspects like loss of control, and we hand it over almost for free to companies that are building AI — well, no one in these companies actually wants to see a rogue AI. It's just that they don't have the incentive to do the work! So I think just knowing how to fix the problem would reduce the risks considerably.

I also think that governments will hopefully take these questions more and more seriously. I know right now it doesn't look like it, but when we start seeing more evidence of the kind we've seen in the last six months, but stronger and more scary, public opinion might push sufficiently that we'll see regulation or some way to incentivize companies to behave better. It might even happen just for market reasons — like, [AI companies] could be sued. So, at some point, they might reason that they should be willing to pay some money to reduce the risks of accidents.

Sigal Samuel

I was happy to see that LawZero isn't only talking about reducing the risks of accidents but is also talking about 'protecting human joy and endeavor.' A lot of people fear that if AI gets better than them at things, well, what is the meaning of their life? How would you advise people to think about the meaning of their human life if we enter an era where machines have both agency and extreme intelligence?

Yoshua Bengio

I understand it would be easy to be discouraged and to feel powerless. But the decisions that human beings are going to make in the coming years as AI becomes more powerful — these decisions are incredibly consequential. So there's a sense in which it's hard to get more meaning than that! If you want to do something about it, be part of the thinking, be part of the democratic debate. I would advise us all to remind ourselves that we have agency. And we have an amazing task in front of us: to shape the future.

The stunning reversal of humanity's oldest bias

Vox • Politics • 5 days ago

The author is a senior editorial director at Vox overseeing the climate teams and the Unexplainable and The Gray Area podcasts. He is also the editor of Vox's Future Perfect section and writes the Good News newsletter. He worked at Time magazine for 15 years as a foreign correspondent in Asia, a climate writer, and an international editor, and he wrote a book on existential risk.

The oldest, most pernicious form of human bias is that of men toward women. It often started at the moment of birth. In ancient Athens, at a public ceremony called the amphidromia, fathers would inspect a newborn and decide whether it would be part of the family, or be cast away. One often socially acceptable reason for abandoning the baby: It was a girl.

Female infanticide has been distressingly common in many societies — and its practice is not just ancient history. In 1990, the Nobel Prize-winning economist Amartya Sen looked at birth ratios in Asia, North Africa, and China and calculated that more than 100 million women were essentially 'missing' — meaning that, based on the normal ratio of boys to girls at birth and the longevity of both genders, there was a huge number of girls who should have been born, but weren't.

Sen's estimate came before the truly widespread adoption of ultrasound tests that could determine the sex of a fetus in utero — which actually made the problem worse, leading to a wave of sex-selective abortions. These were especially common in countries like India and China; the latter's one-child policy and old biases made families desperate for their one child to be a boy. The Economist has estimated that since 1980 alone, there have been approximately 50 million fewer girls born worldwide than would naturally be expected, which almost certainly means that nearly all of those girls were aborted for no other reason than their sex.

The preference for boys was a bias that killed in mass numbers. But in one of the most important social shifts of our time, that bias is changing. In a great cover story earlier this month, The Economist reported that the number of annual excess male births has fallen from a peak of 1.7 million in 2000 to around 200,000, which puts it back within the biologically standard birth ratio of 105 boys for every 100 girls. Countries that once had highly skewed sex ratios — like South Korea, which saw almost 116 boys born for every 100 girls in 1990 — now have normal or near-normal ratios.

Altogether, The Economist estimated that the decline in sex preference at birth in the past 25 years has saved the equivalent of 7 million girls. That's comparable to the number of lives saved by anti-smoking efforts in the US.
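The 'missing girls' and 'excess male births' figures all rest on the same simple comparison against the natural baseline of roughly 105 boys born per 100 girls. Here is a small sketch of that arithmetic, using South Korea's 1990 ratio of 116 boys per 100 girls as the observed value; the cohort size is a made-up number for illustration only.

```python
NATURAL_RATIO = 105 / 100  # boys per girl at birth, the biological baseline cited above

def missing_girls(boys_born: int, girls_born: int) -> float:
    """Girls who would have been born alongside these boys at the natural ratio,
    minus the girls actually recorded."""
    expected_girls = boys_born / NATURAL_RATIO
    return max(0.0, expected_girls - girls_born)

# Illustrative cohort: 1,160,000 boys and 1,000,000 girls, i.e. the 116:100 ratio
# South Korea recorded in 1990. The shortfall works out to roughly 105,000 girls.
print(round(missing_girls(1_160_000, 1_000_000)))
```

The 'excess male births' figure is essentially the same comparison run from the other side, counting boys born above what the natural ratio would predict.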
So how, exactly, have we overcome a prejudice that seemed so embedded in human society?

Success in school and the workplace

For one, we have relaxed discrimination against girls and women in other ways — in school and in the workplace. With fewer limits, girls are outperforming boys in the classroom. In the most recent international PISA tests, considered the gold standard for evaluating student performance around the world, 15-year-old girls beat their male counterparts in reading in 79 out of 81 participating countries or economies, while the historic male advantage in math scores has fallen to single digits.

Girls are also dominating in higher education, with 113 female students at that level for every 100 male students. While women continue to earn less than men, the gender pay gap has been shrinking, and in a number of urban areas in the US, young women have actually been outearning young men.

Government policies have helped accelerate that shift, in part because governments have come to recognize the serious social problems that eventually result from decades of anti-girl discrimination. In countries like South Korea and China, which have long had some of the most skewed gender ratios at birth, governments have cracked down on technologies that enable sex-selective abortion. In India, where female infanticide and neglect have been particularly horrific, slogans like 'Save the Daughter, Educate the Daughter' have helped change opinions.

A changing preference

The shift is being seen not just in birth sex ratios, but in opinion polls — and in the actions of would-be parents. Between 1983 and 2003, The Economist reported, the proportion of South Korean women who said it was 'necessary' to have a son fell from 48 percent to 6 percent, while nearly half of women now say they want daughters. In Japan, the shift has gone even further — as far back as 2002, 75 percent of couples who wanted only one child said they hoped for a daughter.

In the US, which allows sex selection for couples doing in-vitro fertilization, there is growing evidence that would-be parents prefer girls, as do potential adoptive parents. While in the past, parents who had a girl first were more likely to keep trying to have children in an effort to have a boy, the opposite is now true — couples who have a girl first are less likely to keep trying.

A more equal future

There's still more progress to be made. In northwest India, for instance, birth ratios that overly skew toward boys are still the norm. In regions of sub-Saharan Africa, birth sex ratios may be relatively normal, but post-birth discrimination in the form of poorer nutrition and worse medical care still lingers. And of course, women around the world are still subject to unacceptable levels of violence and discrimination from men.

And some of the reasons for this shift may not be as high-minded as we'd like to think. Boys around the world are struggling in the modern era. They increasingly underperform in education, are more likely to be involved in violent crime, and in general, are failing to launch into adulthood. In the US, 20 percent of men between 25 and 34 still live with their parents, compared to 15 percent of similarly aged women.

It also seems to be the case that at least some of the increasing preference for girls is rooted in sexist stereotypes. Parents around the world may now prefer girls partly because they see them as more likely to take care of them in their old age — meaning a different kind of bias against women, that they are more natural caretakers, may be paradoxically driving the decline in prejudice against girls at birth.

But make no mistake — the decline of boy preference is a clear mark of social progress, one measured in millions of girls' lives saved. And maybe one Father's Day, not too long from now, we'll reach the point where daughters and sons are simply children: equally loved and equally welcomed.

What drove the tech right's — and Elon Musk's — big, failed bet on Trump

Vox • Business • 13-06-2025

The author is a senior writer at Future Perfect, Vox's effective altruism-inspired section on the world's biggest challenges. She explores wide-ranging topics like climate change, artificial intelligence, vaccine development, and factory farms, and also writes the Future Perfect newsletter.

While tech has generally been very liberal in its political support and giving, there's been an emergence of a real and influential tech right over the last few years. Allison Robbert/AFP via Getty Images

I live and work in the San Francisco Bay Area, and I don't know anyone who says they voted for Donald Trump in 2016 or 2020. I know, on the other hand, quite a few who voted for him in 2024, and quite a few more who — while they didn't vote for Trump because of his many crippling personal foibles, corruption, penchant for destroying the global economy, etc. — have thoroughly soured on the Democratic Party.

It's not just my professional networks. While tech has generally been very liberal in its political support and giving, the last few years have seen the emergence of a real and influential tech right. Elon Musk, of course, is by far the most famous, but he didn't start the tech right by himself. And while his break with Trump — which Musk now seems to be backpedaling on — might have changed his role within the tech right, I don't think this shift will end with him.

The rise of the tech right

The Bay Area tech scene has always, to my mind, been best understood as left-libertarian — socially liberal, but suspicious of big government and excited about new things from cryptocurrency to charter cities to mosquito gene drives to genetically engineered superbabies to tooth bacteria. That array of attitudes sometimes puts the tech scene at odds with governments (and much of the public, which tends to be much less welcoming of new technology). The tech world valorizes founders and doers, and everyone knows two or three stories about a company that only succeeded because it was willing to break some city regulations. Lots of founders are immigrants; lots are LGBTQ+.

For a long time, this set of commitments put tech firmly on the political left — and indeed tech employees overwhelmingly vote for and donate to the Democratic Party. But over the last 10 years, I think three things changed.

The first was what Vox at the time called the Great Awokening — a sweeping adoption of what had been a bunch of niche liberal social justice ideas, from widespread acceptance of trans people to suspicion of any sex or race disparity in hiring to #MeToo awareness of sexual harassment in the workplace. A lot of this shift at tech companies was employee driven; again, tech employees are mostly on the left. And some of it was good! But some of it was illiberal — rejecting the idea that we can and should work with people we profoundly disagree with — and identitarian, in that it focused more on what demographic categories we belong to than on our commonalities. We're now in the middle of a backlash, which I think is all the more intense in tech because the original woke movement was all the more intense in tech.

The second thing that changed was the macroeconomic environment.
When I first joined a tech company in 2017, interest rates were low and VC funding was incredibly easy to get. Startups were everywhere, and companies were desperately competing to hire employees. As a result, employees had a lot of power; CEOs were often scared of them.

The third was a deliberate effort by many liberals to go after a tech scene they saw as their enemy. The Biden administration ended up staffed by a lot of people ideologically committed to Sen. Elizabeth Warren's view of the world, in which big tech was the enemy of liberal democracy and the tools of antitrust should be used to break it up. Lina Khan's Federal Trade Commission acted on those convictions, going after big tech companies like Amazon. Whether you think this was the right call in economic terms — I mostly think it was not — it was decidedly self-destructive in political terms.

So in 2024, some of tech (still not a majority, but a smaller minority than in the past two Trump elections) went right. The tech world watched with bated breath as Musk announced DOGE: Would the administration deliver the deregulation, tax cuts, and anti-woke wish list that they believed only it could?

…and the immediate failure

The answer so far has been no. (Many people on the tech right are still more optimistic than me, and point at a small handful of victories, but my assessment is that they're wearing rose-colored glasses to the point of outright blindness.)

Some deregulation has happened, but any beneficial effects it would have had on investment have been more than canceled out by the tariffs' catastrophic effects on businesses' ability to plan for the future. They did at least get the tax cuts for the rich, if the 'big, beautiful bill' passes, but that's about all they got — and the ultra-rich will be poorer this year anyway thanks to the unsteady stock market.

The Republicans, when out of power, had a critique of the Democrats that spoke to the tech right, the populist right, white supremacists, and moderate Black and Latino voters alike. But it's much easier to complain about Democrats in a way that all of those disparate interest groups find compelling than to govern in a way that keeps them all happy. Once the Trump administration actually had to choose, it chose basically none of the tech right's priorities.

They took a bad bet — and I think it'd behoove the Democrats to think, as Trump's coalition fractures, about which of those voters can be won back.

The one drug RFK Jr. should actually ban

Vox • Health • 12-06-2025

The author is a senior reporter for Vox's Future Perfect section, with a focus on animal welfare and the future of meat.

Before becoming secretary of the US Department of Health and Human Services and leader of the Make America Healthy Again movement, Robert F. Kennedy Jr. was a swashbuckling environmental attorney who regularly took aim at the meat industry. He sued large meat companies and the Environmental Protection Agency over water pollution from factory farms, and criticized factory farming for its 'unspeakable' animal cruelty and overreliance on feeding animals hormones and drugs.

For over a decade, a group of food safety, environmental, and animal welfare nonprofits has petitioned the US Food and Drug Administration — which Kennedy now oversees — to ban the use of one of the most controversial of those drugs: ractopamine hydrochloride. Fed to pigs in the final weeks of their lives, ractopamine speeds up muscle gain so that pork producers can squeeze more profit from each animal. But the drug has been linked to severe adverse events in pigs, including trembling, reluctance to move, collapse, inability to stand up, hoof disorders, difficulty breathing, and even death. It also carries a number of environmental and human health concerns.

Earlier this year, the FDA denied the petition to ban the drug, arguing that current regulations ensure a 'reasonable certainty of no harm to consumers.' While the agency doesn't dispute that ractopamine can harm animals, and it halved the maximum dose in pigs in 2006, it has argued welfare issues can be mitigated by simply asking meat producers to handle ractopamine-fed animals more carefully — a response that the petitioning organizations called 'toothless.'

The FDA didn't respond to a request for comment in time for publication. Elanco, the pharmaceutical company that developed ractopamine, didn't respond to an interview request for this story.

While 26 countries have approved ractopamine use in livestock, more than 165 have banned or restricted it, and many have set restrictions on or have altogether prohibited the import of pork and beef from ractopamine-fed animals — actions that have set off trade disputes. The bans stem primarily from concerns that the trace amounts of the drug found in meat could harm consumers, especially those with cardiovascular conditions, since ractopamine belongs to a class of drugs (beta-agonists) that can increase people's heart rates.

There's only been one tiny study on ractopamine in humans who took the drug directly, which European regulators — prone to taking a precautionary approach with new food additives — say is insufficient to prove its safety. Chinese scientists are concerned about the drug because its residues concentrate at higher rates in pigs' organs, which are more commonly consumed in Chinese diets. The heated international debate led one team of biotechnology researchers to call ractopamine 'the most controversial food additive in the world.'
An inflatable pig has the words 'I am a ractopamine pig' written on it during a march in Taipei, Taiwan, in November 2020, as thousands demand the reversal of a decision to allow US pork imports into the country, citing food safety issues. Chiang Ying-ying/Associated Press

Daniel Waltz, managing attorney of the Animal Legal Defense Fund — one of the organizations petitioning the FDA to ban ractopamine — told me it seems like just the kind of thing Kennedy would want to prohibit. 'So why isn't the FDA jumping at the opportunity to do something about ractopamine?' Waltz said.

Kennedy and the broader MAHA movement have long elevated fears over pharmaceuticals and food chemicals, and it can sometimes be difficult to parse their valid concerns from their dangerous conspiracy theories. But he doesn't appear to have ever publicly criticized ractopamine, and it's unknown whether it's even on his radar.

Given the lack of trials, ractopamine's threat to human health is unclear, and reasonable people can disagree on how government agencies should handle it. But there's a clear case to be made that ractopamine ought to be banned because of its awful effects on animals. The FDA's decision to continue to allow it in meat production represents a missed opportunity to challenge the factory farm system that Kennedy has long railed against, and to ban a chemical that no one — except the industry — really wants.

'Ractopamine divides the world'

There's ample real-world evidence that ractopamine can be terrible for pigs. Over an 11-year period, the FDA received reports that over 218,000 pigs fed ractopamine suffered adverse events, like trembling, an inability to stand up, hoof disorders, and difficulty breathing. That's a relatively small share of the billion or so pigs raised and slaughtered for meat during that time period, but the number only includes adverse events reported to the FDA — many more could've occurred without being reported. The next most reported drug had 32,738 cases spanning 24 years.

The FDA has said that reports of adverse events don't establish that the drug caused the effects — essentially that it's correlation, not proof of causation. But shortly after the drug came onto the market, the FDA also received reports of an uptick in ractopamine-fed pigs unable to stand or walk at slaughterhouses. Some studies, including a couple conducted by the drugmaker — Elanco — have shown that ractopamine is associated with a number of issues in pigs, including hoof lesions, fatigue, increased aggression, and metabolic stress. Over the years, Elanco has added warning labels noting that ractopamine-fed pigs are at an increased risk of fatigue and inability to walk.

A 'downer pig' — a pig unable to walk or stand — is dragged at a slaughterhouse that supplies Hormel. When ractopamine first came onto the market, the FDA received reports of an uptick in ractopamine-fed pigs unable to stand or walk at slaughterhouses. Animal Outlook

At the same time, a literature review by Elanco employees and university researchers looking at ractopamine studies found it had minimal effect on pig mortality, inconsistent effects on aggression and acute stress, and mixed results on a number of physiological responses, like cortisol and heart rate, with some research showing little to no effects and others showing moderate effects. The size of the dose — and how workers handle the animals — were often important factors.
Elanco has updated its label to clarify that there's no benefit to feeding pigs more than the lowest dose. There's also some evidence to suggest ractopamine negatively impacts the welfare of cattle, some of whom are fed the drug.

Even more than concerns over animal welfare, the uncertainty over ractopamine's effect on consumers' health has courted international controversy. Those concerns have led to countries rejecting shipments of US pork and beef; Taiwanese lawmakers throwing pig intestines at one another amid mass protests in a dispute over the country's decision to allow US pork imports from ractopamine-fed pigs; and a highly contentious, multiyear debate at the United Nations-run Codex Alimentarius Commission, which sets food standards important for international trade.

By the late 2000s, numerous countries had restricted imports of meat from ractopamine-fed animals, which posed a financial threat to the US meat industry. So the US Department of Agriculture spent five years advocating for the Codex commission to approve maximum residue levels of ractopamine in beef and pork as safe, which would give the US more legal leverage to challenge other countries' import bans.

The commission's fight over ractopamine was 'really, really ugly,' Michael Hansen, a senior scientist at Consumers Union — the publisher of Consumer Reports — who attended commission meetings, told me. European Union officials argued there wasn't enough data to ensure consumers would be safe from ingesting trace amounts of ractopamine. While the drug had been tested on various animal species, only one human clinical trial had been conducted, in 1994, and it included just six healthy young men taking the drug, one of whom dropped out after complaints that his heart was pounding.

In response to the trial, an FDA official at the time stated that 'the data from this study do not provide adequate assurance that the expected ractopamine levels in meat products will be without cardiovascular pharmacological effects in man.'

In 2012, the UN commission narrowly voted to set maximum safe ractopamine residue levels in beef and pork by a margin of just two votes — an unusual outcome for a commission that historically ran on consensus. China and EU representatives, Hansen told me, were furious. US meat industry groups and the USDA secretary at the time, Tom Vilsack, cheered the decision. Writing about the commission fight, trade lawyer Michael Burkard observed that ractopamine 'divides the world.' However, ractopamine remains controversial and the subject of trade disputes; just last year, China blocked shipments of US beef that contained traces of the drug.

Make animals suffer less

The fight over ractopamine is a microcosm of a broader problem in the meat industry: the government's reluctance to regulate it. Over the last century, meat companies have transformed how animals are raised for food. They've packed animals into crowded, sprawling warehouses; bred them to grow bigger and faster to the detriment of their welfare; stored vast amounts of their manure in open-air lagoons that leach into the environment; and designed complex drug regimens to keep them alive in unsanitary conditions or, as in the case of ractopamine, make a little more money off each animal.
Whenever consumers and advocacy groups raise concerns over the problems factory farming has created, more often than not, a government agency tasked with regulating it takes action to defend the meat industry, not reform it.
