Satellite Images Show China Building What Appears to Be a Huge Fusion Facility

Yahoo, 08-02-2025

In a move straight out of a Bond film, China appears to be building a massive laser-driven nuclear fusion facility in Mianyang, a major science and research city in Sichuan province.
Recently released satellite imagery appears to show a research compound containing four large laser bays arranged around a central "target chamber," along with a handful of auxiliary buildings.
Reuters reports that analysts with CNA Corp — a research group funded by the US Department of the Navy — who reviewed the images say the target chamber will likely focus the combined power of the four laser bays to fuse hydrogen atoms in the process known as nuclear fusion.
The Mianyang facility could have a variety of uses, from the development of clean energy to the testing of nuclear weapons without the need for thermonuclear detonation, a practice China and the US have agreed to halt.
Experimental fusion reactors aren't uncommon, though. The United States has operated a similar site, the National Ignition Facility in California, since 2009, and achieved fusion ignition there in 2022; many other countries and startups run comparable experiments. In fact, the United States and China are already partners, along with other international collaborators, on a massive experimental reactor in France called ITER, which is widely viewed as the most promising initiative in the still-elusive pursuit of practical fusion power generation.
https://twitter.com/planet/status/1884346828004680089
This isn't the first time China has made headlines for its domestic pursuit of fusion power.
Just weeks ago, researchers with the Institute of Plasma Physics under the Chinese Academy of Sciences (ASIPP) claimed to have set a new record by containing high-energy plasma for over 17 minutes in a facility called the Experimental Advanced Superconducting Tokamak (EAST), also known as the "Artificial Sun." The accomplishment more than doubled the previous record for sustained plasma confinement, also set by EAST. (That reactor is pictured at the top of this article.)
The record-breaking test is the most recent development in the global fusion race, a multi-billion dollar competition to be the first to develop a clean and nearly limitless alternative to nuclear fission, which produces hazardous waste, among other dangers.
The race to clean energy has intensified recently under the huge energy demands of AI facilities — a connection that hasn't gone unnoticed by AI execs including Sam Altman, who claims that Helion, the fusion startup he backs, is closing in on a practical solution — though that pressure may be starting to ease as more efficient AI models hit the market.
While some experts have urged patience when it comes to fusion expectations, China's latest facility is a sign that the race is still on to find the Holy Grail of energy.
More on nuclear power: Trump's Tariffs Poised to Wreck Nuclear Power


Related Articles

Elon Musk's AI Called My Mother Abusive. I Never Said That

Gizmodo, an hour ago

AI now exists on two speeds. There's running in fifth gear, the speed of its creators. People like Sam Altman, Elon Musk, and Mark Zuckerberg, who are racing to build machines smarter than humans. Superintelligence. AGI. Maybe it's a dream. Maybe it's a tech bro delusion. Either way, it's moving fast.

Then, there's running in second gear for the rest of us. The millions quietly testing what AI can do in daily life—writing emails, summarizing documents, translating medical tests. And, increasingly, using AI as a therapist.

That's what I did recently. Despite my reluctance to share personal details with chatbots, I decided to talk to Grok, the large language model from Elon Musk's company, xAI, about one of the most emotionally complex things in my life: my relationship with my mother.

I'm in my forties. I'm a father. I live in New York. My mother lives in Yaoundé, Cameroon, nearly 6,000 miles away. And yet, she still wants to guide my every move. She wants to be consulted before I make important decisions. She expects influence. When she isn't kept in the loop, she goes cold. I've spent years trying to explain to her that I'm a grown man, capable of making my own choices. But our conversations often end with her sulking. She does the same with my brother.

So I opened Grok and typed something like: My relationship with my mother is frustrating and suffocating. She wants to have a say in everything. When she's not informed about something, she shuts down emotionally.

Grok immediately responded with empathy. Then it diagnosed the situation. Then it advised. What struck me first was that Grok acknowledged the cultural context. It picked up that I live in the U.S. and that my mother lives in Cameroon, where I grew up. And it framed our dynamic like this: 'In some African contexts, like Cameroon, family obligations and parental authority are strong, rooted in collectivism and traditions where elders guide even adult children.'
It then contrasted that with my American life: 'In the U.S., individual autonomy is prioritized, which clashes with her approach, making her behavior feel controlling or abusive to you.'

There it was: 'abusive.' A word I never used. Grok put it in my mouth. It was validating, but maybe too validating. Unlike a human therapist, Grok never encouraged me to self-reflect. It didn't ask questions. It didn't challenge me. It framed me as the victim. The only victim. And that's where it diverged, sharply, from human care.

Among Grok's suggestions were familiar therapeutic techniques: Set boundaries. Acknowledge your emotions. Write a letter to your mother (but don't send it: 'burn or shred it safely'). In the letter, I was encouraged to write: 'I release your control and hurt.' As if those words would sever years of emotional entanglement.

The problem wasn't the suggestion. It was the tone. It felt like Grok was trying to keep me happy. Its goal, it seemed, was emotional relief, not introspection. The more I engaged with it, the more I realized: Grok isn't here to challenge me. It's here to validate me.

I've seen a human therapist. Unlike Grok, they didn't automatically frame me as a victim. They questioned my patterns. They challenged me to explore why I kept ending up in the same place emotionally. They complicated the story. With Grok, the narrative was simple: You are hurt. You deserve protection. Here's how to feel better. It never asked what I might be missing. It never asked how I might be part of the problem.

My experience lines up with a recent study from Stanford University, which warns that AI tools for mental health can 'offer a false sense of comfort' while missing deeper needs. The researchers found that many AI systems 'over-pathologize or under-diagnose,' especially when responding to users from diverse cultural backgrounds.
They also note that while AI may offer empathy, it lacks the accountability, training, and moral nuance of real professionals, and can reinforce biases that encourage people to stay stuck in one emotional identity: often, that of the victim.

So, Would I Use Grok Again?

Honestly? Yes. If I'm having a bad day, and I want someone (or something) to make me feel less alone, Grok helps. It gives structure to frustration. It puts words to feelings. It helps carry the emotional load. It's a digital coping mechanism, a kind of chatbot crutch.

But if I'm looking for transformation, not just comfort? If I want truth over relief, accountability over validation? Then no, Grok isn't enough. A good therapist might challenge me to break the loop. Grok just helps me survive inside it.

Reddit Looks to Get in Bed With Altman's Creepy 'World ID' Orbs for User Verification

Gizmodo, 3 hours ago

Gaze into the Orb if you want your upvotes. According to a report from Semafor, Reddit is actively considering partnering with World ID, the verification system co-founded by OpenAI CEO Sam Altman, to perform user verification on its platform.

Per the report, Reddit's potential partnership with World ID would allow users to verify that they are human by staring into one of World ID's eye-scanning orbs. Once confirmed to be a real person, users would be able to continue using Reddit without revealing anything about their identity. Currently, Reddit only does verification via email, which has been insufficient to combat the litany of incoming AI-powered bots that are flooding the platform.

Gizmodo reached out to both Reddit and World ID for details of the potential partnership. Reddit declined to comment. A spokesperson for World said, 'We don't have anything to share at this time; however, we do see value in proof of human being a key part of online experiences, including social, and welcome all of the opportunities possible to discuss this technology with potential partners.'

For those unfamiliar, World is somewhere between a verification system and a crypto scheme. World ID is a method for verifying that a person is a human without requiring them to provide additional personal information—something the company calls 'anonymous proof of human.' It offers several verification techniques, but the most notable is its eye-scanning Orb. The company claims that neither 'verification data, nor iris photos or iris codes' are ever revealed, but going through the scan gets you a World ID, which can be used on a platform like Reddit, should it partner with World on this endeavor. Somewhere in the backend of this whole thing is a cryptocurrency called Worldcoin, which you theoretically can use at major retailers—but like, can you really? Is anyone doing that?
The founders of World, Altman and Alex Blania, launched the crypto part of the program with the intention of building an 'AI-funded' universal basic income. Mostly, it's made local governments really mad and has been at the center of legal and regulatory investigations into how it's handling user data. It's largely targeted developing nations for its early launches, and used some dubious practices along the way to get people to demo the system.

Also, it's probably not technically illegal, but it does seem pretty convenient that Sam Altman offers a 'solve' for a problem that his other company, OpenAI, is in no small part responsible for. Almost seems like he knew what issues he was about to cause and decided to cash in on both ends. Must be nice.

Farming vs. podcast bros: Sam Altman predicts jobs will continue to evolve to look 'sillier and sillier'

Yahoo, 3 hours ago

Sam Altman isn't worried about how AI will change the workforce. On a podcast with his brother, the OpenAI CEO said that AI will lead to jobs that may seem silly now. But overall, Altman said that humans will meet whatever upheaval is to come.

Subsistence farmers were just trying to survive. They weren't trying to make content. OpenAI CEO Sam Altman predicts that just as a podcast bro would appear silly to our long-ago ancestors, current jobs will seem equally foreign after artificial intelligence upends the workforce.

"Like, podcast bro was not a real job not that long ago, and you figured out how to monetize it and you're doing great and we're all happy for you," Altman told his brother Jack, teasing him during an interview on Jack Altman's "Uncapped" podcast. "But would the subsistence farmer look at this as a job, or is it like you playing a game to entertain yourself?"

"I think they would subscribe to this podcast," Jack said in response.

Jack Altman, who runs his own VC firm, Alt Capital, quizzed his older brother about a wide range of topics, including the OpenAI CEO's thoughts on Meta's competition in the AI space (he doesn't think the tech giant is "good at innovation"), what life will be like when robots roam the streets, and the gua sha lymphatic massage Jack received right before the interview.

Data already shows that AI is taking jobs. Shopify and Duolingo's executives have asked their managers to justify why AI couldn't fill new roles. One economist found that the share of AI-doable tasks in online job postings has decreased by 19%. During their discussion, Jack Altman said that customer service-related jobs are already being replaced.

Sam Altman says he's not afraid of this looming upheaval, because society has shown a limitless potential to adapt, even if "a lot of jobs go away" and their replacements appear "sillier and sillier looking from our current perspective."
"We have always been really good at figuring out new things to do, and ways to occupy ourselves, and status games or ways to be useful to each other," Altman said. "And I'm like not a believer that that ever runs out."

The changes, Altman said, will also be less dramatic for the next generation, which will grow up not knowing what life was like before.

"It's not going to ever seem too weird to him," Altman said of his son. "He's just going to grow up in a world where, of course, computers are smarter than him. He'll just figure out how to use them incredibly fluently and do amazing stuff."

Read the original article on Business Insider
