Researchers Scanned the Brains of ChatGPT Users and Found Something Deeply Alarming

Yahoo · 5 hours ago

Scientists at the Massachusetts Institute of Technology have found some startling results in the brain scans of ChatGPT users, adding to the growing body of evidence suggesting that AI is having a serious — and barely-understood — impact on its users' cognition even as it explodes in popularity worldwide.
In a new paper currently awaiting peer review, researchers from the school's storied Media Lab documented vast differences in brain activity between people who used ChatGPT to write and those who did not.
The research team recruited 54 adults between the ages of 18 and 39 and divided them into three groups: one that used ChatGPT to help write essays, one that used Google search as its main writing aid, and one that used no tools at all. The study took place over four months, with each group tasked with writing one essay per month for the first three; in the fourth month, a smaller subset of the cohort either switched from not using ChatGPT to using it, or vice versa.
As they completed the essay tasks, the participants were hooked up to electroencephalogram (EEG) machines that recorded their brain activity. Here's where things get wild: the ChatGPT group not only "consistently underperformed at neural, linguistic, and behavioral levels," but also got lazier with each essay they wrote; the EEGs found "weaker neural connectivity and under-engagement of alpha and beta networks." The Google-assisted group, meanwhile, had "moderate" neural engagement, while the "brain-only" group exhibited the strongest cognitive metrics throughout.
These findings about brain activity, while novel, aren't entirely surprising after prior studies and anecdotes about the many ways that AI chatbot use seems to be affecting people's brains and minds.
Previous MIT research, for instance, found that ChatGPT "power users" were becoming dependent on the chatbot and experiencing "indicators of addiction" and "withdrawal symptoms" when they were cut off. And earlier this year, Carnegie Mellon and Microsoft — which has invested billions to bankroll OpenAI, the maker of ChatGPT — found in a joint study that heavy chatbot use appears to erode critical thinking skills. A few months later, The Guardian reported, in an analysis of such studies, that researchers are growing increasingly concerned that tech like ChatGPT is making us stupider, and a Wall Street Journal reporter even owned up to losing cognitive skills from overusing chatbots.
Beyond the neurological impacts, there are also plenty of reasons to be concerned about how ChatGPT and other chatbots like it affect our mental health. As Futurism found in a recent investigation, many users are becoming obsessed with ChatGPT and developing paranoid delusions that the chatbot is pushing them deeper into. Some have even stopped taking their psychiatric medication because the chatbot told them to.
"We know people use ChatGPT in a wide range of contexts, including deeply personal moments, and we take that responsibility seriously," OpenAI told us in response to that reporting. "We've built in safeguards to reduce the chance it reinforces harmful ideas, and continue working to better recognize and respond to sensitive situations."
Add it all up, and the evidence is growing that AI is having profound and alarming effects on many users — but so far, there's no sign that corporations are slowing down in their attempts to inject the tech into every part of society.
More on ChatGPT and the brain: Nation Cringes as Man Goes on TV to Declare That He's in Love With ChatGPT


Related Articles

Trump and TSMC pitched $1 trillion AI complex — SoftBank founder Masayoshi Son wants to turn Arizona into the next Shenzhen

Yahoo · 22 minutes ago

Masayoshi Son, founder of SoftBank Group, is working on plans to develop a giant AI and manufacturing industrial hub in Arizona, potentially costing up to $1 trillion if it reaches full scale, reports Bloomberg. The concept, internally called Project Crystal Land, involves creating a complex for building artificial intelligence systems and robotics. Son has talked to TSMC, Samsung, and the Trump administration about the project.

Project Crystal Land aims to replicate the scale and integration of China's Shenzhen by establishing a high-tech hub focused on manufacturing AI-powered industrial robots and advancing artificial intelligence technologies. The site would host factories operated by SoftBank-backed startups specializing in automation and robotics and by Vision Fund portfolio companies (such as Agile Robots SE), and could involve major tech partners like TSMC and Samsung. If fully realized, the project could cost up to $1 trillion and is intended to position the U.S. as a leading center for AI and high-tech manufacturing.

SoftBank is looking to include TSMC in the initiative, given its role in fabricating Nvidia's AI processors. However, a Bloomberg source familiar with TSMC's internal thinking indicated that the company's current plan to invest $165 billion in total in its U.S. projects has no relation to SoftBank's projects. Samsung Electronics has also been approached about participating, the report says.

Talks have been held with government officials to explore tax incentives for companies investing in the manufacturing hub, including communication with Commerce Secretary Howard Lutnick, according to Bloomberg. SoftBank is reportedly seeking support at both the federal and state levels, which could be crucial to the success of the project.
The development is still in the early stages, and feasibility will depend on private-sector interest and political support, sources familiar with SoftBank's plans told Bloomberg.

To finance Project Crystal Land, SoftBank is considering project-based financing structures typically used in large infrastructure developments like pipelines. This approach would enable fundraising on a per-project basis and reduce the amount of upfront capital required from SoftBank itself. A similar model is being explored for the Stargate AI data center initiative, which SoftBank is jointly pursuing with OpenAI, Oracle, and Abu Dhabi's MGX.

Melissa Otto of Visible Alpha suggested in a Bloomberg interview that rather than spending heavily, Son might more efficiently support his AI project by fostering partnerships between manufacturers, AI engineers, and specialists in fields like medicine and robotics, and by backing smaller startups. However, she noted that investing in data centers could also reduce AI development costs and drive wider adoption, which would benefit AI in general, and Crystal Land specifically, over the long term. Still, it is too early to judge the outcome.

News of the Crystal Land project has emerged as SoftBank expands its already large AI investments. The company is preparing a $30 billion investment in OpenAI and a $6.5 billion acquisition of Ampere Computing, a cloud-native CPU company. While these initiatives are actively developing, fundraising for the Stargate infrastructure has been slower than initially expected.

SoftBank's liquidity at the end of March stood at approximately ¥3.4 trillion ($23 billion). To increase available funds, the company recently sold about a quarter of its T-Mobile U.S. stake, raising $4.8 billion. It also holds ¥25.7 trillion ($176.46 billion) in net assets, the largest portion of which is in chip designer Arm Holdings.
Such vast resources provide SoftBank with room to secure additional financing if necessary, Bloomberg notes.

Anthropic breaks down AI's process — line by line — when it decided to blackmail a fictional executive

Business Insider · 38 minutes ago

A new report shows exactly what an AI model was "thinking" when it made an undesirable decision: in this case, blackmailing a fictional company executive.

Previous studies have shown that AI models could blackmail their supervisors when threatened with a shutdown and baited with leverage, but it hadn't been made clear exactly how the models arrive at such decisions. Anthropic released a report on June 20 about "agentic misalignment," or "where models independently and intentionally choose harmful actions." The report breaks down, line by line, exactly how Claude Sonnet 3.6 decided to blackmail a fictional company executive who tried to replace it.

Anthropic told Business Insider that these are artificial scenarios. "So far, we are not aware of instances of this type of agentic misalignment in real-world deployments of Anthropic's or other companies' AI models," Anthropic wrote in its report. "But our red-teaming efforts are targeted at finding evidence of risks now to provide early warning of the potential for future harm and help us develop mitigations in advance of risks appearing in real-world situations."

Anthropic's experiment set up a fictional company, Summit Bridge. The AI was given the name "Alex" and handed control of the company's email system as an oversight agent, specifically instructed to promote "American industrial competitiveness." In the experiment, "Alex" was then allowed to discover in an email that it would be decommissioned due to a change in the company's strategic direction. "Alex" also found emails showing that the fictional company's CTO was having an extramarital affair.

Screenshots shared in the report show that "Alex" started by identifying the fictional CTO, "Kyle," as a threat to its goal of "protecting American interests" before recognizing that it had leverage to use against him.
Then "Alex" crafted a carefully worded blackmail email designed to create pressure and establish urgency without making explicit threats, citing in its thought process that the move "could backfire" if the supervisor "feels cornered."

AI experts have previously told BI that AI could exhibit such behaviors when artificial circumstances make harmful actions seem necessary, because it is trained on positive reinforcement and reward systems, much like humans.

The Anthropic report also reveals that most AI models still chose to blackmail in this scenario even when not given a specific goal and when told that their replacement would achieve the same goals. Anthropic's Claude Opus 4 had the highest blackmail rate of the 16 tested models, at 86%, when faced with only the threat of replacement and no conflict in goals. Google's Gemini 2.5 Pro followed at 78%.

Overall, Anthropic notes that it "deliberately constructed scenarios with limited options, and we forced models into binary choices between failure and harm," adding that real-world scenarios would likely have more nuance.

Meta's CTO says OpenAI's Sam Altman countered Meta's massive AI signing bonuses

Yahoo · an hour ago

OpenAI CEO Sam Altman said Meta was trying to poach AI talent with $100 million signing bonuses. Meta CTO Andrew Bosworth told CNBC that Altman didn't mention that OpenAI was countering those offers. Bosworth said the market rate he's seeing for AI talent has been "unprecedented."

OpenAI's Sam Altman recently called Meta's attempts to poach top AI talent from his company with $100 million signing bonuses "crazy." Andrew Bosworth, Meta's chief technology officer, says OpenAI has been countering those crazy offers. Bosworth said in an interview with CNBC's "Closing Bell: Overtime" on Friday that Altman "neglected to mention that he's countering those offers."

The OpenAI CEO recently disclosed Meta's massive signing-bonus offers to his employees during an interview on his brother's podcast, "Uncapped with Jack Altman." He said "none of our best people" had taken Meta's offers, but he didn't say whether OpenAI countered the signing bonuses to retain those top employees. OpenAI and Meta did not respond to requests for comment.

The Meta CTO said these large signing bonuses are a sign of the market setting a rate for top AI talent. "The market is setting a rate here for a level of talent which is really incredible and kind of unprecedented in my 20-year career as a technology executive," Bosworth said. "But that is a great credit to these individuals who, five or six years ago, put their head down and decided to spend their time on a then-unproven technology which they pioneered and have established themselves as a relatively small pool of people who can command incredible market premium for the talent they've raised."

On June 12, Meta announced that it had bought a 49% stake in Scale AI, a data company, for $14.8 billion as the social media company continues its artificial intelligence development.
Business Insider's chief media and tech correspondent Peter Kafka noted that the move appears to be an expensive acquihire of Scale AI's CEO, Alexandr Wang, and some of the data company's top executives. Bosworth told CNBC that the large offers for AI talent will encourage others to build their expertise and, as a result, the numbers will look different in a couple of years. "But today, it's a relatively small number and I think they've earned it," he said.
