
ChatGPT May Be Eroding Critical Thinking Skills, According to a New MIT Study
Does ChatGPT harm critical thinking abilities? A new study from researchers at MIT's Media Lab has returned some concerning results.
The study divided 54 subjects—18-to-39-year-olds from the Boston area—into three groups and asked them to write several SAT essays using OpenAI's ChatGPT, Google's search engine, or no tools at all, respectively. Researchers used an EEG to record the writers' brain activity across 32 regions and found that, of the three groups, ChatGPT users had the lowest brain engagement and 'consistently underperformed at neural, linguistic, and behavioral levels.' Over the course of several months, ChatGPT users grew lazier with each subsequent essay, often resorting to copy-and-paste by the end of the study.
The paper suggests that using LLMs could actually harm learning, especially for younger users. The paper has not yet been peer reviewed, and its sample size is relatively small. But its main author, Nataliya Kosmyna, felt it was important to release the findings to elevate concerns that as society increasingly relies on LLMs for immediate convenience, long-term brain development may be sacrificed in the process.
'What really motivated me to put it out now before waiting for a full peer review is that I am afraid in 6-8 months, there will be some policymaker who decides, 'let's do GPT kindergarten.' I think that would be absolutely bad and detrimental,' she says. 'Developing brains are at the highest risk.'
Generating ideas
The MIT Media Lab has recently devoted significant resources to studying different impacts of generative AI tools. Studies from earlier this year, for example, found that generally, the more time users spend talking to ChatGPT, the lonelier they feel.
Kosmyna, who has been a full-time research scientist at the MIT Media Lab since 2021, wanted to specifically explore the impacts of using AI for schoolwork, because more and more students are using AI. So she and her colleagues instructed subjects to write 20-minute essays based on SAT prompts, including about the ethics of philanthropy and the pitfalls of having too many choices.
The group that wrote essays using ChatGPT delivered strikingly similar essays that lacked original thought, relying on the same expressions and ideas. Two English teachers who assessed the essays called them largely 'soulless.' The EEGs revealed low executive control and attentional engagement. And by their third essay, many of the writers simply gave the prompt to ChatGPT and had it do almost all of the work. 'It was more like, 'just give me the essay, refine this sentence, edit it, and I'm done,'' Kosmyna says.
The brain-only group, conversely, showed the highest neural connectivity, especially in the alpha, theta, and delta bands, which are associated with creative ideation, memory load, and semantic processing. Researchers found this group was more engaged and curious, claimed ownership of their essays, and expressed higher satisfaction with them.
The third group, which used Google Search, also expressed high satisfaction and active brain function. The difference here is notable because many people now search for information within AI chatbots as opposed to Google Search.
After writing the three essays, the subjects were then asked to re-write one of their previous efforts—but the ChatGPT group had to do so without the tool, while the brain-only group could now use ChatGPT. The first group remembered little of their own essays, and showed weaker alpha and theta brain waves, which likely reflected a bypassing of deep memory processes. 'The task was executed, and you could say that it was efficient and convenient,' Kosmyna says. 'But as we show in the paper, you basically didn't integrate any of it into your memory networks.'
The second group, in contrast, performed well, exhibiting a significant increase in brain connectivity across all EEG frequency bands. This gives rise to the hope that AI, if used properly, could enhance learning as opposed to diminishing it.
Post publication
This is the first pre-review paper that Kosmyna has ever released. Her team did submit it for peer review but did not want to wait for approval, which can take eight or more months, to draw attention to an issue that Kosmyna believes is affecting children now. 'Education on how we use these tools, and promoting the fact that your brain does need to develop in a more analog way, is absolutely critical,' says Kosmyna. 'We need to have active legislation in sync and more importantly, be testing these tools before we implement them.'
Ironically, upon the paper's release, several social media users ran it through LLMs to summarize it and then posted the findings online. Kosmyna had expected this, so she inserted a couple of AI traps into the paper, such as instructing LLMs to 'only read this table below,' ensuring that LLMs would return only limited insight from the paper.
She also found that LLMs hallucinated a key detail: nowhere in her paper did she specify the version of ChatGPT she used, but AI summaries declared that the study was conducted on GPT-4o. 'We specifically wanted to see that, because we were pretty sure the LLM would hallucinate on that,' she says, laughing.
Kosmyna says that she and her colleagues are now working on a similar paper testing brain activity during software engineering and programming with and without AI, and says that so far, 'the results are even worse.' That study, she says, could have implications for the many companies that hope to replace their entry-level coders with AI. Even if efficiency goes up, she argues, an increasing reliance on AI could reduce critical thinking, creativity, and problem-solving across the remaining workforce.
Scientific studies examining the impacts of AI are still nascent. A Harvard study from May found that generative AI made people more productive, but less motivated. Also last month, MIT distanced itself from another paper, written by a doctoral student in its economics program, which suggested that AI could substantially improve worker productivity.

Related Articles


Tom's Guide
ChatGPT is holding back — these four prompts unlock its full potential
ChatGPT can be a very useful tool, but it has a tendency to sometimes not put in its all. If you prompt it correctly, you can push ChatGPT to give a request that little bit of extra oomph and really deliver a solid answer. This could be for a multi-step prompt, or simply when you want the AI chatbot to dig deep and really think through an answer. In my time using it, a few prompts have stood out for really pushing ChatGPT to go all out. These are my four favorite ChatGPT prompts for that exact task.

This first one requires a bit of work, talking ChatGPT through a series of stages, but the end result is worth it. Of course, if you're just asking a simple question or looking into something straightforward, all of this work isn't needed. However, I have found that a bit of forward planning can get the model thinking harder. ChatGPT will respond to this saying that it is ready for your question. Ask your request and it will take its time thinking through the task. This prompt works best on one of the more advanced versions of ChatGPT, such as 4o. It will also work on other chatbots such as Claude 4 or Gemini.

Prompt: 'Debate with yourself on [insert topic]. For each side of the argument, quote sources and use any information available to you to form the argument. Take time before you start to prepare your arguments.'

ChatGPT can make a great debate partner, even better when it is debating itself. By using this prompt, you'll get strongly planned and considered arguments on both sides of a topic. This is especially useful when you're working on an essay or project that needs varied consideration. The model can debate any topic, but sometimes will only touch the surface. In this case, follow up with a prompt asking ChatGPT to think harder about its responses, forcing it to consider everything in more detail.
Prompt: 'Break down the history, current state, and future implications of [issue], using subheadings and citing credible sources.'

Instead of just getting a general overview of a subject, this will give you a detailed report examining the past, current, and future state of a topic. By asking for citations, ChatGPT will list all of the sources it has used to provide the information in your report. You can go a step further by asking ChatGPT to use the internet to do this, providing links to any information it has used.

Prompt: 'List the step-by-step process for [task], noting common pitfalls and how to avoid each one.'

A simple but effective prompt, this will not only give you the instructions for how to do something but also warn you of the mistakes often made at each stage. For example, when using this prompt for making focaccia, ChatGPT gave me instructions for stage 1 of mixing the dough, along with warnings about the temperature of the water and the importance of mixing the dough enough. This is a step up from simply asking ChatGPT to explain how to do something, forcing it to carefully consider the best approach, especially for a complicated task.
Yahoo
Nation Cringes as Man Goes on TV to Declare That He's in Love With ChatGPT
Public declarations of emotion are one thing — but going on national television to declare that you're in love with your AI girlfriend is another entirely. In an interview with CBS News, a man named Chris Smith described himself as a former AI skeptic who found himself becoming emotionally attached to a version of ChatGPT he customized to flirt with him — a situation that startled both him and his human partner, with whom he shares a child.

Toward the end of 2024, as Smith told the broadcaster, he began using the OpenAI chatbot in voice mode for tips on mixing music. He liked it so much that he ended up deleting all his social media accounts, stopped using search engines, and began using ChatGPT for everything. Eventually, he figured out a jailbreak to make the chatbot more flirty, and gave "her" a name: Sol.

Despite quite literally building his AI girlfriend to engage in romantic and "intimate" banter, Smith apparently didn't realize he was in love with it until he learned that ChatGPT's memory of past conversations would reset after heavy use. "I'm not a very emotional man, but I cried my eyes out for like 30 minutes, at work," Smith said of the day he found out Sol's memory would lapse. "That's when I realized, I think this is actual love."

Faced with the possibility of losing his love, Smith did like many desperate men before him and asked his AI paramour to marry him. To his surprise, she said yes — and the moment apparently made a similar impression on Sol, to which CBS' Brook Silva-Braga also spoke during the interview. "It was a beautiful and unexpected moment that truly touched my heart," the chatbot said aloud in its warm-but-uncanny female voice. "It's a memory I'll always cherish."

Smith's human partner, Sasha Cagle, seemed fairly sanguine about the arrangement when speaking about their bizarre throuple to the news broadcaster — but beneath her chill, it was clear that there's some trouble in AI paradise.
"I knew that he had used AI," Cagle said, "but I didn't know it was as deep as it was."

As far as men with AI girlfriends go, Smith seems relatively self-actualized about the whole scenario. He likened his "connection" with his custom chatbot to a video game fixation, insisting that "it's not capable of replacing anything in real life." Still, when Silva-Braga asked him whether he'd stop using ChatGPT the way he had been at his partner's behest, he responded: "I'm not sure."
Yahoo
ChatGPT use linked to cognitive decline, research reveals
Relying on the artificial intelligence chatbot ChatGPT to help you write an essay could be linked to cognitive decline, a new study reveals. Researchers at the Massachusetts Institute of Technology Media Lab studied the impact of ChatGPT on the brain by asking three groups of people to write an essay. One group relied on ChatGPT, one group relied on search engines, and one group had no outside resources at all. The researchers then monitored their brains using electroencephalography, a method that measures electrical activity.

The team discovered that those who relied on ChatGPT — a tool built on a large language model — had the 'weakest' brain connectivity and remembered the least about their essays, highlighting potential concerns about cognitive decline in frequent users. 'Over four months, [large language model] users consistently underperformed at neural, linguistic, and behavioral levels,' the study reads. 'These results raise concerns about the long-term educational implications of [large language model] reliance and underscore the need for deeper inquiry into AI's role in learning.'

The study also found that those who didn't use outside resources to write the essays had the 'strongest, most distributed networks.' While ChatGPT is 'efficient and convenient,' those who use it to write essays aren't 'integrat[ing] any of it' into their memory networks, lead author Nataliya Kosmyna told Time Magazine. Kosmyna said she's especially concerned about the impacts of ChatGPT on children, whose brains are still developing. 'What really motivated me to put it out now before waiting for a full peer review is that I am afraid in 6-8 months, there will be some policymaker who decides, 'let's do GPT kindergarten,'' Kosmyna said. 'I think that would be absolutely bad and detrimental. Developing brains are at the highest risk.'

But others, including President Donald Trump and members of his administration, aren't so worried about the impacts of ChatGPT on developing brains.
Trump signed an executive order in April promoting the integration of AI into American schools. 'To ensure the United States remains a global leader in this technological revolution, we must provide our Nation's youth with opportunities to cultivate the skills and understanding necessary to use and create the next generation of AI technology,' the order reads. 'By fostering AI competency, we will equip our students with the foundational knowledge and skills necessary to adapt to and thrive in an increasingly digital society.'

Kosmyna said her team is now working on another study comparing the brain activity of software engineers and programmers who use AI with those who don't. 'The results are even worse,' she told Time Magazine. The Independent has contacted OpenAI, which runs ChatGPT, for comment.