Essay aid or cognitive crutch? MIT study tests the cost of writing with AI
While LLMs reduce cognitive load, a new study warns they may also hinder critical thinking and memory retention - raising concerns about their growing role in learning and cognitive development
Rahul Goreja New Delhi
A new study from the Massachusetts Institute of Technology (MIT) Media Lab has raised concerns about how artificial intelligence tools like ChatGPT may impact students' cognitive engagement and learning when used to write essays.
The research, led by Nataliya Kosmyna and a team from MIT and Wellesley College, examines how reliance on large language models (LLMs) such as ChatGPT compares to traditional methods like web searches or writing without any digital assistance. Using a combination of electroencephalogram (EEG) recordings, interviews, and text analysis, the study revealed distinct differences in neural activity, essay quality, and perceived ownership depending on the method used.
Note: EEG is a test that measures electrical activity in the brain.
Setup for cognitive engagement study
Fifty-four participants from five Boston-area universities were split into three groups: one using only ChatGPT (the LLM group), one using only search engines (the search group), and one writing without any tools (the brain-only group). Each participant completed three writing sessions. A subset also participated in a fourth session where roles were reversed: LLM users wrote without assistance, and brain-only participants used ChatGPT.
All participants wore EEG headsets to monitor brain activity during writing. Researchers also interviewed participants post-session and assessed essays using both human markers and an AI judge.
Findings on neural engagement
EEG analysis showed that participants relying solely on their own cognitive abilities exhibited the highest levels of neural connectivity across alpha, beta, theta, and delta bands, indicating deeper cognitive engagement. In contrast, LLM users showed the weakest connectivity. The search group fell in the middle.
'The brain connectivity systematically scaled down with the amount of external support,' the authors wrote. Notably, LLM-to-Brain participants in the fourth session continued to show under-engagement, suggesting a lingering cognitive effect from prior LLM use.
Essay structure, memory, and ownership
When asked to quote from their essays shortly after writing, 83.3 per cent of LLM users failed to do so. In comparison, only 11.1 per cent of participants in the other two groups struggled with this task. One participant noted that they 'did not believe the essay prompt provided required AI assistance at all,' while another described ChatGPT's output as 'robotic.'
Essay ownership also varied. Most brain-only participants reported full ownership, while responses in the LLM group ranged widely, from claims of full ownership to outright denial, with many taking partial credit.
Despite this, essay satisfaction remained relatively high across all groups, with the search group being unanimously satisfied. Interestingly, LLM users were often satisfied with the output, even when they acknowledged limited involvement in the content's creation.
Brain power trumps AI aid
While AI tools may improve efficiency, the study cautions against their unnecessary adoption in learning contexts. 'The use of LLM had a measurable impact on participants, and while the benefits were initially apparent, as we demonstrated over the course of four months, the LLM group's participants performed worse than their counterparts in the Brain-only group at all levels: neural, linguistic, scoring,' the authors wrote.
This pattern was especially evident in session four, where Brain-to-LLM participants showed stronger memory recall and more directed neural connectivity than those who moved in the opposite direction.
Less effort, lower retention
The study warns that although LLMs reduce cognitive load, they may diminish critical thinking and reduce long-term retention. 'The reported ownership of LLM group's essays in the interviews was low,' the authors noted.
'The LLM undeniably reduced the friction involved in answering participants' questions compared to the search engine. However, this convenience came at a cognitive cost, diminishing users' inclination to critically evaluate the LLM's output or 'opinions' (probabilistic answers based on the training datasets),' the authors concluded.