
Latest news with #EEG

ChatGPT use linked to cognitive decline: MIT research

The Hill

11 hours ago

  • Science
  • The Hill

ChatGPT can harm an individual's critical thinking over time, a new study suggests. Researchers at MIT's Media Lab asked subjects to write several SAT essays and separated them into three groups: one using OpenAI's ChatGPT, one using Google's search engine, and one using nothing, which they called the 'brain-only' group. Each subject was monitored through electroencephalography (EEG), which measured brain activity across multiple regions.

The researchers discovered that subjects who used ChatGPT over a few months had the lowest brain engagement and 'consistently underperformed at neural, linguistic, and behavioral levels,' according to the study. The ChatGPT group initially used the large language model, or LLM, to ask structural questions about their essays, but by the end of the study they were more likely to simply copy and paste the essay. Those who used Google's search engine showed moderate brain engagement, while the 'brain-only' group showed the 'strongest, wide-ranging networks.'

The findings suggest that relying on LLMs can harm a user's cognitive function over time, especially in younger users. They come as educators continue to navigate teaching at a time when AI makes cheating increasingly accessible.

'What really motivated me to put it out now before waiting for a full peer review is that I am afraid in 6-8 months, there will be some policymaker who decides, 'let's do GPT kindergarten.' I think that would be absolutely bad and detrimental,' the study's lead author, Nataliya Kosmyna, told TIME. 'Developing brains are at the highest risk.'

Still, the use of AI in education doesn't appear to be slowing down. In April, President Trump signed an executive order that aims to incorporate AI into U.S. classrooms. 'The basic idea of this executive order is to ensure that we properly train the workforce of the future by ensuring that school children, young Americans, are adequately trained in AI tools, so that they can be competitive in the economy years from now into the future, as AI becomes a bigger and bigger deal,' Will Scharf, White House staff secretary, said at the time.

ChatGPT is getting smarter, but excessive use could destroy our brains, study warns

New York Post

a day ago

  • Science
  • New York Post

Is it an artificial lack of intelligence? Not only is AI getting frighteningly smart, but it may be making us dumber as well. Scientists found that students who used ChatGPT to complete essays had poorer cognitive skills than those who relied on just their brains, according to a dystopian new study out of the Massachusetts Institute of Technology (MIT) in Cambridge.

'Reliance on AI systems can lead to a passive approach and diminished activation of critical thinking skills when the person later performs tasks alone,' the researchers wrote, per the Telegraph.

The team had set out to determine the 'cognitive cost' of using large language models (LLMs), which have become increasingly omnipresent in every sector of society, including academia. According to a winter survey by the Pew Research Center, approximately 26% of teen students used the AI chatbot to help them with assignments in 2024, up from just 13% in 2023.

To determine how using synthetic homework assistants affects the mind, the MIT researchers tasked 54 people with writing several SAT essays, Time magazine reported. Participants were split into three groups: one that relied on pure brainpower, one that used Google, and a third that enlisted the aid of the now-ubiquitous LLM ChatGPT. Each person was outfitted with an electroencephalography (EEG) device so researchers could monitor their brain activity while completing the task.

They found that the ChatGPT group 'performed worse than their counterparts in the brain-only group at all levels: neural, linguistic, scoring,' according to the Telegraph. The readings also showed reduced activity in the regions of the brain associated with memory and learning, the authors said, noting that much of the 'thinking and planning was offloaded.'

In fact, AI-aided scholars got lazier with each subsequent paper, to the point that by the third essay they were simply typing the prompt into ChatGPT and having it do all the work. 'It was more like, 'Just give me the essay, refine this sentence, edit it, and I'm done,'' said the paper's lead author, Nataliya Kosmyna.

By contrast, the essayists with no external aid demonstrated the highest levels of neural connectivity, especially in regions of the brain responsible for language comprehension, creativity and memory. The brain-only group was also more engaged and satisfied with their essays, per the study. Interestingly, the Google group showed only slightly lower levels of engagement and the same amount of recall, a perhaps troubling prospect given the increasing number of people who turn to AI rather than internet search engines for research.

Researchers deduced that 'frequent AI tool users often bypass deeper engagement with material, leading to 'skill atrophy' in tasks like brainstorming and problem-solving.' That could have long-term ramifications, including 'diminished critical inquiry, increased vulnerability to manipulation' and 'decreased creativity,' the authors said.

Fortunately, the findings weren't a total indictment of AI in academia. As a follow-up exam, the scientists asked the ChatGPT group and their brain-only counterparts to rewrite one of their previous essays, but this time the AI-assisted participants worked without the chatbot, while the unassisted group could use the cutting-edge tech.

Unsurprisingly, the original ChatGPT group didn't recall much from their papers, indicating either a lack of engagement or an inability to remember what they had written. Meanwhile, the former brain-only group exhibited a marked increase in brain activity across all the aforementioned regions despite using the tool. That suggests that, used properly, AI could be a helpful academic tool rather than a cognition-destroying crutch.

The warning about AI-induced brain atrophy comes, somewhat frighteningly, as the technology is becoming more 'intelligent.' Recently, Chinese researchers found the first-ever evidence that AI models like ChatGPT process information similarly to the human mind, particularly when it comes to language grouping.

Essay aid or cognitive crutch? MIT study tests the cost of writing with AI

Business Standard

a day ago

  • Science
  • Business Standard

While LLMs reduce cognitive load, a new study warns they may also hinder critical thinking and memory retention, raising concerns about their growing role in learning and cognitive development.

Rahul Goreja, New Delhi

A new study from the Massachusetts Institute of Technology (MIT) Media Lab has raised concerns about how artificial intelligence tools like ChatGPT may impact students' cognitive engagement and learning when used to write essays. The research, led by Nataliya Kosmyna and a team from MIT and Wellesley College, examines how reliance on large language models (LLMs) such as ChatGPT compares with traditional methods like web searches or writing without any digital assistance. Using a combination of electroencephalogram (EEG) recordings, interviews, and text analysis, the study revealed distinct differences in neural activity, essay quality, and perceived ownership depending on the method used. (Note: EEG is a test that measures electrical activity in the brain.)

Setup for cognitive engagement study

Fifty-four participants from five Boston-area universities were split into three groups: those using only ChatGPT (LLM group), those using only search engines (search group), and those writing without any tools (brain-only group). Each participant completed three writing sessions. A subset also participated in a fourth session where roles were reversed: LLM users wrote without assistance, and brain-only participants used ChatGPT. All participants wore EEG headsets to monitor brain activity during writing. Researchers also interviewed participants after each session and assessed essays using both human markers and an AI judge.

Findings on neural engagement

EEG analysis showed that participants relying solely on their own cognitive abilities exhibited the highest levels of neural connectivity across alpha, beta, theta, and delta bands, indicating deeper cognitive engagement. In contrast, LLM users showed the weakest connectivity, while the search group fell in the middle. 'The brain connectivity systematically scaled down with the amount of external support,' the authors wrote. Notably, LLM-to-Brain participants in the fourth session continued to show under-engagement, suggesting a lingering cognitive effect from prior LLM use.

Essay structure, memory, and ownership

When asked to quote from their essays shortly after writing, 83.3 per cent of LLM users failed to do so. In comparison, only 11.1 per cent of participants in the other two groups struggled with this task. One participant noted that they 'did not believe the essay prompt provided required AI assistance at all,' while another described ChatGPT's output as 'robotic.'

Essay ownership also varied. Most brain-only participants reported full ownership, while responses in the LLM group ranged widely, from full ownership to explicit denial, with many claiming partial credit. Despite this, essay satisfaction remained relatively high across all groups, with the search group being unanimously satisfied. Interestingly, LLM users were often satisfied with the output even when they acknowledged limited involvement in the content's creation.

Brain power trumps AI aid

While AI tools may improve efficiency, the study cautions against their unnecessary adoption in learning contexts. 'The use of LLM had a measurable impact on participants, and while the benefits were initially apparent, as we demonstrated over the course of four months, the LLM group's participants performed worse than their counterparts in the Brain-only group at all levels: neural, linguistic, scoring,' the authors wrote. This pattern was especially evident in session four, where Brain-to-LLM participants showed stronger memory recall and more directed neural connectivity than those who moved in the opposite direction.

Less effort, lower retention

The study warns that although LLMs reduce cognitive load, they may diminish critical thinking and reduce long-term retention. 'The reported ownership of LLM group's essays in the interviews was low,' the authors noted. 'The LLM undeniably reduced the friction involved in answering participants' questions compared to the search engine. However, this convenience came at a cognitive cost, diminishing users' inclination to critically evaluate the LLM's output or 'opinions' (probabilistic answers based on the training datasets),' the study concluded.

Is ChatGPT making us dumb? MIT study says students are using their brains less

India Today

a day ago

  • Science
  • India Today

ChatGPT is making students dumb! Or rather, making them use their brains less. A new study by MIT's Media Lab on the impact of generative AI on human cognition, particularly among students, found that using tools like ChatGPT for academic work and learning could actually lower people's critical thinking and cognitive engagement over time.

For this study, researchers observed 54 participants aged 18 to 39 from the Boston area and divided them into three groups. Each group of students was then asked to write SAT-style essays using either OpenAI's ChatGPT, Google Search, or no digital assistance at all. During this process, researchers monitored brain activity through electroencephalography (EEG), scanning 32 different brain regions to evaluate cognitive engagement during the task.

The findings were concerning. The group of students using ChatGPT showed the lowest levels of brain activity. According to the study, these students 'consistently underperformed at neural, linguistic, and behavioural levels.' In fact, the study found that over the course of several essays, many ChatGPT users became increasingly passive, often resorting to just copying and pasting text from the AI chatbot's responses rather than refining or reflecting on the content in line with their own thoughts.

Meanwhile, the students who worked without any digital tools showed the highest brain activity, particularly in regions associated with creativity, memory, and semantic processing. 'The task was executed, and you could say that it was efficient and convenient,' said Nataliya Kosmyna, one of the authors of the research paper. 'But as we show in the paper, you basically didn't integrate any of it into your memory networks.'

Long-term impact suspected

Researchers concluded that while AI can boost students' short-term productivity, it can also impact long-term learning and brain development. Meanwhile, the essay-writing group that used no tools reported higher levels of satisfaction and ownership over their work. In this group, the EEG readings also showed greater neural connectivity in the alpha, theta, and delta frequency bands, which are often linked to deep thinking and creativity.

The group using Google Search showed relatively high levels of brain engagement, suggesting that traditional internet browsing still stimulates active thought processes. The difference further shows how AI users tend to rely entirely on chatbot responses for information instead of thinking critically or using search engines.

To further measure retention and comprehension, researchers also asked the students to rewrite one of their essays, and this time the tools were swapped. Students who had earlier used ChatGPT were now asked to write without assistance, and the group that had used only their brains was asked to use AI. The results of this swap reinforced the earlier findings. The users who had relied on ChatGPT struggled to recall their original essays and showed weak cognitive re-engagement. Meanwhile, the group that had initially written without the online tools showed increased neural activity when using ChatGPT. This suggests that AI tools can be helpful in learning, but only when used after humans complete the foundational thinking themselves.

Is ChatGPT making us dumb? MIT brain scans reveal alarming truth about AI's impact on the human mind

Time of India

a day ago

  • Science
  • Time of India

It's quick, it's clever, and it answers almost everything, so it's no wonder millions around the world rely on ChatGPT. But could this digital genie be dulling our minds with every wish we make? According to a startling new study by scientists at MIT's Media Lab, the answer may be yes. Researchers have now found that excessive use of AI tools like ChatGPT could be quietly eroding your memory, critical thinking, and even your brain activity. Published on arXiv, the study, titled 'The Cognitive Cost of Using LLMs,' explores how language models, especially ChatGPT, affect the brain's ability to think, learn, and retain information.

Brain vs Bot: How the Study Was Done

To examine what they call the 'cognitive cost' of using large language models (LLMs), MIT researchers tracked 54 students over a four-month period using electroencephalography (EEG) devices to monitor brain activity. The participants were divided into three groups: one used ChatGPT, another relied on Google, and the last used no external help at all, dubbed the 'Brain-only' group.

While the AI-powered group initially showed faster results, the long-term findings were more sobering. Students who depended on ChatGPT for essay writing exhibited poorer memory retention, reduced brain engagement, and lower scoring compared to their peers. As the researchers noted, 'The LLM group's participants performed worse than their counterparts in the Brain-only group at all levels: neural, linguistic, and scoring.'

Google Wasn't Great, But Still Better Than ChatGPT

Interestingly, students who used Google showed moderate brain activity and generated more thoughtful content than those who leaned on ChatGPT. Meanwhile, those in the Brain-only group had the highest levels of cognitive engagement, producing original ideas and deeper insights. In fact, even when ChatGPT users later attempted to write without assistance, their brain activity remained subdued, unlike the other groups, who showed increased engagement while adapting to new tools. This suggests that habitual ChatGPT usage might not just affect how we think, but whether we think at all.

A Shortcut with a Hidden Toll

The study also points to how this over-reliance on AI encourages mental passivity. While ChatGPT users reported reduced friction in accessing information, this convenience came at a cost. As the researchers explained, 'This convenience came at a cognitive cost, diminishing users' inclination to critically evaluate the LLM's output or 'opinions'.' The team also raised red flags about algorithmic bias: what appears as top-ranked content from an AI is often a result of shareholder-driven training data, not necessarily truth or value. This creates a more sophisticated version of the 'echo chamber,' where your thoughts are subtly shaped not by your own reasoning but by an AI's probabilistic guesses.

What This Means for the AI Generation

As AI tools become more embedded in our everyday tasks, from writing emails to crafting essays, this study is a wake-up call for students, educators, and professionals. While tools like ChatGPT are powerful assistants, they should not become cognitive crutches. The researchers caution that as language models continue to evolve, users must remain alert to their potential mental side effects. In a world where convenience is king, critical thinking might just be the first casualty.
