
Latest news with #NataliyaKosmyna

Relying on AI could be weakening the way we think, researchers warn

Sinar Daily

6 hours ago

  • Science
  • Sinar Daily


ARTIFICIAL intelligence is progressively transforming how we write, research, and communicate in this new age of technological renaissance. But according to MIT's latest study, this digital shortcut might come at a steep price: our brainpower.

A new study by researchers at the Massachusetts Institute of Technology (MIT) has raised red flags over the long-term cognitive effects of using AI chatbots like ChatGPT, suggesting that outsourcing our thinking to machines may be dulling our minds, reducing critical thinking, and increasing our 'cognitive debt.' The researchers found that participants who used ChatGPT to write essays exhibited significantly lower brain activity, weaker memory recall, and poorer performance in critical thinking tasks than those who completed the same assignments using only their own thoughts or traditional search engines.

'Reliance on AI systems can lead to a passive approach and diminished activation of critical thinking skills when the person later performs tasks alone,' the research paper elaborated.

The MIT Study

The study involved 54 participants, who were divided into three groups: one used ChatGPT, another relied on search engines, and the last used only their brainpower to write four essays. Using electroencephalogram (EEG) scans, the researchers measured brain activity during and after the writing tasks. The results were stark.

'EEG revealed significant differences in brain connectivity. Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM (Large Language Model) users displayed the weakest connectivity,' the researchers reported.

Those who used AI chatbots displayed reduced 'theta' brainwaves, which are associated with learning and memory formation. Researchers described this as 'offloading human thinking and planning,' indicating that the brain was doing less work because it was leaning on the AI.

Interestingly, when later asked to quote or discuss the content of their essays without AI help, 83 per cent of the chatbot users failed to provide a single correct quote, compared to just 10 per cent among the search engine and brain-only groups. In the context of the study, this likely suggests they either didn't engage deeply with the content or simply didn't remember it.

'Frequent AI tool users often bypass deeper engagement with material, leading to 'skill atrophy' in tasks like brainstorming and problem-solving,' lead researcher Dr Nataliya Kosmyna warned.

The chatbot-written essays were also found to be homogenous, with repetitive themes and language, suggesting that while AI might produce polished results, it lacks diversity of thought and originality.

Are our minds getting lazy?

The MIT findings echo earlier warnings about the dangers of 'cognitive offloading' — a term used when people rely on external tools to think for them. A February 2025 study by Microsoft and Carnegie Mellon University found that workers who heavily relied on AI tools reported lower levels of critical thinking and reduced confidence in their own reasoning abilities.

The researchers warned that overuse of AI could cause our 'cognitive muscles to atrophy' — essentially, if we don't use our brains, we lose them. This trend is raising concerns about serious consequences for education and workforce development. The MIT team cautioned that relying too much on AI could diminish creativity, increase vulnerability to manipulation, and weaken long-term memory and language skills.

The dawn of a new era?

With AI chatbots becoming increasingly common in classrooms and homework help, educators face a difficult balancing act. While these tools can and have supported learning, overreliance on artificial intelligence risks undermining the very skills schools aim to develop. Teachers have voiced concerns that students are using AI to cheat or shortcut their assignments. The MIT study provides hard evidence that such practices don't just break rules — they may actually hinder intellectual development.

The primary takeaway is not that AI is inherently bad, but that how we use it matters greatly. The study reinforces the importance of engaging actively with information, rather than blindly outsourcing thinking to machines. As the researchers put it: 'AI-assisted tools should be integrated carefully, ensuring that human cognition remains at the centre of learning and decision-making.'

ChatGPT use linked to cognitive decline: MIT research

The Hill

11 hours ago

  • Science
  • The Hill


ChatGPT can harm an individual's critical thinking over time, a new study suggests.

Researchers at MIT's Media Lab asked subjects to write several SAT essays and separated them into three groups — one using OpenAI's ChatGPT, one using Google's search engine and one using nothing, which they called the 'brain-only' group. Each subject's brain was monitored through electroencephalography (EEG), which measured brain activity across multiple regions.

They discovered that subjects who used ChatGPT over a few months had the lowest brain engagement and 'consistently underperformed at neural, linguistic, and behavioral levels,' according to the study. The study found that the ChatGPT group initially used the large language model, or LLM, to ask structural questions for their essay, but near the end of the study, they were more likely to copy and paste their essay. Those who used Google's search engine were found to have moderate brain engagement, but the 'brain-only' group showed the 'strongest, wide-ranging networks.'

The findings suggest that using LLMs can harm a user's cognitive function over time, especially in younger users. They come as educators continue to navigate teaching at a time when AI is increasingly accessible for cheating.

'What really motivated me to put it out now before waiting for a full peer review is that I am afraid in 6-8 months, there will be some policymaker who decides, 'let's do GPT kindergarten.' I think that would be absolutely bad and detrimental,' the study's main author Nataliya Kosmyna told TIME. 'Developing brains are at the highest risk.'

However, using AI in education doesn't appear to be slowing down. In April, President Trump signed an executive order that aims to incorporate AI into U.S. classrooms.

'The basic idea of this executive order is to ensure that we properly train the workforce of the future by ensuring that school children, young Americans, are adequately trained in AI tools, so that they can be competitive in the economy years from now into the future, as AI becomes a bigger and bigger deal,' Will Scharf, White House staff secretary, said at the time.

ChatGPT is getting smarter, but excessive use could destroy our brains, study warns

New York Post

a day ago

  • Science
  • New York Post


Is it an artificial lack of intelligence?

Not only is AI getting frighteningly smart, but it may be making us dumber as well. Scientists found that students who used ChatGPT to complete essays had poorer cognitive skills than those who relied on just their brains, according to a dystopian new study out of the Massachusetts Institute of Technology (MIT) in Cambridge.

'Reliance on AI systems can lead to a passive approach and diminished activation of critical thinking skills when the person later performs tasks alone,' the researchers wrote, per the Telegraph.

The team had set out to determine the 'cognitive cost' of using large language models (LLMs), which have become increasingly omnipresent in every sector of society, including academia. According to a winter survey by the Pew Research Center, approximately 26% of teen students used the AI chatbot to help them with assignments in 2024 — up from just 13% in 2023.

To determine how using synthetic homework assistants affects the mind, the MIT researchers tasked 54 people with writing several SAT essays, Time Magazine reported. Participants were split into three groups: one that relied on pure brainpower, one that used Google, and a third that enlisted the aid of the now-ubiquitous LLM ChatGPT. Each person was outfitted with an electroencephalography (EEG) device so researchers could monitor their brain activity while completing the task.

They found that the ChatGPT group 'performed worse than their counterparts in the brain-only group at all levels: neural, linguistic, scoring,' according to the Telegraph. The readings also showed reduced activity in the regions of the brain associated with memory and learning, the authors said, noting that a lot of the 'thinking and planning was offloaded.'

In fact, AI-aided scholars got lazier with each subsequent paper, to the point that by the third essay, they were simply typing the prompt into ChatGPT and having it do all the work. 'It was more like, 'Just give me the essay, refine this sentence, edit it, and I'm done,'' said the paper's main author, Nataliya Kosmyna.

By contrast, the essayists with no external aid demonstrated the highest levels of neural connectivity, especially in regions of the brain responsible for language comprehension, creativity and memory. The brain-only group was also more engaged and satisfied with their essays, per the study.

Interestingly, the Google group showed just slightly lower levels of engagement, but the same amount of recall — a perhaps troubling prospect given the increasing number of people who dive into research using AI rather than internet search engines.

Researchers deduced that 'frequent AI tool users often bypass deeper engagement with material, leading to 'skill atrophy' in tasks like brainstorming and problem-solving.' That could have long-term ramifications, including 'diminished critical inquiry, increased vulnerability to manipulation' and 'decreased creativity,' the authors said.

Fortunately, the findings weren't a total indictment of AI in academia. As a follow-up exam, the scientists asked the ChatGPT group and their brain-only counterparts to rewrite one of their previous essays — but the AI-assisted participants did so without the chatbot, while the unassisted group could use the cutting-edge tech.

Unsurprisingly, the original ChatGPT group didn't recall much info from their papers, indicating either a lack of engagement or an inability to remember it. Meanwhile, the former brain-only group exhibited a marked increase in brain activity across all the aforementioned regions despite using the tool. That suggests if used properly, AI could be a helpful academic tool rather than a cognition-destroying crutch.

The warning about AI-induced brain atrophy comes — somewhat frighteningly — as the technology is becoming more 'intelligent.' Recently, Chinese researchers found the first-ever evidence that AI models like ChatGPT process information similarly to the human mind — particularly when it comes to language grouping.

Essay aid or cognitive crutch? MIT study tests the cost of writing with AI

Business Standard

a day ago

  • Science
  • Business Standard


While LLMs reduce cognitive load, a new study warns they may also hinder critical thinking and memory retention, raising concerns about their growing role in learning and cognitive development.

Rahul Goreja, New Delhi

A new study from the Massachusetts Institute of Technology (MIT) Media Lab has raised concerns about how artificial intelligence tools like ChatGPT may impact students' cognitive engagement and learning when used to write essays. The research, led by Nataliya Kosmyna and a team from MIT and Wellesley College, examines how reliance on large language models (LLMs) such as ChatGPT compares to traditional methods like web searches or writing without any digital assistance. Using a combination of electroencephalogram (EEG) recordings, interviews, and text analysis, the study revealed distinct differences in neural activity, essay quality, and perceived ownership depending on the method used. Note: EEG is a test that measures electrical activity in the brain.

Setup for cognitive engagement study

Fifty-four participants from five Boston-area universities were split into three groups: those using only ChatGPT (LLM group), those using only search engines (search group), and those writing without any tools (brain-only group). Each participant completed three writing sessions. A subset also participated in a fourth session where roles were reversed: LLM users wrote without assistance, and brain-only participants used ChatGPT. All participants wore EEG headsets to monitor brain activity during writing. Researchers also interviewed participants post-session and assessed essays using both human markers and an AI judge.

Findings on neural engagement

EEG analysis showed that participants relying solely on their own cognitive abilities exhibited the highest levels of neural connectivity across alpha, beta, theta, and delta bands, indicating deeper cognitive engagement. In contrast, LLM users showed the weakest connectivity, while the search group fell in the middle. 'The brain connectivity systematically scaled down with the amount of external support,' the authors wrote. Notably, LLM-to-Brain participants in the fourth session continued to show under-engagement, suggesting a lingering cognitive effect from prior LLM use.

Essay structure, memory, and ownership

When asked to quote from their essays shortly after writing, 83.3 per cent of LLM users failed to do so. In comparison, only 11.1 per cent of participants in the other two groups struggled with this task. One participant noted that they 'did not believe the essay prompt provided required AI assistance at all,' while another described ChatGPT's output as 'robotic.' Essay ownership also varied: most brain-only participants reported full ownership, while responses in the LLM group ranged widely from full ownership to explicit denial, with many taking partial credit. Despite this, essay satisfaction remained relatively high across all groups, with the search group being unanimously satisfied. Interestingly, LLM users were often satisfied with the output even when they acknowledged limited involvement in the content's creation.

Brain power trumps AI aid

While AI tools may improve efficiency, the study cautions against their unnecessary adoption in learning contexts. 'The use of LLM had a measurable impact on participants, and while the benefits were initially apparent, as we demonstrated over the course of four months, the LLM group's participants performed worse than their counterparts in the Brain-only group at all levels: neural, linguistic, scoring,' the authors wrote. This pattern was especially evident in session four, where Brain-to-LLM participants showed stronger memory recall and more directed neural connectivity than those who moved in the opposite direction.

Less effort, lower retention

The study warns that although LLMs reduce cognitive load, they may diminish critical thinking and reduce long-term retention. 'The reported ownership of LLM group's essays in the interviews was low,' the authors noted. 'The LLM undeniably reduced the friction involved in answering participants' questions compared to the search engine. However, this convenience came at a cognitive cost, diminishing users' inclination to critically evaluate the LLM's output or 'opinions' (probabilistic answers based on the training datasets),' it concluded.
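For readers curious what a per-band connectivity comparison like the one described above might look like in practice, the Python sketch below is a simplified, hypothetical illustration only; it is not the study's actual analysis pipeline. The sampling rate, channel count, filter settings, and the use of mean absolute inter-channel correlation as a stand-in for "connectivity" are all assumptions made for illustration.

# Minimal sketch (NOT the MIT study's pipeline): estimate a crude per-band
# "connectivity" score from multi-channel EEG by band-pass filtering each
# channel and averaging absolute inter-channel correlations.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 256  # assumed sampling rate in Hz (illustrative)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def bandpass(data, low, high, fs=FS, order=4):
    """Band-pass filter each channel (rows) of a (channels x samples) array."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, data, axis=1)

def connectivity_index(eeg):
    """Mean absolute pairwise correlation across channels (a crude proxy)."""
    corr = np.corrcoef(eeg)
    upper = corr[np.triu_indices_from(corr, k=1)]
    return float(np.mean(np.abs(upper)))

def band_connectivity(eeg):
    """Return one connectivity score per frequency band."""
    return {name: connectivity_index(bandpass(eeg, lo, hi))
            for name, (lo, hi) in BANDS.items()}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_eeg = rng.standard_normal((32, FS * 10))  # 32 channels, 10 s of noise
    print(band_connectivity(fake_eeg))

In a setup of this kind, a group-level claim such as "connectivity scaled down with the amount of external support" would correspond to the brain-only group's per-band scores sitting above the search group's, which in turn sit above the LLM group's.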

Is ChatGPT making us dumb? MIT study says students are using their brains less

India Today

a day ago

  • Science
  • India Today


ChatGPT is making students dumb! Or rather, making them use their brains less. A new study by MIT's Media Lab on the impact of AI on human cognition, particularly among students, found that using generative AI tools like ChatGPT for academic work and learning could actually lower people's critical thinking and cognitive engagement over time.

For this study, researchers observed 54 participants aged 18 to 39 from the Boston area and divided them into three groups. Each group of students was then asked to write SAT-style essays using either OpenAI's ChatGPT, Google Search, or no digital assistance at all. During this process, researchers monitored brain activity among users through electroencephalography (EEG), scanning 32 different brain regions to evaluate cognitive engagement during writing.

The findings were concerning. The group of students using ChatGPT showed the lowest levels of brain activity. According to the study, these students 'consistently underperformed at neural, linguistic, and behavioural levels.' In fact, the study found that over the course of several essays, many ChatGPT users became increasingly passive, often resorting to just copying and pasting text from the AI chatbot's responses rather than refining or reflecting on the content in line with their own thoughts. Meanwhile, the students who worked without any digital tools showed the highest brain activity, particularly in regions associated with creativity, memory, and semantic processing.

'The task was executed, and you could say that it was efficient and convenient,' said Nataliya Kosmyna, one of the authors of the research paper. 'But as we show in the paper, you basically didn't integrate any of it into your memory networks.'

Long-term impact suspected

Researchers concluded that while AI can help students be productive quickly, it can also impact long-term learning and brain development. Meanwhile, the essay-writing group that used no tools reported higher levels of satisfaction and ownership over their work. In this group, the EEG readings also showed greater neural connectivity in the alpha, theta, and delta frequency bands, which are often linked to deep thinking and creativity. The group using Google Search showed relatively high levels of brain engagement, suggesting that traditional internet browsing still stimulates active thought processes. The difference further shows how AI users tend to rely entirely on chatbot responses for information instead of thinking critically or using search engines.

To further understand and measure retention and comprehension, researchers also asked the students to rewrite one of their essays, and this time the tools were swapped. Students who earlier used ChatGPT were now asked to write without assistance, and the group that had used only their brains was asked to use AI. The results of this swapping further reinforced the earlier findings. The users who had relied on ChatGPT struggled to recall their original essays and showed weak cognitive re-engagement. Meanwhile, the group that had initially written without any online tools showed increased neural activity when using ChatGPT. This finding further confirms that AI tools can be helpful in learning, but only when used after humans complete the foundational thinking themselves.
