
Relying on AI could be weakening the way we think, researchers warn
ARTIFICIAL intelligence is steadily transforming how we write, research, and communicate. But according to MIT's latest study, this digital shortcut might come at a steep price: our brainpower.
A new study by researchers at the Massachusetts Institute of Technology (MIT) has raised red flags over the long-term cognitive effects of using AI chatbots like ChatGPT, suggesting that outsourcing our thinking to machines may be dulling our minds, reducing critical thinking, and increasing our 'cognitive debt.'
Researchers at MIT found that participants who used ChatGPT to write essays exhibited significantly lower brain activity, weaker memory recall, and poorer performance in critical thinking tasks than those who completed the same assignments using only their own thoughts or traditional search engines.
'Reliance on AI systems can lead to a passive approach and diminished activation of critical thinking skills when the person later performs tasks alone,' the research paper elaborated.
The MIT Study
The study involved 54 participants, divided into three groups: one used ChatGPT, another relied on search engines, and the last used only their own brainpower to write four essays.
Using electroencephalogram (EEG) scans, the researchers measured brain activity during and after the writing tasks. The results were stark.
'EEG revealed significant differences in brain connectivity. Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM (Large Language Model) users displayed the weakest connectivity,' the researchers reported.
As seen in the study, those who used AI chatbots displayed reduced 'theta' brainwaves, which are associated with learning and memory formation.
Researchers described this as 'offloading human thinking and planning,' indicating that the brain was doing less work because it was leaning on the AI.
Interestingly, when later asked to quote or discuss the content of their essays without AI help, 83 per cent of the chatbot users failed to provide a single correct quote, compared to just 10 per cent among the search engine and brain-only groups.
In the context of the study, this suggests they either didn't engage deeply with the content or simply didn't remember it.
'Frequent AI tool users often bypass deeper engagement with material, leading to 'skill atrophy' in tasks like brainstorming and problem-solving,' lead researcher Dr Nataliya Kosmyna warned.
The chatbot-written essays were also found to be homogeneous, with repetitive themes and language, suggesting that while AI might produce polished results, it lacks diversity of thought and originality.
Are our minds getting lazy?
The MIT findings echo earlier warnings about the dangers of 'cognitive offloading' — a term used when people rely on external tools to think for them.
An earlier February 2025 study by Microsoft and Carnegie Mellon University found that workers who heavily relied on AI tools reported lower levels of critical thinking and reduced confidence in their own reasoning abilities.
The researchers warned that overuse of AI could cause our 'cognitive muscles to atrophy' — essentially, if we don't use our brains, we lose them.
There is growing concern that this trend could have serious consequences for education and workforce development.
Moving forward, the MIT team cautioned that relying too much on AI could diminish creativity, increase vulnerability to manipulation, and weaken long-term memory and language skills.
The dawn of a new era?
With AI chatbots becoming increasingly common in classrooms and homework help, educators are facing a difficult balancing act.
While these tools can and do support learning, overreliance on artificial intelligence risks undermining the very skills schools aim to develop.
Teachers have been voicing concerns that students are using AI to cheat or shortcut their assignments.
The MIT study provides hard evidence that such practices don't just break rules; they may actually hinder intellectual development.
As such, the primary takeaway is not that AI is inherently bad — but that how we use it matters greatly.
The study thus reinforces the importance of engaging actively with information, rather than blindly outsourcing thinking to machines.
As the researchers put it:
'AI-assisted tools should be integrated carefully, ensuring that human cognition remains at the centre of learning and decision-making.'