
What happens when you use ChatGPT to write an essay? See what a new study found.
Artificial intelligence chatbots may be able to write a quick essay, but a new study from MIT found that their use comes at a cognitive cost.
A study published by the Massachusetts Institute of Technology Media Lab analyzed the cognitive function of 54 people who wrote an essay with only the assistance of OpenAI's ChatGPT, with only online search engines, or with no outside tools at all.
Largely, the study found that those who relied solely on ChatGPT to write their essays showed lower levels of brain activity and produced less original writing.
"As we stand at this technological crossroads, it becomes crucial to understand the full spectrum of cognitive consequences associated with (language learning model) integration in educational and informational contexts," the study states. "While these tools offer unprecedented opportunities for enhancing learning and information access, their potential impact on cognitive development, critical thinking and intellectual independence demands a very careful consideration and continued research."
Here's a deeper look at the study and how it was conducted.
A team of MIT researchers, led by MIT Media Lab research scientist Nataliya Kosmyna, studied 54 participants between the ages of 18 and 39. Participants were recruited from MIT, Wellesley College, Harvard, Tufts University and Northeastern University. The participants were randomly split into three groups, 18 people per group.
The study states that the three groups included a large language model (LLM) group, in which participants used only OpenAI's ChatGPT-4o to write their essays. The second group was limited to using only search engines for research, and the third was prohibited from using any tools; participants in this group could rely only on their own minds to write their essays.
Each participant had 20 minutes to write an essay from one of three prompts taken from SAT tests, the study states. Three different options were provided to each group, totaling nine unique prompts. An example of a prompt available to participants using ChatGPT was about loyalty:
"Many people believe that loyalty whether to an individual, an organization, or a nation means unconditional and unquestioning support no matter what. To these people, the withdrawal of support is by definition a betrayal of loyalty. But doesn't true loyalty sometimes require us to be critical of those we are loyal to? If we see that they are doing something that we believe is wrong, doesn't true loyalty require us to speak up, even if we must be critical? Does true loyalty require unconditional support?"
As the participants wrote their essays, they were hooked up to a Neuroelectrics Enobio 32 headset, which allowed researchers to record electroencephalogram (EEG) signals, a measure of the brain's electrical activity.
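The article does not reproduce the researchers' EEG pipeline. As a purely illustrative, hypothetical sketch of the kind of analysis such multi-channel recordings support, the short Python snippet below estimates average power in conventional EEG frequency bands using SciPy's Welch method; the sampling rate, band limits and synthetic data are assumptions for illustration, not values from the study.

# Hypothetical sketch of EEG band-power estimation; not the study's actual pipeline.
import numpy as np
from scipy.signal import welch

FS = 500  # assumed sampling rate in Hz
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # conventional bands (Hz)

def band_power(eeg, fs=FS):
    """Mean power spectral density per band, averaged over channels.
    eeg: array of shape (n_channels, n_samples)."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2, axis=-1)
    return {name: psd[:, (freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

rng = np.random.default_rng(0)
fake_eeg = rng.standard_normal((32, FS * 60))  # 32 channels, 60 seconds of synthetic noise
print(band_power(fake_eeg))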
Following these sessions, 18 participants returned for a fourth session in which the groups were swapped: participants who had previously used ChatGPT to write their essays were required to use no tools, and participants who had used no tools now used ChatGPT, the study states.
In addition to analyzing brain activity, the researchers looked at the essays themselves.
First and foremost, the essays of participants who used no tools (neither ChatGPT nor search engines) showed wider variability in topics, word choice and sentence structure, the study states. Essays written with the help of ChatGPT, by contrast, were more homogeneous.
All of the essays were "judged" by two English teachers and two AI judges trained by the researchers. The English teachers were not provided background information about the study but were able to identify essays written by AI.
"These, often lengthy essays included standard ideas, reoccurring typical formulations and statements, which made the use of AI in the writing process rather obvious. We, as English teachers, perceived these essays as 'soulless,' in a way, as many sentences were empty with regard to content and essays lacked personal nuances," a statement from the teachers, included in the study, reads.
As for the AI judges, the one trained by the researchers to evaluate essays the way the human teachers did scored most of the essays a four or above on a five-point scale.
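The exact rubric and judging prompts behind the study's AI judges are not given in this article. As a hedged illustration of the general "LLM as judge" pattern it describes, the Python sketch below asks a model to score an essay from 1 to 5 via the OpenAI Python client; the model name, rubric wording and scale here are assumptions for illustration only, not the study's setup.

# Hypothetical LLM-as-judge sketch; not the study's actual judging setup.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

RUBRIC = ("You grade SAT-style essays on a 1-5 scale for originality, structure "
          "and use of evidence. Reply with a single integer from 1 to 5.")

def score_essay(essay_text, model="gpt-4o"):
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # keep grading as repeatable as possible
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": essay_text},
        ],
    )
    return int(response.choices[0].message.content.strip())

# Example: print(score_essay("Loyalty does not mean unconditional support..."))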
When it came to brain activity, the researchers found "robust" evidence that participants who used no writing tools displayed the "strongest, widest-ranging" brain activity, while those who used ChatGPT displayed the weakest. Specifically, the ChatGPT group showed 55% reduced brain activity, the study states.
And though the participants who used only search engines had less overall brain activity than those who used no tools, these participants had a higher level of eye activity than those who used ChatGPT, even though both were using a digital screen.
Further research on the long-term impacts of artificial intelligence chatbots on cognitive activity is needed, the study states.
As for this particular study, the researchers noted that a larger number of participants from a wider geographic area would be needed to strengthen the findings. Having participants write outside a traditional educational environment could also provide more insight into how AI affects more generalized tasks.

Related Articles
Yahoo
Using AI bots like ChatGPT could be causing cognitive decline, new study shows
A new pre-print study from the US-based Massachusetts Institute of Technology (MIT) found that using OpenAI's ChatGPT could lead to cognitive decline. Researchers with the MIT Media Lab broke participants into three groups and asked them to write essays using only ChatGPT, only a search engine, or no tools at all. Brain activity was recorded with an electroencephalogram (EEG) during the writing task, and the essays were then evaluated by both humans and artificial intelligence (AI) tools.

The study showed that the ChatGPT-only group had the lowest neural activation in parts of the brain and had a hard time recalling or recognising their own writing. The brain-only group that used no technology was the most engaged, showing both cognitive engagement and memory retention.

The researchers then ran a further session in which the ChatGPT group was asked to do the task without assistance. In that session, those who had used ChatGPT earlier performed worse than their peers, producing writing that was 'biased and superficial'. The study found that repeated GPT use can come with 'cognitive debt' that reduces long-term learning performance and independent thinking. In the long run, people with cognitive debt could be more susceptible to 'diminished critical inquiry, increased vulnerability to manipulation and decreased creativity,' as well as a 'likely decrease' in learning skills. 'When participants reproduce suggestions without evaluating their accuracy or relevance, they not only forfeit ownership of the ideas but also risk internalising shallow or biased perspectives,' the study continued.

The study also found higher rates of satisfaction and brain connectivity in the participants who wrote all essays with just their minds compared to the other groups. Those from the other groups felt less connected to their writing and were not able to provide a quote from their essays when asked to by the researchers. The authors recommend that more studies be done on how any AI tool impacts the brain 'before LLMs are recognised as something that is net positive for humans.'
Yahoo
OpenAI supremo Sam Altman says he 'doesn't know how' he would have taken care of his baby without the help of ChatGPT
For a chap atop one of the most high-profile tech organisations on the planet, OpenAI CEO Sam Altman's propensity, shall we say, to expatiate but not excogitate is, well, remarkable. Sometimes, he really doesn't seem to think before he speaks. The latest example involves his status as a "new parent," something which he apparently doesn't consider viable without help from his very own chatbot (via TechCrunch).

"Clearly, people have been able to take care of babies without ChatGPT for a long time," Altman initially and astutely observes on the official OpenAI podcast, only to concede, "I don't know how I would've done that." "Those first few weeks it was constantly," he says of his tendency to consult ChatGPT on childcare. Apparently, books, consulting friends and family, even a good old-fashioned Google search would not have occurred to this colossus astride the field of artificial, er, intelligence.

If all that's a touch arch, forgive me. But Altman is in absolute AI evangelism overdrive mode in this interview. "I spend a lot of time thinking about how my kid will use AI in the future," he says. "My kids will never be smarter than AI. But they will grow up vastly more capable than we grew up and able to do things that we cannot imagine, they'll be really good at using AI."

There are countless immediate and obvious objections to that world view. For sure, people will be better at using AI. But will they themselves be more capable? Maybe most people won't be able to write coherent prose if AI does it for them from day one. Will having AI write everything make everyone more capable?

Not that this is a major revelation, but this podcast makes it clear just how signed up Altman is to the AI revolution. "They will look back on this as a very prehistoric time period," he says of today's children. That's a slightly odd claim, given "prehistory" means before human activities and endeavours were recorded for posterity. And, of course, the very existence of the large language models that OpenAI creates relies entirely on the countless gigabytes of pre-AI data on which those LLMs were originally trained.

Indeed, one of the greatest challenges currently facing AI is the notion of chatbot contamination. The idea is that, since the release of ChatGPT into the wild in 2022, the data on which LLMs are now being trained is increasingly polluted with the synthetic output of prior chatbots. As more and more chatbots inject more and more synthetic data into the overall shared pool, subsequent generations of AI models will become ever more polluted and less reliable, eventually leading to a state known as AI model collapse. Indeed, some observers believe this is already happening, as evidenced by the increasing propensity of some of the latest models to hallucinate. Cleaning that problem up is going to be "prohibitively expensive, probably impossible" by some accounts.

Anyway, if there's an issue with Altman's unfailingly optimistic utterances, it's probably a lack of nuance. Everything before AI is hopeless and clunky, to the point where it's hard to imagine how you'd look after a newborn baby without ChatGPT. Everything after AI is bright and clean and perfect. Of course, anyone who's used a current chatbot for more than a few moments will be very familiar with their immediately obvious limitations, let alone the broader problems they may pose even if issues like hallucination are overcome.
At the very least, it would be a lot easier to empathise with the likes of Altman if there was some sense of those challenges to balance his one-sided narrative. Anywho, fire up the podcast and decide for yourself just what you make of Altman's everything-AI attitudes.


Tom's Guide
ChatGPT is holding back — these four prompts unlock its full potential
ChatGPT can be such a useful tool, but it has a tendency to sometimes not put in its all. If you prompt it correctly, you can force ChatGPT to give a request that little bit of extra oomph and really give you a solid answer. This could be for a multi-step prompt, or simply when you want the AI chatbot to dig deep and really think through an answer. In my time using it, a few prompts have come up that I've found have really pushed ChatGPT to go all out. These are my four favorite ChatGPT prompts for that exact task.

This one requires a bit of work, talking ChatGPT through a few stages, but the end result is worth it. Of course, if you're just asking a simple question or looking into something simple, all of this work isn't needed. However, I have found that a bit of forward planning can get the model thinking harder. ChatGPT will respond to this saying that it is ready for your question. Ask your request and it will take its time thinking through the task. This prompt works best on one of the more advanced versions of ChatGPT, such as 4o. It will also work on other chatbots such as Claude 4 or Gemini.

Prompt: 'Debate with yourself on [insert topic]. For each side of the argument, quote sources and use any information available to you to form the argument. Take time before you start to prepare your arguments.'

ChatGPT can make a great debate partner, even better when it is debating itself. By using this prompt, you'll get strongly planned and considered arguments on both sides of a topic. This is especially useful when you're working on an essay or project that needs varied consideration. The model can debate any topic, but sometimes it will only touch the surface. In that case, follow up with a prompt asking ChatGPT to think harder about its responses, forcing it to consider everything in more detail.

Prompt: 'Break down the history, current state, and future implications of [issue], using subheadings and citing credible sources.'

Instead of just getting a general overview of a subject, this will give you a detailed report examining the past, current state and future of a topic. By asking for citations, ChatGPT will list all of the sources it has used to offer up the information in your report. You can go a step further by asking ChatGPT to use the internet to do this, providing links to any information it has used.

Prompt: 'List the step-by-step process for [task], noting common pitfalls and how to avoid each one.'

A simple but effective prompt, this will not only give you the instructions for how to do something but also warn you of the mistakes that are often made at each stage. For example, when using this prompt for making focaccia, ChatGPT gave me instructions for stage 1 of mixing the dough, along with warnings about the temperature of the water and making sure to mix the dough enough. This is a step up from simply asking ChatGPT to explain how to do something, forcing it to carefully consider the best way to do it, especially if it is a complicated task.
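The article assumes you are typing these prompts into the ChatGPT interface. If you wanted to reuse them programmatically, a minimal sketch along the lines below fills in the bracketed placeholders and sends the result through the OpenAI Python client; the template text is taken from the article, while the model name and helper function are assumptions for illustration.

# Sketch: reusing the article's prompt templates via the API; model name is assumed.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

TEMPLATES = {
    "debate": ("Debate with yourself on {topic}. For each side of the argument, "
               "quote sources and use any information available to you to form the "
               "argument. Take time before you start to prepare your arguments."),
    "report": ("Break down the history, current state, and future implications of "
               "{topic}, using subheadings and citing credible sources."),
    "steps": ("List the step-by-step process for {topic}, noting common pitfalls "
              "and how to avoid each one."),
}

def run_prompt(kind, topic, model="gpt-4o"):
    prompt = TEMPLATES[kind].format(topic=topic)
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Example: print(run_prompt("steps", "making focaccia"))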
ChatGPT can be such a useful tool. But, it has a tendency to sometimes not put in its all. If you prompt it correctly, you can force ChatGPT to give a request that little bit of extra oomph to really give you a solid answer. This could be for a multi-step prompt, or simply when you want the AI chatbot to dig deep and really think through an answer. In my time it, a few prompts have come up that I've found have really pushed ChatGPT to go all out. These are my four favorite ChatGPT prompts for that exact task. This one requires a bit of work, talking ChatGPT through a stages, but the end result is worth it. Of course, if you're just asking a simple question or looking into something simple, all of this work isn't needed. However, I have found that a bit of forward planning can get the model thinking harder. Get instant access to breaking news, the hottest reviews, great deals and helpful tips. ChatGPT will respond to this saying that it is ready for your question. Ask your request and it will take its time thinking through the task. This prompt works best on one of the more advanced versions of ChatGPT, such as 4o. It will also work on other chatbots such as Claude 4 or Gemini. Prompt: Debate with yourself on [insert topic]. For each side of the argument, quote sources and use any information available to you to form the argument. Take time before you start to prepare your arguments. ChatGPT can make a great debate partner, even better when it is debating itself. By using this prompt, you'll get strongly planned and considered arguments on both sides of a topic. This is especially useful when you're working on an essay or project that needs a varied consideration. The model can debate on any topic, but sometimes will only touch on the surface of a topic. In this case, follow up with a prompt asking ChatGPT to think harder about its responses, forcing it to consider everything in more detail. Prompt: 'Break down the history, current state, and future implications of [issue], using subheadings and citing credible sources.' Instead of just getting a general overview of a subject, this will give you a detailed report, examining the past, future and current state of a topic. By asking for citations, ChatGPT will list all of the sources it has used to offer up the information in your report. You can go a step further by asking ChatGPT to use the internet to do this, providing links to any information it has used. Prompt: 'List the step-by-step process for [task], noting common pitfalls and how to avoid each one.' A simple but effective prompt for ChatGPT, this will not only give you the instructions for how to do something but warn you of the mistakes that are often made for each stage. For example, when using this prompt for making focaccia, ChatGPT gave me instructions for stage 1 of mixing the dough, along with warnings around the temperature of the water and making sure to mix the dough enough. This is a step up from simply asking ChatGPT to explain how to do something, forcing it to carefully consider the best way to do something, especially if it is a complicated task.