
US student seeks college refund after spotting that her teacher was using ChatGPT
In February, Ella Stapleton was going over her organisational behaviour class lecture notes when she came across a directive addressed to ChatGPT. According to The New York Times, the notes included instructions such as 'expand on all areas' and showed typical hallmarks of AI-generated content, including clumsy wording, distorted images, and errors that resembled machine output.

Related Articles


Spectator
a day ago
Is AI eating your brain?
Do you remember long division? I do, vaguely – I certainly remember mastering it at school: that weird little maths shelter you built, with numbers cowering inside like fairytale children, and a wolf-number at the door, trying to eat them (I had quite a vivid imagination as a child). Then came the carnage as the wolf got in – but also a sweet satisfaction at the end. The answer! You'd completed the task with nothing but your brain, a pen, and a scrap of paper. You'd thought your way through it. You'd done something, mentally. You were a clever boy.

Could I do long division now? Honestly, I doubt it. I've lost the knack. But it doesn't matter, because decades ago we outsourced and off-brained that job to machines – pocket calculators – and now virtually every human on earth carries a calculator in their pocket, via their phones. Consequently, we've all become slightly dumber, certainly less skilled, because the machines are doing all the skilful work of boring mathematics.

Long division is, of course, just one example. The same has happened to spelling, navigation, translation, even the choosing of music. Slowly, silently, frog-boilingly, we are ceding whole provinces of our minds to the machine. What's more, if a new academic study is right, this is about to get scarily and dramatically worse (if it isn't already worsening), as the latest AI models – from clever Claude Opus 4 to genius Gemini 2.5 Pro – supersede us in all cerebral departments.

The recent study was done by the MIT Media Lab. The boffins in Boston apparently strapped EEG caps to a group of students and set them a task: write short essays, some using their own brains, some using Google, and some with ChatGPT. The researchers then watched what happened to their neural activity. The results were quite shocking, though not entirely surprising: the more artificial intelligence you used, the more your actual intelligence sat down for a cuppa. Those who used no tools at all lit up the EEG: they were thinking. Those using Google sparkled somewhat less. And those relying on ChatGPT? Their brains dimmed and flickered like a guttering candle in a draughty church.

It gets worse still. The ChatGPT group not only produced the dullest prose – safe, oddly samey, you know the score – but they couldn't even remember what they'd written. When asked to recall their essays minutes later, 78 per cent failed. Most depressingly of all, when you took ChatGPT away, their brain activity stayed low, like a child sulking after losing its iPad.

The study calls this 'cognitive offloading', which sounds sensible and practical, like a power station with a backup. What it really means is: the more you let the machine think for you, the harder it becomes to think at all.

And this ain't just theory. The dulling of the mind, the lessening need for us to learn and think, is already playing out in higher education. New York Magazine's Intelligencer recently spoke to students from Columbia, Stanford, and other colleges who now routinely offload their essays and assignments to ChatGPT. They do this because professors can no longer reliably detect AI-generated work; detection tools fail to spot the fakes most of the time. One professor is quoted thus: 'massive numbers of students are going to emerge from university with degrees, and into the workforce, who are essentially illiterate.'

In the UK the situation's no better. A recent Guardian investigation revealed nearly 7,000 confirmed cases of AI-assisted cheating across British universities last year – more than double the previous year, and that's just the ones who got caught. One student admitted submitting an entire philosophy dissertation written by ChatGPT, then defending it in a viva without having read it.

The result? Degrees are becoming meaningless, and the students themselves – bright, ambitious, intrinsically capable – are leaving education maybe less able than when they entered.

The inevitable endpoint of all this, for universities, is not good. Indeed, it's terminal. Who is going to take on £80k of debt to spend three years asking AI to write essays that are then marked by overworked tutors using AI – so that no actual human does, or learns, anything? Who, in particular, is going to do this when AI means there aren't many jobs at the end, anyhow? I suspect 80 to 90 per cent of universities will close within the next ten years. The oldest and poshest might survive as finishing schools – expensive playgrounds where rich kids network and get laid. But almost no one will bother with that funny old 'education' thing – the way most people today don't bother to learn the viola, or Serbo-Croat, or Antarctic kayaking.

Beyond education, the outlook is nearly as bad – and I very much include myself in that: my job, my profession, the writer. Here's a concrete example. Last week I was in the Faroe Islands, at a notorious 'beauty spot' called Trælanípa – the 'slave cliff'. It's a mighty rocky precipice at the southern end of a frigid lake, where it meets the sea. The cliff is so-called because this is the place where Vikings ritually hurled unwanted slaves to their grisly deaths.

Appalled and fascinated, I realised I didn't know much about slavery in Viking societies. It's been largely romanticised away, as we idealise the noble, wandering Norsemen with their rugged individualism. Knowing they had slaves to wash their undercrackers rather spoils the myth. So I asked Claude Opus 4 to write me a 10,000-word essay on 'the history, culture and impact of slavery in Viking society.'

The result – five minutes later – was not far short of gobsmacking. Claude chose an elegant title ('Chains of the North Wind'), then launched into a stylish, detailed, citation-rich essay. If I had stumbled on it in a library or online, I would have presumed it was the product of a top professional historian, in full command of the facts, taking a week or two to write. But it was written by AI. In about the time it will take you to read this piece.

This means most historians are doomed (like most writers). This means no one will bother learning history in order to write history. This means we all get dumber, just as the boffins in Boston are predicting.

I'd love to end on a happy note. But I'm sorry, I'm now so dim I can't think of one. So instead, I'm going to get ChatGPT to fact-check this article – as I head to the pub.


The Independent
2 days ago
ChatGPT use linked to cognitive decline, research reveals
Relying on the artificial intelligence chatbot ChatGPT to help you write an essay could be linked to cognitive decline, a new study reveals.

Researchers at the Massachusetts Institute of Technology Media Lab studied the impact of ChatGPT on the brain by asking three groups of people to write an essay. One group relied on ChatGPT, one group relied on search engines, and one group had no outside resources at all. The researchers then monitored their brains using electroencephalography, a method which measures electrical activity.

The team discovered that those who relied on ChatGPT, a chatbot built on a large language model, had the 'weakest' brain connectivity and remembered the least about their essays, highlighting potential concerns about cognitive decline in frequent users.

'Over four months, [large language model] users consistently underperformed at neural, linguistic, and behavioral levels,' the study reads. 'These results raise concerns about the long-term educational implications of [large language model] reliance and underscore the need for deeper inquiry into AI's role in learning.'

The study also found that those who didn't use outside resources to write the essays had the 'strongest, most distributed networks.' While ChatGPT is 'efficient and convenient,' those who use it to write essays aren't 'integrat[ing] any of it' into their memory networks, lead author Nataliya Kosmyna told Time Magazine.

Kosmyna said she's especially concerned about the impacts of ChatGPT on children whose brains are still developing. 'What really motivated me to put it out now before waiting for a full peer review is that I am afraid in 6-8 months, there will be some policymaker who decides, "let's do GPT kindergarten,"' Kosmyna said. 'I think that would be absolutely bad and detrimental. Developing brains are at the highest risk.'

But others, including President Donald Trump and members of his administration, aren't so worried about the impacts of ChatGPT on developing brains. Trump signed an executive order in April promoting the integration of AI into American schools. 'To ensure the United States remains a global leader in this technological revolution, we must provide our Nation's youth with opportunities to cultivate the skills and understanding necessary to use and create the next generation of AI technology,' the order reads. 'By fostering AI competency, we will equip our students with the foundational knowledge and skills necessary to adapt to and thrive in an increasingly digital society.'

Kosmyna said her team is now working on another study comparing the brain activity of software engineers and programmers who use AI with those who don't. 'The results are even worse,' she told Time Magazine.
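The article doesn't spell out how 'brain connectivity' is quantified. As a rough illustration only, and not the MIT team's actual analysis pipeline, one simple functional-connectivity proxy correlates the signals recorded at pairs of EEG electrodes; the electrode labels and data below are simulated purely to show the idea.

```python
# Illustrative only: a toy functional-connectivity measure for EEG-like data.
# This is NOT the MIT Media Lab's analysis pipeline; the channels and
# signals below are simulated to sketch what a connectivity score can mean.
import numpy as np

rng = np.random.default_rng(0)
channels = ["Fz", "Cz", "Pz", "Oz"]   # hypothetical electrode labels
n_samples = 2560                      # e.g. 10 s at a 256 Hz sampling rate

# Simulate loosely coupled "brain activity": a shared slow wave, scaled
# differently per channel, plus independent noise.
shared = np.sin(np.linspace(0, 20 * np.pi, n_samples))
signals = np.array(
    [shared * rng.uniform(0.2, 1.0) + rng.normal(0, 1, n_samples)
     for _ in channels]
)

# One crude connectivity proxy: pairwise Pearson correlations between
# channels. Averaging the off-diagonal magnitudes gives a single score;
# weaker coupling between regions drives that score down.
corr = np.corrcoef(signals)
off_diag = corr[~np.eye(len(channels), dtype=bool)]
print(f"mean connectivity: {np.abs(off_diag).mean():.2f}")
```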


The Independent
2 days ago
Revealed: The AI chatbot requests that cause the most carbon emissions
A new study reveals that every query to large language models like ChatGPT consumes energy and generates carbon emissions. Complex reasoning questions, such as those in abstract algebra or philosophy, lead to significantly higher carbon emissions, up to six times more than simpler queries. Models designed for explicit reasoning processes produce substantially more carbon dioxide, with some generating up to 50 times more emissions than concise response models. The study highlights an 'accuracy-sustainability trade-off', where highly accurate AI models often result in greater energy consumption and carbon footprint. Researchers recommend that users reduce emissions by prompting AI for concise answers and reserving high-capacity models for tasks that genuinely require their advanced capabilities.
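The study's practical advice (ask for concise answers, reserve heavyweight models for genuinely hard problems) maps naturally onto how an application calls an LLM API. Below is a minimal sketch assuming the OpenAI Python SDK; the model names and the simple routing flag are illustrative assumptions, not part of the study.

```python
# A minimal sketch of the study's advice: request concise answers and reserve
# larger models for queries that need deep reasoning. Assumes the OpenAI
# Python SDK; the model names and routing flag are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(question: str, hard: bool = False) -> str:
    # Route simple queries to a small model; escalate only when needed.
    model = "gpt-4o" if hard else "gpt-4o-mini"  # hypothetical choices
    response = client.chat.completions.create(
        model=model,
        messages=[
            # Asking for brevity trims output tokens; the study links
            # emissions to how many (reasoning) tokens a model generates.
            {"role": "system", "content": "Answer as concisely as possible."},
            {"role": "user", "content": question},
        ],
        max_tokens=150,  # hard cap on response length
    )
    return response.choices[0].message.content

print(ask("What is the capital of the Faroe Islands?"))                 # simple
print(ask("Prove that every finite integral domain is a field.", hard=True))
```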