
AI killed the Easter Bunny
On the grounds of advancing age, I had decided to ignore all the chatter about artificial intelligence and devote my remaining time to things I could properly understand. Then I discovered that one of my own copyrighted properties, the fruit of a year's work, had been scraped into the AI maw without so much as a by-your-leave, and it became personal.
I wrote to my MP, who responded with template blandishments. This government… committed to blah blah… exciting prospects… safeguarding… potential opt-out system… a close watch, yadda yadda…
Feeling impotent and no further forward, I returned to my knitting. It took the murder of the Easter Bunny to rouse me from the torpor of denial. My six-year-old grandson, hanging out with friends who knew how to question Google AI, had been informed there is no rabbit who brings chocolate eggs. It's just your parents, dumbo. They buy the eggs from the supermarket and hide them in the garden. In that moment of brutal AI revelation, I fear Father Christmas also received his P45. Likewise the Tooth Fairy. Whether this myth-busting applies to bogeymen and things that go bump in the night, I'm not sure.
I do know, without consulting Google, that the ethics of terrifying today's delicate children are no longer clear. Screen monsters are probably OK but the kind of flesh-and-blood horrors I was threatened with as a child – Flannel Foot the silent burglar, the Man with the Big Stick and, most sinister of all, the Ten O'Clock Horses, who came for you if you were still playing on the street after dark – are now likely considered too horrific for tender ears.

Related Articles


Daily Mirror
20 hours ago
Samsung Galaxy phone users face urgent deadline - ignoring it will be costly
Samsung Galaxy phone users face urgent deadline - ignoring it will be costly
If you use a Galaxy phone, make sure you log into your Samsung account. Samsung is warning users to check their settings and make sure they are logged in to certain services. The Korean technology giant appears to be having a spring clean and is deleting Samsung accounts that haven't been used in a while. It follows a similar approach to Google, which began deleting inactive accounts back in 2023.

In a message sent to Galaxy users, Samsung said it is making "important changes" and that those not using certain services face having their accounts deleted.

"Thank you for using Samsung account," the message, seen by Mirror Online, reads. "We are writing to inform you of important changes related to using your Samsung account.

"Samsung is implementing an inactive Samsung account policy to protect the data of users who have not used their account for an extended period of time. Once this policy is implemented, Samsung accounts that have not been logged in to or used for twenty-four (24) months will be considered inactive and will be subject to deletion.

"If an account is deleted, access to the account will be restricted and all data linked to the account will be deleted. Accounts and data that are deleted cannot be restored."

If this message is sitting in your inbox, it means you haven't logged into your account in a while. If you don't want to lose it, or any data stored in it, it's vital that you act quickly to stop the account being shut down for good. To avoid any issues, all you need to do is log in, and your data should be safe.

"To prevent your account from being deleted, and to ensure proper use of Samsung Services, your account must have at least one usage/activity every twenty-four (24) months," Samsung added.


The Guardian
2 days ago
Internet users advised to change passwords after 16bn logins exposed
Internet users have been told to change their passwords and upgrade their digital security after researchers claimed to have revealed the scale of sensitive information – 16bn login records – potentially available to cybercriminals.

Researchers at Cybernews, an online tech publication, said they had found 30 datasets stuffed with credentials harvested from malicious software known as 'infostealers' and from leaks. The researchers said the datasets were exposed 'only briefly' but amounted to 16bn login records, with an unspecified number of overlapping records – meaning it is difficult to say definitively how many accounts or people have been exposed.

Cybernews said the credentials could open access to services including Facebook, Apple and Google – although there had been no 'centralised data breach' at those companies.

Bob Diachenko, the Ukrainian cybersecurity specialist behind the research, said the datasets had become temporarily available after being poorly stored on remote servers – before being removed again. Diachenko said he was able to download the files and would aim to contact individuals and companies that had been exposed. 'It will take some time of course because it is an enormous amount of data,' he said.

Diachenko said the information he had seen in infostealer logs included login URLs to Apple, Facebook and Google login pages. Apple and Facebook's parent, Meta, have been contacted for comment. A Google spokesperson said the data reported by Cybernews did not stem from a Google data breach – and recommended people use tools like Google's password manager to protect their accounts. Internet users are also able to check whether their email has been compromised in a data breach by using a breach-checking website.

Cybernews said the information seen in the datasets followed a 'clear structure: URL, followed by login details and a password'. Diachenko said the data appeared to be '85% infostealers' and about 15% from historical data breaches such as a leak suffered by LinkedIn.

Experts said the research underlined the need to update passwords regularly and adopt tough security measures such as multifactor authentication – combining a password with another form of verification, such as a code texted to a phone. Other recommended measures include passkeys, a password-free method championed by Google and Facebook's owner, Meta.

'While you'd be right to be startled at the huge volume of data exposed in this leak, it's important to note that there is no new threat here: this data will likely already have been in circulation,' said Peter Mackenzie, the director of incident response and readiness at the cybersecurity firm Sophos.

Mackenzie said the research underlined the scale of data that can be accessed by online criminals. 'What we are understanding is the depth of information available to cybercriminals.' He added: 'It is an important reminder to everyone to take proactive steps to update passwords, use a password manager and employ multifactor authentication to avoid credential issues in the future.'

Toby Lewis, the global head of threat analysis at the cybersecurity firm Darktrace, said the data flagged in the research is hard to verify, but infostealers – the malware reportedly behind the data theft – are 'very much real and in use by bad actors'. He said: 'They don't access a user's account but instead scrape information from their browser cookies and metadata. If you're following good practice of using password managers, turning on two-factor authentication and checking suspicious logins, this isn't something you should be greatly worried about.'

Cybernews said none of the datasets had been reported previously, barring one revealed in May with 184m records. It described the datasets as a 'blueprint for mass exploitation', including 'account takeover, identity theft, and highly targeted phishing'. The researchers added: 'The only silver lining here is that all of the datasets were exposed only briefly: long enough for researchers to uncover them, but not long enough to find who was controlling vast amounts of data.'

Alan Woodward, a professor of cybersecurity at Surrey University, said the news was a reminder to carry out 'password spring cleaning'. He added: 'The fact that everything seems to be breached eventually is why there is such a big push for zero trust security measures.'


Spectator
2 days ago
Is AI eating your brain?
Do you remember long division? I do, vaguely – I certainly remember mastering it at school: that weird little maths shelter you built, with numbers cowering inside like fairytale children, and a wolf-number at the door, trying to eat them (I had quite a vivid imagination as a child). Then came the carnage as the wolf got in – but also a sweet satisfaction at the end. The answer! You'd completed the task with nothing but your brain, a pen, and a scrap of paper. You'd thought your way through it. You'd done something, mentally. You were a clever boy.

Could I do long division now? Honestly, I doubt it. I've lost the knack. But it doesn't matter, because decades ago we outsourced and off-brained that job to machines – pocket calculators – and now virtually every human on earth carries a calculator in their pocket, via their phones. Consequently, we've all become slightly dumber, certainly less skilled, because the machines are doing all the skilful work of boring mathematics.

Long division is, of course, just one example. The same has happened to spelling, navigation, translation, even the choosing of music. Slowly, silently, frog-boilingly, we are ceding whole provinces of our minds to the machine. What's more, if a new academic study is right, this is about to get scarily and dramatically worse (if it isn't already worsening), as the latest AI models – from clever Claude Opus 4 to genius Gemini 2.5 Pro – supersede us in all cerebral departments.

The recent study was done by the MIT Media Lab. The boffins in Boston apparently strapped EEG caps to a group of students and set them a task: write short essays, some using their own brains, some using Google, and some with ChatGPT. The researchers then watched what happened to their neural activity. The results were quite shocking, though not entirely surprising: the more artificial intelligence you used, the more your actual intelligence sat down for a cuppa. Those who used no tools at all lit up the EEG: they were thinking. Those using Google sparkled somewhat less. And those relying on ChatGPT? Their brains dimmed and flickered like a guttering candle in a draughty church.

It gets worse still. The ChatGPT group not only produced the dullest prose – safe, oddly samey, you know the score – but they couldn't even remember what they'd written. When asked to recall their essays minutes later, 78 per cent failed. Most depressingly of all, when you took ChatGPT away, their brain activity stayed low, like a child sulking after losing its iPad.

The study calls this 'cognitive offloading', which sounds sensible and practical, like a power station with a backup. What it really means is: the more you let the machine think for you, the harder it becomes to think at all.

And this ain't just theory. The dulling of the mind, the lessening need for us to learn and think, is already playing out in higher education. New York Magazine's Intelligencer recently spoke to students from Columbia, Stanford, and other colleges who now routinely offload their essays and assignments to ChatGPT. They do this because professors can no longer reliably detect AI-generated work; detection tools fail to spot the fakes most of the time. One professor is quoted thus: 'massive numbers of students are going to emerge from university with degrees, and into the workforce, who are essentially illiterate.' In the UK the situation's no better.
A recent Guardian investigation revealed nearly 7,000 confirmed cases of AI-assisted cheating across British universities last year – more than double the previous year, and that's just the ones who got caught. One student admitted submitting an entire philosophy dissertation written by ChatGPT, then defending it in a viva without having read it. The result? Degrees are becoming meaningless, and the students themselves – bright, ambitious, intrinsically capable – are leaving education maybe less able than when they entered.

The inevitable endpoint of all this, for universities, is not good. Indeed, it's terminal. Who is going to take on £80k of debt to spend three years asking AI to write essays that are then marked by overworked tutors using AI – so that no actual human does, or learns, anything? Who, in particular, is going to do this when AI means there aren't many jobs at the end, anyhow?

I suspect 80 to 90 per cent of universities will close within the next ten years. The oldest and poshest might survive as finishing schools – expensive playgrounds where rich kids network and get laid. But almost no one will bother with that funny old 'education' thing – the way most people today don't bother to learn the viola, or Serbo-Croat, or Antarctic kayaking.

Beyond education, the outlook is nearly as bad – and I very much include myself in that: my job, my profession, the writer. Here's a concrete example. Last week I was in the Faroe Islands, at a notorious 'beauty spot' called Trælanípa – the 'slave cliff'. It's a mighty rocky precipice at the southern end of a frigid lake, where it meets the sea. The cliff is so called because this is the place where Vikings ritually hurled unwanted slaves to their grisly deaths.

Appalled and fascinated, I realised I didn't know much about slavery in Viking societies. It's been largely romanticised away, as we idealise the noble, wandering Norsemen with their rugged individualism. Knowing they had slaves to wash their undercrackers rather spoils the myth. So I asked Claude Opus 4 to write me a 10,000-word essay on 'the history, culture and impact of slavery in Viking society'.

The result – five minutes later – was not far short of gobsmacking. Claude chose an elegant title ('Chains of the North Wind'), then launched into a stylish, detailed, citation-rich essay. If I had stumbled on it in a library or online, I would have presumed it was the product of a top professional historian, in full command of the facts, taking a week or two to write. But it was written by AI. In about the time it will take you to read this piece.

This means most historians are doomed (like most writers). This means no one will bother learning history in order to write history. This means we all get dumber, just as the boffins in Boston are predicting.

I'd love to end on a happy note. But I'm sorry, I'm now so dim I can't think of one. So instead, I'm going to get ChatGPT to fact-check this article – as I head to the pub.