
Expert warns parents over AI deepfakes of children
Only 20 images of a child are needed to create a deepfake video of them, a leading expert in cybersecurity has warned.
The study, conducted by Perspectus Global, surveyed 2,000 parents in the UK with children under the age of 16 and found that parents upload an average of 63 images to social media every month.
More than half of these uploads (59%) are family photos, with one in five parents (21%) posting such images multiple times a week.
Speaking on RTÉ's Today with Claire Byrne, CEO of Hotline.ie, Mick Moran, said that as AI gets stronger, the 20 images required to create the videos will be reduced to only one.
"The big worry is that these AI models will be used to create CSAM (Child Sexual Abuse Material) and children involved in sex acts," he said.
"We've already seen in the past, innocent images that kids themselves are posting, or their parents are posting, being used in advertising pornography sites.
"In this case, however, a given data set of images, 20 of them, will allow you to produce an unlimited amount of material featuring that child in any scenario."
Mr Moran explained that the risk of CSAM is only one aspect of the issue, and that the deepfake videos could also be used for fraud or scams.
"You have to be aware that your data is being used to train these models and fundamentally, any information you share online can be used in ways you never intended."
He said that if images are shared publicly, the expectation of privacy is "gone," adding that some companies treat uploaded material as carrying "implicit consent."
"If you're an adult and you share a picture... it attracts different rules under data protection. However, if you're a parent and you share a picture of your child or another child, it is deemed to be implicit consent from the parent that transfers to the child, and therefore they can use the image."
Parents urged to tighten social media privacy settings
Mr Moran said that there is "no problem" in sharing images online, as long as the audience who can view them is limited through social media privacy settings.
He called on the Government to bring in legislation making it illegal to possess or make an engine that trains AI to produce CSAM.
"CSAM and child pornography are illegal under the Child Trafficking and Pornography Act of 1998, so it's illegal to possess it, whether it's made by AI or not," he said.
"What I'd be calling on the Government to do here would be to make it illegal to possess, make an engine, or to train an AI engine that will produce CSAM - that's not illegal.
"What you put into it might be illegal, what comes out of it might be illegal, but the act of doing it is not necessarily illegal," he added.
Related Articles


Irish Times
11 hours ago
The victim delivered a searing impact statement. Just one thing felt off
It was a routine enough tableau: a judge, sitting at the bench, watching the victim of a violent attack address a courtroom via video as they forgave their attacker and asked for leniency. The judge held the fate of the perpetrator, already found guilty and awaiting sentencing, in their hands. As the video statement ended, the judge commented that he 'loved' it, that he 'heard the forgiveness'. It was a moving moment. The only issue was that the victim had been dead for three and a half years.

The video was an AI-generated victim impact statement from a murdered man, Christopher Pelkey. This use of synthetically generated video and audio of a murder victim in an Arizona court last month felt like another 'puffer jacket pope' moment. The viral AI-generated image of Pope Francis in a white Balenciaga-style down jacket fooled millions and catapulted image generation tools into the cultural mainstream. Now, along with popes in puffer jackets, we have another watershed moment in 'ghostbots'.

Unlike the people it depicts, the 'digital afterlife industry', as it is more formally known, is alive and kicking. Companies with names such as HereAfter AI and You Only Virtual allow users to create digital archives of themselves so that the people they leave behind can interact with 'them' once they are gone. These apps market themselves to the living or bypass the person being digitally cloned altogether. The bereaved are now offered the promise of 'regenerating' their deceased relatives and friends. People out there are, at this moment, interacting with virtual renderings of their mothers and spouses on apps with names such as Re:memory and Replika. They don't need the participation or consent of the deceased.

The video used to reanimate Christopher Pelkey was created using widely available tools and a few simple reference points: a YouTube interview and his obituary photo, according to The New York Times.
This gives the generated footage the feel of a decent cheapfake rather than a sophisticated deepfake. Watching it, you find yourself in the so-called 'uncanny valley', that feeling you get when interacting with a bot, when your brain knows something is not quite right. This person is too serene, too poreless, too ethereal as they stare into your eyes and talk about their own death.

Pelkey's sister wrote the script, imagining the message she believed her brother would have wanted to deliver. This includes the synthetic version of Pelkey addressing 'his' killer: 'It is a shame we encountered each other that day in those circumstances. In another life, we probably could have been friends. I believe in forgiveness and in God, who forgives. I always have and I still do.'

I do not doubt that the Pelkey family had good intentions. They had a point they wanted to make, saw a tool to let them do it, and were permitted to do so by the court. They also likely believe they know what their lost loved one would have wanted. But should anyone really have the power to put words in the mouth and voice of the deceased?

We often fret about AI image and video generation tools being used to mislead us, to trick us as voters or targets of scams. But deception and manipulation are not the same thing. In that Arizona courtroom there was no intention to deceive: no one thought this was the actual murder victim speaking. Yet that does not diminish its emotional impact. If we can have the murdered plead for peace, does that mean we could also have AI ghosts asking for vengeance, retribution or war?

Political actors have embraced generative AI, with its ability to cheaply make persuasive, memorable content. Despite fears it would be used for disinformation, most public use cases are of non-deceptive 'soft fakes'.
An attack ad against Donald Trump, for example, featured audio of a synthetic version of his voice saying out loud something he had only written in a tweet. However, the real political AI innovation is happening in India, where last year candidates did things such as create videos of themselves speaking in languages they do not know, and even generate digital 'endorsements' from long-dead figures. One candidate had the voice of his father, who died from Covid in 2020, tell voters: 'Though I died, my soul is still with all of you ... I can assure you that my son, Vijay, will work for the betterment of Kanniyakumari.' Vijay won.

People have long tried to speak for the dead, often to further their own ends. AI turbocharges this into a kind of morbid ventriloquism, rendered in high definition and delivered with reverential sincerity. But the danger isn't that we mistake these digital ghosts for the real thing, it's that we know what they are, and still acquiesce to being emotionally manipulated by them. Maybe now we all need to look into whether we need to write a will with a new kind of DNR: Do Not Regenerate.


RTÉ News
a day ago
Pope Leo warns politicians of the challenges posed by AI
Pope Leo has warned politicians of the challenges posed by the rise of artificial intelligence (AI), addressing its potential impact on younger people as a prime concern.

Speaking at an event attended by Italian Prime Minister Giorgia Meloni and parliamentary delegations from 68 countries, Leo revisited a topic that he has raised on a number of occasions during the first few weeks of his papacy.

"In particular, it must not be forgotten that artificial intelligence functions as a tool for the good of human beings, not to diminish them or even to replace them," Leo said at an event held as part of the Roman Catholic Jubilee or Holy Year.

AI proponents say it will speed up scientific and technological progress and help people to carry out routine tasks, granting them more time to pursue higher-value and creative work.

The US-born pontiff said attention was needed to protect "healthy, fair and sound lifestyles, especially for the good of younger generations." He noted that AI's "static memory" was in no way comparable to the "creative, dynamic" power of human memory.

"Our personal life has greater value than any algorithm, and social relationships require spaces for development that far transcend the limited patterns that any soulless machine can pre-package," he said.

Leo, who became pope in May, has spoken previously of the threat posed by AI to jobs and has called on journalists to use it responsibly.


The Irish Sun
a day ago
Users of Facebook app must make important change now to avoid private chats going PUBLIC
META AI, which has been woven into the Facebook and WhatsApp experience, might be making your private conversations with the chatbot public.

The standalone Meta AI app prompts users to post publicly in the app's Discovery feed by default, according to a recent report.

When users tap "Share" and "Post to feed", they are sharing their conversations with strangers all around the world. It is much like a public Facebook post, the report added.

The Discovery feed is plastered with AI-generated images, as well as text conversations. There's no telling how private these interactions can be, from talking through your relationship woes to drafting a eulogy.

"I've scrolled past people asking Meta AI to explain their anxiety dreams, draft eulogies, and brainstorm wedding proposals," the report said. "It's voyeuristic, and not in the performative way of most social media; it's real and personal."

Meta has added a pop-up warning users that agreeing for their AI chats to land on the Discovery page means strangers can view them; these conversation snippets aren't just for themselves or their friends to see. However, accidental sharing remains a possibility. TechRadar noted that these conversations may even appear elsewhere on Meta platforms, like Facebook, WhatsApp or Instagram.

Fortunately, you can opt out of having your conversations go public completely through the Meta AI app's settings. Here's how you can make sure your chats aren't at risk of being shared publicly:

Open the Meta AI app.
Tap your account icon, i.e. your profile picture or initials.
Tap Data and Privacy, then Manage Your Information.
Toggle on Make all public prompts visible to only you, then tap Apply to all in the pop-up.

This will ensure that when you share a prompt, only you will be able to see it.

To go one step further, you can erase all records of any interaction you've had with Meta AI. To do this, simply tap Delete all prompts in this same section of the Meta AI app's settings. This will wipe any prompt you've written, regardless of whether it's been posted, from the app.

Of course, even with the opt-out enabled and your conversations with Meta AI no longer public, Meta still retains the right to use your chats to improve its models.

What is Meta AI?

You may have spotted Meta AI on your social media feed. Here's how it works: Meta AI is a conversational artificial intelligence tool, also known as a chatbot. It responds to a user's questions in a similar fashion to competitors like ChatGPT and Microsoft Copilot. Meta AI is what's known as generative AI, so called due to its ability to generate content. It can produce text or images in response to a user's request. The tool is trained on data that's available online and can mimic patterns commonly found in human language as it provides responses. Meta AI appears on Facebook, Instagram, WhatsApp, and Messenger, where it launches a chat when a question is sent.