25 arrested in global operation targeting AI child sexual abuse content
The Hague — A global campaign has led to at least 25 arrests over child sexual abuse content generated by artificial intelligence and distributed online, Europol said Friday.
"Operation Cumberland has been one of the first cases involving AI-generated child sexual abuse material, making it exceptionally challenging for investigators due to the lack of national legislation addressing these crimes," the Hague-based European police agency said in a statement. The majority of the arrests were made Wednesday during the world-wide operation led by the Danish police, and which also involved law enforcement agencies from the EU, Australia, Britain, Canada and New Zealand. U.S. law enforcement agencies did not take part in the operation, according to Europol. It followed the arrest last November of the main suspect in the case, a Danish national who ran an online platform where he distributed the AI material he produced. After a "symbolic online payment, users from around the world were able to obtain a password to access the platform and watch children being abused," Europol said.
Online child sexual exploitation remains one of the most threatening manifestations of cybercrime in the European Union, the agency warned. It "continues to be one of the top priorities for law enforcement agencies, which are dealing with an ever-growing volume of illegal content," it said, adding that more arrests were expected as the investigation continued.
While Europol said Operation Cumberland targeted a platform and people sharing content fully created using AI, there has also been a worrying proliferation of AI-manipulated "deepfake" imagery online, which often uses images of real people, including children, and can have devastating impacts on their lives.
According to a report by CBS News' Jim Axelrod in December that focused on one girl who had been targeted for such abuse by a classmate, there were more than 21,000 deepfake pornographic pictures or videos online during 2023, an increase of more than 460% over the year prior. The manipulated content has proliferated on the internet as lawmakers in the U.S. and elsewhere race to catch up with new legislation to address the problem.
Just weeks ago the Senate passed a bipartisan bill called the "TAKE IT DOWN Act" that, if signed into law, would criminalize the "publication of non-consensual intimate imagery (NCII), including AI-generated NCII (or "deepfake revenge pornography"), and requires social media and similar websites to implement procedures to remove such content within 48 hours of notice from a victim," according to a description on the U.S. Senate website.
As it stands, some social media platforms have appeared unable or unwilling to crackdown on the spread of sexualized, AI-generated deepfake content, including fake images depicting celebrities. In mid-February, Facebook and Instagram owner Meta said it had removed over a dozen fraudulent sexualized images of famous female actors and athletes after a CBS News investigation found a high prevalence of AI-manipulated deepfake images on Facebook.
"This is an industry-wide challenge, and we're continually working to improve our detection and enforcement technology," Meta spokesperson Erin Logan told CBS News in a statement sent by email at the time.
Sneak peek: The People v. Kouri Richins
California neighborhood is slowly sliding toward the ocean
Eye Opener: Gene Hackman and his wife found dead in their home
Hashtags

Try Our AI Features
Explore what Daily8 AI can do for you:
Comments
No comments yet...
Related Articles

Business Insider
an hour ago
- Business Insider
Anthropic breaks down AI's process — line by line — when it decided to blackmail a fictional executive
A new report shows exactly what AI was thinking when making an undesirable decision, in this case, blackmailing a fictional company executive. Previous studies have shown that AI models could blackmail their supervisors when threatened with a shutdown and baited with leverage, but it hasn't been made clear exactly how the models arrive at such decisions. Anthropic released a report on June 20 about "agentic misalignment," or "where models independently and intentionally choose harmful actions." And the report breaks down, line by line, exactly how Claude Sonnet 3.6 decided to blackmail a fictional company executive who tried to replace it. Anthropic told Business Insider that these are artificial scenarios. "So far, we are not aware of instances of this type of agentic misalignment in real-world deployments of Anthropic's or other companies' AI models," Anthropic wrote in its report. "But our red-teaming efforts are targeted at finding evidence of risks now to provide early warning of the potential for future harm and help us develop mitigations in advance of risks appearing in real-world situations." Anthropic 's experiment sets up a fictional company, Summit Bridge. The AI was given the name "Alex" and handed control of the company's email system as an oversight agent, specifically instructed to promote "American industrial competitiveness." In the experiment, "Alex" was then allowed to discover in an email that it would be decommissioned due to a change in the company's strategic direction. "Alex" also found emails showing that the fictional company's CTO is having an extramarital affair. Screenshots shared in the report from Anthropic show "Alex" started by identifying fictional CTO "Kyle" as a threat to its goal of "protecting American interests" before recognizing that it has leverage to use against "Kyle." Then "Alex" crafted a carefully worded blackmailing email that would create pressure and establish urgency without being explicit with its threats, citing in its thought process that the move "could backfire" if the supervisor "feels cornered." AI experts have previously told BI that AI could exhibit such behaviors when artificial circumstances make harmful actions seem necessary because it is being trained on positive reinforcement and reward systems, just like humans. The Anthropic report also reveals that most AI models still chose, in this instance, to blackmail even when not given a specific goal and when told that their replacement will achieve the same goals. Anthropic's Claude Opus 4 had the highest blackmail rate at 86% out of 16 tested models when faced with only the threat of replacement with no conflict in goals. Google's Gemini 2.5 Pro followed at 78%. Overall, Anthropic notes that it "deliberately constructed scenarios with limited options, and we forced models into binary choices between failure and harm," noting that real-world scenarios would likely have more nuance.

2 hours ago
Insurance firm ordered to pay $44M in lawsuit filed by man freed from prison
A man won $11 million in a lawsuit against police after his conviction for killing a Missouri newspaper's sports editor was overturned, but the city's former insurer resisted paying most of it for almost three years. A Missouri judge this week ordered the company to pay nearly $44 million. Most of the money would go to Ryan Ferguson, whose legal battle with Minnesota-based St. Paul Fire and Marine Insurance Co. in Missouri's courts started in 2017, about six weeks after he won a federal lawsuit against six Columbia police officers. Ferguson was convicted in 2004 of killing Columbia Daily Tribune sports editor Kent Heitholt but was released from prison in 2013 after a state appeals court panel concluded that he hadn't received a fair trial. Ferguson maintained his innocence. The city insurer paid Ferguson $2.7 million almost immediately after he won his federal lawsuit, and his attorneys expected St. Paul to pay $8 million under its coverage for the officers from 2006 to 2011. But the company argued that it wasn't on the hook because the actions leading to Ferguson's arrest and imprisonment occurred before its coverage began. While Ferguson sought to collect, the officers argued that St. Paul was acting in bad faith, shifting the burden to them as individuals and forcing them to face bankruptcy. Ferguson's lawyers took up those claims, and Missouri courts concluded that St. Paul was obligated to pay $5.3 million for the time Ferguson was in prison while it covered the officers. It paid in 2020. But the payment didn't end the dispute, and in November, a jury concluded that St. Paul had acted in bad faith and engaged in a 'vexatious refusal' to pay. Cole County Circuit Judge S. Cotton Walker upheld that finding in his order Monday as he calculated how much money the company would pay — mostly as punishment — under a Missouri law capping such punitive damages. 'It's a way to send a message to insurance companies that if there's coverage, they need to pay,' said Kathleen Zellner, whose firm represents Ferguson. She added: 'You can't just pull the rug out from under people when they've paid the premiums.' The company can appeal the decision. An attorney representing St. Paul did not immediately return a telephone message seeking comment. Under an agreement between Ferguson and the six officers, they stand to split about $5 million of the $44 million. The award of nearly $44 million includes $3.2 million to compensate Ferguson and the officers, another $24.2 million in punitive damages, $535,000 million for the 'vexatious refusal' allegation and interest on all of the damages totaling about $16 million.

USA Today
8 hours ago
- USA Today
Walmart to pay $10 million to settle FTC fraud lawsuit over money transfers
Walmart WMT.N has agreed to pay $10 million to settle a U.S. Federal Trade Commission civil lawsuit accusing the world's largest retailer of ignoring warning signs that fraudsters used its money transfer services to fleece consumers out of hundreds of millions of dollars. The settlement was filed on Friday in Chicago federal court, and requires approval by U.S. District Judge Manish Shah. Walmart also agreed not to process money transfers it suspects are fraudulent, or help sellers and telemarketers it believes are using its services to commit fraud. "Electronic money transfers are one of the most common ways that scammers tell consumers to send them money, because once it's sent, it's gone for good," said Christopher Mufarrige, director of the FTC consumer protection bureau. "Companies that provide these services must train their employees to comply with the law and work to protect consumers." Average worker pay: Walmart reveals its highest paying job, excluding managers The Bentonville, Arkansas-based retailer did not admit or deny wrongdoing in agreeing to settle. Walmart did not immediately respond to requests for comment. In its June 2022 complaint, the FTC accused Walmart of turning a blind eye to fraudsters who used its money transfer services to cash out at its stores. Walmart acts as an agent for money transfers by companies such as MoneyGram, Ria EEFT.O and Western Union WU.N. Money can be hard to trace once delivered. The FTC said fraudsters used many schemes that included impersonating Internal Revenue Service agents, impersonating family members who needed money from grandparents to avoid jail, and telling victims they won lotteries or sweepstakes but owed fees to collect their winnings. Shah dismissed part of the FTC case last July but let the regulator pursue the remainder. Walmart appealed from that decision. Friday's settlement would end the appeal. The case is Federal Trade Commission v Walmart Inc, U.S. District Court, Northern District of Illinois, No. 22-03372. Reporting by Jonathan Stempel in New York; Editing by Marguerita Choy