Latest news with #deepfakes


Forbes
21 hours ago
- Politics
- Forbes
How AI Is Rewriting Reality—And Why Media Literacy Is Our Best Defense
Dr. Lyric Mandell of MOXY Company is a media strategist and scholar merging credibility, creativity, and culture to shape communication.

We live in a society where AI-generated images of presidents in papal robes or pop stars in pitiful props aren't just the brainchildren of bored internet users—they now circulate through official channels and have real-world consequences. The rise of AI-driven visuals from sources as frivolous as anonymous Reddit threads to as sacred as the White House shows how blurred the line between satire and statecraft has become, and it's not just political theater. When military agencies experiment with deepfakes and public health campaigns feature AI-generated humans, it becomes clear: This is no longer just about technological novelty—it's a crisis of perception, authority and what we, as a society, agree to call 'real.'

In his 2005 book on 'BS,' Harry Frankfurt reminds us that much of what circulates in public life is neither truth nor lie—it's language used without any regard for the truth. In the digital age, that indifference becomes content. And when this kind of insincerity becomes visually striking and algorithmically optimized, the danger isn't just that we misinterpret the message—it's that we stop caring whether the message is real.

For communicators, this shift is seismic. We now operate in a landscape where audiences often don't care about who shares something—only how it makes them feel or how frequently it appears in their feed. And perhaps more unsettling is that much of this isn't malicious; it's rooted in media illiteracy. The erosion of traditional credibility markers—expertise, authorship and institutional trust—forces communicators to ask complex questions: How do we create messages that resonate in a reality where factual grounding is optional but ethical responsibility isn't? The stakes aren't just strategic—they're societal. Research suggests that false information spreads six times faster than truth and often appears professional enough to pass as fact, even influencing how governments, organizations and the public respond to events. As AI grows more adept at mimicking human behavior, our critical filters weaken.

Although the technology is new, the terrain is familiar. As early as 1922, journalist Walter Lippmann theorized in Public Opinion that people respond not to actual events but to the 'pictures inside our heads'—mental shortcuts or 'stereotypes' that help us navigate chaos. In an age where media circulates in many-to-many networks, AI doesn't just reinforce those images; it manufactures them at scale. Media theorist Neil Postman calls this the entertainment-ization of public discourse. In Amusing Ourselves to Death, he argues that television renders 'serious' ideas digestible only when entertaining. AI-generated media becomes Postman's nightmare realized: politics as parody and medicine as memes.

This overflow of information, although entertaining, also drains us. With people spending over two hours a day on social media, each swipe delivers another micro-dose of engagement—or irritation. This content overload leads to what scholars describe as 'information fatigue syndrome'—a cognitive condition marked by emotional burnout, decision paralysis and, most alarmingly, active avoidance of news and discourse. Research from Reuters suggests that people don't turn away from the news out of apathy—they retreat because the content feels repetitive, emotionally exhausting and beyond their power to influence.
In an ecosystem where audiences can't—or won't—filter every post for truth or relevance, trust becomes optional and attention becomes reflexive. And AI accelerates this breakdown. When content never stops and everything feels true, our brains default to shortcuts. We adopt Lippmann's stereotypes—those 'pictures in our heads'—because interrogating every piece of media proves too exhausting. The antidote isn't withdrawal—it's critical literacy. In an 'apathy economy' where content circulates without conviction, modern communicators must create signals worthy of the scarce, fatigued attention users still possess—but at what cost?

For communicators, this shift demands more than creative recalibration—it requires ethical clarity. In an environment where virality often outperforms veracity, the temptation rises: optimize for engagement, lean into outrage and co-opt the aesthetic of authenticity without accountability. But the real challenge isn't just how to get attention—it's how to deserve it. Credibility is no longer a given. If we want audiences to engage intentionally rather than impulsively, we must build trust actively—and often, uphill. This means resisting the allure of AI shortcuts that produce volume without value. It means recognizing that saturation breeds cynicism, and most importantly, it means creating content that contributes to literacy, not just visibility.

Frankfurt warns that 'BS' is dangerous not because it's false but because it's indifferent. Postman warns that spectacle smothers substance, and Lippmann warns that our internal 'pictures' overpower facts. Today, all three thinkers converge at the intersection of AI and public discourse. The real danger we face isn't just misinformation—it's the erosion of consensus: not consensus as shared opinions but as shared processes, a collective understanding of how we evaluate and prioritize truth, source credibility and what constitutes reliable evidence. In a world where every post, video or AI-generated image circulates with the same weight—regardless of origin or intent—that consensus collapses. This collapse doesn't just disrupt public trust; it dismantles the conditions that make disagreement productive. Without a baseline agreement on how we determine what's real—and more importantly, why truth should still matter—we lose the ability to disagree meaningfully. We don't just fight over facts—we fight over whether facts exist at all.

For communicators, this places a unique responsibility at our feet. We're not just competing in an attention economy but shaping a reality economy. Every message we craft doesn't just influence a market; it contributes to—or corrodes—the broader information environment. We must evaluate our impact not just within KPIs but across our social world. If we're all architects of attention, we're also stewards of its consequences—and that includes preserving a cultural commitment to truth itself.
Yahoo
2 days ago
- Business
- Yahoo
AI Deepfakes Responsible For 40% Of $4.6B Lost To Crypto Scams Last Year, Report Says
Some $4.6 billion was lost to cryptocurrency scams in 2024, according to a joint report from cryptocurrency exchange Bitget and cryptocurrency-focused security firms SlowMist and Elliptic released last week. Deepfakes were the most used tactic, accounting for "nearly 40% of high value fraud," the report said. Using deepfakes, scammers created the illusion of official authority for scam projects, the firms said. They cited deepfaked videos of Singapore Prime Minister Lee Hsien Loong and Deputy Prime Minister Lawrence Wong endorsing supposed "government-endorsed crypto investment" platforms as examples. The report also said Tesla (NASDAQ:TSLA) CEO Elon Musk was regularly featured in fraudulent giveaway schemes.

Beyond impersonating public figures, the report said deepfakes are used to bypass know-your-customer (KYC) verification systems to steal customer funds, create virtual identities as covers for investment fraud and launch phishing attacks through fake video meeting platforms that implant backdoors in the computers of targets. "Five years ago, avoiding scams meant 'don't click suspicious links.' Today, it's 'don't trust your own eyes,'" the report said.

Meanwhile, AI is also being leveraged to make more traditional scams, like Ponzi and pyramid schemes, more sophisticated. Using face-swapping and deepfake technology, scammers are able to fake images and videos to bolster confidence in the schemes. The report cited a February scheme that saw scammers hijack the X account of Tanzanian billionaire Mohammed Dewji to promote a fake Tanzania token using deepfake videos. The project raised over $1.4 million in the first 24 hours. "The biggest threat to crypto today isn't volatility—it's deception," Bitget CEO Gracy Chen said in a statement. "AI has made scams faster, cheaper, and harder to detect."

With the pace of AI advancement likely to continue to accelerate, the current dominance of AI-based cryptocurrency scams promises to be the new reality, making it necessary for projects and individuals to develop countermeasures. Some suggestions in the report include establishing a single platform for information sharing and using on-chain signatures for easy verification. The report also warned users against blindly trusting familiar faces and voices, urging them to verify information across multiple platforms before acting. Other tips included being skeptical of unsolicited contact, not running code or installing files from unknown sources, bookmarking official sites, and using scam detection plug-ins.

The scourge of deepfakes is not limited to the cryptocurrency space. President Donald Trump in May signed the Take It Down Act, which criminalizes deepfake pornography and requires tech firms to remove such content upon request.
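The report's on-chain signature suggestion points at a concrete defense: if a project signs every announcement with the private key of a published address, a deepfaked video cannot forge the accompanying signature. Below is a minimal sketch of that verification step in Python, assuming the eth-account library; the signer address, message and function name are illustrative, not taken from the report.

```python
# Sketch of the report's "on-chain signatures" idea, assuming the
# eth-account library (pip install eth-account). The address and helper
# name below are placeholders invented for illustration.
from eth_account import Account
from eth_account.messages import encode_defunct

# Address the project has published on-chain and on its official site.
OFFICIAL_SIGNER = "0x0000000000000000000000000000000000000000"  # placeholder

def announcement_is_authentic(text: str, signature: str) -> bool:
    """Recover the EIP-191 signer of a message and compare it to the
    project's known signing address."""
    message = encode_defunct(text=text)
    try:
        recovered = Account.recover_message(message, signature=signature)
    except Exception:
        return False  # malformed or truncated signature
    return recovered.lower() == OFFICIAL_SIGNER.lower()

# A convincing deepfake can imitate a face, but it cannot produce a valid
# signature from the official key, so an unsigned "announcement" fails
# this check no matter how real the video looks.
```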


CTV News
4 days ago
- CTV News
Authorities warn parents of sexually explicit deepfakes, dangers of AI
Authorities are warning about an increase in deepfakes in child sexual abuse investigations. 'AI and deepfakes are a new trend we're starting to see online and enter into some of our investigations,' said Const. Stephanie Bosh of ALERT's Internet Child Exploitation (ICE) unit in a news release on Tuesday. Deepfakes are video, images or audio recordings that seem real but have been created or altered using artificial intelligence (AI). A sexual exploitation and abuse tip line operated by the Canadian Centre for Child Protection (C3P) processed 4,000 sexually explicit deepfake images and videos over 12 months in 2023 and 2024. 'Our team is hearing more stories about the negative effects of AI, especially when it's used by someone with ill intent, each time we're out in the community. It is imperative that parents are aware that this technology exists, especially with kids home this summer,' Cpl. Heather Bangle added. Guardians are advised to:


CNN
12-06-2025
- Business
- CNN
Meta sues maker of explicit deepfake app for dodging its rules to advertise AI 'nudifying' tech
Meta is suing the Hong Kong-based maker of the app CrushAI, a platform capable of creating sexually explicit deepfakes, claiming that it repeatedly circumvented the social media company's rules to purchase ads. The suit is part of what Meta (META) described as a wider effort to crack down on so-called 'nudifying' apps — which allow users to create nude or sexualized images from a photo of someone's face, even without their consent — following claims that the social media giant was failing to adequately address ads for those services on its platforms.

As of February, the maker of CrushAI, also known as Crushmate and by several other names, had run more than 87,000 ads on Meta platforms that violated its rules, according to the complaint Meta filed in Hong Kong district court Thursday. Meta alleges the app maker, Joy Timeline HK Limited, violated its rules by creating a network of at least 170 business accounts on Facebook or Instagram to buy the ads. The app maker also allegedly had more than 55 active users managing over 135 Facebook pages where the ads were displayed. The ads primarily targeted users in the United States, Canada, Australia, Germany and the United Kingdom. 'Everyone who creates an account on Facebook or uses Facebook must agree to the Meta Terms of Service,' the complaint states. Some of those ads included sexualized or nude images generated by artificial intelligence and were captioned with phrases like 'upload a photo to strip for a minute' and 'erase any clothes on girls,' according to the lawsuit. CNN has reached out to Joy Timeline HK Limited for comment on the lawsuit.

Tech platforms face growing pressure to do more to address non-consensual, explicit deepfakes, as AI makes it easier than ever to create such images. Targets of such deepfakes have included prominent figures such as Taylor Swift and Rep. Alexandria Ocasio-Cortez, as well as high school girls across the United States. The Take It Down Act, which makes it illegal for individuals to share non-consensual, explicit deepfakes online and requires tech platforms to quickly remove them, was signed into law last month.

But a series of media reports in recent months suggests that these nudifying AI services have found an audience by advertising on Meta's platforms. In January, reports from tech newsletter Faked Up and outlet 404Media found that CrushAI had published thousands of ads on Instagram and Facebook and that 90% of the app's traffic was coming from Meta's platforms. That's despite the fact that Meta prohibits ads that contain adult nudity and sexual activity, and forbids sharing non-consensual intimate images and content that promotes sexual exploitation, bullying and harassment. Following those reports, Sen. Dick Durbin, Democrat and ranking member of the Senate Judiciary Committee, wrote to Meta CEO Mark Zuckerberg asking 'how Meta allowed this to happen and what Meta is doing to address this dangerous trend.' Earlier this month, CBS News reported that it had identified hundreds of advertisements promoting nudifying apps across Meta's platforms, including ads that featured sexualized images of celebrities. Other ads on the platforms pointed to websites claiming to animate deepfake images of real people to make them appear to perform sex acts, the report stated. In response to that report, Meta said it had 'removed these ads, deleted the Pages responsible for running them and permanently blocked the URLs associated with these apps.'
Meta says it reviews ads before they run on its platforms, but its complaint indicates that it has struggled to enforce its rules. According to the complaint, some of the CrushAI ads blatantly advertised its nudifying capabilities with captions such as 'Ever wish you could erase someone's clothes? Introducing our revolutionary technology' and 'Amazing! This software can erase any clothes.' Now, Meta said its lawsuit against the CrushAI maker aims to prevent it from further circumventing its rules to place ads on its platforms. Meta alleges it has lost $289,000 because of the costs of investigating, responding to regulators and enforcing its rules against the app maker.

When it announced the lawsuit Thursday, the company also said it had developed new technology to identify these types of ads, even if the ads themselves didn't contain nudity. Meta's 'specialist teams' partnered with external experts to train its automated content moderation systems to detect the terms, phrases and emojis often present in such ads. 'This is an adversarial space in which the people behind it — who are primarily financially motivated — continue to evolve their tactics to avoid detection,' the company said in a statement. 'Some use benign imagery in their ads to avoid being caught by our nudity detection technology, while others quickly create new domain names to replace the websites we block.'

Meta said it had begun sharing information about nudifying apps attempting to advertise on its sites with other tech platforms through a program called Lantern, run by industry group the Tech Coalition. Tech giants created Lantern in 2023 to share data that could help them fight child sexual exploitation online.

The push to crack down on deepfake apps comes after Meta dialed back some of its automated content removal systems — prompting some backlash from online safety experts. Zuckerberg announced earlier this year that those systems would be focused on checking only for illegal and 'high-severity' violations such as those related to terrorism, child sexual exploitation, drugs, fraud and scams. Other concerns must be reported by users before the company evaluates them.
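Meta's description of text-side screening (catching the terms, phrases and emoji typical of these ads even when the creative shows no nudity) can be illustrated with a toy matcher. The sketch below is hypothetical: the phrase list, emoji set and threshold are invented for illustration, and Meta's production systems are trained classifiers rather than keyword rules.

```python
# Toy illustration of text-side ad screening: flag ad copy that matches
# phrases or emoji commonly seen in "nudifying" ads, even when the ad
# image itself contains no nudity. All signals here are invented for
# illustration; a real system would use a trained classifier.
import unicodedata

SUSPECT_PHRASES = [
    "erase any clothes",
    "strip for a minute",
    "remove clothes",
    "undress photo",
]
SUSPECT_EMOJI = {"\U0001F51E"}  # U+1F51E "no one under eighteen" symbol

def flag_ad_copy(text: str, threshold: int = 1) -> bool:
    """Return True when the ad text accumulates enough suspect signals."""
    normalized = unicodedata.normalize("NFKC", text).casefold()
    hits = sum(1 for phrase in SUSPECT_PHRASES if phrase in normalized)
    hits += sum(1 for char in normalized if char in SUSPECT_EMOJI)
    return hits >= threshold

# Both real captions quoted in the complaint would be flagged:
print(flag_ad_copy("Amazing! This software can erase any clothes."))  # True
print(flag_ad_copy("upload a photo to strip for a minute"))          # True
```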

