Latest news with #Section230


Newsweek
5 days ago
- Business
- Newsweek
Republicans Must Say No to the AI Regulation Moratorium
In the earliest days of Donald Trump's second term, there were exciting signs that the administration was going to chart what we might call a "human-first" course on technology. Voters who were angry over how smartphones, social media, app stores, and EdTech had metastasized into something resembling a conspiracy against children, and who were anxious that automation might take their jobs, helped the president retake the White House. But hopes for a human-first tech policy are already dimming. In its all-consuming effort to beat China in the A.I. race, the Republican Party has fallen into its old libertarian habits of deferring to Big Tech's interests, failing to protect children and families from predatory uses of emerging technology, and deregulating the industry so that it can operate without any concern for consumer welfare.

It's not too late, though. In the administration's earliest days, the president sided with longshoremen against efforts to make union members redundant via automation. Also, in his January 25, 2025 executive order, the president committed to an A.I. policy that pursues "human flourishing." Vice President JD Vance declared at February's A.I. Action Summit in Paris that the administration would "fight for policies that ensure that AI" will lead to "higher wages, better benefits, and safer and more prosperous communities." This is the road that most Americans want the administration to take.

But since then, the Republican Party has taken one huge step backward. Last month the House of Representatives approved an amendment to the "Big Beautiful Bill" that, if ratified by the Senate, would shield A.I. companies from state regulation and liability for ten whole years. Such a move shows astounding disregard for how ungoverned technologies can undermine human flourishing—and it would unbridle Big Tech's power. The moratorium would void a law in Utah, for instance, that prohibits mental health chatbots from targeting users with advertising, a policy that removes companies' incentives to exploit a suffering audience. It would also block a proposed law in Texas that would require a "human operator," i.e., a human driver, to accompany an autonomous long-haul truck as it transports its freight. And it would block several laws that have been introduced around the country, including in blue states like California and New York, that would require so-called "A.I. companions"—an Orwellian bit of Big Tech branding—to clarify that they are not human beings.

Republicans must learn from Congress' past mistakes, such as when, in 1996, it passed the ignominious Section 230 of the Communications Decency Act. Section 230 was touted as necessary to guard the innovative potential of the nascent online service industry from death by regulation. But by granting platforms sweeping immunity for any content posted by third parties, the law disincentivized them from making good-faith efforts to protect kids. Section 230 dug a legal moat around Big Tech, from behind which the industry waged war on America's children. The ten-year moratorium on A.I. regulation portends a similar legacy.
It indicates that Congress, especially Republican leadership, has failed to reckon with how immunizing technological power from liability threatens human flourishing. To its credit, by including human flourishing in its A.I. policy framework, the administration recognizes the possibility of promoting A.I. innovation without sacrificing other human goods. Human flourishing as an explicit policy objective underscores that "acceleration," as the techno-libertarian Right calls it, is an over-simplified paradigm, and that tech policy needs to pursue a broader suite of values, especially the good of the human person and the family.

As we have argued elsewhere, the achievement of human flourishing in the age of A.I. (as in every age) depends on deliberate policy choices. Technological innovation, no matter how beneficial to economic prosperity or national security, should never come at the expense of the family or the human person. And there are ways to balance these interests. We have called upon the Trump administration, for instance, to establish a Working Group on Technology and the Family that would directly assist in forming policy to guide technology toward family empowerment, and away from legislation—like the moratorium—that would put families in the crosshairs.

In February 2019, the first Trump administration released an executive order that committed the federal government to securing "public trust" and "public confidence" in its A.I. policy. It acknowledged that protecting "American values" was a critical objective, even as it worked to advance "American leadership in AI." That is what an administration committed to human flourishing sounds like; and it is what the second Trump administration sounded like at its start. A ten-year moratorium on state regulation, by contrast, is just a retread of the tired libertarian playbook that trades American values and public trust for technological power and financial gain.

Fortunately, a groundswell of opposition has emerged among Republican senators: Josh Hawley (Mo.), Marsha Blackburn (Tenn.), Ron Johnson (Wisc.), and Rick Scott (Fla.) all publicly oppose the moratorium. Representative Marjorie Taylor Greene (R-Ga.) has done likewise, and more may join them. So, the die is not yet cast. The word is not yet final. The future is still ahead. The Trump administration can still make a human-first A.I. policy. But the time for choosing is now.

Michael Toscano is director of the Family First Technology Initiative at the Institute for Family Studies. Jared Hayden is a policy analyst for the Family First Tech Initiative at the Institute for Family Studies. The views expressed in this article are the writers' own.
Yahoo
09-06-2025
- Politics
- Yahoo
FTC Pivots From Competition to Children
A Federal Trade Commission (FTC) summit last week on protecting children online previewed an odd pivot. Apparently, the agency wants to be a sort of family values advocacy group. "This government-sponsored event was not a good-faith conversation about child safety—it was a strategy session for censorship," said the Free Speech Coalition (FSC), a trade group for the adult industry.

What stands out most to me about last Wednesday's event—called "The Attention Economy: How Big Tech Firms Exploit Children and Hurt Families"—is the glimpse it provided into how the FTC's anti-tech strategy is evolving and the way Republicans seem intent on turning a bipartisan project like online child protection into a purely conservative one.

Attacking tech platforms has become a core part of the FTC's mission over the past decade. During Donald Trump's first term as president, these attacks tended to invoke free speech concerns. Whether the weapon of choice was antitrust law or changes to Section 230, the justification back then usually had something to do with the ways tech platforms were moderating content and the idea that this moderation was politically biased against conservatives. Under President Joe Biden, the FTC continued to wield antitrust law against tech companies, but now the justification was that the companies were just too big. Democrats invoked "fairness" and the idea that they were restoring competition by knocking these big businesses down a peg.

The way the FTC attacks tech companies has become a window into the larger preoccupations and priorities of different political cohorts. And these days, it's going all in on being a conservative morality machine—in the name of protecting the children, of course. Replace references to social media platforms and app stores with cable TV and video games—or rock music and comic books—and this workshop would have been right at home in any of the last few decades of the last century. Even the old right-wing culture war stalwart Morality in Media was there, though the group now calls itself the National Center on Sexual Exploitation (NCOSE).

In fact, most of the panelists came from conservative groups. In addition to the representative from NCOSE, there were folks from the Heritage Foundation, the American Principles Project, the Family Policy Alliance, the Ethics & Public Policy Center, the Family First Technology Initiative at the Institute for Family Studies, and Hillsdale College. The speakers also included several Republican politicians and some Republican FTC commissioners. The event barely pretended to be anything other than a right-wing values summit, with panelists laying out their vision for how the FTC and Congress can work together to put conservative values into law.

"For years, protecting kids online has been touted as one of the only issues Republicans and Democrats could agree on," notes Lauren Feiner at The Verge. But the FTC's recent event "previewed how that conversation may take a different tone under President Donald Trump's second term—one where anti-porn rules, conservative family values, and a push for parents' rights take center stage."

"We have a God-given right and duty to question whether" social and technological change must be looked at with resignation and indifference, said FTC Chair Andrew N. Ferguson in his prepared keynote remarks. Ferguson said that the FTC's job is to protect vulnerable consumers and that this includes children.
Protecting kids online will inevitably involve everyone giving up more personal information, he suggested: "We must go beyond the current legal regime, which conditions unfettered access to online services on nothing more than an unverified, self-reported birthdate." Going beyond self-reported age assurances means app stores, social media companies, adult websites, and all sorts of other web platforms checking government-issued IDs, using biometric data, or otherwise engaging in privacy-invading actions. That obviously will affect not just minors but almost everyone who uses the internet, requiring adults as well as kids to give up more personal information. It's a funny agenda item for an agency ostensibly concerned with consumer privacy.

Panelists at the FTC conference seemed especially concerned with checking IDs for consumers of online pornography. "The topics of age verification and pornography came up many times over the course of the event," reports the FSC. "Throughout the event, FTC leadership and their allies made plain their intentions to spread unconstitutional age-verification policies nationwide and attack the adult industry's very right to exist."

But panelists expressed support for a wide range of federal legislation aimed at age-gating and censoring the internet, including:

- The Kids Online Safety Act (KOSA), which would require online platforms to "prevent and mitigate" all sorts of online "harms" to minors, from eating disorders to depression to risky spending.
- The Shielding Children's Retinas from Egregious Exposure on the Net (SCREEN) Act (H.R. 1623 and S. 737), which would create a federal age-verification mandate for platforms that host content deemed "harmful to minors" (a category that includes all porn platforms but could also ensnare a good deal beyond that).
- The App Store Accountability Act, which would require app stores to verify user ages and restrict downloads for minors who don't have parental consent. "While framed as a child protection measure, the bill would force app stores to collect sensitive personal data like government IDs or biometric scans from potentially hundreds of millions of users, posing serious risks to privacy, threatening free expression, and replicating the same constitutional flaws that have plagued previous online age-verification laws," write Marc Scribner and Nicole Shekhovtsova, two policy analysts at Reason Foundation (the nonprofit that publishes this website).
- The CASE IT Act (H.R. 573), a bill last introduced in 2023 that would take away Section 230 protections from porn websites that don't verify ages.

"There are ways to encode certain values into technological design," Michael Toscano of the Institute for Family Studies said on one panel. "We have a responsibility as a political, social, and economic matter to ensure that technology is ordered towards human flourishing and the common good." But Americans have many different ideas about what constitutes human flourishing and the common good. And policies mandating that tech companies take the "common good" into account are inevitably going to reflect the version of the common good envisioned by those in power at the time.

The idea of human flourishing and common good envisioned by those in favor at the FTC right now seems to recognize few rights and little agency for anyone under the age of 18. In his keynote, Ferguson envisioned a world where the government gives parents total control and surveillance over their children's online activities.
"Parents should be able to see what messages their children are sending or receiving on a particular service," he said. "And most importantly, parents should be able to erase any trace left by their children on these platforms, at all levels of granularity, from individual messages to entire accounts." The idea of human flourishing and common good envisioned by those in favor at the FTC right now also leaves little room for adults' sexual freedom. "From bizarre, unscientific claims about porn addiction to denials that the First Amendment protects sexual content, many of the speakers used the spotlight to slander and malign the adult industry," noted the FSC. "The FTC also made it clear that they plan to test the limits of their authority, including by expanding their use of Section 5 of FTC Act (which prohibits 'unfair or deceptive acts or practices in or affecting commerce') to go after targets they disfavor." The idea of human flourishing and common good envisioned by those in favor at the FTC right now doesn't seem too keen on free markets either. FTC Commissioner Mark Meador went on an extended rant comparing tech companies to tobacco companies and calling individual choice a smokescreen for "ever-greater corporate power." The FTC's current anti-tech agenda is explicitly rooted in socially conservative moral values and explicitly hostile to free speech and free markets. It might have a different flavor than the Biden FTC agenda, but it won't be any better for business freedom or for individuals' civil liberties. During closing statements last week in the case against former leaders of the orgasmic meditation company OneTaste, the government showed the jury pictures of the alleged victims—including a picture of a woman named Madelyn Carl. One government attorney mentioned Carl more than two dozen times in her closing. But Carl had not testified as a government witness, and was in fact in the courtroom that day to support defendants Nicole Daedone and Rachel Cherwitz. "I do not see myself as a victim of OneTaste, or Nicole Daedone, or Rachel Cherwitz," said Carl in an emailed statement. "Both of those women have helped me in immeasurable ways, and I would be devastated if they got convicted." "My story is my story," she continued. "Obviously it did not fit the government's narrative, so they did not call me as a witness. I joined the OneTaste community by choice, and I remained in the community until I decided it was time for me to move on." The FBI did interview Carl about her time at OneTaste. Afterward, agents prepared a report about the interview that "mischaracterized things I said" and "reframed my story in a misleading way," according to Carl. She also said the FBI offered to pay for therapy if she went through an FBI victim specialist: In the summer of 2022 I reached out to one of the other witnesses for a reference to a therapist but then ultimately ended up declining because the offer that I got back was not something I was interested in. The offer was that the fbi would put me in touch with a victim specialist and pay for my therapy. She said they had offered to pay for her therapy retroactively and would do the same for me. I declined because I didn't want to use a victim specialist. Or process my issues with the fbi. Because I didn't feel like a victim. Carl isn't the only woman involved with OneTaste who feels the FBI tried to paint as a victim despite her objections. 
Reason talked with two other women—Alisha Price and Jennifer Slusher—who felt pressured by the FBI to say they were victims. You can read their stories here.

• The "big beautiful break between Trump and Musk" signals Silicon Valley's wider disillusionment with the Trump administration, writes Yascha Mounk.

• "A recent ruling from the Ninth Circuit Court of Appeals is raising the stakes for any business that operates a website collecting user data," reports The National Law Review: In Briskin v. Shopify, decided in April 2025, the court held that California courts can exercise personal jurisdiction over an out-of-state company—Shopify—for allegedly collecting personal data from a California resident without proper disclosure or consent. This decision signals a significant shift in how courts view digital jurisdiction in the age of online commerce and widespread data collection.

• How Hollywood studios are quietly using AI.
Yahoo
29-05-2025
- Health
- Yahoo
Jolt's Latest Doc ‘Can't Look Away' Examines the Dark Side of Social Media and Its Impact On Adolescents
In the documentary 'Can't Look Away,' directors Matthew O'Neill and Perri Peltz expose the dark side of social media and the tragic impact Big Tech company algorithms can have on children and teens. Based on extensive investigative reporting by Bloomberg News reporter Olivia Carville, the doc follows a team of lawyers at Seattle's Social Media Victims Law Center who are battling several tech companies on behalf of families who have lost children to suicide, drug overdose, or exploitation linked to social media use.

O'Neill and Peltz ('Axios,' 'Surveilled') capture the lawyers' fight against Section 230 of the Communications Decency Act. Created in 1996, before the birth of social media, Section 230 states that internet service providers cannot be held responsible for what third parties post.

'The fact that this group of really incredible lawyers came together with this mission in mind to get around Section 230 through product liability, we just thought it was such a fascinating approach,' says Peltz.

'Can't Look Away' is currently streaming on Jolt, an AI-driven streaming platform that connects independent films with audiences. Recent Jolt titles include 'Hollywoodgate,' 'Zurawski v Texas,' and 'The Bibi Files,' a documentary from Oscar winners Alex Gibney and Alexis Bloom that investigates corruption in Israeli politics.

O'Neill says that he and Peltz decided to put 'Can't Look Away' on Jolt, in part, because the company could 'move quickly and decisively reach an audience now, with a message that audiences are hungry for.' 'What was also appealing to us is this sense of Jolt as a technology company,' he says. 'They are using these tools to identify and draw in new audiences that might not be the quote unquote documentary audience. We are documentary filmmakers, and we want our films to speak to everyone.'

Jolt uses AI to power its Interest Delivery Networks, enabling films to connect with their target audiences. The platform's Chief Executive Officer, Tara Hein-Phillip, would not disclose Jolt viewership numbers for 'Can't Look Away,' making it difficult to determine how well the new distribution service is performing. However, Hein-Phillip did reveal that since the platform's launch in March 2024, the company's most-viewed film is the documentary 'Your Fat Friend,' which charts the rise of writer, activist, and influencer Aubrey Gordon. Hein-Phillip attributed part of the film's success on Jolt to Gordon's niche but significant online following.

'We are still learning along the way what builds audience and where to find them and how long it takes to build them,' Hein-Phillip says. 'It's slightly different for every film. We really focus on trying to find unique audiences for each individual film. In a way, that is problematic because it's not a reliable audience to say, "Oh, we have built however many for this particular film, now we can turn them onto (this other) film and they'll all go there." They won't.'

The company utilizes advanced data analytics and machine learning to develop performance marketing plans that target specific audiences for each film and increase awareness.
All collected data is shared with each respective Jolt filmmaker, who receives 70% of their Jolt earnings and retains complete ownership of their work and all future rights.

'Initially, we thought Jolt would just be an opportunity to put a film up there,' says Hein-Phillip. 'We would put some marketing against it, and we would push the film out into the world and give it our best push, and we definitely still do that, but now we realize that to build an audience, you actually have to do a handful of things. Some films come to us and they have already done that work, and some films come to us and they haven't. If they haven't, it's in our best interest and their best interest for us to help facilitate that.' That 'work' can include a theatrical release, an impact campaign, or a festival run.

Hein-Phillip says that Jolt partnered with O'Neill and Peltz on 'Can't Look Away' because, in addition to being a 'great, impactful film,' the doc has broad audience potential. 'There are so many audiences for this film – parents, teenagers, lawyers, educators, etc,' said Hein-Phillip.

To attract those audiences, Jolt and the 'Can't Look Away' directors have, ironically, relied on social media to help get the word out about the film. 'We aren't anti-social media,' says Peltz. 'What we are trying to say in the film is – put the responsibility where it rightly belongs.'

'Can't Look Away' will be released on Bloomberg Media Platforms in July.
Yahoo
28-05-2025
- Politics
- Yahoo
Section 230 Was Hijacked by Big Tech to Silence You
In 1996, Congress passed a well-meaning law called Section 230 of the Communications Decency Act to help internet platforms grow. It was supposed to protect online forums from liability for what their users said—not give billion-dollar corporations the right to shadow-ban dissidents, rig elections, and coordinate censorship with the federal government. But thanks to a judicial sleight of hand, Section 230 became the sledgehammer Big Tech used to bludgeon the First Amendment into submission. And now—at long last—the Supreme Court may have a chance to fix it. The case to watch is Fyk v. Facebook, and it might be the most important free speech lawsuit you've never heard of.

So, here's The Lie That Broke the Internet: Section 230(c)(1) reads: 'No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.' Sounds simple, right? Don't sue the platform for what someone else posts. But that's not how the courts interpreted it. They swapped out 'the publisher' for 'a publisher'—a tiny grammatical switch with massive consequences. That misquote gave platforms immunity not just for hosting content—but for what they choose to manipulate, suppress, or delete.

This misinterpretation has allowed Big Tech giants to:

- throttle political speech they don't like;
- deplatform rival voices and competitors;
- shadow-ban stories that challenge official narratives;
- and partner with the government to suppress dissenting opinions—all while claiming immunity.

Don't take my word for it—look at the receipts. The 'Twitter Files' revealed that federal agencies actively worked with platforms to suppress content. A federal judge even issued an injunction in Missouri v. Biden to stop this unconstitutional collusion. That's not moderation. That's state-sanctioned censorship in a corporate mask.

Congress intended Section 230 to protect platforms acting in good faith—hence the name of Section 230(c): 'Protection for "Good Samaritan" blocking and screening of offensive material.' Platforms were supposed to remove truly harmful content—pornography, violence, abuse—not opinions that made their investors uncomfortable or their partners in D.C. nervous. But under the courts' bastardized reading of the law, the 'good faith' clause in Section 230(c)(2) became meaningless. If 230(c)(1) shields all moderation, then what's the point of requiring platforms to act in good faith at all? That's a textbook violation of the surplusage canon—a legal rule that says no part of a statute should be rendered pointless. In short, the courts rewrote the law. And they handed Big Tech the keys to our digital public square.

Jason Fyk built a multi-million-dollar business on Facebook. With over 25 million followers, his pages drove massive traffic—until Facebook targeted and deleted his content, allegedly redirecting it to competitors and killing his revenue. When he sued, Judge Jeffrey White dismissed the case under Section 230—claiming Facebook was immune. But here's the kicker: Fyk wasn't suing over what other people said. He was suing over what Facebook did. They didn't just host his content—they manipulated it, redirected it, and destroyed his business. That's not speech. That's sabotage.

Fyk's verified complaint included sworn factual allegations. Under standard civil procedure (Rule 12(b)(6)), the court was required to treat those facts as true.
Instead, the judge parroted Facebook's false claims—even branding Fyk the 'pee page guy' over a page he didn't even own. This kind of judicial deference to Big Tech is exactly why Fyk's case is headed to the Supreme Court.

Let's clear something up: Section 230 is an affirmative defense, not 'sovereign immunity.' That means platforms must prove their actions were lawful—not automatically escape trial. In Barnes v. Yahoo! (2009), the Ninth Circuit confirmed that Section 230 is not a blanket shield. But courts have ignored that precedent and instead created a fantasy world where Big Tech can't be touched—no matter what they do. As Jason Fyk explains in his eye-opening analysis, Section 230 for Dummies, the judiciary has created 'super-immunity' out of thin air. That's not just unconstitutional—it's dangerous.

The Supreme Court has a golden opportunity here. If they take Fyk's case, they can:

- restore due process by ending early dismissals based on false immunity;
- reinstate the 'good faith' requirement for content moderation;
- clarify the difference between a neutral host and an active publisher;
- and return free speech to the people, not the platforms.

No new laws are needed. Just a correct interpretation of the law we already have. Section 230 was designed to protect speech—not suppress it. It was written to encourage good-faith moderation—not corporate censorship on behalf of the federal government. The law isn't broken. The courts broke it. Now it's time they fix it.


Fox News
22-05-2025
- Business
- Fox News
Meta faces increasing scrutiny over widespread scam ads
Meta, the parent company of Facebook and Instagram, is under fire after a major report revealed that thousands of fraudulent ads have been allowed to run on its platforms. According to the Wall Street Journal, Meta accounted for nearly half of all scam complaints tied to Zelle transactions at JPMorgan Chase between mid-2023 and mid-2024. Other banks have also reported a high number of fraud cases linked to Meta's platforms.

The problem of scam ads on Facebook has grown rapidly in recent years. Experts point to the rise of cryptocurrency schemes, AI-generated content and organized criminal groups operating from Southeast Asia. These scams range from fake investment opportunities to misleading product offers and even the sale of nonexistent puppies.

One example involves Edgar Guzman, a legitimate business owner in Atlanta, whose warehouse address was used by scammers in more than 4,400 Facebook and Instagram ads. These ads promised deep discounts on bulk merchandise, tricking people into sending money for products that never existed. "What sucks is we have to break it to people that they've been scammed. We don't even do online sales," Guzman told reporters.

Meta says it's fighting back with new technology and partnerships, including facial-recognition tools and collaborations with banks and other tech companies. A spokesperson described the situation as an "epidemic of scams" and insisted that Meta is taking aggressive action, removing more than 2 million accounts linked to scam centers in several countries this year alone.

However, insiders tell a different story. Current and former Meta employees say the company has been reluctant to make it harder for advertisers to buy ads, fearing it could hurt the company's bottom line. Staff reportedly tolerated between eight and 32 fraud "strikes" before banning accounts, and scam enforcement was deprioritized to avoid losing ad revenue.

Victims of these scams often lose hundreds or even thousands of dollars. In one case, fake ads promised free spice racks from McCormick & Co. for just a small shipping fee, only to steal credit card details and rack up fraudulent charges. Another common scam involves fake puppy sales, with victims sending deposits for pets that never arrive. Some scam operations are even linked to human trafficking, with criminal groups forcing kidnapped victims to run online fraud schemes under threat of violence.

Meta maintains that it is not legally responsible for fraudulent content on its platforms, citing Section 230, the federal law that protects tech companies from liability for user-generated content. In court filings, Meta has argued that it "does not owe a duty to users" when it comes to policing fraud. Meanwhile, a class-action lawsuit over allegedly inflated ad reach metrics is moving forward, putting even more pressure on Meta to address transparency and accountability.

Staying safe online takes a little extra effort, but it's well worth it. Here are some steps you can follow to avoid falling victim to scam ads.

1. Check the source and use strong antivirus software: Look for verified pages and official websites. Scammers often copy the names and logos of trusted brands, but the web address or page details may be off. Always double-check the URL for slight misspellings or extra characters and avoid clicking links in ads if you're unsure about their legitimacy. The best way to safeguard yourself from malicious links that install malware, potentially accessing your private information, is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe. Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android and iOS devices.

2. Be skeptical of deals that seem too good to be true: If an ad offers products at an unbelievable price or promises huge returns, pause and investigate before clicking. Scammers often use flashy discounts or urgent language to lure people in quickly. Take a moment to think before you act, and remember that if something sounds impossible, it probably is.

3. Research the seller: Search for reviews and complaints about the company or individual. If you can't find any credible information, it's best to avoid the offer. A quick online search can reveal if others have reported scams or had bad experiences, and legitimate businesses usually have a track record you can verify.

4. Consider using a personal data removal service: There are companies that can help remove your personal info from data brokers and people-search sites. This means less of your data floating around for scammers to find and use. While these services usually charge a fee, they can save you a lot of time and hassle compared to doing it all yourself. Over time, you might notice fewer spam calls, emails and even a lower risk of identity theft. Check out my top picks for data removal services here.

5. Never share sensitive information: Don't enter your credit card or bank details on unfamiliar sites. If you're asked for personal information, double-check the legitimacy of the request. Scammers may ask for sensitive data under the guise of "verifying your identity" or processing a payment, but reputable companies will never ask for this through insecure channels.

6. Keep your devices updated: Keeping your software updated adds an extra layer of protection against the latest threats. Updates often include important security patches that fix vulnerabilities hackers might try to exploit. By regularly updating your devices, you help close those security gaps and keep your personal information safer from scammers and malware.

7. Report suspicious ads: If you see a scam ad on Facebook or Instagram, report it using the platform's tools. This helps alert others and puts pressure on Meta to take action. Reporting is quick and anonymous, and it plays a crucial role in helping platforms identify patterns and remove harmful content.

8. Monitor your accounts: Regularly check your bank and credit card statements for unauthorized transactions, especially after making online purchases. Early detection can help you limit the damage if your information is compromised, and most banks have fraud protection services that can assist you if you spot something suspicious.

By following these steps, you can better protect yourself and your finances from online scams. Staying alert and informed is your best defense in today's digital world. The mess with scam ads on Meta's platforms shows why it's important to look out for yourself online. Meta says it's working on the problem, but many people think it's not moving fast enough. By staying careful, questioning suspicious offers and using good security tools, you can keep yourself safer.
Until the platforms step up their game, protecting yourself is the smartest move you can make. Should Meta be doing more to protect its users from scam ads, even if it means making changes that could affect its advertising revenue?