Latest news with #OversightBoard

Engadget
a day ago
- Politics
Meta tells the Oversight Board it isn't removing the word 'transgenderism' from its hate speech rules
If anyone was holding out hope that the Oversight Board would provide some kind of check on Meta's rewritten hate speech policy, Meta has just made it clear exactly where it stands. The company published its formal response to the board's criticism and declined to commit to any substantive steps to change its rules.

The Oversight Board previously criticized Meta's January policy changes as "hastily announced" and wrote that it was "concerned" about the company's decision to use the term "transgenderism" in its rewritten community standards. The policy, announced by Mark Zuckerberg in January shortly before President Donald Trump took office, now permits people to claim that LGBTQ people are mentally ill. "We do allow allegations of mental illness or abnormality when based on gender or sexual orientation, given political and religious discourse about transgenderism and homosexuality and common non-serious usage of words such as 'weird,'" the policy now states.

In a decision related to two videos depicting public harassment of transgender women, the Oversight Board sided with Meta on its decision to leave the videos up, but it recommended that Meta remove the word "transgenderism" from its policy. "For its rules to have legitimacy, Meta must seek to frame its content policies neutrally," the board said. Human rights groups have said the word has a long association with discrimination and dehumanization. The Human Rights Campaign has noted that the term is "socially and scientifically invalid" and "often wielded by anti-trans activists to delegitimize transgender people." GLAAD has likewise noted that "framing a person's transgender identity as a 'concept' or 'ideology' reduces a core identity to an opinion that can be debated, and therefore justifies dehumanization, discrimination, and real-world violence against transgender, nonbinary, and gender nonconforming people."

In its formal response, Meta said it was still "assessing feasibility" of removing the word from its policies. The company said it would "consider ways to update the terminology" but added that "achieving clarity and transparency in our public explanations may sometimes require including language considered offensive to some."

Meta also declined to commit to the board's three other recommendations in the case. The board had recommended that Meta "identify how the policy and enforcement updates may adversely impact the rights of LGBTQIA+ people, including minors, especially where these populations are at heightened risk," take steps to mitigate those risks and issue regular reports to the board and the public about its work. It had also recommended that Meta allow users to designate other individuals to report bullying and harassment on their behalf, and that the company make improvements to reduce errors when people report bullying and harassment. Meta said it was "assessing feasibility" of these suggestions as well.

Meta's response raises uncomfortable questions about just how much influence the ostensibly independent Oversight Board can have. Zuckerberg has said that Meta created the Oversight Board so that it wouldn't have to make consequential policy decisions on its own, and the company has previously asked the board for help with major decisions, like Donald Trump's suspension and its rules for celebrities and politicians. Meta has always been free to ignore the board's recommendations, but it has allowed the board to influence some of its more controversial policies. That now seems to be changing: Zuckerberg's decision to roll back hate speech protections and ditch third-party fact checking took the board by surprise, and the company now seems to have little interest in engaging with the board's criticism of those changes.


Hans India
06-06-2025
- Business
Meta Faces Heat Over Celebrity Deepfake Scams on Facebook, Oversight Board Warns
Meta is under renewed scrutiny after its Oversight Board flagged a troubling rise in AI deepfake scams, particularly those misusing celebrity identities for deceptive ads. In a recent decision, the board overturned Meta's choice to keep up a Facebook post featuring an AI-generated deepfake of Brazilian football legend Ronaldo Nazário promoting a gambling app. Despite more than 50 user reports, the ad remained online and racked up over 600,000 views before being taken down.

The Oversight Board stated this case highlights broader issues with Meta's enforcement of its own policies against impersonation and scams. It criticized the tech giant for enabling large-scale scam content, noting that content reviewers often lack the authority and training to act on AI-generated deepfakes unless there's a direct escalation. According to the board, reviewers face inconsistent enforcement guidelines, which vary by region, making scam detection uneven and unreliable.

The ad in question promoted a game called Plinko and was among thousands found in Meta's Ad Library. Many of these reportedly featured deepfaked videos of other celebrities, including Cristiano Ronaldo and even Meta's CEO, Mark Zuckerberg. The board issued a single but significant recommendation: Meta must strengthen its internal policies, empower its moderators, and train them to recognize hallmarks of AI-manipulated media.

In response, Meta pushed back, saying the board's assessment was 'simply inaccurate.' The company pointed to an ongoing pilot program using facial recognition to detect such scams and emphasized its broader safety tools and enforcement strategies.

Still, Meta's efforts appear insufficient. Earlier this year, several deepfake scam ads featuring Elon Musk and other public figures made the rounds, with some running for weeks despite clear signs of manipulation. Actress Jamie Lee Curtis recently criticized Meta publicly for failing to remove a fake ad featuring her likeness until she intervened directly.

The Oversight Board isn't alone in raising alarms. A Wall Street Journal report revealed that nearly half of all scam reports on Zelle for JPMorgan Chase originated from Meta platforms. Regulators in the UK and Australia have also highlighted similar trends. As AI tools become more accessible, the misuse of deepfakes for fraud is accelerating. Critics argue that without stricter ad oversight and enforcement, Meta risks becoming a breeding ground for online scams.


The Verge
05-06-2025
- Business
The Oversight Board says Meta has an AI deepfake problem.
On Thursday, the Oversight Board overturned Meta's decision to leave up a Facebook post showing an AI deepfake of Brazilian soccer star Ronaldo Nazário in an ad for a gambling app. The ad was viewed more than 600,000 times and received more than 50 reports. The Oversight Board points to a larger problem at Meta, saying it is 'likely allowing significant amounts of scam content on its platforms' and that reviewers aren't 'empowered' to enforce the platform's policy against deepfake scams.

Engadget
05-06-2025
- Business
The Oversight Board says Meta isn't doing enough to fight celeb deepfake scams
Scams using AI deepfakes of celebrities have become an increasingly prominent issue for Meta over the last couple of years. Now, the Oversight Board has weighed in and has seemingly confirmed what other critics have said: Meta isn't doing enough to enforce its own rules, and it makes it far too easy for scammers to get away with these schemes.

"Meta is likely allowing significant amounts of scam content on its platforms to avoid potentially overenforcing a small subset of genuine celebrity endorsements," the board wrote in its latest decision. "At-scale reviewers are not empowered to enforce this prohibition on content that establishes a fake persona or pretends to be a famous person in order to scam or defraud."

That conclusion came as the result of a case involving an ad for an online casino-style game called Plinko that used an AI-manipulated video of Ronaldo Nazário, a retired Brazilian soccer player. The ad, which according to the board showed obvious signs of being fake, was not removed by Meta even after it was reported as a scam more than 50 times. Meta later removed the ad, but it did not remove the underlying Facebook post behind it until the Oversight Board agreed to review the case. It was viewed more than 600,000 times.

The board says the case highlights fundamental flaws in how Meta approaches content moderation for reported scams involving celebrities and public figures. According to the board, Meta told its members that "it enforces the policy only on escalation to ensure the person depicted in the content did not actually endorse the product" and that individual reviewers' "interpretation of what constitutes a 'fake persona' could vary across regions and introduce inconsistencies in enforcement." The result, according to the Oversight Board, is that a "significant" amount of scam content is likely slipping through the cracks.

In its sole recommendation to Meta, the board urged the company to update its internal guidelines, empower content reviewers to identify such scams and train them on "indicators" of AI-manipulated content.

In a statement, a spokesperson for Meta said that "many of the Board's claims are simply inaccurate" and pointed to a test it began last year that uses facial recognition technology to fight "celeb-bait" scams. "Scams have grown in scale and complexity in recent years, driven by ruthless cross-border criminal networks," the spokesperson said. "As this activity has become more persistent and sophisticated, so have our efforts to combat it. We're testing the use of facial recognition technology, enforcing aggressively against scams, and empowering people to protect themselves through many different on-platform safety tools and warnings. While we appreciate the Oversight Board's views in this case, many of the Board's claims are simply inaccurate and we will respond to the full recommendation in 60 days in accordance with the bylaws."

Scams using AI deepfakes of celebrities have become a major problem for Meta as AI tech gets cheaper and more easily accessible. Earlier this year, I reported that dozens of pages were running ads featuring deepfakes of Elon Musk and Fox News personalities promoting supplements that claimed to cure diabetes. Some of these pages repeatedly ran hundreds of versions of these ads with seemingly few repercussions. Meta disabled some of the pages after my reporting, but similar scam ads persist on Facebook to this day.
Actress Jamie Lee Curtis also recently publicly slammed Mark Zuckerberg for not removing a deepfaked Facebook ad that featured her (Meta removed the ad after her public posts). The Oversight Board similarly highlighted the scale of the problem in this case, noting that it found thousands of video ads promoting the Plinko app in Meta's Ad Library. It said that several of these featured AI deepfakes, including ads featuring another Brazilian soccer star, Cristiano Ronaldo, and Meta's own CEO, Mark Zuckerberg.

The Oversight Board isn't the only group that has raised the alarm about scams on Meta's platforms. The Wall Street Journal recently reported that Meta "accounted for nearly half of all reported scams on Zelle for JPMorgan Chase between the summers of 2023 and 2024" and that "British and Australian regulators have found similar levels of fraud originating on Meta's platforms." The paper noted that Meta is "reluctant" to add friction to its ad-buying process and that the company "balks" at banning advertisers, even those with a history of conducting scams.


Hans India
07-05-2025
- Politics
Oversight Board seeks public opinion on whether to restore or remove child abuse videos on Meta
New Delhi: The independent Oversight Board on Wednesday sought the opinion of the general public on whether to restore or remove child abuse videos on Meta's platforms. The Board, an independent body of 22 global human rights and freedom of expression experts from across the political spectrum and the world, is reviewing two videos that show teachers hitting children in school settings.

'The review will explore the key tension between sharing content depicting non-sexual child abuse to shed light on wrongdoing and demand accountability, and the need to protect children's safety, dignity, and privacy,' the Board said in a statement.

Both videos were initially removed by Meta for violating the Child Sexual Exploitation, Abuse and Nudity policy; one was later allowed back on the platform 'with a newsworthy allowance and warning screen'. The policy states the company removes content depicting 'real or non-real non-sexual child abuse regardless of sharing intent...'

'Allowing non-sexual child abuse content in an awareness-raising or condemnation context risks re-traumatising the victim, while prohibiting such content may be viewed as infringing on the public's ability to be informed,' Meta said in its referral to the Board.

In view of this, the Oversight Board opened a public comment period and is seeking comments from stakeholders on the complex issues surrounding online depictions of child abuse. The comments are sought on 'the impact on victims, responsibilities of the platform, human rights considerations for content moderation, effects on accountability, and standards for protective reporting'.

The public comment window will remain open until 23:59 Pacific Time (12:29 pm IST) on Wednesday, May 21, the statement said. Comments can be short or up to five pages long, and can include links to external sources and research. They will 'form a vital part of the Board's decision-making process on whether content should be removed or restored and can help shape our recommendations on how Meta should improve its policies and processes', the Board said.