
Latest news with #contentModeration

Musk's X sues New York to block social media hate speech law

Free Malaysia Today

3 days ago



Elon Musk has described himself as a free speech absolutist. (Reuters pic)

NEW YORK : Elon Musk's X Corp sued New York today, challenging the constitutionality of a state law requiring social media companies to disclose how they monitor hate speech, extremism, disinformation, harassment and foreign political interference.

X said the law, known as the Stop Hiding Hate Act, violated the First Amendment and the state constitution by subjecting it to lawsuits and heavy fines unless it disclosed 'highly sensitive and controversial speech' that New York may find objectionable.

Deciding what content is acceptable on social media platforms 'engenders considerable debate among reasonable people about where to draw the correct proverbial line', X said. 'This is not a role that the government may play.'

The complaint filed in Manhattan federal court also quoted a letter from two legislators who sponsored the law, which said X and Musk in particular had a 'disturbing record' on content moderation 'that threatens the foundations of our democracy'.

New York attorney-general Letitia James, a Democrat who enforces the state's laws, is the named defendant in X's lawsuit. Her office did not immediately respond to requests for comment.

Musk, the world's richest person and recently a close adviser to Republican president Donald Trump, has described himself as a free speech absolutist. He did away with the content moderation policy of Twitter, as X was previously known, after he bought the company for US$44 billion in October 2022.

New York's law requires social media companies to disclose the steps they take to eliminate hate on their platforms, and to report their progress. Civil fines could reach US$15,000 per violation per day.

The law was written by state senator Brad Hoylman-Sigal and assemblymember Grace Lee, both Democrats, with help from the Anti-Defamation League. It was signed in December by governor Kathy Hochul, also a Democrat.

X said New York based its law on a nearly identical 2023 California law whose enforcement was partially blocked by a federal appeals court last September because of free speech concerns. California agreed in a February settlement with X not to enforce the law's disclosure requirements.

In a joint statement, Hoylman-Sigal and Lee said they were confident a judge would uphold New York's content moderation law. 'The fact that Elon Musk would go to these lengths to avoid disclosing straightforward information to New Yorkers shows why the law is necessary,' the legislators said.

Rise in 'Harmful Content' Since Meta Policy Rollbacks, Survey Shows

Asharq Al-Awsat

3 days ago



Harmful content including hate speech has surged across Meta's platforms since the company ended third-party fact-checking in the United States and eased moderation policies, a survey showed Monday.

The survey of around 7,000 active users on Instagram, Facebook and Threads comes after the Palo Alto company ditched US fact-checkers in January and turned over the task of debunking falsehoods to ordinary users under a model known as "Community Notes," popularized by X.

The decision was widely seen as an attempt to appease President Donald Trump's new administration, whose conservative support base has long complained that fact-checking on tech platforms was a way to curtail free speech and censor right-wing content.

Meta also rolled back restrictions around topics such as gender and sexual identity. The tech giant's updated community guidelines said its platforms would permit users to accuse people of "mental illness" or "abnormality" based on their gender or sexual orientation.

"These policy shifts signified a dramatic reversal of content moderation standards the company had built over nearly a decade," said the survey published by digital and human rights groups including UltraViolet, GLAAD, and All Out. "Among our survey population of approximately 7,000 active users, we found stark evidence of increased harmful content, decreased freedom of expression, and increased self-censorship."

One in six respondents in the survey reported being the victim of some form of gender-based or sexual violence on Meta platforms, while 66 percent said they had witnessed harmful content such as hateful or violent material.

Ninety-two percent of surveyed users said they were concerned about increasing harmful content and felt "less protected from being exposed to or targeted by" such material on Meta's platforms. Seventy-seven percent of respondents described feeling "less safe" expressing themselves freely.

The company declined to comment on the survey.

In its most recent quarterly report, published in May, Meta insisted that the changes in January had left a minimal impact. "Following the changes announced in January we've cut enforcement mistakes in the US in half, while during that same time period the low prevalence of violating content on the platform remained largely unchanged for most problem areas," the report said.

But the groups behind the survey insisted that the report did not reflect users' experiences of targeted hate and harassment.

"Social media is not just a place we 'go' anymore. It's a place we live, work, and play. That's why it's more crucial than ever to ensure that all people can safely access these spaces and freely express themselves without fear of retribution," Jenna Sherman, campaign director at UltraViolet, told AFP.

"But after helping to set a standard for content moderation online for nearly a decade, (chief executive) Mark Zuckerberg decided to move his company backwards, abandoning vulnerable users in the process.

"Facebook and Instagram already had an equity problem. Now, it's out of control," Sherman added.

The groups implored Meta to hire an independent third party to "formally analyze changes in harmful content facilitated by the policy changes" made in January, and for the tech giant to swiftly reinstate the content moderation standards that were in place earlier.

The International Fact-Checking Network has previously warned of devastating consequences if Meta broadens its policy shift related to fact-checkers beyond US borders to the company's programs covering more than 100 countries. AFP currently works in 26 languages with Meta's fact-checking program, including in Asia, Latin America, and the European Union.

Elon Musk's X sues New York to block social media hate speech law

Al Jazeera

3 days ago



Elon Musk's X Corp has sued New York State Attorney General Letitia James, challenging a law in the US state that requires social media companies to disclose how they monitor hate speech, extremism, and other content.

The complaint, filed on Tuesday in a Manhattan federal court, argues that the law forces companies to disclose 'highly sensitive and controversial speech' that is protected under the United States Constitution's First Amendment, but disfavoured by the state.

Passed in December 2024, the law requires social media companies to clearly explain their terms of service to users and submit reports on those terms to the attorney general. 'We are taking bold action to hold companies accountable, strengthen protections, and give consumers the transparency and security they need and deserve,' New York Governor Kathy Hochul said at the time.

X Corp is seeking a court order to block enforcement of the law, known as the Stop Hiding Hate Act.

Deciding what content is acceptable on social media platforms 'engenders considerable debate among reasonable people about where to draw the correct proverbial line', X said, adding 'this is not a role that the government may play'.

The complaint also quoted a letter from two legislators who sponsored the law, which said X and Musk in particular had a 'disturbing record' on content moderation 'that threatens the foundations of our democracy'.

New York's law requires social media companies to disclose the steps they take to eliminate hate on their platforms, and to report their progress. Civil fines could reach $15,000 per violation per day. The law was written by state Senator Brad Hoylman-Sigal and Assembly Member Grace Lee, both Democrats, with help from the Anti-Defamation League.

X said that New York based its law on a nearly identical 2023 California law whose enforcement was partially blocked by a federal appeals court last September because of free speech concerns. California agreed in a February settlement with X not to enforce the law's disclosure requirements.

This marks the latest in a series of lawsuits by the company targeting US states over free speech concerns. In April, X sued the state of Minnesota over a law banning deepfakes intended to harm political candidates or influence elections.

Musk has long described himself as a free speech absolutist, yet he has also been criticised for censoring political voices he disagrees with. As Al Jazeera reported ahead of the 2024 presidential election, Musk, then a vocal supporter of Donald Trump, regularly suppressed Democratic voices and suspended several accounts on X that were critical of Trump or of Musk's views.

The platform has also faced ongoing accusations of fostering hate speech under Musk's leadership. In 2023, the Center for Countering Digital Hate found that X failed to act on 99 percent of hate-filled content posted by users subscribed to Twitter Blue, the company's premium service. Reports by watchdog groups, including Media Matters, eventually led several major brands to pause advertising on the platform, which prompted X to file lawsuits in response.

Survey: Surge in hate speech on Facebook, Instagram after Meta scraps US fact-checking and eases moderation

Malay Mail

4 days ago



WASHINGTON, June 17 — Harmful content including hate speech has surged across Meta's platforms since the company ended third-party fact-checking in the United States and eased moderation policies, a survey showed yesterday.

The survey of around 7,000 active users on Instagram, Facebook and Threads comes after the Palo Alto company ditched US fact-checkers in January and turned over the task of debunking falsehoods to ordinary users under a model known as 'Community Notes,' popularised by X.

The decision was widely seen as an attempt to appease President Donald Trump's new administration, whose conservative support base has long complained that fact-checking on tech platforms was a way to curtail free speech and censor right-wing content.

Meta also rolled back restrictions around topics such as gender and sexual identity. The tech giant's updated community guidelines said its platforms would permit users to accuse people of 'mental illness' or 'abnormality' based on their gender or sexual orientation.

'These policy shifts signified a dramatic reversal of content moderation standards the company had built over nearly a decade,' said the survey published by digital and human rights groups including UltraViolet, GLAAD, and All Out. 'Among our survey population of approximately 7,000 active users, we found stark evidence of increased harmful content, decreased freedom of expression, and increased self-censorship.'

One in six respondents in the survey reported being the victim of some form of gender-based or sexual violence on Meta platforms, while 66 percent said they had witnessed harmful content such as hateful or violent material.

Ninety-two percent of surveyed users said they were concerned about increasing harmful content and felt 'less protected from being exposed to or targeted by' such material on Meta's platforms. Seventy-seven percent of respondents described feeling 'less safe' expressing themselves freely.

The company declined to comment on the survey.

In its most recent quarterly report, published in May, Meta insisted that the changes in January had left a minimal impact. 'Following the changes announced in January we've cut enforcement mistakes in the US in half, while during that same time period the low prevalence of violating content on the platform remained largely unchanged for most problem areas,' the report said.

But the groups behind the survey insisted that the report did not reflect users' experiences of targeted hate and harassment.

'Social media is not just a place we 'go' anymore. It's a place we live, work, and play. That's why it's more crucial than ever to ensure that all people can safely access these spaces and freely express themselves without fear of retribution,' Jenna Sherman, campaign director at UltraViolet, told AFP.

'But after helping to set a standard for content moderation online for nearly a decade, (chief executive) Mark Zuckerberg decided to move his company backwards, abandoning vulnerable users in the process.

'Facebook and Instagram already had an equity problem. Now, it's out of control,' Sherman added.

The groups implored Meta to hire an independent third party to 'formally analyse changes in harmful content facilitated by the policy changes' made in January, and for the tech giant to swiftly reinstate the content moderation standards that were in place earlier.

The International Fact-Checking Network has previously warned of devastating consequences if Meta broadens its policy shift related to fact-checkers beyond US borders to the company's programs covering more than 100 countries. AFP currently works in 26 languages with Meta's fact-checking program, including in Asia, Latin America, and the European Union. — AFP

Rise in 'harmful content' since Meta policy rollbacks

Free Malaysia Today

4 days ago



Meta's move was seen as appeasing Trump's administration, which claims fact-checking censors free speech and targets conservative content. (Reuters pic)

PALO ALTO : Harmful content including hate speech has surged across Meta's platforms since the company ended third-party fact-checking in the US and eased moderation policies, a survey showed Monday.

The survey of around 7,000 active users on Instagram, Facebook and Threads comes after the Palo Alto company ditched US fact-checkers in January and turned over the task of debunking falsehoods to ordinary users under a model known as 'Community Notes,' popularised by X.

The decision was widely seen as an attempt to appease President Donald Trump's new administration, whose conservative support base has long complained that fact-checking on tech platforms was a way to curtail free speech and censor right-wing content.

Meta also rolled back restrictions around topics such as gender and sexual identity. The tech giant's updated community guidelines said its platforms would permit users to accuse people of 'mental illness' or 'abnormality' based on their gender or sexual orientation.

'These policy shifts signified a dramatic reversal of content moderation standards the company had built over nearly a decade,' said the survey published by digital and human rights groups including UltraViolet, GLAAD, and All Out. 'Among our survey population of approximately 7,000 active users, we found stark evidence of increased harmful content, decreased freedom of expression, and increased self-censorship.'

One in six respondents in the survey reported being the victim of some form of gender-based or sexual violence on Meta platforms, while 66% said they had witnessed harmful content such as hateful or violent material.

92% of surveyed users said they were concerned about increasing harmful content and felt 'less protected from being exposed to or targeted by' such material on Meta's platforms. 77% of respondents described feeling 'less safe' expressing themselves freely.

The company declined to comment on the survey.

In its most recent quarterly report, published in May, Meta insisted that the changes in January had left a minimal impact. 'Following the changes announced in January we've cut enforcement mistakes in the US in half, while during that same time period the low prevalence of violating content on the platform remained largely unchanged for most problem areas,' the report said.

But the groups behind the survey insisted that the report did not reflect users' experiences of targeted hate and harassment.

'Social media is not just a place we 'go' anymore. It's a place we live, work, and play. That's why it's more crucial than ever to ensure that all people can safely access these spaces and freely express themselves without fear of retribution,' Jenna Sherman, campaign director at UltraViolet, told AFP.

'But after helping to set a standard for content moderation online for nearly a decade, (chief executive) Mark Zuckerberg decided to move his company backwards, abandoning vulnerable users in the process.

'Facebook and Instagram already had an equity problem. Now, it's out of control,' Sherman added.

The groups implored Meta to hire an independent third party to 'formally analyse changes in harmful content facilitated by the policy changes' made in January, and for the tech giant to swiftly reinstate the content moderation standards that were in place earlier.

The International Fact-Checking Network has previously warned of devastating consequences if Meta broadens its policy shift related to fact-checkers beyond US borders to the company's programmes covering more than 100 countries. AFP currently works in 26 languages with Meta's fact-checking programme, including in Asia, Latin America, and the European Union.
