Facebook, Instagram Took More Proactive Action Than Ever on Bullying, Hate Speech, Claims Company

In its latest Community Standards Enforcement report, covering the last quarter of 2020, Facebook claims that a combination of changes has increased the share of proactive actions it takes across numerous policy areas.


Facebook has published its latest Community Standards Enforcement report, highlighting the total number of cases in which it took action against posts involving bullying and harassment, sexual content and nudity, hate speech and other metrics. The report lists a total of 12 policy areas for Facebook and 10 policy areas for Instagram, and details, for each, the total number of actions taken, the percentage of cases where Facebook acted before users flagged the content, the number of user appeals it entertained against its decisions, and the number of times it reversed a decision based on a user's appeal.

Facebook policy areas

The policy areas highlighted in the Community Standards Enforcement report for Facebook include:

  1. Nudity and sexual content
  2. Bullying and harassment
  3. Child nudity and exploitation
  4. Terrorism
  5. Organised hate
  6. Fake accounts
  7. Hate speech
  8. Drug use
  9. Firearms promotion
  10. Spam
  11. Suicide and self-harm
  12. Violence and graphic content

Instagram policy areas

As with Facebook, the key areas highlighted for Instagram in the latest community transparency report include:

  1. Nudity and sexual content
  2. Bullying and harassment
  3. Child nudity and exploitation
  4. Terrorism
  5. Organised hate
  6. Hate speech
  7. Drug use
  8. Firearms promotion
  9. Suicide and self-harm
  10. Violence and graphic content

Content viewed and flagged

According to the company, the report covers three measures: the prevalence of violating content among the posts people viewed, the volume of content Facebook acted on, and the share of that content acted on proactively, before users flagged it. The key view and content flagging metrics include:

  1. Nudity and sexual content: 4 views out of 10,000 contained nudity. A total of 28 million content pieces were acted on, of which 98.1 percent were done by Facebook before user reports. On Instagram, 11.5 million content pieces were acted on, of which 96.5 percent were done by Instagram before user reports.
  2. Bullying and harassment: A total of 6.3 million content pieces were acted on, of which 48.8 percent were done by Facebook before user reports. On Instagram, 5 million content pieces were acted on, of which 80 percent were done by Instagram before user reports.
  3. Child nudity and exploitation: A total of 5.4 million content pieces were acted on, of which 98.8 percent were done by Facebook before user reports. On Instagram, 8 lakh content pieces were acted on, of which 97.9 percent were done by Instagram before user reports.
  4. Terrorism: A total of 8.6 million content pieces were acted on, of which 99.8 percent were done by Facebook before user reports. On Instagram, 3.45 lakh content pieces were acted on, of which 98.2 percent were done by Instagram before user reports.
  5. Organised hate: A total of 6.4 million content pieces were acted on, of which 98.3 percent were done by Facebook before user reports. On Instagram, 3.08 lakh content pieces were acted on, of which 68.7 percent were done by Instagram before user reports.
  6. Fake accounts: A total of 1.3 billion fake accounts were found and removed, of which 99.6 percent were done by Facebook before user reports. No Instagram data shared on this metric.
  7. Hate speech: 8 views out of 10,000 contained hate speech. A total of 26.9 million content pieces were acted on, of which 97.1 percent were done by Facebook before user reports. On Instagram, 6.6 million content pieces were acted on, of which 95.1 percent were done by Instagram before user reports.
  8. Drug use: A total of 4.3 million content pieces were acted on, of which 97.3 percent were done by Facebook before user reports. On Instagram, 1.4 million content pieces were acted on, of which 96.1 percent were done by Instagram before user reports.
  9. Firearms promotion: A total of 1.3 million content pieces were acted on, of which 92.2 percent were done by Facebook before user reports. On Instagram, 70,200 content pieces were acted on, of which 90.3 percent were done by Instagram before user reports.
  10. Spam: A total of 1 billion spam content pieces were acted on, of which 99.8 percent were done by Facebook before user reports. No Instagram data shared on this metric.
  11. Suicide and self-harm: A total of 2.5 million content pieces were acted on with preventive measures, of which 92.8 percent were done by Facebook before user reports. On Instagram, 3.4 million content pieces were acted on, of which 94.6 percent were done by Instagram before user reports.
  12. Violence and graphic content: 5 views out of 10,000 contained violent and graphic content. A total of 16 million content pieces were acted on, of which 99.5 percent were done by Facebook before user reports. On Instagram, 5.6 million content pieces were acted on, of which 98.3 percent were done by Instagram before user reports.
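The two headline figures above follow simple arithmetic: prevalence is expressed as violating views per 10,000 content views, and the proactive rate is the share of actioned content found before any user report. A minimal sketch of that arithmetic (the helper names are hypothetical, not part of Facebook's reporting tooling; the numbers are taken from the report text):

```python
# Illustrative sketch of the report's two headline metrics.
# Helper names are hypothetical; figures come from the report text.

def prevalence_per_10k(violating_views: int, total_views: int) -> float:
    """Views containing violating content per 10,000 content views."""
    return violating_views / total_views * 10_000

def proactive_rate(proactive_actions: int, total_actions: int) -> float:
    """Percentage of actioned content found before any user report."""
    return proactive_actions / total_actions * 100

# Hate speech on Facebook: 26.9 million pieces actioned, 97.1 percent
# proactively, so roughly 26.9M x 0.971 ~ 26.1M pieces were caught
# before a user report.
total_actioned = 26_900_000
caught_proactively = round(total_actioned * 0.971)
print(proactive_rate(caught_proactively, total_actioned))  # ~ 97.1
```

Read this way, the "4 views out of 10,000" nudity figure corresponds to a prevalence of about 0.04 percent of all content views.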

The report aims to present this data in a form that is easy for regulators, users and analysts to read. The social media giant has regularly faced flak for not being transparent enough about its content moderation, and Facebook says it will add further metrics in the coming quarters, during which it also hopes to improve the algorithms enforcing its policies and the team of human moderators involved in the process.
