Facebook published the data as part of its sixth Community Standards Enforcement Report, which it first introduced in 2018 along with stricter rules, in response to a backlash over its lax approach to policing content on its platforms.

Facebook said that it would invite proposals from experts to audit the metrics used in the report, beginning in 2021. The company committed to the audit during a July advertiser boycott over its hate speech practices.

Facebook removed nearly 22.5 million posts containing hate speech in the second quarter, a sharp increase from 9.6 million in the first quarter. It attributed the jump to improvements in its detection technology. The platform also removed 8.7 million posts related to "terrorist" organizations, compared with 6.3 million in the previous period. The company does not disclose changes in the prevalence of hateful content on its platforms, which civil rights groups say makes its removal figures less meaningful.

The company said it relied more heavily on automation to review content because fewer reviewers were available at its offices due to the coronavirus pandemic. That resulted in limited action against content related to self-harm and child sexual exploitation, Facebook Vice President Guy Rosen said.