The Ongoing Battle to Fight Hate

By Justin Osofsky, VP, Global Operations

This week, ProPublica flagged 49 Facebook posts, messages and comments that they believe violated our hate speech policies. After a close review, we found that 28 of these pieces of content do violate our Community Standards and we removed any posts that were still on Facebook. The remaining 19* pieces of content comply with our policies. We stand by our initial decision to keep them up.

We want to thank ProPublica for bringing these posts to our attention. We’re sorry for the mistakes we have made—they do not reflect the community we want to help build. We must do better.

We don’t allow hate speech, which we define as anything that directly attacks people based on race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, or serious disability or disease. However, our policies allow content that may be controversial and at times even distasteful, as long as it does not cross the line into hate speech. This may include criticism of public figures, religions, professions, and political ideologies.

In enforcing our policies on hate speech, we can make mistakes in both directions. Sometimes we leave up content that we should have taken down; other times we take down content that should have stayed up.

Our challenge is identifying hate speech across different cultures, languages, and circumstances for a community of more than 2 billion people.

Nudity and violence are fairly easy to spot, but hate speech is often determined by its context. Certain words can be used self-referentially as a means of empowerment or to condemn intolerance. For example, if a woman posts a photo of herself with the caption “#dyke,” we would allow the post. If posted by a different person or in a different context, the same word can be a hurtful slur and amount to a direct attack that violates our policies.

Because of these nuances, we cannot rely on machine learning or AI to the same degree that we do with other types of content like nudity. Technology can help flag the most blatantly reprehensible language. But it cannot yet understand the context necessary to assess what is or is not hate speech.

Instead, we encourage people to report posts and rely on our team of content reviewers around the world to review reported content. We are thankful to everyone, including ProPublica, who brings these issues to us. Our CEO Mark Zuckerberg recently announced that we will be doubling our safety and security team to 20,000 people in 2018 to better enforce our Community Standards. We’re also working on tools to help us improve the accuracy of our enforcement and building new AI to better detect bad content.

While even one mistake is too many, we do delete around 66,000 posts reported as hate speech each week — that’s around 288,000 posts deleted a month.

As we continue to invest in people and technology, we acknowledge that we still get it wrong sometimes. Some of the posts flagged by ProPublica are a case in point. We are working nonstop to cut down on these mistakes and welcome continued feedback as we combat hate speech on Facebook.

*We weren’t given enough information to locate the remaining 2 pieces of content.
