
F8 2018: Using Technology to Remove the Bad Stuff Before It’s Even Reported

By Guy Rosen, VP of Product Management

There are two ways to get bad content, like terrorist videos, hate speech, porn or violence, off Facebook: take it down when someone flags it, or proactively find it using technology. Both are important. But advances in technology, including in artificial intelligence, machine learning and computer vision, mean that we can now:

  • Remove bad content faster because we don’t always have to wait for it to be reported. In the case of suicide, this can mean the difference between life and death: as soon as our technology identifies that someone has expressed thoughts of suicide, we can reach out to offer help or work with first responders, which we’ve now done in over a thousand cases.
  • Get to more content, again because we don’t have to wait for someone else to find it. For example, as we announced two weeks ago, in the first quarter of 2018 we proactively removed almost two million pieces of ISIS and al-Qaeda content, 99% of which was taken down before anyone reported it to Facebook.
  • Increase the capacity of our review team to work on cases where human expertise is needed to understand the context or nuance of a particular situation. For instance, is someone talking about their own drug addiction, or encouraging others to take drugs?

It’s taken time to develop this software, and we’re constantly pushing to improve it. We do this by analyzing specific examples of bad content that have been reported and removed to identify patterns of behavior. These patterns can then be used to teach our software to proactively find other, similar problems.
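To make this loop concrete, here is a minimal, purely illustrative sketch in Python. The simple bag-of-words classifier, the toy posts, and the decision threshold are all assumptions for illustration, not Facebook’s actual systems: a model is trained on content that human reviewers have already handled, then used to score similar new content proactively.

    # Illustrative only: learn patterns from content that reviewers have
    # already removed, then use them to flag similar new content.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical training data from past reviews:
    # label 1 = removed as violating, label 0 = found acceptable.
    posts = [
        "buy followers now cheap promo",    # removed
        "click this link to win a prize",   # removed
        "photos from our family trip",      # acceptable
        "great recipe for lentil soup",     # acceptable
    ]
    labels = [1, 1, 0, 0]

    # A simple bag-of-words classifier stands in for the real models.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(posts, labels)

    # New, unreported content is scored proactively.
    for post in ["win a free prize, click now", "lentil soup ideas"]:
        score = model.predict_proba([post])[0, 1]
        verdict = "flag for review" if score > 0.5 else "leave up"
        print(f"{verdict}: {post!r} (score={score:.2f})")

In practice the models, features and thresholds are far more sophisticated, but the pattern is the same: human decisions become training data, and training data becomes proactive detection.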

  • Nudity and graphic violence: These are two very different types of content but we’re using improvements in computer vision to proactively remove both.
  • Hate speech: Understanding the context of speech often requires human eyes. Is something hateful, or is it being shared to condemn hate speech or raise awareness about it? We’ve started using technology to proactively detect content that might violate our policies, starting with certain languages such as English and Portuguese. Our teams then review the content so that what’s OK stays up; for example, someone describing hate they encountered in order to raise awareness of the problem.
  • Fake accounts: We block millions of fake accounts every day when they are created and before they can do any harm. This is incredibly important in fighting spam, fake news, misinformation and bad ads. Recently, we started using artificial intelligence to detect accounts linked to financial scams.
  • Spam: The vast majority of our work fighting spam is done automatically using recognizable patterns of problematic behavior. For example, if an account is posting over and over in quick succession, that’s a strong sign something is wrong (a minimal sketch of this kind of rate check follows this list).
  • Terrorist propaganda: The vast majority of this content is removed automatically, without the need for someone to report it first.
  • Suicide prevention: As explained above, we proactively identify posts which might show that people are at risk so that they can get help.
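
Here is a small sketch of the kind of rate-based spam signal mentioned above: an account posting over and over in quick succession. The sliding window, the threshold and the class are hypothetical values chosen for illustration, not Facebook’s actual parameters.

    # Hypothetical rate-based spam signal. Values are illustrative.
    from collections import deque

    WINDOW_SECONDS = 60   # look at the last minute of activity
    MAX_POSTS = 5         # more than this per window looks suspicious

    class RateSignal:
        def __init__(self):
            self.timestamps = deque()

        def record_post(self, now: float) -> bool:
            """Record a post at time `now`; return True if the account
            is posting faster than the allowed rate."""
            self.timestamps.append(now)
            # Drop posts that have fallen out of the sliding window.
            while now - self.timestamps[0] > WINDOW_SECONDS:
                self.timestamps.popleft()
            return len(self.timestamps) > MAX_POSTS

    # Usage: six posts two seconds apart trip the signal on the sixth.
    signal = RateSignal()
    for t in range(0, 12, 2):
        if signal.record_post(float(t)):
            print(f"suspicious posting rate at t={t}s")

A signal like this is cheap to compute at scale, which is why so much spam fighting can happen automatically, with no report needed.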

When I talk about technology like artificial intelligence, computer vision or machine learning, people often ask why we’re not making progress more quickly. And it’s a good question. Artificial intelligence, for example, is very promising, but we are still years away from it being effective for all kinds of bad content because context is so important. That’s why we still have people reviewing reports.

And more generally, the technology needs large amounts of training data to recognize meaningful patterns of behavior, and we often lack that data in less widely used languages or for cases that are not often reported. That’s why we can typically do more in English: it is the biggest data set we have on Facebook.

But we are investing in technology to increase our accuracy across new languages. For example, Facebook AI Research (FAIR) is working on an area called multilingual embeddings as a potential way to address the language challenge. It’s also why we may sometimes ask people for feedback on posts that contain certain types of content, to encourage people to flag it for review. And it’s why reports from people who use Facebook are so important — so please keep them coming. By working together we can help make Facebook safer for everyone.
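To illustrate the idea behind multilingual embeddings, here is a small, self-contained sketch. The tiny hand-made vectors and word pairs are hypothetical; real systems learn embeddings from data (FAIR has published aligned multilingual word vectors, for example through its MUSE project). The principle is the same: once words from different languages share one vector space, a signal trained only on English examples can transfer to other languages.

    # Illustrative sketch of multilingual embeddings: words from
    # different languages mapped into one shared vector space, so an
    # English-only training signal transfers to Portuguese. These tiny
    # vectors are made up; real systems learn them from data.
    import numpy as np

    embedding = {
        # English vocabulary used for "training"
        "hate": np.array([0.9, 0.1, 0.0]),
        "love": np.array([0.0, 0.9, 0.1]),
        # Portuguese words aligned into the same space
        "ódio": np.array([0.88, 0.12, 0.02]),   # "hate"
        "amor": np.array([0.02, 0.91, 0.08]),   # "love"
    }

    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        """Similarity between two vectors in the shared space."""
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # "Train" in English only: a prototype vector for violating content.
    violating = embedding["hate"]

    # Portuguese words land near their English counterparts, so the
    # English-trained signal applies without any Portuguese labels.
    print(cosine(embedding["ódio"], violating))  # high: flag for review
    print(cosine(embedding["amor"], violating))  # low: leave it up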


