
Hard Questions: What Are We Doing to Stay Ahead of Terrorists?

Hard Questions is a series from Facebook that addresses the impact of our products on society. 

By Monika Bickert, Global Head of Policy Management, and Brian Fishman, Head of Counterterrorism Policy

Online terrorist propaganda is a fairly new phenomenon; terrorism itself is not. In the real world, terrorist groups have proven highly resilient to counterterrorism efforts, so it shouldn’t surprise anyone that the same dynamic is true on social platforms like Facebook. The more we do to detect and remove terrorist content, the more shrewd these groups become.

The US Department of Justice recently identified an alleged ISIS supporter warning others to be careful when posting propaganda on Facebook; the supporter pointed to our Q1 2018 terrorism removal metrics as evidence that pushing propaganda on the platform was getting more difficult, and suggested that fellow propagandists gain access to compromised legitimate accounts as a workaround. Others have tried to avoid detection by changing their techniques: abandoning old accounts and creating new ones, developing new coded language, and breaking messages into multiple components.

Sometimes we anticipate these tactics; in other cases we react to them. But there is no question that this adversarial dynamic has strengthened our ability to fight online terrorism. We are careful not to reveal too much about our enforcement techniques, because terrorists adapt when they learn how we detect them. Still, we believe it’s important to give the public some sense of what we are doing, which includes immediately informing law enforcement in the rare instances when we identify the possibility of imminent harm.

New Machine Learning Tool

We’ve provided information on our enforcement techniques in the past and want to describe in broad terms some new tactics and methods that are proving effective.

We now use machine learning to assess Facebook posts that may signal support for ISIS or al-Qaeda. The tool produces a score indicating how likely it is that a post violates our counterterrorism policies, which in turn helps our team of reviewers prioritize the highest-scoring posts. In this way, the system ensures that our reviewers focus on the most important content first.
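As a rough illustration of how score-based prioritization works, here is a minimal Python sketch. The scoring heuristic, post data, and queue structure are invented for this example; they are stand-ins, not Facebook’s actual model or infrastructure.

```python
import heapq

def score_post(text: str) -> float:
    """Stand-in for the real classifier (which is not public): returns a
    0.0-1.0 score for how likely the post violates counterterrorism policy."""
    # Toy keyword heuristic, purely for illustration.
    flagged_terms = {"propaganda", "recruit"}
    hits = sum(term in text.lower() for term in flagged_terms)
    return min(1.0, 0.4 * hits)

class ReviewQueue:
    """Max-priority queue: the highest-scoring posts are reviewed first."""

    def __init__(self):
        self._heap = []

    def add(self, post_id, score):
        # heapq is a min-heap, so negate the score to pop the highest first.
        heapq.heappush(self._heap, (-score, post_id))

    def pop_highest(self):
        neg_score, post_id = heapq.heappop(self._heap)
        return post_id, -neg_score

queue = ReviewQueue()
posts = {"p1": "holiday photos", "p2": "propaganda video, recruit now"}
for post_id, text in posts.items():
    queue.add(post_id, score_post(text))

print(queue.pop_highest())  # ('p2', 0.8): the likelier violation surfaces first
```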

In some cases, we automatically remove posts when the tool indicates with very high confidence that a post contains support for terrorism. We still rely on specialized reviewers to evaluate most posts, and we remove posts automatically only when the tool’s confidence is high enough that its “decision” is likely to be more accurate than that of our human reviewers.
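A minimal sketch of that gating decision follows. The threshold value and function name are assumptions chosen for illustration; the real confidence bar is calibrated against reviewer accuracy and is not public.

```python
# Hypothetical confidence bar: automated removal only when the model's
# "decision" is expected to be more accurate than a human reviewer's.
AUTO_REMOVE_THRESHOLD = 0.99  # illustrative value, not the real one

def route(score: float) -> str:
    """Route a scored post: auto-remove only at very high confidence;
    everything else goes to a specialized human reviewer."""
    if score >= AUTO_REMOVE_THRESHOLD:
        return "auto_remove"   # still subject to the expanded appeals process
    return "human_review"      # the path most posts take

assert route(0.999) == "auto_remove"
assert route(0.80) == "human_review"
```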

At Facebook’s scale, neither human reviewers nor powerful technology can prevent all mistakes. That’s why we waited to launch these automated removals until we had expanded our appeals process to cover takedowns of terrorist content.

We are constantly working to balance aggressive policy enforcement with protections for users. And we see real gains as a result: for example, prioritization powered by our new machine learning tools has been critical to reducing the time that terrorist content reported by our users stays on the platform, from 43 hours in Q1 2018 to 18 hours in Q3 2018.

Improvements to Existing Tools and Partnership

We have also improved several of our existing proactive techniques and are now able to more effectively detect terrorist content. For example, our experiments to algorithmically identify violating text posts (what we refer to as “language understanding”) now work across 19 languages. Similarly, though we have long used image- and video-hashing — which converts a file into a unique string of digits that serves as a “fingerprint” of that file — we now also use audio- and text-hashing techniques for detecting terrorist content.
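To illustrate hashing in its simplest form, the sketch below fingerprints files with SHA-256 and checks uploads against a set of known hashes. This is a deliberate simplification: a cryptographic hash matches only byte-identical files, whereas media-matching systems commonly use perceptual hashes that tolerate small edits such as re-encoding, and the hash set here is invented.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Convert a file's bytes into a fixed-length digest (its "fingerprint").
    SHA-256 is used here for simplicity; it matches only byte-identical files."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical shared database of known terrorist-content hashes,
# e.g. as exchanged among consortium partners.
known_hashes = {fingerprint(b"<previously identified propaganda video>")}

def matches_known_content(upload: bytes) -> bool:
    return fingerprint(upload) in known_hashes

print(matches_known_content(b"<previously identified propaganda video>"))  # True
print(matches_known_content(b"<benign holiday video>"))                    # False
```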

We continue to share these digital fingerprints or “hashes” (image, video, audio and text) with a consortium of tech partners organized by the Global Internet Forum to Counter Terrorism (GIFCT), including Microsoft, Twitter, and YouTube, and are making some of our new hashing techniques available to companies participating in the consortium.

Progress in Enforcement

The improvements we’ve made to our technical tools have allowed for sustained progress in finding and removing terrorist content from Facebook. In Q2 2018, we took action on 9.4 million pieces of content related to ISIS, al-Qaeda, and their affiliates, the majority of it old material surfaced using specialized techniques. In Q3 2018, overall takedowns declined to 3 million (800,000 of them old pieces of content), largely because our second-quarter push to surface and remove old content had proven effective. In both Q2 and Q3, we ourselves found more than 99% of the ISIS and al-Qaeda content we ultimately removed, before anyone in our community reported it. These figures represent significant increases from Q1 2018, when we took action on 1.9 million pieces of content, 640,000 of which were identified using specialized tools that find older content.

We often get asked how long terrorist content stays on Facebook before we take action on it. But our analysis indicates that time-to-action is a less meaningful measure of harm than metrics that focus explicitly on the exposure content actually receives. A piece of content might get many views within minutes of being posted, or it could remain largely unseen for days, weeks, or even months before anyone views or shares it. If we prioritized our efforts narrowly around minimizing time-to-action, we would be less effective at getting to the content that causes the most harm.

Terrorists constantly look for ways to circumvent our detection, and we need to counter them with improvements in technology, training, and process. Those improvements get better over time, but during their initial rollout they may not act as quickly as they will at maturity, which can push time-to-action up even though the improvements are critical to a robust counterterrorism effort. Focusing narrowly on the wrong metric could disincentivize or prevent our most effective work for the community. We are developing more meaningful metrics focused on exposure rather than time-to-action, and we plan to share more about them in the future.
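To make the distinction concrete, the toy calculation below contrasts median time-to-action with the views content accrued before removal; every number in it is invented for illustration.

```python
from statistics import median

# Toy removal records: (hours on platform before action, views accrued).
removals = [
    (0.02, 900),   # taken down in about a minute, but already widely seen
    (14.0, 3),     # up for 14 hours, yet almost nobody saw it
    (2.0, 0),
    (40.0, 1),
]

time_to_action = median(hours for hours, _ in removals)
total_exposure = sum(views for _, views in removals)

print(f"median time-to-action: {time_to_action:.1f} hours")  # 8.0 hours
print(f"total views before removal: {total_exposure}")       # 904

# The 14- and 40-hour items dominate the time metric, but nearly all of
# the actual exposure (views) came from the post removed within a minute.
```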

Our Q2 2018 and Q3 2018 figures illustrate the points above. In Q2 2018, the median time on platform for newly uploaded content surfaced with our standard tools was about 14 hours, a significant increase from Q1 2018, when the median time was less than one minute. The increase was driven by multiple factors, including fixing a bug that had prevented us from removing some content that violated our policies and rolling out new detection and enforcement systems. Collectively, these improvements helped us take action on nearly twice as much newly uploaded content in Q2 2018 (2.2 million pieces) as in Q1 2018 (1.2 million).

In short, our overall enforcement effort was significantly better in Q2 2018 than it was previously, even though our median time-to-action was 14 hours. By Q3 2018, the median time on platform had decreased to less than two minutes, illustrating that the new detection systems had matured.

Our work to combat terrorism is not done. Terrorists come in many ideological stripes, and the most dangerous among them are deeply resilient. At Facebook, we recognize our responsibility to counter this threat and remain committed to it. But we should not view this as a problem that can be “solved” and set aside, even in the most optimistic scenarios. We can reduce the presence of terrorism on mainstream social platforms, but eliminating it completely requires addressing the people and organizations that generate this material in the real world.


