In recent decades, as it’s become clear that the internet can be used to connect people both for good and for ill, Facebook and other social media companies have made it a priority to minimize the ways criminals can exploit new technology. One of the greatest challenges in this arena has been terror groups, which have embraced the internet as a way to spread propaganda and recruit others to their extremist ideologies. These groups don’t confine their efforts to one social media service or online distribution mechanism. Instead, they try a variety of avenues to get their message out.
But even as governments, companies and nonprofits have battled terrorist propaganda online, we’ve faced a complex question: what is the best way to tackle a global challenge that can proliferate in different forms across different parts of the web?
Often analysts and observers will ask us at Facebook why, with our vast databases and advanced technology, we can’t just block nefarious activity using technology alone. The truth is that we need not only technology but also people to do this work. And in order to be truly effective in stopping the spread of terrorist content across the entire internet, we need to join forces with others.
Over two years ago, we started meeting with more than a dozen other technology companies to discuss the best ways to counter terrorists’ attempts to use our services. We all face similar challenges, including how to identify the relatively small amount of terrorist content on our relatively large sites, and how to review that content quickly and accurately across many languages.
In a Hard Questions post last June, we described how we have faced those challenges at Facebook. We invest in efforts to prevent terrorist content from ever hitting our site. But when it does appear, we work to find and remove it quickly. We’ve historically relied on people — our content reviewers — to assess potentially violating content and remove it. But as we described last June, we’ve begun to use artificial intelligence to supplement these efforts. In figuring out what’s effective, we face the challenges that any company faces in developing technology that can work across different types of media. For instance, a solution that works for photos will not necessarily help with videos or text. A solution that works for recognizing terrorist iconography in images will not necessarily distinguish between a terrorist sharing that image to recruit and a news organization sharing the same image to educate the public.
Today we have an update on how this work is going. It is still early, but the results are promising, and we’re hopeful AI will become a more important tool in the arsenal of protection and safety on the internet and on Facebook.
At the same time, because AI alone is not the answer, we continue to expand our partnerships with other technology companies, governments and nonprofits that share our goal to root out terrorism on the internet.
Detecting and Removing Terrorism Through Artificial Intelligence
The use of AI and other automation to stop the spread of terrorist content is showing promise.
Today, 99% of the ISIS and Al Qaeda-related terror content we remove from Facebook is content we detect before anyone in our community has flagged it to us, and in some cases, before it goes live on the site. We do this primarily through the use of automated systems like photo and video matching and text-based machine learning. Once we are aware of a piece of terror content, we remove 83% of subsequently uploaded copies within one hour of upload.
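The post doesn’t detail Facebook’s actual matching algorithms, but the idea behind photo and video matching can be sketched with a toy “average hash” — a simple perceptual fingerprint in which slightly altered copies of an image still produce nearly identical hashes. All function names, data and thresholds below are illustrative assumptions, not Facebook’s real system:

```python
def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set when that pixel is
    brighter than the image's mean brightness. `pixels` is a flat list
    of grayscale values (0-255) from an already-downscaled image."""
    avg = sum(pixels) / len(pixels)
    return sum(1 << i for i, p in enumerate(pixels) if p > avg)

def hamming_distance(h1, h2):
    """Number of bits in which two hashes differ."""
    return bin(h1 ^ h2).count("1")

def is_near_duplicate(h1, h2, threshold=5):
    """Treat images as matches when their hashes differ in only a few
    bits, so re-encoded or slightly altered copies still match."""
    return hamming_distance(h1, h2) <= threshold

# Simulated 8x8 grayscale thumbnails (a real system would first
# downscale actual images to this size).
banned = [10 * i % 256 for i in range(64)]       # a removed image
reupload = [min(255, p + 2) for p in banned]     # slightly brightened copy
unrelated = [255 - p for p in banned]            # a different image

print(is_near_duplicate(average_hash(banned), average_hash(reupload)))   # True
print(is_near_duplicate(average_hash(banned), average_hash(unrelated)))  # False
```

Because the hash encodes coarse brightness structure rather than exact bytes, the brightened re-upload still matches, while the unrelated image does not — which is what lets automated systems catch copies before anyone flags them.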
Deploying AI for counterterrorism is not as simple as flipping a switch. Depending on the technique, you need to carefully curate databases or have human beings code data to train a machine. A system designed to find content from one terrorist group may not work for another because of language and stylistic differences in their propaganda. Because of these limitations, we focus our most innovative techniques on the terrorist groups that pose the biggest threat globally, in the real world and online. ISIS and Al Qaeda meet this definition most directly, so we prioritize our tools to counter these organizations and their affiliates. We hope over time that we may be able to responsibly and effectively expand the use of automated systems to detect content from regional terrorist organizations too.
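The dependence on human-coded training data can be illustrated with a tiny multinomial Naive Bayes text classifier. This is only a sketch of the general supervised-learning approach the paragraph describes — the labels and example phrases are invented placeholders, and nothing here reflects Facebook’s actual models. Note how the model only knows the vocabulary in its training set, which is why a classifier trained on one group’s propaganda won’t transfer to another group’s:

```python
import math
from collections import Counter, defaultdict

def train(labeled_examples):
    """Train a tiny multinomial Naive Bayes model from human-labeled
    (text, label) pairs -- the 'coded data' supervised learning needs."""
    word_counts = defaultdict(Counter)   # label -> word frequencies
    label_counts = Counter()
    for text, label in labeled_examples:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    vocab = {w for counts in word_counts.values() for w in counts}
    return word_counts, label_counts, vocab

def classify(model, text):
    """Score each label by log prior + smoothed log word likelihoods."""
    word_counts, label_counts, vocab = model
    total_docs = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label, n_docs in label_counts.items():
        score = math.log(n_docs / total_docs)
        total_words = sum(word_counts[label].values())
        for word in text.lower().split():
            # Laplace smoothing so unseen words don't zero out a label.
            score += math.log((word_counts[label][word] + 1)
                              / (total_words + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Invented placeholders standing in for reviewer-labeled content.
training_data = [
    ("join our cause fight with us", "violating"),
    ("glory awaits those who join the fight", "violating"),
    ("weekend photos with family", "benign"),
    ("a great recipe for sunday dinner", "benign"),
]
model = train(training_data)
print(classify(model, "join the fight"))        # violating
print(classify(model, "family dinner photos"))  # benign
```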
Working With Experts to Find Terrorist Content
The use of AI against terrorism is increasingly bearing fruit, but ultimately it must be reinforced with manual review from trained experts. To that end, we tap expertise from inside the company and from the outside, partnering with those who can help address extremism across the internet.
This past summer, we announced the formation of the Global Internet Forum to Counter Terrorism (GIFCT) — working with Microsoft, Twitter and YouTube to formalize our longtime collaboration to fight the spread of terrorism and violent extremism across our platforms. Because we know that terrorists will try a variety of ways to reach people online, we’re working with smaller technology companies around the world to share insights on the trends we see from terrorists and what’s working to stop them. Already, GIFCT has brought together more than 50 technology companies over the course of three international working sessions.
GIFCT’s work encompasses the development and expansion of a shared industry database of “hashes” that was launched at the EU Internet Forum in December 2016. We share and accept these hashes, which are unique digital “fingerprints” of terrorist media, from other companies to help detect attempted uploads of potential terrorist propaganda. Through GIFCT, we also engage with governments around the world and are preparing to jointly commission research on how governments, tech companies and civil society can fight online radicalization.
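Structurally, the shared database works like a pooled set of fingerprints: each member contributes hashes of media it has removed, and every member can screen uploads against the combined set without the underlying files ever being exchanged. The sketch below is a simplified model of that arrangement — a real deployment would use perceptual hashes that survive re-encoding, whereas SHA-256 here is just a stand-in exact fingerprint, and the class and method names are invented for illustration:

```python
import hashlib

class SharedHashDatabase:
    """Toy model of an industry hash-sharing database: member companies
    contribute fingerprints of media they have removed, and all members
    can screen uploads against the pooled set."""

    def __init__(self):
        self._hashes = set()

    @staticmethod
    def fingerprint(media_bytes):
        """Exact fingerprint of a media file's bytes (stand-in for a
        perceptual hash)."""
        return hashlib.sha256(media_bytes).hexdigest()

    def share(self, media_bytes):
        """A member contributes the hash of removed media -- only the
        fingerprint is exchanged, never the media itself."""
        self._hashes.add(self.fingerprint(media_bytes))

    def matches(self, media_bytes):
        """Screen an attempted upload against all shared fingerprints."""
        return self.fingerprint(media_bytes) in self._hashes

db = SharedHashDatabase()
db.share(b"bytes of a video one member company removed")
print(db.matches(b"bytes of a video one member company removed"))  # True
print(db.matches(b"bytes of an unrelated video"))                  # False
```

Sharing hashes rather than files means a propaganda video removed by one company can be blocked at upload time by every other member.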
Along with increased industry collaboration, we continue to deepen our bench of internal specialists, who include linguists, academics, former law enforcement personnel and former intelligence analysts. They have regional expertise in terrorist groups around the world, and they help us build stronger relationships with experts outside the company who can help us more quickly spot changes in how terror groups are attempting to use the internet.
For example, in recent months, we’ve expanded our partnerships with several organizations that have expertise in global terrorism or cyber intelligence to help us in our efforts. These partners — which include Flashpoint, the Middle East Media Research Institute (MEMRI), the SITE Intelligence Group, and the University of Alabama at Birmingham’s Computer Forensics Research Lab — flag Pages, profiles and groups on Facebook potentially associated with terrorist groups for us to review. These organizations also send us photo and video files associated with ISIS and Al Qaeda that they have located elsewhere on the internet, which we can then check against our matching algorithms so we can remove copies or prevent their upload to Facebook altogether.
We’re grateful for the work that law enforcement and safety officials perform around the world to keep our communities safe from terrorism, and we’re committed to doing our part to help. As we’ve mentioned before, we reach out to law enforcement whenever we see a credible threat, and have law enforcement response teams available around the clock to respond to emergency requests. Over the past year, we’ve been able to provide support to authorities around the world that are responding to the threat of terrorism, including in cases where law enforcement has been able to disrupt attacks and prevent harm.
Our Continued Commitment
As we deepen our commitment to combating terrorism by using AI, leveraging human expertise and strengthening collaboration, we recognize that we can always do more. We’ll continue to provide updates as we develop new technology and forge new partnerships in the face of this global challenge.