A Further Update on New Zealand Terrorist Attack

By Guy Rosen, VP, Product Management 

We continue to keep the people, families and communities affected by the tragedy in New Zealand in our hearts. Since the attack, we have been working directly with the New Zealand Police to support their investigation. In addition, people are looking to understand how online platforms such as Facebook were used to circulate horrific videos of the attack, and we want to provide additional information from our review into how our products were used and how we can improve going forward.

Timeline

As we posted earlier this week, we removed the attacker’s video within minutes of the New Zealand Police’s outreach to us, and in the aftermath, we have people working on the ground with authorities. We will continue to support them in every way we can. In light of the active investigation, police have asked us not to share certain details. At present we are able to provide the information below:

  • The video was viewed fewer than 200 times during the live broadcast.
  • No users reported the video during the live broadcast.
  • Including the views during the live broadcast, the video was viewed about 4,000 times in total before being removed from Facebook.
  • Before we were alerted to the video, a user on 8chan posted a link to a copy of the video on a file-sharing site.
  • The first user report on the original video came in 29 minutes after the video started, and 12 minutes after the live broadcast ended.
  • In the first 24 hours, we removed more than 1.2 million videos of the attack at upload, which were therefore prevented from being seen on our services. Approximately 300,000 additional copies were removed after they were posted.

Safety on Facebook Live

We recognize that the immediacy of Facebook Live brings unique challenges, and in the past few years we’ve focused on enabling our review team to get to the most important videos faster. We use artificial intelligence to detect and prioritize videos that are likely to contain suicidal or harmful acts, we have improved the context we provide reviewers so that they can make the most informed decisions, and we have built systems to help us quickly contact first responders and get help on the ground. We continue to focus on the tools, technology and policies that keep people safe on Live.

Artificial Intelligence

Many people have asked why artificial intelligence (AI) didn’t detect the video from last week’s attack automatically. AI has made massive progress over the years and in many areas, which has enabled us to proactively detect the vast majority of the content we remove. But it’s not perfect.

AI systems are based on “training data,” which means you need many thousands of examples of content in order to train a system to detect certain types of text, imagery or video. This approach has worked very well for areas such as nudity, terrorist propaganda and graphic violence, where there are large numbers of examples we can use to train our systems. However, this particular video did not trigger our automatic detection systems. To achieve that, we will need to provide our systems with large volumes of this specific kind of content, which is difficult because these events are thankfully rare. Another challenge is automatically discerning this content from visually similar but innocuous content – for example, if thousands of videos from live-streamed video games were flagged by our systems, our reviewers could miss the important real-world videos where we could alert first responders to get help on the ground.
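To make the training-data point concrete, the sketch below trains the same kind of toy classifier twice: once with only a handful of labeled examples of a violating category and once with thousands. It is illustrative only – the synthetic features, class centers and scikit-learn model are assumptions, not our production systems.

```python
# Illustrative only: a toy classifier showing why detection quality depends on
# how many labeled training examples exist for a category. The features, class
# centers and model below are synthetic assumptions, not production systems.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

def make_examples(n, center):
    """Generate n synthetic 16-dimensional 'content feature' vectors around a class center."""
    return center + rng.normal(size=(n, 16))

center_benign = rng.normal(size=16)
center_violating = center_benign + 0.8   # violating content overlaps heavily with benign content

for n_positive in (10, 10_000):          # rare, novel content vs. a well-represented category
    X = np.vstack([make_examples(50_000, center_benign),
                   make_examples(n_positive, center_violating)])
    y = np.array([0] * 50_000 + [1] * n_positive)
    model = LogisticRegression(max_iter=1000).fit(X, y)

    X_test = np.vstack([make_examples(5_000, center_benign),
                        make_examples(5_000, center_violating)])
    y_test = np.array([0] * 5_000 + [1] * 5_000)
    recall = recall_score(y_test, model.predict(X_test))
    print(f"{n_positive:>6} labeled examples of the violating class -> recall {recall:.2f}")
```

With only a handful of examples, the toy classifier misses most of the violating content in the test set; with thousands, it catches most of it – which is the gap described above.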

AI is an incredibly important part of our fight against terrorist content on our platforms, and while its effectiveness continues to improve, it is never going to be perfect. People will continue to be part of the equation, whether it’s the people on our team who review content, or people who use our services and report content to us. That’s why last year we more than doubled the number of people working on safety and security to over 30,000 people, including about 15,000 content reviewers, and why we encourage people to report content that they find disturbing.

Reporting

During the entire live broadcast, we did not receive a single user report. This matters because reports we get while a video is broadcasting live are prioritized for accelerated review. We do this because when a video is still live and there is real-world harm, we have a better chance to alert first responders and try to get help on the ground.

Last year, we expanded this acceleration logic to also cover videos that were very recently live, within the past few hours. Given our focus on suicide prevention, to date we have applied this acceleration only when a recently live video is reported for suicide.

In Friday’s case, the first user report came in 29 minutes after the broadcast began, 12 minutes after the live broadcast ended. In this report, and in a number of subsequent reports, the video was reported for reasons other than suicide, and as such it was handled according to different procedures. Learning from this, we are re-examining our reporting logic and experiences for both live and recently live videos in order to expand the categories that are routed to accelerated review.
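As an illustration of the kind of routing logic described here – a sketch only, with assumed categories, time window and priorities rather than our actual review system – reports on live videos would be pulled first, then accelerated categories on recently live videos, then everything else:

```python
# Illustrative sketch only: a toy review queue showing how reports on live or
# recently live videos could be accelerated. The categories, time window and
# priorities here are assumptions, not Facebook's actual logic.
import heapq
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import Optional

ACCELERATED_REASONS = {"suicide"}            # the set the post says is being expanded
RECENTLY_LIVE_WINDOW = timedelta(hours=2)    # assumed value for "the past few hours"

def review_priority(reason: str, is_live: bool,
                    ended_at: Optional[datetime], now: datetime) -> int:
    """Lower number = reviewed sooner."""
    if is_live:
        return 0                             # reports on a live video jump the queue
    recently_live = ended_at is not None and now - ended_at <= RECENTLY_LIVE_WINDOW
    if recently_live and reason in ACCELERATED_REASONS:
        return 1                             # accelerated recently-live categories
    return 2                                 # standard review

@dataclass(order=True)
class QueuedReport:
    priority: int
    video_id: str = field(compare=False)
    reason: str = field(compare=False)

now = datetime.now()
ended_12_min_ago = now - timedelta(minutes=12)

queue: list = []
heapq.heappush(queue, QueuedReport(
    review_priority("graphic_violence", False, ended_12_min_ago, now), "video-a", "graphic_violence"))
heapq.heappush(queue, QueuedReport(
    review_priority("suicide", False, ended_12_min_ago, now), "video-b", "suicide"))

# The suicide report on a recently live video is reviewed first; the
# graphic-violence report falls into the standard queue.
print([(r.priority, r.reason) for r in sorted(queue)])
```

Under today’s logic, the graphic-violence report on a recently live video falls into the standard queue, which is the gap the expanded categories are meant to close.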

Circulation of the Video

The video itself was viewed fewer than 200 times while it was live, and about 4,000 times in total before being removed from Facebook. During this time, one or more users captured the video and began to circulate it. At least one of these was a user on 8chan, who posted a link to a copy of the video on a file-sharing site, and we believe that from there it started circulating more broadly. Forensic identifiers on many of the videos that later circulated, such as a bookmarks toolbar visible in a screen recording, match the content posted to 8chan.

This isn’t the first time violent, graphic videos, whether live-streamed or not, have gone viral on various online platforms. As in those previous instances, we believe the broad circulation was the result of a number of different factors:

  1. Bad actors coordinated to distribute copies of the video to as many people as possible through social networks, video sharing sites, file sharing sites and more.
  2. Multiple media channels, including TV news channels and online websites, broadcast the video. We recognize there is a difficult balance to strike in covering a tragedy like this while not providing bad actors additional amplification for their message of hate.
  3. Individuals around the world then re-shared copies they got through many different apps and services – for example by filming broadcasts on TV, capturing videos from websites, filming computer screens with their phones, or simply re-sharing a clip they had received.

People shared this video for a variety of reasons. Some intended to promote the killer’s actions, others were curious, and others actually intended to highlight and denounce the violence. Distribution was further propelled by broad reporting of the existence of a video, which may have prompted people to seek it out and to then share it further with their friends.

Blocking the Video

Immediately after the attack, we designated this as a terror attack, meaning that any praise, support or representation of it violates our Community Standards and is not permitted on Facebook. Given the severe nature of the video, we also prohibited its distribution even when it was shared to raise awareness, or when only a segment was shared as part of a news report.

In the first 24 hours, we removed more than 1.2 million videos of the attack at upload, which were therefore prevented from being seen on our services. Approximately 300,000 additional copies were removed after they were posted.

We’ve been asked why our image and video matching technology, which has been so effective at preventing the spread of propaganda from terrorist organizations, did not catch those additional copies. What challenged our approach was the proliferation of many different variants of the video, driven by the broad and diverse ways in which people shared it:

First, we saw a core community of bad actors working together to continually re-upload edited versions of this video in ways designed to defeat our detection.

Second, a broader set of people distributed the video and unintentionally made it harder to match copies. Some people may have seen the video on a computer or TV screen, filmed that screen with a phone and sent the recording to a friend. Still others may have watched the video on their computer, recorded their screen and passed that on. Websites and pages, eager to get attention from people seeking out the video, re-cut and re-recorded it into various formats.

In total, we found and blocked over 800 visually distinct variants of the video that were circulating. This is different from official terrorist propaganda from organizations such as ISIS, which, while distributed to a hard-core set of followers, is not rebroadcast by mainstream media organizations and is not re-shared widely by individuals.
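To illustrate why visually distinct variants are hard to catch, the sketch below uses an “average hash” compared by Hamming distance – a common near-duplicate matching technique, not our actual matching technology – and shows how a straight re-encode stays close to the original fingerprint while a screen recording with extra interface elements drifts away from it. The frame data and parameters are assumptions for the example.

```python
# Illustrative only: an "average hash" comparison, one common near-duplicate
# technique. It is NOT Facebook's proprietary matcher; it just shows why a
# re-encoded copy stays close to the original fingerprint while a screen
# recording with extra UI (e.g. a visible toolbar) can drift outside a threshold.
import numpy as np

def average_hash(frame: np.ndarray, size: int = 8) -> np.ndarray:
    """Downsample a grayscale frame to size x size block means and threshold at
    the mean, producing a 64-bit perceptual fingerprint."""
    h, w = frame.shape
    trimmed = frame[:h - h % size, :w - w % size]
    blocks = trimmed.reshape(size, trimmed.shape[0] // size,
                             size, trimmed.shape[1] // size).mean(axis=(1, 3))
    return (blocks > blocks.mean()).astype(np.uint8).ravel()

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(1)
yy, xx = np.mgrid[0:360, 0:640]
original = (np.sin(yy / 40) + np.cos(xx / 60)) / 4 + 0.5 + 0.05 * rng.random((360, 640))

# A straight re-upload after lossy re-encoding: tiny per-pixel changes.
reencoded = original + rng.normal(scale=0.02, size=original.shape)

# A crude stand-in for a screen recording: the video is shifted down the frame
# with a dark browser toolbar strip above it.
screen_recording = np.full((360, 640), 0.2)
screen_recording[60:, :] = original[:300, :]

h0 = average_hash(original)
print("re-encoded copy: ", hamming(h0, average_hash(reencoded)), "of 64 bits differ")
print("screen recording:", hamming(h0, average_hash(screen_recording)), "of 64 bits differ")
```

Small pixel-level changes barely move this kind of fingerprint, but re-cut, re-filmed or re-framed copies can move it far enough that they no longer match.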

We are working to better understand which techniques work for cases like this, where many variants of an original video circulate. For example, as part of our efforts we employed audio matching technology to detect videos that had visually changed beyond our systems’ ability to recognize them automatically but that had the same soundtrack.
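For a sense of how audio matching can catch such variants, here is a minimal sketch that fingerprints the sign of energy changes across frequency bands and compares copies by bit error rate. The toy signal, frame sizes and band count are assumptions for the example, not the system we used.

```python
# Illustrative only: a coarse audio fingerprint (sign of energy change per
# frequency band between consecutive frames) compared by bit error rate.
# The parameters and toy signal are assumptions, not the deployed system.
import numpy as np

def fingerprint(signal: np.ndarray, frame: int = 2048, hop: int = 1024,
                bands: int = 16) -> np.ndarray:
    """Return a bit matrix: per band, does energy rise between consecutive frames?"""
    n_frames = 1 + (len(signal) - frame) // hop
    window = np.hanning(frame)
    energies = np.empty((n_frames, bands))
    for i in range(n_frames):
        spectrum = np.abs(np.fft.rfft(signal[i * hop:i * hop + frame] * window)) ** 2
        energies[i] = [band.sum() for band in np.array_split(spectrum, bands)]
    return (np.diff(energies, axis=0) > 0).astype(np.uint8)

def bit_error_rate(a: np.ndarray, b: np.ndarray) -> float:
    n = min(len(a), len(b))
    return float(np.mean(a[:n] != b[:n]))

rng = np.random.default_rng(2)
sr = 16_000
soundtrack = rng.normal(size=10 * sr)                              # stand-in for a 10-second soundtrack
reencoded = soundtrack + 0.01 * rng.normal(size=soundtrack.size)   # same audio, slight degradation
unrelated = rng.normal(size=soundtrack.size)                       # a different soundtrack

fp = fingerprint(soundtrack)
print("re-encoded copy BER:", round(bit_error_rate(fp, fingerprint(reencoded)), 3))  # low -> same soundtrack
print("unrelated audio BER:", round(bit_error_rate(fp, fingerprint(unrelated)), 3))  # ~0.5 -> no match
```

Because the soundtrack often survives edits that defeat visual matching, audio matching gave us another way to find such variants.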

Next Steps

Our greatest priorities right now are to support the New Zealand Police in every way we can, and to continue to understand how our systems and other online platforms were used as part of these events so that we can identify the most effective policy and technical steps. This includes:

  • Most importantly, improving our matching technology so that we can stop the spread of viral videos of this nature, regardless of how they were originally produced. For example, as part of our response last Friday, we applied experimental audio-based technology which we had been building to identify variants of the video.
  • Second, reacting faster to this kind of content on a live-streamed video. This includes exploring whether and how AI can be used for these cases, and how to get to user reports faster. Some have asked whether we should add a time delay to Facebook Live, similar to the broadcast delay sometimes used by TV stations. There are millions of Live broadcasts daily, and given that sheer volume, a delay would not help address the problem. More importantly, given the importance of user reports, adding a delay would only further slow the process of videos being reported and reviewed and of first responders being alerted to provide help on the ground.
  • Third, continuing to combat hate speech of all kinds on our platform. Our Community Standards prohibit terrorist and hate groups of all kinds. This includes more than 200 white supremacist organizations globally, whose content we are removing through proactive detection technology.
  • Fourth, expanding our industry collaboration through the Global Internet Forum to Counter Terrorism (GIFCT). We are experimenting with sharing URLs systematically rather than just content hashes, are working to address the range of terrorists and violent extremists operating online, and intend to refine and improve our ability to collaborate in a crisis.
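As a rough illustration of the difference between sharing content hashes and sharing URLs – with a placeholder hash function and data structures, since GIFCT’s actual formats and matching are not described here – an exact content hash only matches identical files, while a shared URL flags links to a known copy wherever it is re-posted:

```python
# Illustrative sketch only: two kinds of shared signals. The hash function here
# (SHA-256) is a placeholder for whatever content fingerprint is actually shared;
# GIFCT's real formats and matching are not described in this post.
import hashlib

shared_hashes: set = set()   # fingerprints of known violating files
shared_urls: set = set()     # links to known copies hosted elsewhere

def add_known_video(file_bytes: bytes, hosted_at: str = "") -> None:
    shared_hashes.add(hashlib.sha256(file_bytes).hexdigest())
    if hosted_at:
        shared_urls.add(hosted_at)

def should_block_upload(file_bytes: bytes) -> bool:
    return hashlib.sha256(file_bytes).hexdigest() in shared_hashes

def should_block_link(url: str) -> bool:
    return url in shared_urls

add_known_video(b"...video bytes...", hosted_at="https://filesharing.example/video123")
print(should_block_upload(b"...video bytes..."))    # True: byte-identical copy
print(should_block_upload(b"...video bytes?..."))   # False: an exact hash misses any edited variant
print(should_block_link("https://filesharing.example/video123"))  # True: the URL itself is the shared signal
```

Sharing URLs alongside hashes helps because a link to an off-platform copy can be flagged even when the file itself has been altered enough to evade matching.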

What happened in New Zealand was horrific. Our hearts are with the victims, families and communities affected by this horrible attack.

We’ll continue to provide updates as we learn more.


