Hard Questions: Russian Ads Delivered to Congress

By Elliot Schrage, Vice President of Policy and Communications

Update on October 6, 2017: This post has been combined with an earlier post on this topic to create a single page of information about the ads we discovered. We’ll continue to add updates here. 

What was in the ads you shared with Congress? How many people saw them?
Most of the ads appear to focus on divisive social and political messages across the ideological spectrum, touching on topics from LGBT matters to race issues to immigration to gun rights. A number of them appear to encourage people to follow Pages on these issues.

Here are a few other facts about the ads:

  • An estimated 10 million people in the US saw the ads. Using our best modeling, we approximated the number of unique people (“reach”) who saw at least one of these ads.
  • 44% of total ad impressions (number of times ads were displayed) were before the US election on November 8, 2016; 56% were after the election.
  • Roughly 25% of the ads were never shown to anyone. That’s because advertising auctions are designed so that ads reach people based on relevance, and certain ads may not reach anyone as a result (see the sketch after this list).
  • For 50% of the ads, less than $3 was spent; for 99% of the ads, less than $1,000 was spent.
  • About 1% of the ads used a specific type of Custom Audiences targeting to reach people on Facebook who had visited that advertiser’s website or liked the advertiser’s Page — as well as to reach people who are similar to those audiences. None of the ads used another type of Custom Audiences targeting based on personal information such as email addresses. (This bullet added October 3, 2017.)
  • Of the more than 3,000 ads that we have shared with Congress, 5% appeared on Instagram. About $6,700 was spent on these ads. (This bullet added October 6, 2017.)
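
To make the auction point above concrete: Facebook has not published how its ad auction scores candidates, but the general idea, that delivery depends on predicted relevance as well as bid, can be illustrated with a minimal, purely hypothetical sketch. The scoring rule, weights, and ad data below are invented for illustration only.

```python
def auction_score(bid, predicted_relevance):
    # Hypothetical scoring rule: combine the advertiser's bid with a
    # predicted-relevance estimate. Real auction weights are not public.
    return bid * predicted_relevance

def pick_winner(candidate_ads):
    # One impression goes to the highest-scoring candidate.
    return max(candidate_ads, key=lambda ad: auction_score(ad["bid"], ad["relevance"]))

ads = [
    {"name": "relevant_ad", "bid": 0.50, "relevance": 0.90},    # score 0.45
    {"name": "irrelevant_ad", "bid": 1.00, "relevance": 0.10},  # score 0.10
]
print(pick_winner(ads)["name"])  # -> relevant_ad
# An ad that loses every auction it enters is shown to no one, even
# though its campaign is live -- the situation described above.
```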

Why do you allow ads like these to target certain demographic or interest groups?
Our ad targeting is designed to show people ads they might find useful, instead of showing everyone ads they might find irrelevant or annoying. For instance, a baseball clothing line can use our targeting categories to reach people interested in baseball, rather than everyone who likes sports. Other examples include a business selling makeup designed specifically for African-American women, or a language class wanting to reach potential students.

These are worthwhile uses of ad targeting because they enable people to connect with the things they care about. But we know ad targeting can be abused, and we aim to prevent abusive ads from running on our platform. To begin, ads containing certain types of targeting will now require additional human review and approval.

In looking for such abuses, we examine all of the components of an ad: who created it, who it’s intended for, and what its message is. Sometimes a combination of an ad’s message and its targeting can be pernicious. If we find any ad — including those targeting a cultural affinity interest group — that contains a message spreading hate or violence, it will be rejected or removed. Facebook’s Community Standards strictly prohibit attacking people based on their protected characteristics, and our advertising terms are even more restrictive, prohibiting advertisers from discriminating against people based on religion and other attributes.

Why can’t you catch every ad that breaks your rules?
We review millions of ads each week, and about 8 million people report ads to us each day. In the last year alone, we have significantly grown the number of people working on ad review. And in order to do better at catching abuse on our platform, we’re announcing a number of improvements, including:

  • Making advertising more transparent
  • Strengthening enforcement against improper ads
  • Tightening restrictions on advertiser content
  • Increasing requirements for authenticity
  • Establishing industry standards and best practices

Weren’t some of these ads paid for in Russian currency? Why didn’t your ad review system notice this and bring the ads to your attention?
Some of the ads were paid for in Russian currency. Currency alone isn’t a good way of identifying suspicious activity, because the overwhelming majority of advertisers who pay in Russian currency, like the overwhelming majority of people who access Facebook from Russia, aren’t doing anything wrong. We did use this as a signal to help identify these ads, but it wasn’t the only signal. We are continuing to refine our techniques for identifying the kinds of ads in question. We’re not going to disclose more details because we don’t want to give bad actors a roadmap for avoiding future detection.
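
Facebook deliberately does not describe these signals, so the following is only a generic illustration of the idea that a single weak signal, such as payment currency, is combined with others before anything is flagged. Every signal name, weight, and threshold below is invented; none reflects Facebook's actual system.

```python
# Hypothetical illustration only: signal names, weights, and the
# threshold are all invented for this sketch.
SIGNAL_WEIGHTS = {
    "foreign_currency_mismatch": 0.2,
    "new_account_bulk_ads": 0.4,
    "shared_infrastructure": 0.5,
    "coordinated_page_network": 0.6,
}
FLAG_THRESHOLD = 0.8  # invented cutoff for routing to human review

def risk_score(observed_signals):
    # Sum the weights of whichever signals fired for this advertiser.
    return sum(SIGNAL_WEIGHTS[s] for s in observed_signals)

# Currency alone stays well under the review threshold...
print(risk_score({"foreign_currency_mismatch"}))  # 0.2
# ...but combined with other signals it can push an account over it.
print(risk_score({"foreign_currency_mismatch",
                  "coordinated_page_network"}) >= FLAG_THRESHOLD)  # True
```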

If the ads had been purchased by Americans instead of Russians, would they have violated your policies?
We require authenticity regardless of location. If Americans conducted a coordinated, inauthentic operation — as the Russian organization did in this case — we would take their ads down, too.

However, many of these ads did not violate our content policies. That means most of them could have remained on the platform if they had been run by authentic individuals anywhere.

Shouldn’t you stop foreigners from meddling in US social issues?
The right to speak out on global issues that cross borders is an important principle. Organizations such as UNICEF and Oxfam, as well as religious organizations, depend on the ability to communicate — and advertise — their views in a wide range of countries. While we may not always agree with the positions of those who would speak on issues here, we believe in their right to do so — just as we believe in the right of Americans to express opinions on issues in other countries.

Some of these ads and other content on Facebook appear to sow division in America and other countries at a time of increasing social unrest. If these ads or this content had been placed or posted authentically, you would have allowed many of them. Why?
This is an issue we have debated a great deal. We understand that Facebook has become an important platform for social and political expression in the US and around the world. We are focused on developing greater safeguards against malicious interference in elections and strengthening our advertising policies and enforcement to prevent abuse.

As an increasingly important and widespread platform for political and social expression, we at Facebook — and all of us — must also take seriously the crucial place that free political speech occupies around the world in protecting democracy and the rights of those who are in the minority, who are oppressed, or who hold views not shared by the majority or by those in power. Even when we have taken every step to control abuse, there will be political and social content on our platform that people will find objectionable, and that we will find objectionable. We permit these messages because we share the values of free speech: when the right to speak is censored or restricted for any of us, it diminishes the right to speak for all of us, and when people have the right and opportunity to engage in free and full political expression, over time they will move forward, not backward, in promoting democracy and the rights of all.

Are you working with other companies and the government to prevent interference that exploits platforms like yours?
The threats we’re confronting are bigger than any one company, or even any one industry. The kind of malicious interference we’re seeing requires everyone working together, across business, government and civil society, to share information and arrive at the best responses.

We have been working with many others in the technology industry, including with Google and Twitter, on a range of elements related to this investigation. We also have a long history of working together to fight online threats and develop best practices on other issues, such as child safety and counterterrorism. And we will continue all of this work.

With all these new efforts you’re putting in place, would any of them have prevented these ads from running?
We believe we would have caught these malicious actors faster and prevented more improper ads from running. Our effort to require US election-related advertisers to authenticate their business will help catch suspicious behavior. The ad transparency tool we’re building will be accessible to anyone, including industry and political watchdog groups. And our improved enforcement and more restrictive content standards for ads would have rejected more of the ads when submitted.

Is there more out there that you haven’t found?
It’s possible. We’re still looking for abuse and bad actors on our platform — our internal investigation continues. We hope that by cooperating with Congress, the special counsel and our industry partners, we will help keep bad actors off our platform.

Do you now have a complete view of what happened in this election?
The 2016 US election was the first in which evidence was widely reported that foreign actors sought to exploit the internet to influence voter behavior. We understand more about how our service was abused, and we will continue to investigate to learn all we can. We know that our experience is only a small piece of a much larger puzzle. Congress and the special counsel are best placed to put these pieces together because they have much broader investigative power to obtain information from other sources.

We strongly believe in free and fair elections. We strongly believe in free speech and robust public debate. We strongly believe free speech and free elections depend upon each other. We’re fast developing both standards and greater safeguards against malicious and illegal interference on our platform. We’re strengthening our advertising policies to minimize and even eliminate abuse. Why? Because we are mindful of the importance and special place political speech occupies in protecting both democracy and civil society. We are dedicated to being an open platform for all ideas — and that may sometimes mean allowing people to express views we — or others — find objectionable. This has been the longstanding challenge for all democracies: how to foster honest and authentic political speech while protecting civic discourse from manipulation and abuse. Now that the challenge has taken a new shape, it will be up to all of us to meet it.

Update on October 8, 2017:

Did you have someone embedded within the Trump campaign?
We offered identical support to both the Trump and Clinton campaigns, and had teams assigned to both. Everyone had access to the same tools, which are the same tools that every campaign is offered.

The campaigns did not get to “hand pick” the people who worked with them from Facebook. And no one from Facebook was assigned full-time to the Trump campaign, or full-time to the Clinton campaign. Both campaigns approached things differently and used different amounts of support.

Originally published on September 21, 2017:

Hard Questions: More on Russian Ads

By Elliot Schrage, Vice President of Policy and Communications

1) Why did Facebook finally decide to share the ads with Congress?

As our General Counsel has explained, this is an extraordinary investigation — one that raises questions that go to the integrity of the US elections. After an extensive legal and policy review, we’ve concluded that sharing the ads we’ve discovered with Congress, in a manner that is consistent with our obligations to protect user information, will help government authorities complete the vitally important work of assessing what happened in the 2016 election. That is an assessment that can be made only by investigators with access to classified intelligence and information from all relevant companies and industries — and we want to do our part. Congress is best placed to use the information we and others provide to inform the public comprehensively and completely.

2) Why are you sharing these with special counsel and Congress — and not releasing them to the public?

Federal law places strict limitations on the disclosure of account information. Given the sensitive national security and privacy issues involved in this extraordinary investigation, we think Congress is best placed to use the information we and others provide to inform the public comprehensively and completely. For further understanding on this decision, see our General Counsel’s post.

3) Let’s go back to the beginning. Did Facebook know when the ads were purchased that they might be part of a Russian operation? Why not?

No, we didn’t.

The vast majority of our over 5 million advertisers use our self-service tools. This allows individuals or businesses to create a Facebook Page, attach a credit card or some other payment method and run ads promoting their posts.

In some situations, Facebook employees work directly with our larger advertisers. In the case of the Russian ads, none of those we found involved in-person relationships.

At the same time, a significant number of advertisers run ads internationally, and a high number of advertisers run content that addresses social issues — an ad from a non-governmental organization, for example, that addresses women’s rights. So there was nothing necessarily noteworthy at the time about a foreign actor running an ad involving a social issue. Of course, knowing what we’ve learned since the election, some of these ads were indeed both noteworthy and problematic, which is why our CEO today announced a number of important steps we are taking to help prevent this kind of deceptive interference in the future.

4) Do you expect to find more ads from Russian or other foreign actors using fake accounts?

It’s possible.

When we’re looking for this type of abuse, we cast a wide net in trying to identify any activity that looks suspicious. But it’s a game of cat and mouse. Bad actors are always working to use more sophisticated methods to obfuscate their origins and cover their tracks. That in turn leads us to devise new methods and smarter tactics to catch them — things like machine learning, data science and highly trained human investigators. And, of course, our internal inquiry continues.
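
As a rough illustration of the machine-learning piece mentioned above, and nothing more (the features, training data, and model below are toy stand-ins, not Facebook's actual system), a classifier can be trained on labeled examples of legitimate and fake accounts and then used to route high-risk accounts to human investigators:

```python
# Toy sketch of ML-based fake-account scoring. All feature names,
# values, and labels are invented; real systems are far larger.
from sklearn.linear_model import LogisticRegression

# Features per account: [account_age_days, posts_per_day, pct_duplicate_posts]
X_train = [
    [1200, 2.0, 0.05],   # established account, varied activity -> legitimate
    [800, 1.0, 0.10],    # legitimate
    [2, 300.0, 0.95],    # brand new, mass-posting duplicates -> fake
    [5, 150.0, 0.90],    # fake
]
y_train = [0, 0, 1, 1]   # 0 = legitimate, 1 = fake

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score a new account; a high probability routes it to trained human
# investigators rather than triggering automatic removal.
suspect = [[3, 200.0, 0.92]]
print(model.predict_proba(suspect)[0][1])  # estimated probability it is fake
```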

It’s possible that government investigators have information that could help us, and we welcome any information the authorities are willing to share to help with our own investigations.

Using ads and other messaging to affect political discourse has become a common part of the cybersecurity arsenal for organized, advanced actors. This means all online platforms will need to address this issue, and get smarter about how to address it, now and in the future.

5) I’ve heard that Facebook disabled tens of thousands of accounts in France and only hundreds in the United States. Is this accurate?

No, these numbers represent different things and can’t be directly compared.

To explain these numbers, it’s important to understand how large platforms try to stop abusive behavior at scale. Staying ahead of those who try to misuse our service is an ongoing effort led by our security and integrity teams, and we recognize this work will never be done. We build and update technical systems every day to make it easier to respond to reports of abuse, detect and remove spam, identify and eliminate fake accounts, and prevent accounts from being compromised. This work also reduces the distribution of content that violates our policies, since fake accounts often distribute deceptive material, such as false news, hoaxes, and misinformation.

This past April, we announced improvements to these systems aimed at helping us detect fake accounts on our service more effectively. As we began to roll out these changes globally, we took action against tens of thousands of fake accounts in France. This number represents fake accounts of all varieties, the most common being those used for financially motivated spam. While we believe that the removal of these accounts also reduced the spread of disinformation, it’s incorrect to state that these tens of thousands of accounts represent organized campaigns from any particular country or set of countries.

In contrast, the approximately 470 accounts and Pages we shut down recently were identified by our dedicated security team that manually investigates specific, organized threats. They found that this set of accounts and Pages were affiliated with one another — and were likely operated out of Russia.

Read more about our new blog series Hard Questions.


