
Q&A on Upcoming US and Brazil Elections

The following is a transcript from today’s press briefing call related to Facebook’s work on upcoming elections in Brazil and the US.


Tom Reynolds: As we noted in the invite, this is part of the regular briefings we've been holding over the past few months. We're just two weeks away from elections starting in Brazil and 48 days from the US midterms, so we wanted to share a few updates on what we've been working on. You likely saw Mark's summary, posted on Facebook and in the Washington Post, of the work Facebook has done this year and, frankly, the challenges ahead; Sheryl covered a lot of this ground, too, when she recently testified before Congress.

On the call today, we’re joined by Samidh Chakrabarti, the Director of our Elections and Civic Engagement work, Greg Marra, from our News Feed team, Katie Harbath, the Head of our Global Politics and Government Outreach Team, and Monica Guise Rosina, Public Policy Manager from Facebook Brazil.

As we noted in the invite, the call is on the record with the contents embargoed until we end. We’ll start with some very brief opening comments and then take questions. With that, I’ll turn it over to Samidh to get us started.

Samidh Chakrabarti: Thanks, Tom, and thanks, everyone, for joining us again today.

Preventing election interference on Facebook has been one of the biggest cross-team efforts the company has seen, and we’re bringing the same level of intensity and aggression to this problem as we did from the shift from desktop to mobile. And we have a long way to go, and this work will never be fully done as our opponents will change their tactics over time. But we’re making progress. On fake accounts, which is a priority for us, advances in machine learning allow us to block millions of fake accounts every day — the vast majority within minutes of being created and before they can do any harm.
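To make the shape of that mechanism concrete, here is a minimal sketch of scoring a new account at creation time and blocking it when a classifier is sufficiently confident it is fake. The signals, weights, and threshold below are illustrative assumptions; the call does not describe Facebook's actual models or features.

```python
# Illustrative sketch only: score a new account at signup time and block
# it if a classifier is confident it is fake. All signals, weights, and
# thresholds here are hypothetical stand-ins for a trained model.
from dataclasses import dataclass

@dataclass
class SignupSignals:
    accounts_from_ip_last_hour: int   # burstiness of signups from one IP
    profile_completeness: float       # 0.0 (empty) to 1.0 (fully filled)
    email_domain_reputation: float    # 0.0 (abusive) to 1.0 (trusted)

def fake_account_score(s: SignupSignals) -> float:
    """Toy stand-in for a trained model: returns P(account is fake)."""
    score = 0.0
    score += min(s.accounts_from_ip_last_hour / 50.0, 1.0) * 0.5
    score += (1.0 - s.profile_completeness) * 0.2
    score += (1.0 - s.email_domain_reputation) * 0.3
    return score

BLOCK_THRESHOLD = 0.8  # hypothetical; tuned to keep false positives low

def handle_new_account(signals: SignupSignals) -> str:
    score = fake_account_score(signals)
    if score >= BLOCK_THRESHOLD:
        return "block"      # removed within minutes of creation
    if score >= 0.5:
        return "challenge"  # e.g. require additional verification
    return "allow"

print(handle_new_account(SignupSignals(45, 0.1, 0.2)))  # -> "block"
```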

To give you some idea of the scale: in the six months from October to March, we blocked or removed almost 1.3 billion fake accounts. And last week, Stanford University published a study on misinformation. It found that interactions with fake news sites on Facebook have fallen by more than half since the 2016 election as a result of our efforts to prevent this junk from going viral.

And as you know, much of our product work was done earlier this year, including increased ads transparency, in particular for political and issue ads. We think all of these efforts are critical when it comes to helping prevent interference in elections on Facebook. And while these enhanced security measures have been in place for months, we're still adding new protections as they're ready.

On Monday, for example, we announced additional security checks for people on Facebook who are affiliated with US federal and state campaigns. These pilot programs will help us better protect candidates and campaign staff, who we know are often targets of hackers and foreign adversaries in the lead-up to an election.

Along with that work, we've been red-teaming potential worst-case scenarios to test how prepared we are for them. For example, we've gamed out and are actively testing what our response would be if we saw a spike in voter suppression efforts in the days leading up to the election, or if we saw a dramatic rise in Pages administered in other countries actively promoting election-related content.

So this is one of the main reasons we're standing up a war room in Menlo Park for both the Brazil and US elections. It'll have people from every one of our different teams — engineering, threat intelligence, data science, policy, legal, and others — and it's going to serve as a command center so that we can make real-time decisions if needed.

With that, let me turn it over to Greg to talk about some of the specific work happening with News Feed.

Greg Marra: Thanks, Samidh. As Mark said in his post, we believe in the importance of giving people a voice, whatever their background or political beliefs. But people will only share if they feel safe, which is why on the product side, we’re working hard to amplify the good and mitigate the bad. People who use Facebook choose who they’re friends with, what they share, and the Pages they follow, and those choices are the heart of everything they see on Facebook.

Let me explain a bit about how Feed works. First, we gather the total set of posts a person could see based on all of their friends and the Pages they've chosen to follow. This is called the inventory. On average, people have about 3,000 posts available in their inventory each day, but they only scroll through about 300 of them. So we use technology to figure out which stories are the most meaningful to someone and put them at the top of their News Feed.

For example, we use signals like how old the post is, whether it's a link, a photo, or a video, and who posted it. Based on these signals, we make a few dozen predictions. Some are specific to the individual, for example, the likelihood that you'll spend time reading the post. Others are the same for everyone, for example, the likelihood that a particular post contains a clickbait headline or has been rated as false by a third-party fact checker.

Based on these predictions, we assign each post a relevance score — a number that represents how interested we think the person seeing it will be in a given story. We then order stories based on those scores. We do this for every story in someone’s inventory every time they open Facebook.
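To make the flow Greg describes concrete, here is a minimal sketch: take a person's inventory, make per-post predictions from simple signals, combine them into a relevance score, and sort. The specific signals, weights, and predictions below are illustrative assumptions, not Facebook's actual models.

```python
# Minimal sketch of the described ranking flow: inventory -> per-post
# predictions -> relevance score -> sorted feed. All weights and signals
# are illustrative assumptions.
import time
from dataclasses import dataclass

FRIENDS = {"alice": {"bob", "carol"}}  # toy social graph

@dataclass
class Post:
    author: str
    kind: str              # "link", "photo", or "video"
    created_at: float      # unix timestamp (signal: how old the post is)
    clickbait_prob: float  # prediction that is the same for every viewer
    rated_false: bool      # rated false by a third-party fact checker

def predict_time_spent(viewer: str, post: Post) -> float:
    """Per-viewer prediction; a real system would use a learned model."""
    affinity = 1.0 if post.author in FRIENDS.get(viewer, set()) else 0.3
    freshness = max(0.0, 1.0 - (time.time() - post.created_at) / 86400.0)
    return affinity * freshness

def relevance_score(viewer: str, post: Post) -> float:
    """Combine predictions into one number per (viewer, post) pair."""
    score = predict_time_spent(viewer, post)
    score *= 1.0 - post.clickbait_prob  # demote likely clickbait
    if post.rated_false:
        score *= 0.2  # reduce distribution rather than remove outright
    return score

def rank_feed(viewer: str, inventory: list[Post]) -> list[Post]:
    """Order the whole inventory by relevance, highest first."""
    return sorted(inventory, key=lambda p: relevance_score(viewer, p),
                  reverse=True)

# Example: the ~3,000-post inventory goes in, ranked so the most
# relevant ~300 posts are the ones a person actually scrolls through.
inventory = [
    Post("bob", "photo", time.time() - 3600, 0.05, False),
    Post("somepage", "link", time.time() - 7200, 0.90, False),
    Post("carol", "video", time.time() - 600, 0.10, True),
]
for post in rank_feed("alice", inventory):
    print(post.author, round(relevance_score("alice", post), 3))
```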

We’re constantly conducting research with people who use Facebook to help inform the ways our technology works and understand what people do and don’t like to see in their News Feed. For example, earlier this year based on feedback from people we made significant changes to News Feed to prioritize content from friends and family over public content. The friends and family changes meant that pages of all kinds, from elected officials to new organizations, are seeing their traffic fall, whatever their political persuasion.

When it comes to news, we've heard two important pieces of feedback. First, people really don't want clickbait: posts with headlines designed to get you to click on a link that typically goes to a website with lots of ads, pop-ups, potentially fake news, and slow load times. In 2016, users said that clickbait was the biggest problem with News Feed, with 73% of people globally saying they were bothered when they saw clickbait in the News Feed.

Based on this feedback, we have implemented a series of changes designed to reduce the distribution of posts that contain clickbait or engagement bait, or that link out to websites covered in low-quality ads. These changes have impacted many publishers' distribution in News Feed. Second, we heard very clearly that people want to see news from trusted sources, so within the overall decline of public content, broadly trusted news organizations do get a boost in News Feed.

As we work to prioritize quality content, we know we're not always going to get this right. We do make mistakes, which is why we're constantly improving the signals we use to inform our technology. It's also why we give people several ways to control their News Feed experience. For example, the "See First" option in the News Feed preferences menu lets you choose the people and Pages whose posts appear at the top of your feed.

With that, I’ll turn it over to Katie to talk more about the work we’ve been doing on ads and encouraging civic engagement on Facebook.

Katie Harbath: Thanks, Greg. One new area where we're working with a variety of academics, journalists, and researchers is our political ad archive, and specifically the API we started testing last month. The goal of this new API effort is to make interacting with the archive and its contents easier and more useful for those interested.

We’re asking groups for their feedback on what information is most useful to their work and how to improve the tool. We also plan to introduce a weekly updated ad archive report that anyone can access. Already the archive has more than 1 million ads aimed at reaching US users since launching in May.

As we get closer to election day, I wanted to highlight another facet of our work you will start to see more of on Facebook, and that is the effort to encourage civic engagement and voter participation. Our nonpartisan work to help people learn about and engage with their representatives is constant.

At any time, you can find out when the next election is or the deadlines to register to vote. Before an election, we have tools that let you hear from candidates in their own words, learn more about who is on your ballot, and make a plan to vote. On election day, we let you tell your Facebook friends you voted. And after elections, we help connect people with their new representatives, allowing them to better communicate with and hold officials accountable.

Keep an eye out next Tuesday, National Voter Registration Day, when we will be continuing our efforts with our partner, TurboVote, to help people register to vote in the upcoming US midterm elections.

Lastly, I wanted to note two new partnerships we recently entered into. One is with the International Republican Institute and the other is with the National Democratic Institute. The goal is to draw on the expertise of these two organizations to further assist our election integrity efforts for elections around the globe.

Now, I’ll turn it over to our final speaker, Monica Guise Rosina, who will give an update on how our team in Brazil is preparing for their election.

Monica Guise Rosina: Thank you, Katie. I would like to walk you through our work around the upcoming presidential election in Brazil, which starts October 7. The presidential election in Brazil is a top priority for Facebook, and we have been working really, really hard to protect the polls and encourage civic participation in what is Latin America's largest democracy.

In numbers, we’re talking about approximately 150 million people eligible for voting in Brazil. This is the first time ever when candidates and parties are allowed to invest money on political content in online platforms. And in this context, we have adopted several measures to make sure we’re well-positioned. I’m going to quickly walk you through some of our work on four main fronts: enforcement, ads transparency, cooperation and news integrity.

Starting with enforcement, we are focused on removing bad actors and disrupting abusive behavior well ahead of the election. I can give you two recent examples. In July, we removed a network of 196 Pages and 87 accounts that violated Facebook's authenticity policies. This network was using fake accounts to sow division and share disinformation.

And a couple of weeks later, in August, we took down a network of 74 groups, 57 accounts, and five Pages that were enabling people to trade likes and reactions with the goal of falsely amplifying engagement for financial gain. This network was initially detected by our partners at the Atlantic Council's Digital Forensic Research Lab.

Our second front is ads transparency tools — something Katie has already covered in depth. As we prepare for the elections, we really wanted to give people more information about the content they see on Facebook. Brazil became the first country after the US to adopt Facebook's ads transparency tools, which help Brazilians identify political ads and see who's paying for them.

And the campaigns are well underway by now in Brazil. These tools have not only increased transparency, but they're also helping political parties in Brazil comply with strong new regulations.

A third pillar of our work focuses on cooperation. Since the beginning of the year, several people from different teams within the company have traveled the country to engage with electoral courts, campaign managers, and journalists — not only to explain how Facebook works, but also what we're doing to protect the election on our platform.

And to give you numbers again, this year alone we engaged with over 1,000 judges, prosecutors, and clerks across 14 different states, from Rio de Janeiro to the city of Manaus in the heart of the Amazon. Our goal here was to establish a positive dialogue with electoral authorities ahead of the elections by presenting practical aspects of our platform, answering questions, and letting them know about our election integrity efforts, such as the ads transparency initiatives that Katie and I mentioned.

And finally, news integrity. Fighting disinformation is another area in which we have been very active in Brazil. In 2018, we partnered with three well-respected fact-checkers in the country to help us identify false news so we can significantly reduce its distribution. All of these partners are certified by the nonpartisan International Fact-Checking Network at Poynter. In parallel, we have also supported a series of homegrown initiatives to promote news literacy.

These initiatives include, for example, a Messenger bot with tips to help people identify false news, and an online course, developed by a pool of academics at top universities in Brazil, aimed at identifying and fighting disinformation.

I hope that by bringing these practical examples of our work on the ground here in Brazil, I was able to show you how much effort we're putting into getting ready for the elections. And with that, back to Tom for the Q&A. Thank you.

Tom Reynolds: Great, thanks. Operator, we can open up to questions, please.

Operator: Your first question comes from Jo Ling Kent with NBC News. Please go ahead.

Jo Ling Kent: Hey, guys. Thanks so much for doing this call. My question is how much are nation states and non-state actors from Russia, Iran, North Korea and China attempting to influence Facebook users in these final two months ahead of the election? Have you gotten any complaints yet from individual campaigns and if so, which ones?

Samidh Chakrabarti: We’re continuing to monitor for bad actors all around the world no matter where they come from and that’s one of the reasons that we put a series of defenses in place to try to prevent this activity in the first place. But we need to continue to remain vigilant at all times because we know that these kinds of adversaries are very well-funded. They’re dedicated, they’re committed to this and they’re going to continue to innovate and so we will also continue to innovate here. It’s going to continuously be an arms race between us, and that’s why we need to try to be as vigilant and prepared as possible from threats no matter where they come from across the world.

Operator: Your next question comes from the line of Deepa Seetharaman with the Wall Street Journal. Please go ahead.

Deepa Seetharaman: Hi. Could you guys describe how you’re working with the other platforms — so, Twitter, YouTube, Reddit — and what specific steps you’re taking with the other companies as you guys all get ready for the midterms?

Samidh Chakrabarti: Great. So we've been working across the industry to share information and share leads where we can, because we think we're all stronger together. We're only one piece of the puzzle here, one part of the bigger picture, and being able to put our puzzle pieces together gives us all more information and makes all of us stronger, because we know we can't do it alone. So we've been building partnerships not just across industry, but also with civil society groups and governments around the world, to try to get as much information as possible. As one example, the partnership with the Atlantic Council's Digital Forensic Research Lab has allowed us to be a lot smarter and understand a lot more about the potential threats we see. I'm not sure if Katie wants to add more to that?

Katie Harbath: Yes, absolutely. I would just add to what Samidh said that these partnerships across industry, civil society, et cetera, are really important so that all of us can work together and share information as we go into not just the midterms, but Brazil and other elections around the globe. I would also point out that the other platforms have also launched ads transparency efforts, bringing an unprecedented amount of transparency to the advertising happening across our platforms in this election as well.

Tom Reynolds: Hey, Deepa. It's Tom. I just wanted to come back and get a little more granular on your specific question. As you know — and it's been reported — there's been a series of meetings among the tech companies over the past few months. I think both Sheryl and Jack Dorsey spoke to this at their hearings. And on a working level, all the teams involved in this at each of the companies have developed very good, trusting relationships, which has further helped facilitate that sharing of information if and when we need to.

Operator: Your next question comes from the line of Josh Constine from TechCrunch. Please go ahead.

Josh Constine: During the call, you discussed a few things: the weekly updated ad archive reports, the Brazilian Messenger bots that are spotting fake news, and the partnerships with the International Republican Institute and the National Democratic Institute. I wanted to see if you could describe those in a little more detail?

Katie Harbath: Yes, absolutely. For the ad report, what we'll be looking to do is provide more aggregation of some of the themes and trends we're seeing in political and issue advertising on our platform. With the International Republican Institute and the National Democratic Institute, I want to be clear: they don't do any work in US elections; they only work internationally. We'll be working with them to understand what they're seeing and hearing on the ground in elections, and how the Facebook platform and our family of apps might be used in an election, so that we can better understand the risks people may face and what we might be able to do to mitigate them.

And then I don’t know if Monica wants to talk about the Messenger bot in Brazil?

Monica Guise Rosina: Sure. Thanks, Katie. Basically, the Messenger bots are part of the partnership we have with two of the fact-checking agencies we're working with in Brazil. One bot is called Fátima (the name comes from "fact-checking"), and it chats with people through Messenger to assist them in the process of checking online content. The other bot is called Lupe, and it's based on [Agência] Lupa's database of false news around the elections; it will also answer questions from people on Messenger based on that database.
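As a rough illustration of the pattern behind bots like these, the sketch below matches an incoming message against a database of already-checked claims and replies with the verdict. The data, matching method, and threshold are all hypothetical; this is not the fact-checkers' actual implementation.

```python
# Hypothetical sketch of a fact-checking chat bot: match a user's message
# against a database of already-checked claims and reply with the verdict.
# The matching is deliberately naive; real bots use far richer retrieval.
from difflib import SequenceMatcher

FACT_CHECKS = {  # claim text -> published verdict (toy data)
    "you can vote by text message": "FALSE: voting by text is not possible",
    "polls close at 5pm nationwide": "FALSE: closing times vary by state",
}

def best_match(user_message: str) -> tuple:
    """Return (verdict, similarity) for the closest known claim."""
    scored = [
        (verdict, SequenceMatcher(None, user_message.lower(), claim).ratio())
        for claim, verdict in FACT_CHECKS.items()
    ]
    return max(scored, key=lambda pair: pair[1])

def reply_to(user_message: str) -> str:
    verdict, similarity = best_match(user_message)
    if similarity > 0.6:  # hypothetical confidence threshold
        return verdict
    return "We haven't checked that claim yet. Here are tips for spotting false news..."

print(reply_to("Is it true you can vote by text message?"))
```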

And I'd just like to highlight once more the project called Vaza, Falsiane!, a free online course that seeks to build news-reading skills, encourage critical analysis of information sources, and, at the end of the day, contribute to the quality of the debate on Facebook.

Operator: Your next question comes from the line of Courtney Norris from PBS NewsHour. Please go ahead.

Courtney Norris: Hi. Thank you guys so much for doing this. I just have two quick questions — more clarifications. Did you say you — between October and March — you removed 1.6 billion fake accounts? And on — you said that there would be additional security checks for people using Facebook affiliated with federal and state campaigns, and I’m curious if you could go into more detail about that — what those are?

Samidh Chakrabarti: Great. Just to correct that one number, it’s actually — it was 1.3 billion — almost 1.3 billion, not 1.6 billion…

Courtney Norris: … OK, great.

Samidh Chakrabarti: … so just a quick clarification there. And then in terms of your other question: yes, it's a pilot program we've launched. What we're really aiming to do is provide a higher level of security to campaigns and the people who staff them, at both the federal and state levels. It's still very early days — we just launched it this week — and we're eager to see how it does and to evaluate it going forward.

Katie Harbath: And it's Katie. Just to add to that, some examples of things we will be doing include helping to make sure they're adopting our strongest account security protections, like two-factor authentication, and monitoring for potential hacking threats.

Operator: Your next question comes from the line of Joe Menn from Reuters. Please go ahead.

Joe Menn: Hi. Thanks for doing this. The Atlantic Council put out something very recently looking at the most popular articles in Brazil that talk about corruption which is a major theme of the election there. And it says, of the four most popular articles, three were factually incorrect, and at least one of those may have been spread deliberately to sow division. Is that — is that winning? What are you doing — what does that say about the success of your efforts so far in Brazil?

Greg Marra: Thanks. This is Greg from the Feed team. To characterize our efforts against misinformation: we're excited to see the initial progress here, but we know we have more to do, and misinformation, like many of the other areas we work on, is constantly going to be an arms race. We see people who are both economically and politically motivated to spread misinformation, both in election contexts and outside of them.

In general, we try to both limit the distribution of factually incorrect content on Facebook, through our partnerships with third-party fact-checking organizations, and give people more context about the information they're seeing so that they can decide what to read, trust, and share. And while we know this is going to be an ongoing, constant battle, we're excited by some of the progress we've made, but we know there's a lot more to do here.

Operator: Your next question comes from the line of Ryan Broderick from BuzzFeed. Please go ahead.

Ryan Broderick: Hi, guys. Interesting stuff. I wanted to ask about the fake news efforts over the summer in Brazil. Based on the stuff that I’ve been reading getting ready for the election, it seems like a big problem with pushing some of these fake news networks off of Facebook is that they then go on to WhatsApp which is sort of an un-navigable black box, particularly for journalists and fact checkers.

Is there any sort of coordination between flagging what you're seeing on Facebook and making sure you know where it's going before it disappears into the encrypted swamp of WhatsApp?

Samidh Chakrabarti: I think an important thing to realize is that we're working on these efforts across the Facebook family of apps. For any questions specific to WhatsApp, I would refer you to their team.

Operator: Your next question comes from the line of David Ingram from NBC News. Please go ahead.

David Ingram: Hey, all. I was interested in something Samidh said at the beginning about gaming out worst-case scenarios — if you're in the last weeks before an election and you see a spike in voter suppression efforts, for example. I was wondering if you could give us some more detail about what those practice runs look like, and what you would do under the various scenarios you've gamed out?

Samidh Chakrabarti: Great, yes, happy to go over that. Let me start by saying that this won't be the first time we've had these efforts in place. We've been working on elections around the world for the last couple of years, since the 2016 US election, with our election integrity efforts. And with each country that's had an election, our defenses have gotten better and better.

For each of these elections, we convene a team of people from across the company to handle situations as they crop up close to election time, when every hour and every minute counts. So in some sense we've been doing this for a while and have had quite a bit of practice, and that's one of the reasons we feel a higher degree of preparedness this year than we did back in 2016.

On the specific scenario you mentioned around voter suppression content: we want to make sure people are not distributing content that gives incorrect information about the mechanics of voting. For example, a claim that you can vote by text message is not the kind of content we would want to see distributed on the platform.

And so those are the kinds of scenarios we're trying to protect against. We're also building a lot of proactive tools to make sure this content doesn't get distributed in the first place, so that we don't face a heat-of-the-moment scenario on the last day before an election. I think that is really one of the biggest changes in our approach relative to 2016: we're now being a lot more proactive in building systems that look for problems before they become big problems on our platform.

Operator: Your next question comes from the line of Salvador Rodriguez from CNBC. Please go ahead.

Salvador Rodriguez: Yes, hi there. I was hoping that Katie could elaborate on these new partnerships with the International Republican Institute and the National Democratic Institute. What exactly does that all entail, and how will that play into the mix of all this?

Katie Harbath: Yes, absolutely. I want to be clear that the partnerships with them are focused on our work on elections internationally, not in the United States, because they don't do work in the United States. As I mentioned before, they have a lot of experience working on elections in many countries around the globe, and they're doing a lot of work and thinking around election integrity and how we can better protect the integrity of elections on the platform.

So they're going to be a key partner for us in terms of that thought leadership: thinking about how we need to address these issues, making sure we're not just fighting the battles of 2016 but also looking around corners, and helping us understand the election integrity risks we may see in different countries.

We know the threats countries face around elections are often not unique, so having their expertise, built over many, many years of working in these places, will be helpful to us as we develop our strategies and continue to evolve them for elections around the globe.

Operator: Your next question comes from the line of Kevin Roose from New York Times. Please go ahead.

Kevin Roose: Hi, guys. The Knight First Amendment Institute and other journalism organizations have called for a safe harbor exception to Facebook's terms of service, which would allow them to conduct research on the platform beyond what the terms of service currently allow. Is that something you guys are open to or would consider?

Katie Harbath: Hey, it's Katie. One of the things we're proud of and working on very closely is our election research commission, trying to think about the best ways we can allow research on our platform while also protecting the privacy of our users. That's still in the beginning stages, and we're hoping to use it as a learning experience, along with the ad archive API we've opened up, where we're working with different groups and getting feedback. This is an ongoing process of trying to find ways to make sure folks can do research and understand the impact of Facebook on elections.

Tom Reynolds: Thanks. Operator, we’re going to have time for one more question.

Operator: Your last question comes from the line of Glenn Chapman from AFP. Please go ahead.

Glenn Chapman: Oh, hey, am I glad — I was sweating there; I thought I wouldn't slip in. All right, just one quick, practical question: when and where will this war room be up and running? That's the logistical side. And the other is: what challenges, if any, are there — while you're doing all these efforts related to politics and the election — to assuage or fend off accusations that there's a bias toward quieting certain political voices, most notably, in recent news coverage, conservative voices?

Samidh Chakrabarti: Great. I'll take your first question there. We're building this war room at our Menlo Park headquarters in California, and we've been building it for quite some time, first virtually and now physically in the real world. And on your second question, I'll pass it over to Katie.

Katie Harbath: Yes. Facebook was built to give people a voice regardless of their political beliefs. You choose who you're friends with, what you share, and the Pages you follow, and those choices are at the heart of everything you see on Facebook. Our goal is to help bring people together and to build a community for everyone.

Tom Reynolds: OK. We’re going to wrap up with that. Thanks, everybody, for joining. If you have any follow up questions, you can reach us at press@fb.com. Thanks again for joining and we’ll talk to you soon.
