By Mia Garlick, Director of Policy for Australia and New Zealand
At Facebook, we’re focused on protecting elections and making sure people have a voice in the political process. We’ve learned a lot from our work around the world over the past few years and have built a robust approach to safeguarding elections on Facebook. Our approach is multi-faceted and includes finding and removing fake accounts, reducing misinformation, disrupting bad actors and increasing ads transparency.
Today, we’re sharing more on our efforts to protect the upcoming Federal Election in Australia.
Restricting Foreign Electoral Ads
Combating foreign interference is a key pillar of our approach to safeguarding elections on our platform. As part of this commitment, we’re temporarily not allowing electoral ads purchased from outside Australia ahead of the election in May.
The restriction will take effect the day after the election is called and will apply to ads we determine to be coming from foreign entities that are of an electoral nature, meaning they contain references to politicians, parties or election suppression. We also won’t allow foreign ads that include political slogans and party logos.
Greater Transparency for All Ads
Last week, as part of our commitment to advertising transparency, we updated the Ad Library to make it easier to learn about all ads on Facebook and the Pages that run them. The Ad Library now includes all active ads any Page is running, along with more Page information such as creation date, name changes, Page merges and the primary country of people who manage Pages with large audiences. This information was previously only visible on a Page in the ‘Info and Ads’ section, but is now available to everyone through the Ad Library, including people who aren’t on Facebook.
Shining a brighter light on advertising and Pages makes both Facebook and advertisers more accountable, which is good for people and good for democracy.
Expanding Third Party Fact Checking to Australia
In many countries, we work with independent third-party fact checkers who review stories, check their facts and rate their accuracy. In Australia, we’ll launch third-party fact-checking in partnership with the international news agency Agence France-Presse (AFP) so we can continue to improve the accuracy of information on Facebook.
The independent fact-checkers we work with are certified through the non-partisan International Fact-Checking Network (IFCN). Today, we have 43 partners fact-checking content in 24 languages globally, and we’re investing in ways to scale these efforts further. (Update on June 13, 2019 at 5PM PT: AAP Fact Check has also joined our third-party fact-checking program to fact-check content in Australia.)
Once a story is rated as false, we show it lower in News Feed, reducing its future views by more than 80% on average.
Growing Our Capacity to Address Misinformation
We want to make sure people see accurate information about the upcoming election. That’s why we continue to improve the quality and authenticity of information on Facebook to fight false news. We do this in three main ways:
- We remove content that violates our Community Standards, which helps keep the platform safe and secure.
- For content that does not directly violate our Community Standards but still undermines the authenticity of the platform — like clickbait or sensational material — we reduce its distribution in News Feed so fewer people see it.
- We inform people by giving them more context on the information they see on Facebook. For instance, when someone comes across a story, they can tap on “About this article” to see more details on the article and the publisher.
Taking Action on Fake Accounts
Fake accounts are often behind harmful and misleading content and we work hard to keep them off Facebook. We block millions of fake accounts at registration every day. We also constantly improve our technical systems to make it easier to respond to reports of abuse; detect and remove spam; identify and eliminate fake accounts; and prevent accounts from being compromised. And we’ve made improvements to recognize these inauthentic accounts more easily by identifying patterns of activity, without assessing account content.
Globally, we took action on more than 1.5 billion fake accounts between April and September 2018, either during the registration process or within minutes of the account being created. We found 99.6% of these accounts using technology before anyone reported them to us.
Increasing our Safety and Security Efforts
We’re committed to tackling all kinds of inauthentic behavior and abuse on our platform, from misinformation, misrepresentation and foreign interference to phishing, harassment and violent threats. We know that all of these tactics can intensify during elections, which is why we invest in a combination of expert resources and technology to find, disrupt and remove this behavior.
We now have more than 30,000 people working on safety and security across Facebook, three times as many as we had in 2017. We have also improved our machine learning capabilities around political content and inauthentic behavior, which lets us better find and remove violating behavior. Globally, we have removed thousands of Pages, groups and accounts that engaged in coordinated inauthentic behavior across our platforms, including recent takedowns in Australia. We’re committed to making improvements and building stronger partnerships around the world to more effectively detect and stop this activity. We’re also providing safety and security guidance to help protect candidates and party Pages from hacking and impersonation.
Through this work, we want to make it harder to interfere with elections on the platform, and easier for people to make their voices legitimately heard in the political process. We have dedicated global teams working around the clock on every upcoming election around the world, including the Federal Election in Australia. For more on our work to protect elections around the world in 2019, see here.