Facebook Newsroom

Guest Post: Is Social Media Good or Bad For Democracy?

By Toomas Hendrik Ilves, Distinguished Visiting Fellow, the Hoover Institution
This post is part of a series on social media and democracy.

For some two centuries, the electoral process has developed alongside technology – radio and television, the introduction of voting machines, mechanical and electronic – but never has the impact been so dramatically disruptive as in the past decade, with the arrival of hacking, doxing, “fake news,” social media and big data.

Liberal democracies and the political process in Germany, France, the UK, the US, Spain and elsewhere have been subjected to a variety of distinct digital “attack vectors” in the past two years. Some are linked to Russia, others to one or another political party or group. These vectors comprise a range of disparate tactics, all lumped under the misleading and overused rubric of “election hacking.” Some but not all use social media, and some aspects of social media manipulation remain murky and poorly understood, mainly because until autumn 2017 social media companies were loath to reveal what they knew.

Hacking, or breaking into servers and computers, goes back at least to the early 1970s. Given its value as a tool of espionage, it was inevitable that political parties, parliaments and candidates would eventually be hacked too. Now, however, a more pernicious technique known as “doxing” – making private information public – has become part of the political process. First used on a wide scale by WikiLeaks to publicize stolen US State Department cables, doxing has become a tool of political campaigns.

In the recent US and French elections, doxing was used by one side to embarrass the other. Russian hackers breached both Republican and Democratic servers but released information only on the Democrats. In France, no emails from the Front National, the far-right French party, were doxed. In a new twist, some of the doxed emails from Macron’s servers were clearly faked, planted there to cause even more damage.

These are new techniques, at least at this massive scale of dissemination. Kompromat, the Russian term for compromising material, real or not, has been a staple of political action for centuries. Yet only with the advent of social media has kompromat found widespread distribution and, no less important, redistribution via shares.

Social acceptance of purloined correspondence is also changing. It is difficult to imagine that the media would have accepted or publicized physically stolen correspondence had the Watergate break-in of 1972 been successful. As the 2016 US election showed, publishing purloined digital correspondence created no ethical dilemmas, even for the New York Times.

“Is social media therefore good or bad for democracy?” Too many factors are at play — and too little is known about their impact — to answer this question fairly. Certainly the effect on electoral democracy has been profound. Moreover, the effects may be felt not in democratic elections themselves but in how governments react to perceived threats, that is, by imposing limits on free expression. It is imperative, however, that we explore the issue with honesty and candor.

‘Fake News’ or Disinformation

Until the digital era, the primary problem with “fake news,” or as it was called then, “disinformation,” was its dissemination. Editors took care that published information was reliable, fearing both libel laws and the loss of their publication’s reputation. If something was patently false, ridiculous or unverifiable, the broader public never saw it.

The classic example of a manufactured lie, the claim that the AIDS-causing HIV virus was developed by the CIA, took months to migrate to Europe from the story’s initial placement in a provincial Indian communist party paper. Even when the story eventually did reach the European press, it never gained traction, other than as an example of Soviet disinformation.

Today, it’s possible to create a fake news outlet, with a fake masthead in Gothic typeface, put it on Facebook, Vkontakte or Twitter and watch it take off. The public sees an article in something that looks like a news site. If they press the share button or retweet icon before detecting the fraud, it takes a fraction of a second before it’s off to friends and followers who may consider that share to be additional confirmation or approval.

An ambitious 2016 study by BuzzFeed, which examined the consumption of fake news shared on Facebook in the three months before the US election, found that the top-performing fake news stories generated 8.7 million shares, reactions and comments. That compared to 7.3 million for the top stories produced by major news outlets and shared on Facebook.

We can add to this a Pew study from last year, which found that two-thirds of Americans rely on social media for at least some of their news, and a more recent Dartmouth study showing that 27.4% of voting-age Americans visited a pro-Trump or pro-Clinton fake news site in the final weeks of the 2016 US election. We cannot say for certain that such sites altered anyone’s vote, but we must admit that false news on social media is now a fundamental input to voters’ decision-making.

The problem with drawing conclusions from these numbers is that it is extremely difficult to judge the actual impact of this massive disinformation effort. Research is relatively recent, and concern over the issue is new. The studies have been inconclusive, although it is clear that false stories are shared and retweeted on a large scale. Those who wish to downplay the impact on voters claim the BuzzFeed numbers and other studies do not prove there was an effect on the election; others are alarmed and are pushing for legislative or regulatory measures to limit “fake news.”

Electoral Democracy vs. Freedom of Expression

It is in this last tendency, the call to legislate against fake news, that the two pillars of liberal democracy – elections for the orderly transition of power and constitutionally guaranteed freedom of expression – increasingly come into conflict. This conflict is likely to become more serious in coming years. This past June, Germany passed a law (the Netzwerkdurchsetzungsgesetz, or Network Enforcement Act) mandating fines of up to 50 million euros ($59 million USD) for platforms that fail to take down hate speech or fake news within 24 hours of posting. Because of its own history with extremism, Germany has always been particularly strict on hate speech. Social media has made it more so.

The technical, jurisdictional and implementation problems with Germany’s approach (or similar approaches by other democratic countries) are legion. But there are even graver problems. Illiberal regimes typically cherry-pick and copy-paste sections of Western legislation to deflect criticism that their own laws are too heavy-handed. The Russian Duma, as is its wont, has already introduced a copycat of the German law, mandating removal of material deemed “illegal” within 24 hours.

Pressure to regulate fake news will increase. Some countries – the US, Estonia (consistently ranked No. 1 for internet freedom by Freedom House) and others – will probably resist. But it is not clear how long this will continue if governments see a threat to democracy, or even to the centrist parties currently in office. Germany, after all, was ranked fourth in internet freedom in 2016, just behind the US and Canada. Now, after September elections in which an extremist right-wing party, the AfD, gained 13% of the vote, social media is seen as a source of political upheaval.

In the absence of more self-policing by social media platforms, pressure to regulate “fake news” will not recede.

Technological Threats: Bots, Big Data and Targeted Dark Ads

“Fake news” is a concept easily grasped, and it has dominated politicians’ concerns. Yet a handful of newer “attack vectors” may prove more dangerous to democracy.

One of those vectors is “bots.” The Twittersphere especially has been deluged by bots – robot accounts tweeting and retweeting stories that are generally fake and often in the service of governments or extremist political groups trying to sway public opinion. NATO’s Strategic Communications Centre of Excellence, for example, recently reported that an astounding 84% of Russian-language Twitter messages about NATO’s presence in Eastern Europe were generated by bots. The assumption, of course, is that the more something is seen, the more likely it is to be believed.
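The behavioral signature of such accounts – machine-like posting volume made up almost entirely of retweets – is simple enough to sketch. The following toy Python heuristic is purely illustrative: the account data, names and thresholds are invented and bear no relation to any platform’s actual detection logic.

```python
# Toy heuristic for flagging likely bot accounts.
# Thresholds and account data are invented for illustration only.

def looks_automated(tweets_per_day, retweet_ratio,
                    max_tweets=144, max_ratio=0.9):
    """Flag accounts that post at machine-like volume or consist
    almost entirely of retweets of other accounts' content."""
    return tweets_per_day > max_tweets or retweet_ratio > max_ratio

accounts = [
    {"name": "news_fan_42",    "tweets_per_day": 12,   "retweet_ratio": 0.30},
    {"name": "nato_watch_bot", "tweets_per_day": 1700, "retweet_ratio": 0.98},
]

flagged = [a["name"] for a in accounts
           if looks_automated(a["tweets_per_day"], a["retweet_ratio"])]
print(flagged)  # → ['nato_watch_bot']
```

Real detection systems combine many more signals (account age, coordination across accounts, posting cadence), but the amplification logic the bots exploit is exactly this blunt: volume substitutes for credibility.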

Russian and “alt-right” bot accounts have set their sights on a variety of issues. The hashtag #Syriahoax appeared immediately after news of a Syrian chemical gas attack and at one point was being retweeted by a single source every five seconds. Automated bots and human-assisted accounts – from within both the US and Russia – attacked Republican Senator John McCain after he criticized President Trump’s response to the violent Charlottesville protests. Former FBI director James Comey and Senate Democratic leader Charles Schumer have also come under attack from Russia-based Twitter bots.

In my view, Twitter itself has not been particularly forthcoming in addressing these concerns. Again, as with news stories, the unanswered question remains their efficacy. While Twitter bots can attract a fair bit of attention in the Alt-Right press, we still cannot say how much they affect political discourse or the outcome of elections.

(Update on January 26, 2018: An earlier version of this piece inaccurately said Twitter no longer has an office in Germany. It also mischaracterized the company’s response to investigators seeking information.)

Big Data. An altogether different technological issue took hold during the 2016 US election: the use of “big data analytics,” primarily by the company Cambridge Analytica and its affiliates. Research by Michal Kosinski, then a PhD student at Cambridge University, demonstrated that Facebook likes are a highly useful input for assessing users’ personalities along the OCEAN model: openness, conscientiousness, extroversion, agreeableness, neuroticism. Cambridge Analytica initially said it used the OCEAN assessment – considered the best of its kind – in its work with both the Trump campaign and the Leave campaign in the run-up to the Brexit referendum in the UK. Its stance has since shifted considerably, from boasting to lying low.

Big data analytics can provide a granular view of voter concerns and political leanings, which in turn provides a new way to target voters in political campaigns.
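The underlying idea can be sketched as a simple linear scoring of binary “like” features. The page names and weights below are invented for illustration; real OCEAN models are fitted against survey responses from very large user samples, not hand-picked weights.

```python
# Sketch: scoring one OCEAN trait ("openness") from Facebook likes.
# Page names and weights are invented; a real model learns weights
# by regressing personality-survey scores on users' like vectors.

openness_weights = {
    "avant_garde_cinema": 0.8,
    "poetry_magazine":    0.6,
    "nascar_fanpage":    -0.4,
}

def openness_score(liked_pages):
    """Sum the weights of the pages a user has liked;
    unknown pages contribute nothing."""
    return sum(openness_weights.get(p, 0.0) for p in liked_pages)

user_likes = ["avant_garde_cinema", "nascar_fanpage", "local_bakery"]
print(round(openness_score(user_likes), 2))  # → 0.4
```

Aggregated over millions of users and all five traits, even a crude model like this yields the granular voter segments a campaign can then target.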

This is precisely what Cambridge Analytica originally claimed regarding both the Leave (or Brexit) campaign and the Trump campaign. Later, whether for legal, privacy or simply political reasons, Cambridge Analytica and its affiliates walked back the original claims. Currently it is not possible to tell which claims are true and which are not.

Dark Ads. Since the beginning of electoral politics, campaigns have relied on speeches, ads and commercials visible to the entire electorate. The new technology, with its finely granular approach, allowed campaigns for the first time to tailor ads to individual voters.

People by now are accustomed to seeing ads related to their previous internet searches; one could say this merely represents the extension of a new advertising technology to the political sphere…

Except… the targeted political ads of the US presidential election and the UK referendum were not public. Rather, they were “dark ads” (or, in Facebook’s language, “unpublished posts”), seen only by individual users and based on profiles gleaned from their internet use and other personal data. If highly granular voter profiling is an unfortunate but inevitable result of big data analytics, the lack of transparency in Facebook’s dark advertising represented a significant step away from the norms of the democratic process. Voters, journalists and other commentators did not know what messages were actually being sent out as voters went to the polls. When ads are public, they are open to criticism, as they have been throughout history. Yet some 80% of the Leave campaign’s advertising spending went to social media. In the case of the US election, even that figure is unknown, other than that the Trump campaign, by some accounts, spent roughly $70 million (USD) on Facebook.
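Mechanically, micro-targeting is just a lookup from an inferred voter profile to a tailored message, as this minimal sketch shows. All profiles, issue segments and ad copy here are invented; the point is that no two voters need ever see the same message, so no single public “campaign ad” exists to scrutinize.

```python
# Sketch of dark-ad micro-targeting: each voter is served only the ad
# variant keyed to their inferred top issue. All data here is invented.

voters = [
    {"id": 1, "age": 64, "top_issue": "pensions"},
    {"id": 2, "age": 24, "top_issue": "jobs"},
]

ad_variants = {
    "pensions": "Protect your retirement - vote for us",
    "jobs":     "More jobs for young people - vote for us",
}

# Map each voter to an individually tailored, otherwise unseen ad.
served = {v["id"]: ad_variants[v["top_issue"]] for v in voters}
print(served[1])  # → Protect your retirement - vote for us
```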

Facebook now says it will make all ads on its platform public, not just political ads, and is preparing to make that change. Unfortunately, it is unclear what impact these ads had on the political process before this change was announced.

Facebook initially maintained that its policy for political ads is the same as for commercial advertising and refused to publish the ads, their frequency, whom they targeted, or the audience size. Under pressure, it has now rethought that position. That’s a positive step for the democratic process. Voters and the press reporting on candidates are entitled to know the whole picture.

Quo Vadis?

With the dramatic convergence of social media and election technology, debate about these issues is outpacing our knowledge of what is taking place. Hampered by a dearth of research on the political effects of “fake news,” bots and dark ads, as well as by social media companies’ reluctance to disclose real data, political debates have been ad hoc, emotional and ill-informed.

In many ways it’s a race: will governments and parliaments react on too little information with legislation that encroaches on fundamental freedoms? Or will they wait for enough facts before enacting what seems to be the inevitable regulation of social media, beginning in Europe? How long will governments wait as they watch continued meddling in public discussion via social media? Not long, I suspect, as we have already seen in the case of Germany.

The power of social media today mirrors the power of companies during the Industrial Revolution – the railroads, energy and water companies that we know today as “utilities,” deemed so vital that they needed to be regulated. This may be the direction liberal democratic governments take with social media companies – deeming them too big, too powerful, and potentially too threatening for politicians to tolerate. Not only center-left politicians in “statist” Europe but also right-wing figures such as Steve Bannon speak of regulating social media as utilities. This has already become a major issue for Facebook, Twitter and other platforms in the liberal world. Elsewhere, where there is no electoral democracy, there is no debate.

Toomas Hendrik Ilves was President of Estonia from 2006 to 2016 and is now a distinguished visiting fellow at Stanford University’s Hoover Institution.