
Mark Zuckerberg Stands for Voice and Free Expression

Today, Mark Zuckerberg spoke at Georgetown University about the importance of protecting free expression. He underscored his belief that giving everyone a voice empowers the powerless and pushes society to be better over time — a belief that’s at the core of Facebook.

In front of hundreds of students at the school’s Gaston Hall, Mark warned that we’re increasingly seeing laws and regulations around the world that undermine free expression and human rights. He argued that in order to make sure people can continue to have a voice, we should: 1) write policy that helps the values of voice and expression triumph around the world, 2) fend off the urge to define speech we don’t like as dangerous, and 3) build new institutions so companies like Facebook aren’t making so many important decisions about speech on our own. 

Read Mark’s full speech below.

Standing For Voice and Free Expression

Hey everyone. It’s great to be here at Georgetown with all of you today.

Before we get started, I want to acknowledge that today we lost an icon, Elijah Cummings. He was a powerful voice for equality, social progress and bringing people together.

When I was in college, our country had just gone to war in Iraq. The mood on campus was disbelief. It felt like we were acting without hearing a lot of important perspectives. The toll on soldiers, families and our national psyche was severe, and most of us felt powerless to stop it. I remember feeling that if more people had a voice to share their experiences, maybe things would have gone differently. Those early years shaped my belief that giving everyone a voice empowers the powerless and pushes society to be better over time.

Back then, I was building an early version of Facebook for my community, and I got to see my beliefs play out at smaller scale. When students got to express who they were and what mattered to them, they organized more social events, started more businesses, and even challenged some established ways of doing things on campus. It taught me that while the world’s attention focuses on major events and institutions, the bigger story is that most progress in our lives comes from regular people having more of a voice.

Since then, I’ve focused on building services to do two things: give people voice, and bring people together. These two simple ideas — voice and inclusion — go hand in hand. We’ve seen this throughout history, even if it doesn’t feel that way today. More people being able to share their perspectives has always been necessary to build a more inclusive society. And our mutual commitment to each other — that we hold each other’s right to express our views and be heard above our own desire to always get the outcomes we want — is how we make progress together.

But this view is increasingly being challenged. Some people believe giving more people a voice is driving division rather than bringing us together. More people across the spectrum believe that achieving the political outcomes they think matter is more important than every person having a voice. I think that’s dangerous. Today I want to talk about why, and some important choices we face around free expression.

Throughout history, we’ve seen how being able to use your voice helps people come together. We’ve seen this in the civil rights movement. Frederick Douglass once called free expression “the great moral renovator of society”. He said “slavery cannot tolerate free speech”. Civil rights leaders argued time and again that their protests were protected free expression, and one noted: “nearly all the cases involving the civil rights movement were decided on First Amendment grounds”.

We’ve seen this globally too, where the ability to speak freely has been central in the fight for democracy worldwide. The most repressive societies have always restricted speech the most — and when people are finally able to speak, they often call for change. This year alone, people have used their voices to end multiple long-running dictatorships in Northern Africa. And we’re already hearing from voices in those countries that had been excluded just because they were women or because they believed in democracy.

Our idea of free expression has become much broader over even the last 100 years. Many Americans know about the Enlightenment history and how we enshrined the First Amendment in our constitution, but fewer know how dramatically our cultural norms and legal protections have expanded, even in recent history.

The first Supreme Court case to seriously consider free speech and the First Amendment was in 1919, Schenck vs the United States. Back then, the First Amendment only applied to the federal government, and states could and often did restrict your right to speak. Our ability to call out things we felt were wrong also used to be much more restricted. Libel laws used to impose damages if you wrote something negative about someone, even if it was true. The standard later shifted so it became okay as long as you could prove your critique was true. We didn’t get the broad free speech protections we have now until the 1960s, when the Supreme Court ruled in opinions like New York Times vs Sullivan that you can criticize public figures as long as you’re not doing so with actual malice, even if what you’re saying is false.

We now have significantly broader power to call out things we feel are unjust and share our own personal experiences. Movements like #BlackLivesMatter and #MeToo went viral on Facebook — the hashtag #BlackLivesMatter was actually first used on Facebook — and this just wouldn’t have been possible in the same way before. 100 years back, many of the stories people have shared would have been against the law to even write down. And without the internet giving people the power to share them directly, they certainly wouldn’t have reached as many people. With Facebook, more than 2 billion people now have a greater opportunity to express themselves and help others.

While it’s easy to focus on major social movements, it’s important to remember that most progress happens in our everyday lives. It’s the Air Force moms who started a Facebook group so their children and other service members who can’t get home for the holidays have a place to go. It’s the church group that came together during a hurricane to provide food and volunteer to help with recovery. It’s the small business on the corner that now has access to the same sophisticated tools only the big guys used to, and now they can get their voice out and reach more customers, create jobs and become a hub in their local community. Progress and social cohesion come from billions of stories like this around the world.

People having the power to express themselves at scale is a new kind of force in the world — a Fifth Estate alongside the other power structures of society. People no longer have to rely on traditional gatekeepers in politics or media to make their voices heard, and that has important consequences. I understand the concerns about how tech platforms have centralized power, but I actually believe the much bigger story is how much these platforms have decentralized power by putting it directly into people’s hands. It’s part of this amazing expansion of voice through law, culture and technology.

So giving people a voice and broader inclusion go hand in hand, and the trend has been towards greater voice over time. But there’s also a counter-trend. In times of social turmoil, our impulse is often to pull back on free expression. We want the progress that comes from free expression, but not the tension.

We saw this when Martin Luther King Jr. wrote his famous letter from Birmingham Jail, where he was unconstitutionally jailed for protesting peacefully. We saw this in the efforts to shut down campus protests against the Vietnam War. We saw this way back when America was deeply polarized about its role in World War I, and the Supreme Court ruled that socialist leader Eugene Debs could be imprisoned for making an anti-war speech.

In the end, all of these decisions were wrong. Pulling back on free expression wasn’t the answer and, in fact, it often ended up hurting the minority views we seek to protect. From where we are now, it seems obvious that, of course, protests for civil rights or against wars should be allowed. Yet the desire to suppress this expression was felt deeply by much of society at the time.

Today, we are in another time of social tension. We face real issues that will take a long time to work through — massive economic transitions from globalization and technology, fallout from the 2008 financial crisis, and polarized reactions to greater migration. Many of our issues flow from these changes.

In the face of these tensions, once again a popular impulse is to pull back from free expression. We’re at another crossroads. We can continue to stand for free expression, understanding its messiness, but believing that the long journey towards greater progress requires confronting ideas that challenge us. Or we can decide the cost is simply too great. I’m here today because I believe we must continue to stand for free expression.

At the same time, I know that free expression has never been absolute. Some people argue internet platforms should allow all expression protected by the First Amendment, even though the First Amendment explicitly doesn’t apply to companies. I’m proud that our values at Facebook are inspired by the American tradition, which is more supportive of free expression than anywhere else. But even American tradition recognizes that some speech infringes on others’ rights. And still, a strict First Amendment standard might require us to allow terrorist propaganda, the bullying of young people, and other content that almost everyone agrees we should stop — and I certainly do — as well as content like pornography that would make people uncomfortable using our platforms.

So once we’re taking this content down, the question is: where do you draw the line? Most people agree with the principles that you should be able to say things other people don’t like, but you shouldn’t be able to say things that put people in danger. The shift over the past several years is that many people would now argue that more speech is dangerous than they would have before. This raises the question of exactly what counts as dangerous speech online. It’s worth examining this in detail.

Many arguments about online speech are related to new properties of the internet itself. If you believe the internet is completely different from everything before it, then it doesn’t make sense to focus on historical precedent. But we should be careful of overly broad arguments since they’ve been made about almost every new technology, from the printing press to radio to TV. Instead, let’s consider the specific ways the internet is different and how internet services like ours might address those risks while protecting free expression.

One clear difference is that a lot more people now have a voice — almost half the world. That’s dramatically empowering for all the reasons I’ve mentioned. But inevitably some people will use their voice to organize violence, undermine elections or hurt others, and we have a responsibility to address these risks. When you’re serving billions of people, even if a very small percent cause harm, that can still be a lot of harm.

We build specific systems to address each type of harmful content — from incitement of violence to child exploitation to other harms like intellectual property violations — about 20 categories in total. We judge ourselves by the prevalence of harmful content and what percent we find proactively before anyone reports it to us. For example, our AI systems identify 99% of the terrorist content we take down before anyone even sees it. This is a massive investment. We now have over 35,000 people working on security, and our security budget today is greater than the entire revenue of our company at the time of our IPO earlier this decade.
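
To make those two measurements concrete, here is a minimal illustrative sketch in Python. This is not Facebook’s actual measurement pipeline; the Removal record, the proactive_rate and prevalence functions, and the toy numbers are all hypothetical, chosen only to show how “prevalence” and “percent found proactively” could be computed from moderation logs.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Removal:
    """One piece of content taken down (hypothetical log record, for illustration only)."""
    category: str            # e.g. "terrorism", "child_exploitation", "ip_violation"
    found_proactively: bool  # True if flagged by automated systems before any user report

def proactive_rate(removals: List[Removal], category: str) -> float:
    """Share of removals in a category that were found before anyone reported them."""
    in_category = [r for r in removals if r.category == category]
    if not in_category:
        return 0.0
    return sum(r.found_proactively for r in in_category) / len(in_category)

def prevalence(harmful_views: int, total_views: int) -> float:
    """Estimated fraction of all content views that were views of violating content."""
    return harmful_views / total_views if total_views else 0.0

# Toy numbers, purely to illustrate the two metrics described in the speech.
log = [Removal("terrorism", True)] * 99 + [Removal("terrorism", False)]
print(f"Proactive detection rate (terrorism): {proactive_rate(log, 'terrorism'):.0%}")  # 99%
print(f"Prevalence: {prevalence(harmful_views=25, total_views=1_000_000):.4%}")         # 0.0025%
```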

All of this work is about enforcing our existing policies, not broadening our definition of what is dangerous. If we do this well, we should be able to stop a lot of harm while fighting back against putting additional restrictions on speech.

Another important difference is how quickly ideas can spread online. Most people can now get much more reach than they ever could before. This is at the heart of a lot of the positive uses of the internet. It’s empowering that anyone can start a fundraiser, share an idea, build a business, or create a movement that can grow quickly. But we’ve seen this go the other way too — most notably when Russia’s Internet Research Agency (IRA) tried to interfere in the 2016 elections, but also when misinformation has gone viral. Some people argue that virality itself is dangerous, and we need tighter filters on what content can spread quickly.

For misinformation, we focus on making sure complete hoaxes don’t go viral. We especially focus on misinformation that could lead to imminent physical harm, like misleading health advice claiming that if you’re having a stroke, there’s no need to go to the hospital.

More broadly though, we’ve found a different strategy works best: focusing on the authenticity of the speaker rather than the content itself. Much of the content the Russian accounts shared was distasteful but would have been considered permissible political discourse if it were shared by Americans — the real issue was that it was posted by fake accounts coordinating together and pretending to be someone else. We’ve seen a similar issue with these groups that pump out misinformation like spam just to make money.

The solution is to verify the identities of accounts getting wide distribution and get better at removing fake accounts. We now require you to provide a government ID and prove your location if you want to run political ads or a large page. You can still say controversial things, but you have to stand behind them with your real identity and face accountability. Our AI systems have also gotten more advanced at detecting clusters of fake accounts that aren’t behaving like humans. We now remove billions of fake accounts a year — most within minutes of registering and before they do much. Focusing on authenticity and verifying accounts is a much better solution than an ever-expanding definition of what speech is harmful.

Another qualitative difference is the internet lets people form communities that wouldn’t have been possible before. This is good because it helps people find groups where they belong and share interests. But the flip side is this has the potential to lead to polarization. I care a lot about this — after all, our goal is to bring people together.

Much of the research I’ve seen is mixed and suggests the internet could actually decrease aspects of polarization. The most polarized voters in the last presidential election were the people least likely to use the internet. Research from the Reuters Institute also shows people who get their news online actually have a much more diverse media diet than people who don’t, and they’re exposed to a broader range of viewpoints. This is because most people watch only a couple of cable news stations or read only a couple of newspapers, but even if most of your friends online have similar views, you usually have some that are different, and you get exposed to different perspectives through them. Still, we have an important role in designing our systems to show a diversity of ideas and not encourage polarizing content.

One last difference with the internet is it lets people share things that would have been impossible before. Take live-streaming, for example. This allows families to be together for moments like birthdays and even weddings, schoolteachers to read bedtime stories to kids who might not be read to, and people to witness some very important events. But we’ve also seen people broadcast self-harm, suicide, and terrible violence. These are new challenges and our responsibility is to build systems that can respond quickly.

We’re particularly focused on well-being, especially for young people. We built a team of thousands of people and AI systems that can detect risks of self-harm within minutes so we can reach out when people need help most. In the last year, we’ve helped first responders reach people who needed help thousands of times.

For each of these issues, I believe we have two responsibilities: to remove content when it could cause real danger as effectively as we can, and to fight to uphold as wide a definition of freedom of expression as possible — and not allow the definition of what is considered dangerous to expand beyond what is absolutely necessary. That’s what I’m committed to.

But beyond these new properties of the internet, there are also shifting cultural sensitivities and diverging views on what people consider dangerous content.

Take misinformation. No one tells us they want to see misinformation. That’s why we work with independent fact checkers to stop hoaxes that are going viral from spreading. But misinformation is a pretty broad category. A lot of people like satire, which isn’t necessarily true. A lot of people talk about their experiences through stories that may be exaggerated or have inaccuracies, but speak to a deeper truth in their lived experience. We need to be careful about restricting that. Even when there is a common set of facts, different media outlets tell very different stories emphasizing different angles. There’s a lot of nuance here. And while I worry about an erosion of truth, I don’t think most people want to live in a world where you can only post things that tech companies judge to be 100% true.

We recently clarified our policies to ensure people can see primary source speech from political figures that shapes civic discourse. Political advertising is more transparent on Facebook than anywhere else — we keep all political and issue ads in an archive so everyone can scrutinize them, and no TV or print does that. We don’t fact-check political ads. We don’t do this to help politicians, but because we think people should be able to see for themselves what politicians are saying. And if content is newsworthy, we also won’t take it down even if it would otherwise conflict with many of our standards.

I know many people disagree, but, in general, I don’t think it’s right for a private company to censor politicians or the news in a democracy. And we’re not an outlier here. The other major internet platforms and the vast majority of media also run these same ads.

American tradition also has some precedent here. The Supreme Court case I mentioned earlier that gave us our current broad speech rights, New York Times vs Sullivan, was actually about an ad with misinformation, supporting Martin Luther King Jr. and criticizing an Alabama police department. The police commissioner sued the Times for running the ad, the jury in Alabama found against the Times, and the Supreme Court unanimously reversed the decision, creating today’s speech standard.

As a principle, in a democracy, I believe people should decide what is credible, not tech companies. Of course there are exceptions, and even for politicians we don’t allow content that incites violence or risks imminent harm — and of course we don’t allow voter suppression. Voting is voice. Fighting voter suppression may be as important for the civil rights movement as free expression has been. Just as we’re inspired by the First Amendment, we’re inspired by the 15th Amendment too.

Given the sensitivity around political ads, I’ve considered whether we should stop allowing them altogether. From a business perspective, the controversy certainly isn’t worth the small part of our business they make up. But political ads are an important part of voice — especially for local candidates, up-and-coming challengers, and advocacy groups that may not get much media attention otherwise. Banning political ads favors incumbents and whoever the media covers.

Even if we wanted to ban political ads, it’s not clear where we’d draw the line. There are many more ads about issues than there are directly about elections. Would we ban all ads about healthcare or immigration or women’s empowerment? If we banned candidates’ ads but not issue ads, would it really make sense to give everyone else a voice in political debates except the candidates themselves? There are issues any way you cut this, and when it’s not absolutely clear what to do, I believe we should err on the side of greater expression.

Or take hate speech, which we define as someone directly attacking a person or group based on a characteristic like race, gender or religion. We take down content that could lead to real world violence. In countries at risk of conflict, that includes anything that could lead to imminent violence or genocide. And we know from history that dehumanizing people is the first step towards inciting violence. If you say immigrants are vermin, or all Muslims are terrorists — that makes others feel they can escalate and attack that group without consequences. So we don’t allow that. I take this incredibly seriously, and we work hard to get this off our platform.

American free speech tradition recognizes that some speech can have the effect of restricting others’ right to speak. While American law doesn’t recognize “hate speech” as a category, it does prohibit racial harassment and sexual harassment. We still have a strong culture of free expression even while our laws prohibit discrimination.

But still, people have broad disagreements over what qualifies as hate and shouldn’t be allowed. Some people think our policies don’t prohibit content they think qualifies as hate, while others think what we take down should be a protected form of expression. This area is one of the hardest to get right.

I believe people should be able to use our services to discuss issues they feel strongly about — from religion and immigration to foreign policy and crime. You should even be able to be critical of groups without dehumanizing them. But even this isn’t always straightforward to judge at scale, and it often leads to enforcement mistakes. Is someone re-posting a video of a racist attack because they’re condemning it, or glorifying and encouraging people to copy it? Are they using normal slang, or using an innocent word in a new way to incite violence? Now multiply those linguistic challenges by more than 100 languages around the world.

Rules about what you can and can’t say often have unintended consequences. When speech restrictions were implemented in the UK in the last century, Parliament noted they were applied more heavily to citizens from poorer backgrounds because the way they expressed things didn’t match the elite Oxbridge style. In everything we do, we need to make sure we’re empowering people, not simply reinforcing existing institutions and power structures.

That brings us back to the crossroads we all find ourselves at today. Will we continue fighting to give more people a voice to be heard, or will we pull back from free expression?

I see three major threats ahead:

The first is legal. We’re increasingly seeing laws and regulations around the world that undermine free expression and people’s human rights. These local laws are each individually troubling, especially when they shut down speech in places where there isn’t democracy or freedom of the press. But it’s even worse when countries try to impose their speech restrictions on the rest of the world.

This raises a larger question about the future of the global internet. China is building its own internet focused on very different values, and is now exporting their vision of the internet to other countries. Until recently, the internet in almost every country outside China has been defined by American platforms with strong free expression values. There’s no guarantee these values will win out. A decade ago, almost all of the major internet platforms were American. Today, six of the top ten are Chinese.

We’re beginning to see this in social media. While our services, like WhatsApp, are used by protesters and activists everywhere due to strong encryption and privacy protections, on TikTok, the Chinese app growing quickly around the world, mentions of these protests are censored, even in the US.

Is that the internet we want?

It’s one of the reasons we don’t operate Facebook, Instagram or our other services in China. I wanted our services in China because I believe in connecting the whole world and I thought we might help create a more open society. I worked hard to make this happen. But we could never come to agreement on what it would take for us to operate there, and they never let us in. And now we have more freedom to speak out and stand up for the values we believe in and fight for free expression around the world.

This question of which nation’s values will determine what speech is allowed for decades to come really puts into perspective our debates about the content issues of the day. While we may disagree on exactly where to draw the line on specific issues, we at least can disagree. That’s what free expression is. And the fact that we can even have this conversation means that we’re at least debating from some common values. If another nation’s platforms set the rules, our discourse will be defined by a completely different set of values.

To push back against this, as we all work to define internet policy and regulation to address public safety, we should also be proactive and write policy that helps the values of voice and expression triumph around the world.

The second challenge to expression is the platforms themselves — including us. Because the reality is we make a lot of decisions that affect people’s ability to speak.

I’m committed to the values we’re discussing today, but we won’t always get it right. I understand people are concerned that we have so much control over how they communicate on our services. And I understand people are concerned about bias and making sure their ideas are treated fairly. Frankly, I don’t think we should be making so many important decisions about speech on our own either. We’d benefit from a more democratic process, clearer rules for the internet, and new institutions.

That’s why we’re establishing an independent Oversight Board for people to appeal our content decisions. The board will have the power to make final binding decisions about whether content stays up or comes down on our services — decisions that our team and I can’t overturn. We’re going to appoint members to this board who have a diversity of views and backgrounds, but who each hold free expression as their paramount value.

Building this institution is important to me personally because I’m not always going to be here, and I want to ensure the values of voice and free expression are enshrined deeply into how this company is governed.

The third challenge to expression is the hardest because it comes from our culture. We’re at a moment of particular tension here and around the world — and we’re seeing the impulse to restrict speech and enforce new norms around what people can say.

Increasingly, we’re seeing people try to define more speech as dangerous because it may lead to political outcomes they see as unacceptable. Some hold the view that since the stakes are so high, they can no longer trust their fellow citizens with the power to communicate and decide what to believe for themselves.

I personally believe this is more dangerous for democracy over the long term than almost any speech. Democracy depends on the idea that we hold each other’s right to express ourselves and be heard above our own desire to always get the outcomes we want. You can’t impose tolerance top-down. It has to come from people opening up, sharing experiences, and developing a shared story for society that we all feel we’re a part of. That’s how we make progress together.

So how do we turn the tide? Someone once told me our founding fathers thought free expression was like air. You don’t miss it until it’s gone. When people don’t feel they can express themselves, they lose faith in democracy and they’re more likely to support populist parties that prioritize specific policy goals over the health of our democratic norms.

I’m a little more optimistic. I don’t think we need to lose our freedom of expression to realize how important it is. I think people understand and appreciate the voice they have now. At some fundamental level, I think most people believe in their fellow people too.

As long as our governments respect people’s right to express themselves, as long as our platforms live up to their responsibilities to support expression and prevent harm, and as long as we all commit to being open and making space for more perspectives, I think we’ll make progress. It’ll take time, but we’ll work through this moment. We overcame deep polarization after World War I, and intense political violence in the 1960s. Progress isn’t linear. Sometimes we take two steps forward and one step back. But if we can’t agree to let each other talk about the issues, we can’t take the first step. Even when it’s hard, this is how we build a shared understanding.

So yes, we have big disagreements. Maybe more now than at any time in recent history. But part of that is because we’re getting our issues out on the table — issues that for a long time weren’t talked about. More people from more parts of our society have a voice than ever before, and it will take time to hear these voices and knit them together into a coherent narrative. Sometimes we hope for a singular event to resolve these conflicts, but that’s never been how it works. We focus on the major institutions — from governments to large companies — but the bigger story has always been regular people using their voice to take billions of individual steps forward to make our lives and our communities better.

The future depends on all of us. Whether you like Facebook or not, we need to recognize what is at stake and come together to stand for free expression at this critical moment.

I believe in giving people a voice because, at the end of the day, I believe in people. And as long as enough of us keep fighting for this, I believe that more people’s voices will eventually help us work through these issues together and write a new chapter in our history — where from all of our individual voices and perspectives, we can bring the world closer together.