Hate speech

Protected: The US Supreme Court has repeatedly ruled that hate speech is protected as free speech under the First Amendment.

Facebook founder Mark Zuckerberg is calling on governments to help social media companies tackle hate speech, protect privacy and safeguard democracy online. What does he want exactly? And why now?

  • What has Mark Zuckerberg said?

    He has written an opinion piece in The Washington Post (which will be published in European newspapers this week) arguing that governments should have a more “active role” in regulating the internet.

    Every day, he wrote, companies like Facebook make decisions about what should be allowed to appear on their site. “If we were starting from scratch, we wouldn’t ask companies to make these judgments alone.”

  • What does he want?

    Specifically, he wants more laws in four areas: harmful content, election integrity, privacy and data portability.

    He wants common rules on what counts as harmful content that all social networks can follow, rather than leaving it up to companies to decide for themselves.

    He also wants standard rules for what is allowed during election campaigns, and around other “divisive political issues”.

    Finally, he called for more countries to adopt strong privacy laws like Europe’s GDPR, and clear rules that allow people to move their data from one site to another.

  • Why now?

    Trust in social media companies has taken a serious hit in the last few years.

    Firstly, US intelligence agencies confirmed that Russia used fake news and social media to attempt to sway the 2016 election in Donald Trump’s favour.

    Last year the Cambridge Analytica scandal broke, exposing Facebook’s lax data security and the ways our profiles can be used to manipulate us during elections.

    Then last month a man used Facebook to live-stream his massacre of 50 Muslims in Christchurch, New Zealand. Within 24 hours, Facebook had removed 1.5 million copies of the video. On YouTube, one copy was uploaded every second.

  • How was that allowed to happen?

    To understand the internet we have today, you have to wind the clock back more than 20 years. In the 1990s — several years before social media arrived — it wasn’t clear who would be responsible for what users posted online.

    Then in 1996, US Congress approved the Communications Decency Act. It included a 26-word clause known as Section 230, which changed the internet forever:

    “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

  • What does that mean?

    Today, this means that websites like Facebook are not legally responsible for the things their users post. Similarly, Amazon is not responsible for the things that third parties sell on its platform, and Google is not responsible for what appears in its search results. Without Section 230, the internet would probably look very different.

    The same clause also allows tech companies to remove content that they deem to be “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable”. Crucially, moderating content in this way does not make them publishers in the way that newspapers are, so they cannot be held responsible for users’ content in court.

  • Can we change those laws?

    With US law so permissive, the European Union has begun imposing stricter rules in an attempt to regulate social media companies. In 2016, major social networks agreed to follow an EU “code of conduct” on hate speech. You can see the results in the chart at the top of this article. After Christchurch, New Zealand’s prime minister argued that social networks should be considered “the publisher not just the postman”.

    But is this a good idea? It would be impossible for Facebook and YouTube to review every post in real time. If they were held legally responsible for their users’ content, it could mean the end of free speech on social media as we know it. Would it be worth it?

You Decide

  1. Should social media companies be legally responsible for the things their users post?


  2. Imagine you are updating Section 230 for the internet as we know it today. You have one sentence to explain the responsibilities of technology companies when it comes to the content which is posted on their site. What would you write?

Word Watch

GDPR
General Data Protection Regulation. This was a sweeping law which came into force last year. It strengthened online privacy and data protections in Europe, requiring companies to get explicit consent from customers to use their data. (You may have noticed that many websites now ask if you will accept their use of cookies; this is due to GDPR.) It also makes it easier for customers to withdraw that consent, and to see the personal data that a company holds about them.
Move their data
This is known as “data portability”. It means that a company can collect all the data it has about a person, and send it to them in an easy format that can then be passed on elsewhere.
Cambridge Analytica
A British data analytics company. It used data improperly harvested from 87 million Facebook profiles to create highly personalised political adverts and target them at voters during the 2016 US election and the Brexit referendum. The company has now shut down.
Christchurch
A suspect shot and killed 50 people in two mosques in the city on March 15. He has now been arrested and will face trial in New Zealand.
