1 March 2021

Social Media and Online Speech: How Should Countries Regulate Tech Giants?

By Anshu Siripurapu and William Merrow

Social media has been blamed for spreading disinformation and contributing to violence around the world. What are companies and governments doing about it?

The role of social media and online speech in civil society has come under heightened scrutiny. The deadly riot at the U.S. Capitol on January 6 is just one example of violence that national security experts say was fomented in large part on social media platforms. Elsewhere in the world, social media has contributed to religious and ethnic violence, including against Muslims in India and Rohingya in Myanmar. Harmful misinformation, including about the COVID-19 pandemic, has also spread with ease and speed.

Platforms such as Facebook and Twitter have become the de facto public squares in many countries, and governments are adopting varying approaches to regulating them.
How do the major platforms regulate content?

The most popular platforms, most of which are run by U.S. companies, have similar content moderation policies. They bar posts that glorify or encourage violence; posts that are sexually explicit; and posts that contain hate speech, which they define as attacking a person for their race, gender, or sexual orientation, among other characteristics. The major platforms have also taken steps to limit disinformation, including by fact-checking posts, labeling the accounts of state-run media, and banning political ads.

[Chart: Facebook and YouTube Are Most Popular Social Platforms — active user accounts for selected social platforms, January 2021 or most recent available]
These platforms generally comply with the laws of the countries where they operate, which can restrict speech even further. In addition to using moderation software powered by artificial intelligence, Facebook, Twitter, and YouTube (which is owned by Google) employ thousands of people to screen posts for violations.

[Chart: Social Media Platforms Block Millions of Pieces of Harmful Content — content removed or subject to other action in the first half of 2020]
What are some of the controversies?

Critics say these platforms do not enforce their rules consistently. For example, both Twitter and Facebook have allowed accounts they say serve the public interest—most notably those of politicians such as former U.S. President Donald J. Trump—to post abusive or misleading content that might have been removed if it were posted by an ordinary user.
15,000: the number of moderators Facebook employs to screen content on its services (source: NYU Stern Center for Business and Human Rights).

In Trump’s case, the companies instead appended fact checks to some of his posts, a step that some experts who track social media and misinformation criticized as insufficient. Both platforms eventually banned Trump following the U.S. Capitol riot, but both have faced criticism for not taking similar actions abroad. YouTube has also come under fire for allegedly treating its star users, who bring in more revenue, more leniently, and for not quickly removing videos with false claims of U.S. election fraud and other misinformation.

Critics say the companies are not incentivized to regulate hateful or violent speech because their ad-driven business models rely on keeping users engaged. At the same time, politicians in some countries, including the United States, argue that social media companies have gone too far with moderation, at the expense of free speech.


For their part, social media companies have argued that their policies are difficult to enforce. It can be tricky at times to distinguish hate speech from satire or commentary, for example. Some companies say the onus should not be on them to write the rules for the internet and have called for government regulation.
How are governments around the world approaching the issue?

Countries Regulate Social Media Differently
Regulations in selected countries, grouped by level of internet freedom according to Freedom House


Where the internet is free or partly free

Germany

A law known as NetzDG requires social media companies to quickly take down “manifestly illegal” content, including hate speech, or face large fines.

United States

Social media companies enjoy strong liability protections and are largely self-regulating. There are growing calls for government regulation.

India

Social media platforms are generally exempt from liability but can be compelled to remove content. Internet shutdowns are frequent, and the government is moving to assert greater control over some platforms.

Kenya

Social media platforms are fully accessible, but the government has at times asked companies to remove content it deemed objectionable, including music videos.

Brazil

Social media platforms are freely available. However, the legislature is weighing a bill that rights groups fear would undermine freedom of expression.

Australia

Platforms are required to quickly remove “abhorrent violent material” or face large fines. The law was passed after the Christchurch terrorist attack, which was livestreamed on Facebook.

Where the internet is not free

Russia

Online media is monitored by a government watchdog and routinely restricted. The legislature has moved to block U.S. platforms for allegedly censoring Russian state media.

Saudi Arabia

Though censorship is extensive, social media platforms operate relatively freely. The monarchy has been accused of manipulating online discourse.

China

China has some of the most restrictive censorship laws in the world. Many Western platforms are banned, and their Chinese equivalents are closely monitored by the government.

Ethiopia

Federal law requires social media companies to remove hate speech or disinformation within a day. The government has at times cut off access to platforms.

Sources: Freedom House; CFR research.

In the United States, social media platforms have largely been left to make and enforce their own policies, though Washington is weighing new laws and regulations. Other countries have implemented or proposed legislation to force social media companies to do more to police online discourse. Authoritarian governments generally have more restrictive censorship regimes, but even some Western democracies, such as Australia and Germany, have taken tougher approaches to online speech.
