11 June 2020

Zuckerberg’s dilemma: How to moderate Facebook amid violent unrest

Chris Meserole

Late last week, as protests over George Floyd’s killing began to boil over, President Trump took to Facebook and Twitter. Rather than de-escalating the unrest, he instead fanned the flames. “When the looting starts, the shooting starts,” he intoned, echoing a phrase used by segregationist politicians to condone police violence against the black community. The response of many social platforms was unusually swift. Twitter restricted Trump’s message by placing a warning over his tweet before it could be viewed, while Snapchat declared that it would no longer promote Trump’s content in the coveted real estate of its Discover tab. Both steps were unprecedented.

Facebook, by contrast, appeared to be stuck in cement. The company famous for its efforts to “move fast and break things” refused to move at all, with company founder Mark Zuckerberg standing by his decision not to remove or moderate Trump’s recent posts, even in the face of intense criticism from Facebook employees. According to the company’s existing policies, Trump’s posts were within bounds—they predicted but didn’t incite violence, Zuckerberg argued—and therefore should stand as published. 

Lost within the rancor over Facebook’s refusal to moderate Trump’s specific post was a comment by Zuckerberg about a potential policy shift that could be much more consequential. “If we were entering a period where there may be a prolonged period of civil unrest,” he told one employee, “then that might suggest that we need different policies, even if just temporarily, in the United States for some period.” Read the full exchange, and what’s clear is that while Facebook may not be censoring Trump’s posts anytime soon, it may instead develop a set of policies much more far-reaching in scope—namely, a separate framework for content moderation during times of heightened unrest and violence around the world, including in the United States. 


Free speech and its discontents

To see why this is such a big deal, it’s worth revisiting why content moderation is so hard to get right in the first place, particularly when government officials are involved. 

Within a democracy, the main purpose of free speech is to arrive at good policy outcomes. If everyone has the right to speak, the theory goes, then on the whole the wisdom of the crowd will prevail, and the resulting policies will be mostly reasonable. There’s little point in banning bad or offensive speech, in this view, because it will get diluted by good speech anyway.

Where this approach falls apart is with calls for violence. As the platforms have discovered, more speech is not a solution to bad speech when the latter calls for a given individual or group to be forcibly removed from political debate altogether. It’s hard to have a productive policy conversation when Abu Bakr al-Baghdadi or Richard Spencer is seated at the table. In that case, the solution is to remove or limit the bad speech itself, which is exactly why Facebook, Google, and others have invested heavily over the past decade in identifying the content of terrorist organizations and hate groups.

Yet what if the incitement to violence comes not from a terrorist, but a politician? Should they be removed from the table too? On the one hand, it’s hard to see how a call to violence by a politician can lead to good policy: such rhetoric will necessarily have a chilling effect, render genuine political debate difficult or even impossible, and possibly result in a loss of life. But on the other, for a company like Facebook to moderate political speech positions it not as an arbiter of truth so much as an arbiter of legitimate power, particularly when the speech is intended to guide the application of violence by the uniformed police or military.

In his congressional testimony last year, Zuckerberg insisted that that dilemma would not pose a problem. “If anyone, including a politician … is calling for violence,” Zuckerberg told Congresswoman Alexandria Ocasio-Cortez, “we will take that content down.” But as his response to Trump’s “shooting” comment makes clear, moderating the content of government officials is easier said than done, in part because of uncertainty about the imminence of the threat. And when the official is as high-ranking as the president of the United States, political incentives will tend to push Facebook toward inaction.

How to establish emergency content moderation policies

Zuckerberg’s reluctance to moderate the speech of political figures makes his comments to Facebook staff about the possibility of “emergency” or “crisis” content moderation rules, leaked to Vox’s Shirin Ghaffary, all the more intriguing. Such rules would provide grounds for Facebook to preserve the status quo in most cases, while more aggressively restricting speech in others. The effect of such a framework would likely be to reset the default approach to moderation. In “normal” times, the responsibility of Facebook would be to err on the side of free speech. But in moments of “emergency,” the responsibility would be to err on the side of preventing further violence—presumably by aggressively curtailing hateful or bellicose rhetoric, even when the speech comes from government officials and does not contain an imminent or specific threat.

For Facebook, the appeal of such a system is that it offers a way of maintaining its current commitment to free speech principles while taking more seriously the harm that can come from violent rhetoric. But in addition to adjudicating speech in moments of crisis, it would also require Facebook to adjudicate what constitutes an emergency. And the more restrictive Facebook’s emergency moderation policies were, the graver that responsibility would be.

As Zuckerberg himself acknowledged in his Q&A with staff, Facebook is not equipped at present to bear such responsibility. To be sure, the company has experimented with crisis policies in specific contexts, such as the limits it placed on message forwarding in WhatsApp during a wave of mob violence in India in 2018. It has also curtailed dangerous speech by military and state officials in Myanmar. Yet in each case Facebook piloted new policies on an ad hoc basis. It lacks the kind of policy infrastructure necessary to establish and invoke emergency policies in a way that is principled and globally consistent.

In light of recent events, Zuckerberg would be wise to invest in such an infrastructure now. In the coming months, further crises in the United States and elsewhere seem almost inevitable. If he does so, Facebook should consider three principles:

Context matters, not regime. For Facebook, the focus should be on what is happening within a country rather than the kind of political institutions it has. The company will face significant pressure to treat democracies and authoritarian regimes differently, but it should resist doing so: the relevant question is behavior and context, not regime type. Facebook does not want to be in the position of arbitrating the democratic legitimacy of different states, since countries with functioning democratic institutions are themselves susceptible to moments of political and ethnic violence, as the United States currently illustrates.

Clear on-ramps and off-ramps. The dangers of establishing “emergency” policy guidelines are twofold: the first is that they will be imposed arbitrarily; the second is that they will become the new normal and remain in place after the emergency has ended. Facebook will need to guard against both by articulating clearly and transparently when “emergency” policies will be applied and when they will be lifted. In theory this could take the form of objective criteria, such as the incidence of fatal violence. In practice, given the difficulty of recording and reporting on political violence in many countries, it will more likely need to be a set of institutional processes and benchmarks for reviewing when emergency measures should be extended or removed.

Independent oversight. Facebook’s “emergency” policies will work only so long as they are viewed as credible and legitimate. Decisions the company makes on its own about when to impose emergency policies, particularly amid fraught moments of democratic politics, will be viewed as self-interested and strategic rather than impartial. Facebook should therefore work with an independent third party to draft and implement its policies. Alternatively, it should establish an independent body of its own, as it did most recently in creating an oversight board to adjudicate decisions to take down content.

Building out the infrastructure to reliably and consistently implement emergency policies for content moderation will not be easy. As Facebook has discovered repeatedly over the past decade, developing robust policy processes and expertise can exact significant political, financial, and opportunity costs. 

But those costs pale in comparison with the cost of continuing the status quo. During moments of violence and unrest, the worst that can come of moderating political speech is that a politician’s voice might be unduly restricted. The worst that can come of not moderating is that people might die, particularly people from marginalized communities. Facebook’s answer in the face of such death cannot always be that it was too principled to intervene.

Chris Meserole is a fellow in Foreign Policy at the Brookings Institution and deputy director of the Brookings Artificial Intelligence and Emerging Technology Initiative. He is also an adjunct professor at Georgetown University.
