
28 March 2018

Facebook’s Hate Speech Problem

Chintan Girish Modi

There is a fine line between removing hate speech and protecting free speech. Facebook needs to learn where that boundary lies.

Mark Zuckerberg is in the news again. This time he is not receiving an honorary degree from Harvard, selling Free Basics, or donating another chunk of his large fortune towards setting up a charity. Facebook, the company he co-founded, is in troubled waters. The sharp currents are being felt not only in the United States of America, where he lives, but all the way across in India as well.


On March 21, 2018, arch-rivals the Bharatiya Janata Party and the Indian National Congress took to sparring with each other over alleged links with the now-notorious data analytics firm Cambridge Analytica. Meanwhile, Facebook was apparently helping millions of people find common ground across three of the world’s most heavily armed borders. Facebook’s peace tracker shows that it connected 26,04,986 users from India and Pakistan, in addition to 1,95,435 users from Israel and the Palestinian Territory and 1,40,397 users from Ukraine and Russia, that very day. This is by no means a minor accomplishment in regions that are known for cross-border terrorism, military occupation, and ceasefire violations. Perhaps, someday, Facebook will set up a system to track friendships between BJP and Congress supporters.

Amidst the worldwide condemnation of Facebook for a data breach that has compromised the personal information of close to 50 million Americans, it can be difficult to appreciate how this social media website has strengthened Track III diplomacy. India and Pakistan, for instance, are known for a visa regime that makes it impossible for most of their citizens to visit each other’s countries. I have been fortunate enough to visit Pakistan a few times to participate in educational exchanges and literature festivals. However, dozens of friends on either side will never have that opportunity. Facebook enables them to meet virtually, and learn about each other’s lives, when their governments erect multiple barriers to such dialogue. It gives citizens from both countries a platform to build a counter-narrative to the hate peddled by jingoistic politicians and war-mongering media. They can mourn together over Sridevi’s death, and crack jokes about whose politicians are more corrupt.

I think it would be disastrous, however, to make a messiah out of Facebook. Despite the company’s insistence on branding the social media portal as a community, it is evidently a business whose policy on hate speech merits closer scrutiny. Wikipedia tells me that Zuckerberg’s net worth was estimated at US$72.5 billion as of March 5, 2018. I might faint if I began converting that amount into Indian currency. On a more serious note, on March 13 this year, the Observer Research Foundation in India published a report titled ‘Encouraging Counter-Speech by Mapping the Contours of Hate Speech on Facebook in India’, authored by Maya Mirchandani, Ojasvi Goel and Dhananjay Sahai. It is based on a study conducted “with support from Facebook to analyse posts and comments on prominent public pages posting in India…(which) belong to mainstream news organisations, community groups, religious organisations, and prominent public personalities.” I will attempt to analyse this report, and point out the contradictions inherent in Facebook’s approach towards its user base.

Have you read through Facebook’s community standards? If you have already joined the #DeleteFacebook movement, you probably do not care. If you haven’t, these standards are aimed at encouraging respectful behaviour online. Facebook identifies as ‘hate speech’ any content that directly attacks people based on their race, ethnicity, national origin, religious affiliation, sexual orientation, gender identity, or serious disabilities and diseases. It claims to disallow organizations and people dedicated to promoting hatred against these protected groups from having a Facebook presence. If you have an uncle posting about how homosexuals should be killed or mosques should be demolished, and these posts are not being removed, you can get in touch with Facebook. The community is expected to report hateful content so that Facebook can remove it.

Interestingly, humour, satire and social commentary are allowed, and so is insensitive, distasteful and offensive content if it does not violate Facebook’s policies. Now that sounds confusing. Can humour not be hateful? Can satire not incite violence against certain individuals or groups? Can people who find a post offensive not destroy property, or harm themselves? Facebook’s distinction between what qualifies as hate speech, and what does not, seems poorly defined. The ambiguity here comes from the tension between two significant priorities: condemning hate speech and protecting free speech, both of which are crucial in democratic societies that uphold the right to life as well as civil liberties. An incitement to violence may originate from hate speech spouted by individual users acting of their own volition or on behalf of organizations, and it is not entirely unlikely that some of these might enjoy the patronage of politicians in power. Restrictions on free speech are enforced by the state itself through laws that are meant to protect state secrets, ensure public order, and prevent violence that can break out when religious sentiments are hurt.

TheWire.in reported that, at the Rising India Summit in New Delhi on March 17 this year, India’s Union Minister for Information and Broadcasting Smriti Irani revealed her ministry’s plan to draft legislation that would regulate online content because it functions in an ecosystem that has no clear code of conduct. Would this new law apply to the BJP’s own IT Cell that is famous for its social media warriors, or would it target only the worshippers of Indira Gandhi who have forgotten the Emergency and Operation Blue Star? Only time will tell.

A few days later, on March 22, Ravi Shankar Prasad, India’s Union Minister for Electronics and Information Technology, told the Economic Times, “I have conveyed my concerns to social media companies. They must ensure that their platforms are not used to defame people, promote terrorism and extremism.” If he is serious, I look forward to a day when the late Jawaharlal Nehru will not be trolled for hugging his sister, and no humans will be sacrificed to ensure the safety of cows.

It is not surprising that the Indian government is keen on monitoring online content. Facebook is where many activists now do their recruiting, organizing, and message-bombing. A lot of this material is openly critical of government policies, and aims to bring out facts and stories that are not highlighted in mainstream media. People with political muscle can curb such free speech by calling it hate speech. The ORF report recognizes the possibility of manipulation on these lines.

Hate speech can include words that are insulting to those in power and derisive or derogatory of prominent individuals. At critical times, such as during election campaigns, the concept of hate speech may be prone to manipulation. Accusations of fomenting hate speech may be traded among political opponents or used by those in power to curb dissent and criticism. Personal abuse, hate speech and violent extremism often exist in the same ecosystem, with one feeding into the other, not necessarily in a linear way.

What is Facebook’s stance here? Is it truly the safe space that it promises to be? Does it let users post whatever they deem fit, and trust them to self-regulate as a community? Does it collude with governments that want to limit the freedom of their citizens? In 2017, the Brennan Center for Justice at the New York University School of Law published a paper titled ‘Countering Violent Extremism’ written by Faiza Patel and Meghan Koushik. It states, “Facebook has drawn criticism for, among other things, deactivating the accounts of several prominent Palestinian journalists, deleting accounts and posts relating to the conflict in Kashmir, and removing an iconic Vietnam War photo of a young napalm victim because it ran afoul of nudity restrictions. While Facebook conceded that these materials and accounts were taken down by mistake and restored them, the cases illustrate the difficulty of making judgements about what materials fall within its broadly phrased community standards.”

The ORF report does acknowledge internet shutdowns by the Indian government “in times of a security or a law-and-order problem” as well as posts about Kashmir that have been blocked or removed by Facebook. It is useful to know that the timeframe of the ORF study, July 2016 to 2017, coincides with three major incidents in the Kashmir valley: the killing of Burhan Wani of the Hizbul Mujahideen in a counter-terrorism operation by the Indian security forces, the lynching of Deputy Superintendent Ayub Pandith by a mob that accused the Kashmiri Muslim police officer of plotting to kill Mirwaiz Umar Farooq of the All Parties Hurriyat Conference, and an attack on a busload of Amarnath pilgrims in South Kashmir by the Pakistan-backed Lashkar-e-Taiba. What has Facebook got to do with counter-terrorism operations? Is it not supposed to focus on connecting friends and families so that they can share cat pictures, show off their new haircut, and stalk erstwhile lovers?

While the ORF report places Facebook’s mandate of identifying and removing hate speech within the larger framework of a project widely known as ‘Countering Violent Extremism’ (CVE), it is silent about the shady history and critiques of CVE within the United States of America where this concept originated. A Reuters report from February 25, 2016 indicates that the US Department of Justice and the US Department of Homeland Security reached out to Facebook, Twitter and Google to take the lead in “disrupting online radicalization” because of the government’s own “limited success in combating Islamic extremist messaging.”

The Brennan Center paper establishes that CVE has been part of discussions about counterterrorism for over a decade, but it shot to prominence in 2011 when the White House issued its National Strategy for Empowering Local Partners to Prevent Violent Extremism in the United States. The Obama administration gave millions of dollars to police departments, academic institutions, and non-profit groups to fund a secondary set of preventive measures that would supplement law enforcement counterterrorism tactics such as surveillance, investigations, and prosecutions. Officials in the Trump administration have apparently floated the idea of renaming it ‘Countering Radical Islam’ or ‘Countering Violent Jihad’.

Patel and Koushik write, “Regardless of whether CVE is called Countering Radical Islam or not, the programs initiated under this rubric by the Obama administration — while couched in neutral terms — have, in practice, focused almost exclusively on American Muslim communities. This is despite the fact that empirical data shows that violence from far right movements results in at least as many fatalities in the US as attacks inspired by Al Qaeda or the Islamic State.” According to them, CVE stigmatizes Muslim communities as inherently suspect, creates serious risks of flagging innocuous activity as pre-terrorism, and suppresses religious observance and speech. They add, “These flaws are only exacerbated when CVE programs are run by an administration that is overtly hostile towards Muslims, and that includes within its highest ranks individuals known for their frequent and public denunciations of a faith that is practiced by 1.6 billion people around the world.”

What would CVE look like in the Indian context, which is not so different from the American one in that we too have an administration that is overtly hostile towards Muslims? Would the government’s new legislation to regulate online content incriminate and discriminate against citizens from minority communities under the pretext of curbing hate speech and countering violent extremism? Patel and Koushik remind us that, in 2016, Ben Emmerson, the United Nations Special Rapporteur for Counterterrorism and Human Rights, “issued a report highlighting the conceptual weaknesses of the CVE framework and cautioned that the approach jeopardizes anti-discrimination norms, freedom of expression, freedom of movement, and securitizes the protection of human rights in undesirable ways.”

Under CVE, the Federal Bureau of Investigation in the US directs schools “to keep a watch on students’ political views and identify those who are curious about the subject matter of extremism.” If the same happened in India, it is easy to imagine that students from Muslim neighbourhoods, as well as students from states like Jammu and Kashmir, Nagaland, Chhattisgarh, Jharkhand, Assam, and Punjab, which have histories of secessionist movements, might come under the scanner. Their online activity on school computers would be strictly monitored. This would amount to ethnic and racial profiling, violating the fundamental rights of these citizens. Moreover, such a directive would make schools physically and emotionally unsafe for students, damaging the spirit of critical thinking and questioning. School teachers are not trained for intelligence gathering, and have no professional experience in law enforcement. Their job is to encourage learning.

Facebook appears reluctant to take serious responsibility for these concerns. It expects users to fight hate speech with ‘counter-speech’ or “crowd-sourced responses to extremist or hateful content” as noted in the ORF report. It has been funding non-profit organizations and community initiatives to train users to post comments that disagree with hateful content, and promote human rights. However, Facebook seems to provide few protections against surveillance from the state, which is clearly not as benevolent as it might like us to believe.
