
24 July 2017

Report Watch Vol. III: State Control of Online Content



A look at the latest digital and cyber scholarship: computational propaganda, trolls in China, and internet censorship. 

Christopher Zheng is an intern and Alex Grigsby is the assistant director of the Digital and Cyberspace Policy program at the Council on Foreign Relations.

The Net Politics report watch series of posts distills the most relevant digital and cyber scholarship to bring you the highlights. In this edition: computational propaganda, the Chinese fifty cent party, and global internet censorship.

Woolley, Howard, and their team at the Oxford Internet Institute examine the manipulation of public opinion in nine countries, including Brazil, China, Russia, and the United States. Their assessment of what they call computational propaganda—“the use of algorithms, automation, and human curation to purposefully distribute misleading information over social media networks”—is impressive, drawing upon insights gleaned from interviews and the analysis of millions of posts across seven social media platforms.

In their review of the data, they find that: 

In some countries a significant chunk of social media activity is generated by automated accounts (approximately 45 percent of Twitter activity in Russia is automated, and a “tiny number of right wing accounts generate 20% of the political content” on Twitter in Poland); 

Not all computational propaganda is automated (opinion manipulation occurs on Chinese Weibo but mostly through human curation, not automation); 

A mix of automation and human curation makes for the most effective computational propaganda campaigns; 

Political actors in democracies rent “networks of accounts for purpose-built campaigns”; and 

The methods used in computational propaganda can also be adapted to spread positive information, such as debunking hoaxes, fact-checking, and promoting public service, as was the case in Canada and Germany. 

Woolley, Howard et al. acknowledge the challenge of drawing overarching conclusions from nine separate countries. However, their comparative approach allows academics, policymakers, and social media companies to address the challenge posed by computational propaganda.

China watchers have long suspected the presence of a fifty cent party in China—seemingly ordinary people hired by the Communist Party to flood online discussions with pro-government messages in an effort at “reverse censorship.” King, Pan, and Roberts debunk this claim in a paper that could have also been titled: “Everything You Know About the Fifty Cent Party Is Wrong.” Using an archive of leaked emails from the Zhanggong District Internet Propaganda Office, the researchers were able to quantitatively analyze almost 44,000 social media posts from fifty cent party members.

King, Pan, and Roberts find that the primary purpose of fifty cent members is to post cheerleading and subject-changing messages, not to engage government critics in debate, as originally believed. The authors argue that cheerleading, rather than criticizing other countries or arguing with critics, is consistent with the Chinese government’s aim to avoid collective action at all costs. Perhaps unsurprisingly, fifty cent content often spikes during significant political or cultural events—especially those that have drawn large protests in the past—such as the Qingming festival or the July 2013 riots in Xinjiang.

Furthermore, their analysis reveals that fifty cent members are not actually paid fifty cents for their posts given that almost all of the posters are already government employees, and presumably posting as part of a job requirement.

Zittrain et al. continue their long-running project to analyze trends in internet censorship. They found evidence of some form of internet blocking, such as filtering online content or restricting the tools people can use to bypass internet censors, in twenty-six of the forty-five countries that the study examined. Unsurprisingly, countries such as China, Saudi Arabia, and Iran exhibited the most extensive filtering practices, whereas countries like Singapore, Hungary, and Lebanon engaged in limited filtering.

Their study finds that: 

The increased adoption of HTTPS, which encrypts connections to web servers, poses challenges to censors, who can no longer block a single offending page on an HTTPS-enabled site but must block the entire website on which that page is hosted; 

Content that governments would like to censor is increasingly moving to centralized platforms (e.g. Facebook) instead of individual websites, a shift that can offer content more protection from certain censorship efforts, like DDoS attacks, but also makes it more vulnerable to government pressure on the platforms; 

There has been an increase in faith-based filtering at the internet service provider level in the Middle East, with evidence that Saudi Arabia, the United Arab Emirates, and Bahrain block Shiite content while Iran blocks Sunni content; and 

There is evidence that Malaysia, despite its government’s claims that it does not filter the internet, “filters pornography and gambling websites substantially” and has selectively blocked content related to Prime Minister Najib Razak’s alleged involvement in a graft scheme.
