
26 December 2016

A New Development in the Fight Against Online Extremism

By: Nicole Magney

Over the past several years, tech and social media companies have struggled to mount a comprehensive response to terrorism-related content posted online. In early December, Twitter, Facebook, Microsoft, and YouTube announced a new initiative to share databases of “hashes” in an effort to curb the spread of extremist material online. As defined in a statement released by Twitter, hashes are digital fingerprints of “violent terrorist imagery or terrorist recruitment videos or images” that have been posted on social media sites.[i] The new measure should make it more efficient to remove the same terrorist content across multiple social media networks. However, the announcement raises two questions: why did it take so long for the companies to agree to this, and will sharing these databases actually reduce the amount of terrorist content online? Tech and social media companies should therefore continue to build on this shared-data initiative, rather than assume that it alone will be enough to curb the future spread of online extremist material.

Particularly since the rise of the Islamic State’s online presence, social media and tech companies have grappled with striking a balance between removing offensive and violent content and protecting users’ freedom of expression. While companies were right to question the legitimacy of limiting users’ online rights in murky situations, they also struggled to respond to seemingly clear-cut cases. In early 2015, Facebook and YouTube instituted policies that allowed users to flag and report terrorist-related content to site administrators for removal.[ii] However, particularly egregious cases where content clearly violated terms of use—for example, an Islamic State YouTube video published three days before the attack in Sousse, Tunisia in June 2015, which showed three grisly mass executions—continued to proliferate.[iii]

The new initiative seeks to fill some of the gaps in social media and tech companies’ policies toward online extremism. It is the brainchild of the Counter Extremism Project (CEP), formed and led by Frances Townsend, who advised President George W. Bush on homeland security. CEP has long advocated that social media and tech companies improve their approach to identifying and removing terrorism-related content, encouraging them to proactively search for such material rather than simply remove content reported by other users.[iv]

Despite the progress the new initiative represents, some question how much impact this type of database sharing will have in practice. Spurred by CEP’s pressure and funded by a grant from Microsoft, Dartmouth College’s Hany Farid developed the hashing software that records digital fingerprints of terrorist content on social media sites. Several social media sites had already employed similar hashing technology to combat the spread of child pornography on their platforms. While Microsoft, Facebook, Twitter, and YouTube have all agreed to use the technology to share hashes of offending content, Farid warns that each company can still decide individually whether flagged content should be removed.[v]

Therefore, one company may deem a terrorist-related video or image inappropriate and remove it, while another may allow it to remain on its site. Ostensibly, this caveat is included so that each company can ensure it is not unfairly removing content. In reality, however, it will hinder the efficacy of the initiative, because terrorist-related content posted on social media proliferates most quickly during its first few hours online.[vi] If a site administrator must approve the removal of each piece of flagged content, which takes time, chances are the content will already have achieved its purpose.
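To make the mechanism concrete, here is a minimal sketch (in Python, with illustrative names; none of this is the companies’ actual code) of how a shared hash database might work: participating companies contribute fingerprints of known extremist content and screen new uploads against the pool, but each company applies its own removal policy when a match is found. Real deployments use perceptual hashing, which also catches slightly altered copies, rather than the exact-match cryptographic hash used here for simplicity.

```python
import hashlib


class SharedHashDatabase:
    """Pool of content fingerprints contributed by participating companies."""

    def __init__(self):
        self.known_hashes = set()

    def add(self, content: bytes) -> str:
        """Fingerprint known extremist content and add it to the shared pool."""
        digest = hashlib.sha256(content).hexdigest()
        self.known_hashes.add(digest)
        return digest

    def matches(self, content: bytes) -> bool:
        """Check whether an upload matches any fingerprint in the pool."""
        return hashlib.sha256(content).hexdigest() in self.known_hashes


def review_upload(content: bytes, db: SharedHashDatabase, removes_on_match: bool) -> str:
    """Screen an upload against the shared database; the removal decision
    stays with the individual company (the caveat discussed above)."""
    if db.matches(content):
        return "removed" if removes_on_match else "flagged for human review"
    return "allowed"


if __name__ == "__main__":
    db = SharedHashDatabase()
    known_video = b"...bytes of a known extremist video..."
    db.add(known_video)  # contributed by one participating company

    # A second company removes matches automatically; a third only queues them.
    print(review_upload(known_video, db, removes_on_match=True))   # removed
    print(review_upload(known_video, db, removes_on_match=False))  # flagged for human review
```

The flaw Farid describes is captured in the `removes_on_match` flag: the matching is shared across companies, but the action taken on a match is not.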

These shortcomings aside, the initiative shows that social media and tech companies have made slow but real strides toward removing terrorism-related content and suspending offending accounts more efficiently, while respecting the rights of other users. Just a day after the new initiative was announced, President Obama praised the role of tech companies in cutting the Islamic State’s online propaganda presence in half compared to its peak.[vii] While encouraging, this statistic should not lead to the assumption that the battle against the proliferation of online terrorist material is over.

The new initiative demonstrates that companies are taking the threat of online terrorist propaganda seriously. However, the fact that tech and social media companies are only now agreeing to share databases of terrorist content, and are still seeking to retain flexibility in deciding whether to remove it, indicates that more work is needed. Policymakers and concerned citizens alike should be wary of assuming that the Islamic State’s weakened online presence means victory in the broader fight against online extremism. Individual terrorist groups will inevitably rise and fall, but extremists’ exploitation of social media is not going anywhere. Social media and tech companies would therefore do well to fine-tune their policies for addressing this threat now, before another, even more social-media-savvy group makes it necessary.

[i] “Partnering to Help Curb the Spread of Terrorist Content Online,” Twitter, December 5, 2016, accessed December 5, 2016, https://blog.twitter.com/2016/partnering-to-help-curb-the-spread-of-terrorist-content-online.

[ii] J.M. Berger, “The Evolution of Terrorist Propaganda: The Paris Attack and Social Media,” Brookings Institution, January 27, 2015.

[iii] Scott Higham and Ellen Nakashima, “Why the Islamic State Leaves Tech Companies Torn between Free Speech and Security,” Washington Post, July 26, 2015, accessed December 5, 2016, https://www.washingtonpost.com/world/national-security/islamic-states-embrace-of-social-media-puts-tech-companies-in-a-bind/2015/07/15/0e5624c4-169c-11e5-89f3-61410da94eb1_story.html?kmap=1&utm_term=.727edaeeb2ff.

[iv] “Digital Disruption: Fighting Online Extremism,” Counter Extremism Project, accessed December 5, 2016, http://www.counterextremism.com/digital-disruption#dd-news.

[v] Patrick Tucker, “The Flaw in Tech Giants’ Plan to Fight Extremist Content,” Defense One, December 7, 2016, accessed December 8, 2016, http://www.defenseone.com/technology/2016/12/flaw-tech-giants-plan-fight-extremist-content/133716/?oref=d-river.

[vi] Daniel Milton, “Communication Breakdown: Unraveling the Islamic State’s Media Efforts,” Combating Terrorism Center at West Point (October 2016): 45.

[vii] “President Obama on His Approach to Counterterrorism over the Last Eight Years,” Medium, December 6, 2016, accessed December 6, 2016, https://medium.com/@WhiteHouse/president-obama-on-his-approach-to-counterterrorism-over-the-last-eight-years-b8a7c55cb422#.9qi57jjmq.
