17 January 2021

It's Harder to Boot Right-Wing Extremists from Social Media Than ISIS

BY PATRICK TUCKER

Experts who watched the right-wing mob attack the U.S. Capitol last week recognized a familiar pattern in the use of social media to recruit and organize; they'd seen the same thing from ISIS and other terrorist groups. They say that the kind of online measures that worked against the latter will work against the former — but at greater cost.

Studies on the effectiveness of tactics like purging and deplatforming to defeat Islamic extremism show that pushing adherents from major social-media networks limits the reach and effectiveness of propaganda and can even change the nature of the group. But right-wing content is far more difficult, both technically and logistically, to defeat.

Extremists of all stripes tend to share certain characteristics. A 2018 report from the Jena Institute for Democracy and Civil Society found that Muslim extremism and anti-Muslim extremism in Germany mirrored each other in various ways, including recruitment, mobilization, and coordination strategies — and even ideology. Both types of extremist groups nursed perceptions of victimhood, painted the other as antagonists, and blamed cultural pluralism for the rise of their adversaries. “This becomes particularly evident in their internet propaganda on social media,” the report said.

Right-wing groups in the United States have similarly become energized by depictions of social justice movements such as Black Lives Matter and loosely organized counterprotest groups, often referred to as Antifa. These serve as a proximate and identifiable target. In the months before the Jan. 6 protests, far-right groups such as the Proud Boys clashed in Washington, D.C., with counterprotesters. In December, the leader of the Proud Boys took credit for burning a Black Lives Matter banner hung outside a D.C. church.

The ultimate effectiveness of social-media companies’ efforts to purge their platforms of Islamic extremists remains an open question, and their side effects are less than fully understood. Yet the companies learned to detect key phrases, and especially images, used by these groups, which allowed them to tag dangerous content with hashes and quickly assemble content databases that could be shared across platforms. This in turn allowed companies to block ISIS content before it even showed up on their sites, even as ISIS information operators established new accounts to spread it.
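
The mechanics of that hash-and-share approach are simple to sketch. Below is a minimal illustration in Python, assuming a basic average-hash ("aHash") image fingerprint; real systems use sturdier perceptual hashes such as PhotoDNA or PDQ, and the database and function names here are hypothetical.

```python
# Minimal sketch of cross-platform hash sharing. The average-hash
# fingerprint and the shared database below are illustrative only.
from PIL import Image

HASH_SIZE = 8  # 8x8 grid -> 64-bit fingerprint

def average_hash(path: str) -> int:
    """Shrink and grayscale the image, then set one bit per pixel
    depending on whether it is brighter than the mean."""
    img = Image.open(path).convert("L").resize((HASH_SIZE, HASH_SIZE))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count the bits on which two fingerprints differ."""
    return bin(a ^ b).count("1")

# Hashes of known extremist images contributed by other platforms
# (a hypothetical stand-in for an industry-shared database).
shared_hash_db: set[int] = set()

def flag_upload(path: str, threshold: int = 5) -> bool:
    """Return True if the upload is within `threshold` bits of any
    known-bad hash, i.e. likely a copy or light edit of flagged content."""
    h = average_hash(path)
    return any(hamming_distance(h, known) <= threshold for known in shared_hash_db)
```

Because a perceptual hash changes only slightly when an image is cropped or recompressed, a small Hamming-distance threshold catches near-duplicates too, which is what makes blocking re-uploads feasible at scale.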

But there is a great deal more right-wing extremist content, and it comes in far greater variety. Users can generate it themselves more easily; meme images are a lot easier to produce than beheading videos. That makes it far harder to block automatically: once created, most of it can spread until a human moderator intervenes. Machine-learning classifiers can be applied, but they produce a perhaps-unacceptable rate of false positives. Is a post about the Confederacy a historical note or a call to arms? It is the sort of judgment humans make easily and algorithms, so far, do not. And apart from violent and pro-terror content, which is easy to flag, it is often difficult to draw a clear line between legitimate content, even content that suggests a physical threat, and images or messages that violate terms of service.
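
To make that false-positive problem concrete, here is a toy sketch of the kind of text classifier a platform might apply. The training examples and labels are invented for illustration; this is not any company's actual system.

```python
# A toy moderation classifier; training data and labels are invented
# for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

train_texts = [
    "take up arms and storm the capitol",                 # violating
    "we will fight them in the streets",                  # violating
    "the confederacy surrendered at appomattox in 1865",  # benign history
    "a lecture on civil war battlefield tactics",         # benign history
]
train_labels = [1, 1, 0, 0]  # 1 = violates terms of service, 0 = benign

vectorizer = TfidfVectorizer()
features = vectorizer.fit_transform(train_texts)
model = LogisticRegression().fit(features, train_labels)

# An ambiguous post: historical vocabulary mixed with words the model
# has learned to associate with calls to arms.
post = "the confederacy calls us to fight again"
score = model.predict_proba(vectorizer.transform([post]))[0][1]
print(f"violation probability: {score:.2f}")
```

Wherever the platform sets its blocking threshold, ambiguous posts like this one cluster near it: lower the threshold and historical discussion gets flagged, raise it and genuine calls to arms slip through.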

This leads to what the technology companies acknowledge is a subjective and uneven approach to moderation. On Facebook, at least, right-wing content tends to be more popular. (“Right-wing populism is always more engaging,” a Facebook executive told Politico. It speaks to “an incredibly strong, primitive emotion” by touching on such topics as “nation, protection, the other, anger, fear.”) Efforts to block extremist content have fueled conservative users’ suspicions that they are being targeted for their values, and companies have shown little appetite for alienating half of their U.S. user base.

That was the case even before companies took it upon themselves to unilaterally broaden the definition of content they found objectionable to include not only violent threats and open racism but also conspiracy theories related to COVID-19, the 2020 election, and the QAnon notion that a global agenda is run by a secret society of cannibalistic pedophiles. There is a lot of it. In just the last few days, Twitter officials say, they have removed 70,000 accounts related to QAnon. Right-wing content is also much more migratory. When ISIS users were booted from Twitter and Facebook, they were relegated to a few channels on chat apps like Telegram and some message boards. Two alternative social media networks, Gab and Parler, have sprung up to accommodate right-wing users nursing grievances against larger networks. (Parler has since been blocked from the Apple and Google app stores, and Amazon booted the site from its servers.)

To sum up: compared to Islamic extremist content, the effort to block right-wing extremist content is technically and organizationally more difficult, carries larger financial risks for companies, and lacks a cross-industry standard. Moreover, there are abundant alternative spaces to which right-wing extremists can go to market their cause. Is there any reason to think that purging or deplatforming will be effective?

Some evidence suggests so. When Facebook banned the far-right group Britain First in March 2018, the group tried to reassemble on Gab. Just as large-follower ISIS accounts lost influence as they were forced from platform to platform, so Britain First saw much lower user engagement after it was booted from the larger site. “This suggests that its ban from Facebook (as well as from Twitter in December 2017) has left [Britain First] without a platform to provide a gateway to a larger pool of potential recruits,” said a 2018 study from the Royal United Services Institute, or RUSI. “Its removal from the major social media platforms has arguably left it without the ability to signpost users to sites such as Gab, which Britain First is still using freely.”

The authors note that the move reduced the variety of themes and subjects discussed by the group. On Facebook, the group had discussed British nationalism and “culture and institutions,” themes and ideas that might resonate with a wide, mainstream user base. After moving to Gab, Britain First itself “became the most prominent theme, with images focusing on the behaviours and members of the group. This suggests a renewed focus on building the group’s identity and emphasising the notion of a brotherhood by joining the group.”

In other words, deplatforming from mainstream sites reduces the reach and changes the character of extremist groups, diminishing their wider appeal as the smaller user base devolves into myopic self-debate. The RUSI authors conclude: “Removal is clearly effective, even if it is not risk-free. Despite the risk of groups migrating to more permissive spaces, mainstream social media companies should continue to seek to remove extremist groups that breach their terms of service.”
