27 March 2017

On the Internet, Nobody Knows That You’re A Russian Bot


Following the hacking of the U.S. Democratic National Committee and revelations that Russian-affiliated groups appear to have conducted online influence operations in the lead-up to the 2016 U.S. election, multiple commentators have speculated that large numbers of Russian social media bots were backing Trump in the lead-up to his “unpresidented” victory. Similar concerns have been raised about the forthcoming German and French elections, with experts and even government agencies warning of the possibility of Russian influence exerted online via bots and other techniques.

Political bots are part of a nebulous and nefarious digital media ecosystem that our team at the University of Oxford calls computational propaganda, a term covering automated systems that spread disinformation, certain types of trolling, and various data-driven efforts to shape public opinion online. These bots (social media identities that use automated scripts to disseminate content rapidly or strategically) are fast becoming an important element of online politics, and they seem to blur the lines between political marketing, algorithmic manipulation, and propaganda.

However, our current ability to accurately trace bot activity back to those controlling and deploying the bots is limited. Although it is possible, and perhaps even probable, that there was Russian-linked bot activity on major social media platforms in the lead-up to the U.S. election (and that this could recur ahead of the various elections happening in Europe this year), it is important to keep in mind the following caveats, many of which apply not just to bots but also to other forms of computational propaganda.

Detecting bots is hard. The social media platforms on which these bots operate are not transparent (especially Facebook), making it difficult to get concrete data on bot activity. The techniques generally used by researchers to sniff out bots on Twitter, such as frequency and network analyses, are slowly improving, yet lag behind the increasingly sophisticated bots they are supposed to detect.
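
To make the frequency-analysis approach concrete, the sketch below shows the kind of timing heuristic a researcher might apply to an account’s posting history. The thresholds and the synthetic data are illustrative assumptions, not calibrated values, and sophisticated bots that randomize their timing will evade exactly this sort of check.

```python
from datetime import datetime, timedelta
from statistics import pstdev

def looks_automated(timestamps, max_daily_rate=50, min_interval_stdev=30.0):
    """Crude frequency heuristic: flag an account that posts at an
    inhumanly high rate or at inhumanly regular intervals.
    Thresholds here are illustrative assumptions, not calibrated values."""
    if len(timestamps) < 3:
        return False
    timestamps = sorted(timestamps)
    span_days = max((timestamps[-1] - timestamps[0]).total_seconds() / 86400, 1e-6)
    daily_rate = len(timestamps) / span_days
    # Seconds between consecutive posts; near-zero variance suggests a scheduler.
    gaps = [(b - a).total_seconds() for a, b in zip(timestamps, timestamps[1:])]
    return daily_rate > max_daily_rate or pstdev(gaps) < min_interval_stdev

# Synthetic example: an account that posts exactly every 10 minutes.
posts = [datetime(2017, 3, 1) + timedelta(minutes=10 * i) for i in range(200)]
print(looks_automated(posts))  # True: high-volume, perfectly regular posting
```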

Another core problem is attribution: when we do uncover large networks of bots, IP address masking services make it easy for operators to obscure where the content really comes from and who is behind it. In my own fieldwork in Poland, it has become apparent that many of these bots are hybrids, maintaining enough human involvement to make them virtually indistinguishable from real users on a platform like Facebook. While we can use manual heuristics to infer certain details (a rough example follows below), this is far from a perfect approach to verifying bot origin.
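
As an illustration of such manual heuristics, a researcher might combine several weak profile signals into a suspicion score. Every signal, weight, and field name below is a hypothetical example; hybrid accounts with real human involvement are precisely the ones that score low on checks like this.

```python
def bot_suspicion_score(profile):
    """Sum weak profile signals commonly used as manual heuristics.
    All signals and weights are illustrative assumptions."""
    score = 0
    if profile.get("default_avatar"):
        score += 1  # never customized the profile picture
    if profile.get("account_age_days", 365) < 30:
        score += 1  # very young account
    followers = profile.get("followers", 0)
    following = profile.get("following", 0)
    if following > 10 * max(followers, 1):
        score += 1  # follows far more accounts than follow back
    if profile.get("posts_per_day", 0) > 100:
        score += 2  # implausibly high sustained posting volume
    return score

# Hypothetical usage: a fresh, high-volume account with a default avatar.
suspect = {"default_avatar": True, "account_age_days": 12,
           "followers": 8, "following": 950, "posts_per_day": 140}
print(bot_suspicion_score(suspect))  # 5: highly suspect on this toy scale
```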

Of course, these challenges are exactly what could make political bots attractive to state and non-state actors, who will almost certainly be able to maintain plausible deniability by operating through proxies and taking simple steps to cover their tracks. Bots are an easy-to-implement, low-cost, and low-risk approach. There is little that overtly distinguishes a Russian pro-Trump bot from an American pro-Trump bot.

We do not yet fully understand the effects that bots can have on individual and group behavior, especially in political contexts. Bots can provide subtle nudges to the indices that users may assume are trustworthy: metrics such as likes, retweets, shares, or views can easily and invisibly be gamed. How does this affect the end user? Our team’s recent research is only now beginning to yield insights into the ways these bots can affect the dissemination of hyper-partisan or misleading information. It is likely that the effects of computational propaganda need to be understood not in the traditional sense of explicitly causing changes in voter behavior, but in a more nuanced sense of shifting trends, affecting media coverage, and subtly shaping political discourse.

Finally, the impact of these developments will likely vary in differing socio-political contexts, with various legal, policy, and media environments rendering certain populations more or less vulnerable to information manipulation than others. For this reason, our team is currently conducting a cross-national study of computational propaganda in ten different countries, slated for release in a series of working papers this summer.

Recent Russian influence operations are a worrying development, but it is important that efforts to investigate them acknowledge the considerable limitations of our current understanding. In that sense, the possible existence of malicious Russian bots is just as concerning as how little we actually know about them.

This article appeared at the Council on Foreign Relations Blog.
