18 July 2020

Human-machine detection of online-based malign information

by William Marcellino

What are the main features of the current malign information operations threat context?

What insights can be identified from the application of proof-of-concept machine detection in a known Russian troll database?

In what ways could the machine learning model developed by RAND be applied in the future?

As social media increasingly becomes people's primary source of news online, the spread of malign and false information poses a growing threat. With no human editors curating news feeds and with artificial online activity on the rise, it has become easier for a range of actors to manipulate the news that people consume. Finding an effective way to detect malign information online is an important part of addressing this issue. RAND Europe was commissioned by the UK Ministry of Defence's (MOD) Defence and Security Accelerator (DASA) to develop a method for detecting the malign use of information online. The study was contracted as part of DASA's efforts to help the UK MOD develop its behavioural analytics capability.


Our study found that online communities are increasingly exposed to junk news, cyberbullying, terrorist propaganda, and political reputation-boosting or smearing campaigns. These activities are undertaken by both synthetic accounts and human users, including online trolls, political leaders, far-left or far-right individuals, national adversaries and extremist groups. In support of government efforts to detect and counter these activities, the research team successfully developed and applied a machine learning model to a known Russian troll database, identifying differences between authentic political supporters and Russian trolls shaping online debates around the 2016 US presidential election. To trial the model's portability, a promising next step would be to test it in a new context such as the online Brexit debate.

Key Findings

Social media is increasingly being used by human and automated users to distort information, erode trust in democracy and incite extremism

Today, online communities are increasingly exposed to junk news, cyberbullying, terrorist propaganda, and political reputation-boosting or smearing campaigns. These activities are undertaken by both synthetic and human users, including online trolls, political leaders, far-left or far-right individuals, national adversaries and extremist groups.

Our research produced a machine learning model that can successfully detect Russian trolls

The research team successfully applied a machine learning model to a known Russian troll database to identify differences between authentic political supporters and Russian 'trolls' involved in online debates relating to the 2016 US presidential election.
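The report does not publish the model's implementation details, so the following is only a minimal sketch of the general shape of such a supervised classifier: a TF-IDF text representation feeding a logistic regression, trained on a small placeholder corpus in which tweets from known troll accounts are labelled 1 and tweets from authentic supporters are labelled 0. All data, features and settings here are hypothetical stand-ins, not RAND's actual pipeline.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Placeholder corpus: tweets from authentic supporters (label 0) and
# from accounts listed in a known troll database (label 1).
texts = [
    "So proud to vote for my candidate today",
    "Great rally tonight, the energy was amazing",
    "Spent the weekend phone banking for the campaign",
    "Early voting lines were long but worth it",
    "WAKE UP the election is RIGGED by the elites",
    "share this NOW they dont want you to see it",
    "both parties are corrupt dont even bother voting",
    "the media is LYING to you patriots RT RT RT",
]
labels = [0, 0, 0, 0, 1, 1, 1, 1]

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.25, random_state=42, stratify=labels)

# TF-IDF over word unigrams and bigrams feeds a linear classifier;
# the learned weights indicate which phrasings separate the classes.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test), zero_division=0))

In practice the labelled troll accounts would come from a released troll database, and the feature set would be far richer than plain n-grams; the point of the sketch is only the supervised train-and-evaluate loop.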

Using text mining to develop specific search terms, the study team harvested tweets from 1.9 million user accounts and then applied an algorithm to group the accounts into distinct online communities. The analysis identified 775 inauthentic Russian troll accounts masquerading as liberal and conservative supporters of Clinton and Trump, alongside 1.9 million authentic liberal and conservative supporters.
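The report does not say which algorithm was used for the community step; as one hedged illustration, the sketch below builds a retweet graph with networkx and partitions it using modularity-based community detection. The edge list is a made-up stand-in for the harvested Twitter data.

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical edge list of (retweeter, original_author) pairs drawn
# from keyword-matched tweets; real input would be the harvested data.
retweets = [
    ("user_a", "user_b"), ("user_a", "user_c"), ("user_b", "user_c"),
    ("user_d", "user_e"), ("user_e", "user_f"), ("user_d", "user_f"),
    ("user_c", "user_d"),  # a weak tie bridging the two clusters
]

G = nx.Graph()
G.add_edges_from(retweets)

# Modularity maximisation groups accounts into densely connected
# clusters, one plausible way to realise the "online communities" step.
for i, members in enumerate(greedy_modularity_communities(G)):
    print(f"community {i}: {sorted(members)}")

On real election data the resulting clusters would correspond to groupings such as liberal and conservative supporter communities, within which suspected troll accounts can then be classified.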

To trial the model's portability, a promising next step could be to test it in a new context such as the online Brexit debate

A feasible next stage in this effort would be to test the model for detecting malign troll accounts on social media in a new context, exploring how well it ports beyond the 2016 US election. A relevant choice would be the online debate over Brexit, in anticipation of the UK's departure from the European Union.
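Purely as a sketch of what such a portability test might involve, and continuing the hypothetical classifier above (assuming model is the pipeline fitted on the 2016 US election corpus), one could score an unlabelled Brexit corpus and inspect how the model's confidence distributes in the new domain. The tweets below are invented placeholders.

# Continuing the earlier sketch: `model` is the pipeline fitted on
# the 2016 US election corpus; we score unlabelled Brexit tweets.
brexit_tweets = [
    "Another march for a second referendum in London this weekend",
    "LEAVE means LEAVE stop betraying the will of the people RT RT",
]
scores = model.predict_proba(brexit_tweets)[:, 1]  # estimated P(troll)

# A shifted or degenerate score distribution in the new domain would
# signal that the model needs re-training on Brexit-specific labels.
for tweet, score in zip(brexit_tweets, scores):
    print(f"{score:.2f}  {tweet}")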
