10 November 2021

Computational Propaganda: Challenges and Responses

Federico D'Alessio

In recent years, the world has experienced a substantial rise in cybercrime across many countries, especially as a result of the digitalisation of work during the various lockdowns implemented in 2020 (Riley, 2021). Technological progress will make online criminality more sophisticated, and thus even more dangerous and harder to defend against. A multidisciplinary approach is therefore needed to fight this phenomenon, drawing on a variety of techniques from the social and computer sciences. This essay will focus on computational propaganda, and more precisely on the use of bots on social media. It will first define what computational propaganda is, highlighting its main features from different perspectives. It will then examine the challenges faced when countering online propaganda. Lastly, it will critically analyse and evaluate possible responses and solutions to this issue.

Understanding computational propaganda

Computational propaganda can be described as an “emergent form of political manipulation that occurs over the Internet” (Woolley and Howard, 2018, p. 3). It is carried out particularly on social media, but also on blogs, forums and other websites that involve participation and discussion. This type of propaganda is often executed through data mining and algorithmic bots, which are usually built and driven by advanced technologies such as AI and machine learning. By exploiting these tools, computational propaganda can pollute the information ecosystem and rapidly spread false news around the internet (Woolley and Howard, 2018).

Data mining is used to personalise adverts, while automated bots promote a certain point of view or perspective and disrupt the communication and campaigns of the opposition (Howard, Woolley and Calo, 2018). Political adverts are therefore tailored accordingly, and the information is spread to a greater number of people. Bots shape discussions and share multiple posts on social media in order to spread false or partisan information and support a particular party or group, as well as to promote hate campaigns (Woolley and Howard, 2018). In this way, computational propaganda can influence the outcome of democratic processes, such as elections and referenda. A critical factor to consider is that data mining and bots are, respectively, performed and created by humans (Howard, Woolley and Calo, 2018). Hence, computational propaganda can be carried out by activists or political actors who exploit technological advancements to promote their objectives or endorse their candidates. This is usually done on platforms that engage the public in discussions and decisions, such as social media and blogs. One may argue that bots serve as facilitators of information sharing and could thus be conceived of as benign. However, there are many cases in which bots are used with malicious intent, such as spreading false information or derailing opposition campaigns.
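
To make the targeting mechanism concrete, the toy Python sketch below shows how mined profile attributes might decide which users receive a tailored political advert. The profiles, interests and matching rule are invented for illustration and do not reflect any real platform's system.

```python
# Toy illustration of data-mining-driven ad targeting. All profiles,
# interests and the matching rule are hypothetical examples.

mined_profiles = [
    {"user": "alice", "age": 67, "interests": {"immigration", "pensions"}},
    {"user": "bob",   "age": 24, "interests": {"climate", "housing"}},
    {"user": "carol", "age": 55, "interests": {"immigration", "tax"}},
]

# A campaign targets users whose mined interests overlap its key issues.
campaign_issues = {"immigration", "pensions"}

targeted = [
    profile["user"]
    for profile in mined_profiles
    if profile["interests"] & campaign_issues  # set intersection: shared issues
]

print(targeted)  # ['alice', 'carol'] -- each can now be served a tailored advert
```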

In fact, these tools have often been used in electoral campaigns to manipulate and influence voters' opinions, threatening the community both online and offline. For instance, the outcomes of the 2016 UK referendum and the 2016 US elections were allegedly affected by influence campaigns carried out mainly on social media: almost a third of tweets about the UK referendum, and a fifth of those about the US elections, were shared by bots (Schneier, 2020).

These figures reveal how important the role of bots is in computational propaganda, and the extent to which this strategy can impact political systems and undermine the credibility of media institutions. They also show how foreign governments are able to influence the outcome of democratic practices in other countries by engaging in ‘information warfare’ campaigns. For precisely this kind of activity, in 2018 thirteen Russian nationals and three companies were charged with interference in the US political system, including the 2016 presidential elections (United States Department of Justice, 2018).

Moreover, recent research found that 81 countries engage in computational propaganda, 57 of which use automated bots on social networks (Bradshaw, Bailey and Howard, 2020). This is a crucial finding because it shows that this form of manipulation is on the rise, as is the use of social media as a source of information: a recent poll found that almost half of American citizens rely on such platforms to get news (Shearer and Mitchell, 2021). Therefore, the more popular social networks become, the more people are exposed to computational propaganda and the easier it is to influence public opinion.

Social media applications were conceived as platforms where freedom and democracy would prevail, but in recent years many concerns have been raised about the increasing presence of accounts, mostly fake, sharing false news (Bradshaw and Howard, 2019). This has repercussions for both the platforms and conventional media, which have seen a decline in public trust. Moreover, a study of the terrorist groups ISIS and Al Qaeda found that these organisations also spread propaganda in cyberspace through social networks such as Facebook and Twitter (Choi, Lee and Cadigan, 2018). Attention must therefore be paid to these platforms and to whether their systems for detecting and controlling inappropriate or illegal content are effective enough to prevent the spread of false or dangerous information.

Furthermore, computational propaganda is expanding into other fields. Many bots have been used to spread misinformation and disinformation about healthcare, for instance by running anti-vaccine campaigns (Broniatowski et al., 2018). A very recent example is the large amount of false news shared during the Covid-19 pandemic through automated techniques such as bots (Khanday, Khan and Rabani, 2021). The dangerous aspect of this type of propaganda is that public consensus about the benefits of vaccines and other medications erodes, and people become more likely to believe in quick, simplistic solutions than in scientific research based on empirical evidence.

From a sociological perspective, computational propaganda and the manipulation of social media contribute to the generation of echo chambers: environments where people encounter only information that reinforces their own point of view (Woolley and Howard, 2018). For instance, social media algorithms adjust the content that users see and thus create filter bubbles (Barberá, 2020). The individual is therefore isolated and mainly encounters users with similar opinions. This form of ‘enclave deliberation’ further strengthens the user's perspective, since they meet little opposition (Barberá, 2020). As a result, it also favours an increase in partisan stances, with no room for challenge or compromise.
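
This feedback loop can be illustrated with a minimal sketch. In the hypothetical Python example below, a feed simply ranks posts by how often the user has engaged with each topic, so reinforcing content rises to the top and dissenting content sinks. Real recommender systems are far more complex, but the reinforcement dynamic is the same.

```python
# Minimal sketch of engagement-based ranking producing a filter bubble.
# Topics, posts and the scoring rule are invented for illustration and
# are not any platform's actual algorithm.

from collections import Counter

# Topics the user has previously engaged with.
user_history = ["anti_vax", "anti_vax", "conspiracy", "sports"]

# Candidate posts the feed could show next, each tagged with a topic.
candidate_posts = [
    {"id": 1, "topic": "anti_vax"},
    {"id": 2, "topic": "public_health"},
    {"id": 3, "topic": "conspiracy"},
    {"id": 4, "topic": "fact_check"},
]

def rank_feed(history, posts):
    """Score each post by past engagement with its topic, highest first.

    Reinforcing topics float to the top, while challenging content
    ('public_health', 'fact_check') is pushed down: the enclave
    deliberation effect described above.
    """
    topic_counts = Counter(history)
    return sorted(posts, key=lambda p: topic_counts[p["topic"]], reverse=True)

for post in rank_feed(user_history, candidate_posts):
    print(post["id"], post["topic"])
# Posts 1 and 3 (reinforcing) rank above 2 and 4 (challenging).
```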

Echo chambers can thus pollute public discussions by making them homogeneous contexts where opposing opinions are rejected. This may lead to a polarization of political discourse, which could also allow extremist stances and conspiracy theories to emerge (Barberá, 2020). As individuals participate in online discussions solely with like-minded people, they are able to filter out all content that challenges their position on social or political topics. The absence of counter-information thereby causes their ideas to polarize. This matters because exposure to opposing views is necessary in a democratic environment in order to have a clear and balanced understanding of relevant issues.

Another consequence of computational propaganda is the spread of misinformation and disinformation, which can intensify socio-cultural divisions and reduce public trust in conventional media and democratic institutions (Lavorgna, 2020). Traditional media thus lose legitimacy, and the public may turn towards alternative sources of information, such as social networks (Bennett and Livingston, 2018). In turn, this can dissuade media organizations from investing money and time in meticulous, factual reporting. This happens especially in developing countries, where media institutions are not well established and reach only a small percentage of the population (Guess and Lyons, 2020). Apart from creating social divisions, in these contexts the spread of disinformation and misinformation may also increase violence amongst the population and contribute to the spread of weaponized online propaganda.

All the factors analysed above contribute to the development of an environment in which impartial, clear evidence counts for less than the emotional responses and sentiments of the public. This concept is usually described as ‘post-truth’ politics, in which the line between facts and subjective feelings is blurred (Block, 2019). As a result, political actors can make false information appear true in the eyes of the public, whose decisions are driven by instincts and emotions rather than empirical evidence.

Challenges and responses

Over the years, computational propaganda and the use of bots on social media have become widespread tools for influencing public opinion. It is becoming increasingly hard to address this challenge, as there is a shortage of legal frameworks and of awareness of these modern techniques of manipulation (Lavorgna, 2020). This is partly due to the inadequacy of institutions and the lack of precise knowledge about these automated tools: for instance, it is still not clear whether bot traffic is always harmful, and in what circumstances it may not be (Woolley, 2020). As for the role of social networks, it has proved complicated to develop and implement effective policies on the responsibility of these platforms, partly because of the inaccessibility of their data (Bennett and Livingston, 2018).

Another critical element worth analysing is the problem of attribution, namely the difficulty of identifying both the instigators and the perpetrators of such actions. This is mainly due to advances in AI and machine learning, which grant bots the ability to adapt to different environments (Woolley, 2020). In addition, computational enhancement provides anonymity and automation, which allow offenders to hide their identity and bots to perform repetitive tasks at a much faster rate than human actors (Woolley and Howard, 2018).
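
That said, the same automation that makes bots fast also leaves detectable traces. The sketch below, with invented accounts and an arbitrary threshold, shows one simple signal analysts can look for: many accounts posting near-identical text.

```python
# Sketch of one simple amplification signal: multiple accounts posting
# near-identical text. Accounts, posts and the threshold are invented;
# real detection combines many signals (timing, network structure, etc.).

from collections import defaultdict

posts = [
    ("acct_001", "Candidate X will DESTROY the economy!!"),
    ("acct_002", "Candidate X will destroy the economy"),
    ("acct_003", "candidate x will destroy the economy!"),
    ("acct_889", "Lovely weather in Rome today"),
]

def normalise(text):
    """Lower-case and drop punctuation so trivial edits don't hide copies."""
    return "".join(ch for ch in text.lower() if ch.isalnum() or ch == " ").strip()

# Group accounts by the normalised text they posted.
groups = defaultdict(list)
for account, text in posts:
    groups[normalise(text)].append(account)

for text, accounts in groups.items():
    if len(accounts) >= 3:  # arbitrary threshold for this example
        print(f"Possible coordinated amplification by: {accounts}")
```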

Despite the slow legal development around this issue, it could be argued that a multidisciplinary approach will facilitate the regulation and control of computational propaganda. Aside from a technological modus operandi, this challenge should be addressed by adopting techniques from different fields of study. Improving detection technologies can help counter the problem, but sociological and legal approaches would further support mitigation.

Machine learning and AI may be improved and used to combat computational propaganda, as human actors alone cannot deal with this matter. These technologies could be used to detect malicious bot activity and so help regulate practices of online propaganda (Woolley, 2020). In addition, high-powered software such as data intelligence platforms will help individuals gather and analyse information found on the web, and will assist companies and professionals such as journalists and researchers in understanding and fighting disinformation (Woolley, 2020).
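
As a minimal sketch of what such detection might look like, the hypothetical Python example below trains a logistic regression classifier (via scikit-learn, assumed installed) on synthetic account-level features such as posting rate and account age. Deployed systems, such as the academic Botometer tool, rely on far richer feature sets and large labelled datasets.

```python
# Minimal sketch of feature-based bot detection. Features, training data
# and labels are synthetic; a real system needs thousands of labelled
# accounts and many more features.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Per-account features:
# [posts_per_day, followers_to_following_ratio, account_age_days, share_fraction]
X_train = np.array([
    [250.0, 0.01,   30, 0.98],  # labelled bot: high volume, young account
    [180.0, 0.05,   60, 0.95],  # labelled bot
    [  4.0, 1.20, 2400, 0.30],  # labelled human
    [  2.5, 0.90, 1800, 0.10],  # labelled human
])
y_train = np.array([1, 1, 0, 0])  # 1 = bot, 0 = human

model = LogisticRegression().fit(X_train, y_train)

# Score an unseen account: 300 posts/day from a two-week-old profile.
suspect = np.array([[300.0, 0.02, 14, 0.99]])
print(f"Estimated probability of being a bot: {model.predict_proba(suspect)[0, 1]:.2f}")
```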

Electoral campaigns should also be secured with digital tools, so as to provide political actors and voters with an efficient means of detecting and responding to misinformation and disinformation (Schia and Gjesvik, 2020). One example is fact-checking applications that can verify the veracity of information shared over the internet. This would also improve public trust in the democratic process, as well as in media institutions.
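
As an illustration of the matching step inside such a tool, the toy sketch below compares an incoming claim against a small, invented database of already-verified claims using simple string similarity. Production services instead query structured collections of fact-checks, such as those exposed through Google's Fact Check Tools API.

```python
# Toy sketch of claim matching in a fact-checking tool. The database of
# verdicts and the similarity threshold are hypothetical examples.

from difflib import SequenceMatcher

# Hypothetical database of claims already reviewed by human fact-checkers.
verified_claims = {
    "vaccines cause autism": "FALSE",
    "a third of brexit referendum tweets were shared by bots": "PARTLY TRUE",
}

def check_claim(claim, database, threshold=0.6):
    """Return (matched_claim, verdict) for the closest verified claim, if any."""
    best_match, best_score = None, 0.0
    for known_claim, verdict in database.items():
        score = SequenceMatcher(None, claim.lower(), known_claim).ratio()
        if score > best_score:
            best_match, best_score = (known_claim, verdict), score
    # Below the threshold there is no reliable match: route to a human.
    return best_match if best_score >= threshold else None

print(check_claim("Vaccines cause autism in children", verified_claims))
# ('vaccines cause autism', 'FALSE')
```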

In addition to technological solutions, social policies are also needed. Raising awareness of computational propaganda and bot automation allows individuals to better understand the world of social networks (Schia and Gjesvik, 2020). Further collaboration and cooperation within government to promote and improve this process would be beneficial, enabling citizens to weigh facts and counter-arguments. Critical thinking can thus be promoted through awareness campaigns and political education schemes, in order to build trust and enhance the public's ability to spot false news and find alternative solutions (Schia and Gjesvik, 2020).

These strategies may achieve some immediate success, but they cannot be the only solution to this vast dilemma. It is also crucial to identify who is behind online propaganda operations and, at the same time, to understand who the targets of these campaigns are. In fact, social platforms suffer from a lack of transparency that makes it impossible to measure precisely the impact of computational propaganda on society (Schia and Gjesvik, 2020). There is therefore a need for regulations and policies that directly address the role of social networks in the spread of disinformation. Social media companies must take more responsibility for the impact of data mining and automation on society and politics (Woolley, 2020).

Furthermore, regulation of bots could be implemented by revising election laws or communication policies, since in many states these are obsolete and do not take into consideration that new forms of technology are able to influence opinions and polarize discussions (Howard, Woolley and Calo, 2018). Transnational corporations need to cooperate, and international legal frameworks need to keep up with the ever-evolving cyberspace in order to regulate the use of these automated tools. Moreover, better legislation on the contributions and expenditures of political parties would help investigators establish who may be driving these activities.

Conclusion

Computational propaganda deploys automated bots on social media to influence users and induce them to support a specific political agenda. This practice can create public consensus where there was little or none, while drastically altering public opinion. The drivers of such campaigns usually have political aims, such as influencing the outcome of elections or referenda. Computational propaganda thus has a severe impact on the democratic process, as it weakens institutions and traditional media outlets. Recent cases and statistics show that the phenomenon is on the rise and is expanding into other fields, such as terrorist propaganda and healthcare disinformation. Although computational propaganda is not technically illegal, it can be described as a form of political deviance that undermines democratic principles. It also has social repercussions, such as the creation of echo chambers that make online public discussions homogeneous, and the polarization of political communication.

Countering computational propaganda presents several challenges. There is a lack of legislation aimed at this issue, as it makes use of ever-evolving technologies. Indeed, advanced technologies allow the instigators and perpetrators of online propaganda to remain anonymous and hidden. It has therefore been very complicated to implement appropriate policies to combat this form of manipulation. An approach that draws on techniques from multiple fields of study is thus needed, so as to address every implication of computational propaganda.

On a technological level, AI and machine learning can be employed by governments to counter bot-driven propaganda on social platforms. Sophisticated tools such as fact checkers should also be used to hinder the spread of disinformation and improve public trust in media institutions. From a sociological perspective, raising awareness and encouraging critical thinking through education and cooperation make the public more informed and better prepared for such events. Finally, an international legal framework regulating the automation of bots and the role of social networks should be implemented, so as to avoid negative impacts on society and politics.
