28 July 2022

Social Media Misinformation and the Prevention of Political Instability and Mass Atrocities

Kristina Hook · Ernesto Verdeja

Introduction

Social media’s enormous impact is unmistakable. Facebook sees approximately 300 million new photos uploaded daily, while six thousand Tweets are sent every second.1 The most popular YouTube channels receive over 14 billion views weekly,2 while the messaging app Telegram boasts over 500 million users.3 Social media platforms connect people across societies, facilitating information sharing in ways unimaginable only two decades ago. Yet the manipulation of social media platforms has also spread widely, and such platforms have been used to promote instability, spread political conflict, and call for violence. Researchers believe that organized social media misinformation campaigns have operated in at least 81 countries, a number that continues to grow yearly, with a sizable share of manipulation efforts run by states and private corporations.4

We argue that a wide range of actors connected to the instability and atrocity prevention community must incorporate emerging issues linked to social media misinformation (SMM) into their work, and we provide recommendations for doing so. A simple but disturbing truth confronts diverse professions whose work pertains to atrocity prevention: misinformation can rapidly shapeshift across topics, but only a few narratives need to take hold to erode trust in facts and evidentiary standards. Taking advantage of the dense, extensive social interconnections across social media platforms, actors can launch numerous falsehoods, accusations, and conspiracies and observe which narratives take hold. As a growing part of contemporary asymmetric conflict, malicious actors — whether foreign or domestic state actors, parastatal groups, or non-state actors — determine when, where, and how often to attack. Defenders, which include targeted governments, civil society organizations, tech corporations, media outlets, and others, must prioritize where to focus and how to respond. The nature of this asymmetry means that defenders find themselves in a reactive crouch. The quantity, speed, and increasing sophistication of misinformation pose profound challenges for instability and atrocity prevention stakeholders.

The paper is divided into several sections. Following a discussion of key terms, we turn to the main socio-political, psychological, and social media factors that increase the impact of social media misinformation. The paper next outlines the main functions of SMM, and then explores several challenges for those whose work intersects with atrocity prevention, including social media corporations, established (legacy) media, non-governmental civil society actors, researchers and civil society, and governments and multilateral organizations. The paper concludes with policy recommendations for these various stakeholder groups.

This policy paper draws on various sources to survey contemporary social media misinformation patterns and provide recommendations for multiple stakeholders. Our research uses three primary sources. The first is current scholarly and practitioner research. Second, we conducted interviews from October 2021 to May 2022 with 40 experts in governments and intergovernmental organizations, 13 in human rights organizations, 10 in technology corporations and tech oversight organizations, eight in media outlets, six in misinformation fact-checking and monitoring groups, 16 in research centers, and 11 in computer science communities.5 These were semi-structured interviews with groups and individuals, each an hour or more in length. The questions fell into several categories: specification of the misinformation and disinformation problems and their political, legal, and social consequences, particularly concerning atrocities and instability; interviewees’ team or organizational responses; assessment of the broader technical, legal, and policy initiatives currently in place, including their strengths and limitations; discussion of additional steps needed from the wider practitioner community; and identification of the major challenges for the future.6 Lastly, we draw on our own project with colleagues, Artificial Intelligence, Social Media, and Political Violence Prevention, which uses novel artificial intelligence (AI) tools to identify types and patterns of social media visual misinformation in unstable political situations, particularly in the Russia-Ukraine context.7
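The project’s technical pipeline is beyond this paper’s scope, but to give a flavor of the tooling involved, below is a minimal, hypothetical sketch of classifying social media images into misinformation-relevant categories with an off-the-shelf vision model. The three-label taxonomy, the fine-tuning assumption, and the classify helper are illustrative inventions, not the project’s actual methodology.

```python
# Illustrative sketch only: routing social media images through a
# fine-tuned vision model. Labels and architecture are hypothetical.
import torch
from torchvision import models, transforms
from PIL import Image

LABELS = ["unaltered", "miscaptioned", "manipulated"]  # hypothetical taxonomy

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
# Replace the classification head; assumes subsequent fine-tuning on
# images labeled by human analysts.
model.fc = torch.nn.Linear(model.fc.in_features, len(LABELS))
model.eval()

def classify(path: str) -> str:
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return LABELS[model(x).argmax(dim=1).item()]
```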

Definitions

In this paper, we adopt inclusive formulations of various definitions of key terms relevant to policy development.8 Social media refers to computer-based platforms that enable people to communicate and share information across virtual networks in real-time. This includes community apps such as Facebook and Twitter, media-sharing channels like YouTube and Instagram, messaging apps such as WhatsApp, and hybrid platforms like Telegram. Scholars typically distinguish between social media misinformation — involving false or misleading information shared on these platforms — and social media disinformation (SMD), the subset of intentionally shared misinformation.9 In practice, the boundaries are porous. Many purveyors of misinformation believe what they are sharing and are not intentionally spreading false information.

Additionally, not all harmful social media posts are wholly false. They may contain some truth, be largely true but misleading or out of context, or may advocate claims that are problematic but do not necessarily qualify as imminently dangerous. Furthermore, proving the origin of specific pieces of misinformation can be exceedingly difficult.10

Social media disinformation may also be part of influence operations: sustained campaigns usually organized by states, parastatal entities, or organized non-state actors. In other cases, SMD is largely ad hoc and decentralized. In keeping with much of the policy community, we use social media misinformation as inclusive of all these forms of problematic social media use and refer to SMD when there is a plausible indication of an intent to deceive, such as with influence operations. We nevertheless address relevant distinctions throughout the paper.

Factors Increasing Misinformation Resonance

SMM can elevate the risk of atrocities across various political conditions,11 ranging from repressive/authoritarian (China, Myanmar, Venezuela, Russia, etc.) to semi-democratic (Philippines, India, Indonesia, etc.).12 SMM has also taken hold in countries that have historically had strong democratic institutions, including the United States and the United Kingdom, partly due to a lack of trust in institutions and domestic political influences. Context matters: as civil liberties increase, so do the mitigating institutional and societal factors that help lower the salience of violent SMM. This section discusses three key clusters of factors that affect SMM resonance: 1) socio-political cleavages, 2) individual and group psychological dynamics, and 3) the social media ecosystem.13

Socio-Political Cleavages

Socio-political cleavages are key to elevating the likelihood of domestic political instability, including atrocities. These include significant social and political polarization, anti-democratic or weakened democratic regimes, and severe governance or security crises.

Severe social and political polarization refers to the hardening of in-group/out-group differences and weakening socialization processes that could otherwise moderate tensions. It occurs through the spread of dehumanizing discourses and formal and informal policies and practices. It also reinforces perceived normative differences between groups: out-groups are seen as threats to the interests, goals, security, or survival of the in-group.14 At its most extreme, such polarization may increasingly manifest through violent behavior, including attacks against opponents. Misinformation both draws from and intensifies polarization, underscoring the reinforcing nature of radicalization dynamics.15

Regime type also matters. Authoritarian and semi-authoritarian governments are much more likely to use disinformation to target opponents, silence dissent, and shape public discourse. However, SMD and SMM have also been effective in diverse contexts of “democratic backsliding”: democracies where the rule of law is unevenly applied, the free press is attacked or marginalized, and populist leaders are increasingly unrestrained by constitutional or legal checks (such as in Hungary, Turkey, and the United States). Although misinformation may originate from numerous sources, including civil society, the critical point is that with weakened legal and institutional constraints on executive authority, social media may become a powerful platform for misinformation and disinformation that is specifically perpetrated by state authorities or their proxies.

Profound governance or security crises are especially hospitable environments for SMM. These crises can include the likelihood or onset of armed conflict or collective violence, highly contested power transfers (e.g., extremely divisive elections, coups), constitutional crises, or the imposition of emergency rule. Crises heighten the political stakes, making social media “another battlefront in the narrative war.”16

In these contexts of generalized misinformation, sustained disinformation campaigns by states and their proxies can create further instability. State-sponsored SMD is often internally targeted, but SMD is increasingly becoming part of foreign policy destabilization and pressure campaigns, as seen with Russian disinformation in contexts ranging from Ukraine to the United States. In short, foreign involvement intensifies the instability factors noted above.17

These socio-political factors contribute to distrust between citizens and official sources as well as among citizens, increasing the potential influence of social media misinformation on those who feel socially, politically, or economically alienated.

Psychological Dynamics

Three broad categories of psychological dynamics increase group and individual susceptibility to social media misinformation: 1) belonging, 2) intelligibility, and 3) confirmation bias.18

The first category concerns a natural need for social belonging and security through membership in a group. Research shows that people have a powerful psychological need to connect with others, finding self-worth through community.19 Social media participation feeds into this need directly: it can satisfy, at least partly, the need for belonging by connecting like-minded people and strengthening psychological well-being. Accordingly, people heavily involved in a particular online community may find it difficult to critique dominant narratives, especially if information comes from a trusted or known source. Challenging dominant positions may invite criticism, humiliation, or even expulsion.

To make a complex, seemingly dangerous world intelligible, people often infer clear-cut causal connections, motivations, and relationships where they do not exist. While this is a common psychological heuristic, conditions of heightened instability make it particularly dangerous. Misinformation can replace complex or confusing political phenomena with reductive stories of good-versus-evil and us-versus-them. These epistemological shortcuts, which discount complex analyses and often foreclose critical scrutiny of one’s assumptions and preferences, are amplified in social media echo chambers that reinforce our world views.20

A final psychological factor is confirmation bias, the tendency to embrace information that confirms already-held beliefs while rejecting contradictory information. Social media worsens this bias;21 research finds that people online gravitate toward news sources that affirm their views and move away from challenging sources.22 Furthermore, effective disinformation campaigns intensify these biases to create politically divisive in-group/out-group distinctions. Research shows that social media users are much more likely to share unverified stories than to post a correction once stories have been shown to be false or manipulated.23

These psychological dynamics become especially important in atrocity-risk contexts, where an ongoing moral reorientation recasts out-group targeting from passively permitted to actively beneficial.24 As we will discuss further, SMM can create the perception of widespread societal buy-in to this moral recasting, facilitating a bandwagon effect.

Social Media Factors

Several context-specific social media factors also contribute to SMM impacts and should be carefully assessed in atrocity-risk contexts. These factors include a robust social media ecosystem with a developed “attention economy” and an SMM focus on topics of high political valence (highly salient and divisive, such as an election) where the political outcome is uncertain and political success requires some degree of public support.

A robust social media ecosystem refers to a substantial proportion of the public being regularly on social media. This activity sparks large, dense networks that facilitate the rapid spread of information. These platforms vary in format, regulation, online culture, and popularity. Given their role in spreading information and SMM, we also include non-mainstream or regionally popular formats like Telegram and Gab and messaging apps like WhatsApp and Viber in this category. Some social media platforms, such as VK (Russian Federation) or TikTok (China), are also believed to share user data with authoritarian governments.25 These differences mean that distinctions among foreign influence operations, domestic influence operations, and ill-informed information sharing must be teased out in each context, through case studies beyond this paper’s scope.

Despite these differences, the “attention economy” is a crucial aspect of social media.26 Researchers have noted that a person’s attention is a scarce resource, and social media tech companies rely on keeping users’ attention in order to drive revenue.27 Tech companies have a range of techniques, such as using “friends,” “likes,” “subscriptions,” and other quantifiable measures to secure users’ attention, which in turn strengthen the social media ecosystem and create a focused, engaged audience for advertisers. The public valuation of tech companies is tied to their number of users, and without sustained public outcry, companies are often slow to crack down on misinformation that may risk driving away users. Some companies have directly addressed troll factories, shell accounts, bots, or other amplifiers of misinformation by establishing special response protocols (discussed in Section V), but these actions are influenced by revenue concerns generated from the attention economy.28
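To make the attention economy concrete, here is a minimal, hypothetical sketch of engagement-driven feed ranking. The field names and weights are illustrative assumptions, not any platform’s actual algorithm; the point is simply that ordering purely by engagement is agnostic to whether a post is true.

```python
# Minimal, hypothetical sketch of engagement-driven ranking.
# Weights are illustrative assumptions, not a real platform's values.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    likes: int
    shares: int
    comments: int

def engagement_score(post: Post) -> float:
    # Shares and comments are weighted above likes because they keep
    # users on the platform longer, feeding the "attention economy".
    return 1.0 * post.likes + 4.0 * post.shares + 2.0 * post.comments

def rank_feed(posts: list[Post]) -> list[Post]:
    # Ordering purely by engagement maximizes attention (and ad
    # revenue) regardless of a post's accuracy.
    return sorted(posts, key=engagement_score, reverse=True)
```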

High political valence topics will vary by context, but the point is that they elicit strong public reactions and are framed to resonate along cleavages like political ideology or party, ethnicity, race, religion, or other recognized fault lines.29 Uncertain political outcomes that depend on public support are especially susceptible to various forms of information disorder, including misinformation and disinformation.30 In particular, disinformation campaigns are often used to secure backing by discrediting the opposition. The lead-up to presidential elections in Brazil (2018),31 Colombia (2022),32 Indonesia (2019),33 and the United States (2020)34 illustrates these dynamics. These campaigns are often sustained, coordinated, and sophisticated efforts to manipulate public sentiment and views, well beyond uncoordinated or sporadic misinformation posts.

Nevertheless, the context-specific nature of disinformation content, provenance, and dissemination patterns poses significant challenges for systematic theorizing and forecasting how disinformation will interact with high political valence issues. “Disinformation is not a wholly portable subject,” one expert told us. “Knowing about the subject of disinformation is not sufficient. In an individual context, you must know about the nuances of a politicized topic before combining this knowledge with technical, computational, and other social science approaches to disinformation. Disinformation is a derivative of specific real-life events.”35

Functions of Social Media Misinformation

Existing research suggests that SMM is not a direct cause of severe instability or mass atrocities, but it can be an enabler,36 legitimizing and accelerating violence in a variety of ways. Below, we frame these within three broad categories concerning the primary functions of misinformation. SMM contributes to:

Out-group Vulnerability by:

Fraying social relations between individuals and groups. Research has found that misinformation campaigns weaken social ties and separate people into increasingly self-contained online political communities, with fewer opportunities to encounter counter-narratives or other sources of information. This separation also opens potential avenues of radicalization.37

Spreading dehumanizing or polarizing discourse that normalizes perceptions of political opponents as untrustworthy adversaries or even existential threats. Sustained exposure to such severe dehumanizing discourse legitimizes marginalization, the denial of rights, and sometimes violence.38

Targeting opposition groups or leaders with specific false accusations, such as corruption or disloyalty, or with smear campaigns to undermine their credibility. This is especially common prior to pivotal transitions, such as upcoming elections, which often correlate with increased atrocity risks.

Attacking opposition figures to dissuade public deliberation and curtail speech. Sustained and relentless harassment of opponents discourages alternative views from emerging and signals to potential critics that they, too, may be targeted. It may also lower the perceived costs of violence against opposition figures.

In-group Cohesion by:

Presenting one’s group as the authentic defenders of important values and collective survival. This is a group-affirming function as it repositions one’s group as engaged in a righteous struggle with enormous stakes, which may lower the perceived prohibitions against violence. Groups’ differences are framed in heavily normative terms with little space for critique or compromise.

Building a collective identity around perceptions of persecution, situating fear and grievances at the center of identity, a noted dynamic often present in atrocity contexts.39

Advocating collective self-defense or preemptive attack against perceived enemies, another feature noted in atrocity-risk contexts.40 Social media is an effective platform for spreading calls for direct assaults on opponents and also for the practical work of organizing collective action off-line, including the creation of extremist advocacy organizations, secret cells, and militia groups.41

Erosion of Civil Trust and the Spread of Epistemic Insecurity by:

Introducing unrelated narratives or counterattacks. Misinformation can be part of a larger effort to divert attention from damaging criticisms or counterarguments.42

Sowing doubt about a specific issue, such as electoral fairness, candidate integrity, or opposing party intentions.

Creating what we term epistemic insecurity, in which charges of bias, dissimulation, and conspiracy undermine truth claims, evaluative standards, and evidence, which are then replaced by alternative, unsubstantiated narratives. Sustained misinformation campaigns can have the cumulative effect of sabotaging factual reporting and narratives while lowering social and epistemological barriers to far-fetched conspiratorial thinking.43 Several experts in disinformation response units of tech companies noted that foreign-supported SMD campaigns are often directed at spreading epistemic insecurity, or what they termed “perception hacking.”44

The points above underscore misinformation’s varied and self-reinforcing effects in fragile political contexts. These effects raise profound challenges for the atrocity prevention community, as discussed in the following section.

SMM Challenges for Instability and Atrocity Prevention

Social media misinformation presents various practical challenges to atrocity-focused work, affecting the full spectrum of efforts from early warning, deterrence and response, to prosecution. Any instability and atrocity prevention program across these categories needs to consider the following challenges to craft viable, effective policies and strategies:

Speed

The rapid pace of social media and the technology culture associated with its rise accelerate the spread of misinformation, pushing it faster and further. One technical and policy expert said that a “maximum engagement strategy” pushes information into the public sphere with little regard for critical inquiry or healthy skepticism.45 Research suggests that social media platforms are also changing norms, expectations, and practices in journalism — shaping professional cultures across the digital, print, television, and radio industries — as journalists report implicit or explicit pressure to “publish content online quickly at the expense of accuracy” for profit reasons.46 Our interviews revealed a similar pattern in social media, with interviewees noting the pressure policymakers (and others) feel to respond to crises within hours. Speed also contributes to crowding out unpopular opinions, with little thought given to dissenting views, an effect seen in the amplification of COVID-19 disinformation and misinformation worldwide.47 The speed of SMM concentrates, strengthens, and amplifies confirmation biases.

Chaos

Social media contributes to an overabundance of data, challenging instability and atrocity prevention policy experts to identify and prioritize what is most important for decision-making and to filter out misinformation, all in contexts that strain resources. The rise of SMM has corresponded with the transition to the information age, in which information itself has become a productive force.48 Mainly due to social media activity, data are now created at an unprecedented speed, with an estimated 44 trillion gigabytes of data generated daily.49 Yet far from meeting policymakers’ needs, these trends have created informational overload: “Without new capabilities, the paradox of having too much data and too little insight would continue.”50 In public remarks, a former U.S. Department of Defense advisor said, “If you could apply human judgements, it’s wonderful, otherwise substituting human judgment and talent with computing analytics does not work.”51 The turmoil of atrocity-risk contexts presents decision-makers with some of the world’s most complex challenges for actionable pattern identification. One former government analyst noted that SMM is “gasoline to the fog of war.”52

Curation and Control

Social media invites curation, providing abundant opportunities for fostering polarization and for SMD influence operations by external actors. The experts interviewed across professional sectors identified several themes related to the unique curation powers of social media and its implications.53 One analyst framed every social media interaction as having invisible third parties that shape the interaction, from software engineers who make various behavioral assumptions about users in developing social media algorithms to political actors employing dissimulation to advance their goals. Another policymaker discussed how viral SMM content, like false accusations of imminent violence by opponents, can dominate political deliberation by saturating the information sphere. Research confirms that perception biases are linked to how imminent a given risk is perceived to be, rather than to the actual severity of the danger, a dynamic that atrocity perpetrators can exploit to dehumanize targeted groups.54

Experts also cited greater credulity toward SMM when it is passed along by a trusted source such as a relative, friend, or colleague. One researcher argued that social media has extended traditional marketing practices of segmentation (dividing one’s market into targeted groups) into micro-segmentation, allowing SMD promulgators to test various disinformation narratives through promoted ads and adapt the content in real time based on audience feedback. The overlap of competing narratives, conspiracy, and chaos creates a strategic messaging advantage for those able to leverage these data, including SMM and SMD actors.55 These dynamics highlight the importance of legacy media institutions and investigative journalism in countering atrocity risks associated with SMM, although existing challenges limit their effectiveness in this area, as we address in the policy recommendation section.

Capacity and Anonymity

New technologies are increasingly affordable and simple to use. While social media platforms may bring a democratizing aspect to the information space, this ease of access and anonymity also give disinformation creators, amplifiers, and funders more opportunity to act with impunity. Growing capacity and anonymity increase the likelihood of SMM, though our interviewees also suggested that these features may likewise enable human rights defenders to develop or extend decentralized grassroots campaigns in novel ways (e.g., Ushahidi and Una Hakika in Kenya).56 Although cases differ, our interviews reveal a general divide among experts over how to respond to the double-edged consequences of greater capacity and anonymity. Some experts see a net positive that allows for new means of localized peacebuilding, information sharing, and anonymity for otherwise vulnerable sources, while others remain skeptical of the ability of peacebuilding groups to use social media effectively in the face of strong SMM and disinformation campaigns by states or other armed actors.57

Speech Rights: Balancing Civil Liberties and Civilian Protection

Long-standing debates and global differences surrounding the balance between national security, civil liberties, and civilian protection have sharpened in the face of SMM. Some states like New Zealand58 and intergovernmental organizations like the EU59 have strengthened oversight, standards, and penalties for misinformation and disinformation content on social media. Other states, such as Myanmar, China, and Russia,60 use national security as a pretext for limiting free speech and civil liberties. Different global contexts have diverse norms, ethical standards, and values around the blend of civil liberties, national security, and civilian protection.61

Additionally, major tech companies like Meta, Google, and Twitter, among others, are often hesitant to employ policies that may be seen as limiting free speech, noting the practical and logistical difficulties of enforcing consistent standards across many legal contexts.62 This is further complicated by conflicting national and regional laws, providing unclear direction and accountability for these companies, whose reach is nearly global (the 2022 EU Digital Services and Digital Markets Acts, for instance, are more demanding than current U.S. legislation).

Regardless, corporate content moderation efforts, even from companies such as Meta that have invested significant resources in this area, are often ineffective, slow, or inconsistently applied, whether because of insufficient resources devoted to combating misinformation, unclear legal obligations and expectations that vary by jurisdiction, or a commitment to profits over truth.63 Other social media platforms, such as Gab and Telegram, have weaker moderation rules and are less willing to regulate potentially dangerous speech.64 Without clear, robust, and implementable international standards, the balance between civil liberties and civilian protection will continue to vary across situations, including any national legal frameworks that might exist in an atrocity-risk context.65

Censorship and Accountability Efforts

Content removal can work at cross-purposes with accountability: when social media moderators take down content flagged as violating community standards, prosecutors, investigators, and prevention policymakers may lose access to that information, especially in international contexts not subject to legal warrants.66 This challenge has been acknowledged by the social media companies surveyed for this paper, yet at the time of this writing potential solutions remain under review at some of these companies. Relatedly, when asked about the major challenge to their efforts, governmental policymakers cited data access, an issue made more complicated by security classifications. As policymakers explained, except in criminal cases, technology companies’ disclosure of internal data, including highly profitable micro-segmentation metrics, remains voluntary. Uneven data-sharing policies by technology companies operating in contexts with highly variable civil liberties, such as China and the United States, complicate matters further, often leaving democratic states that sponsor atrocity prevention efforts with less opportunity to use these data in their work. Even in democracies, however, government demands for information on account owners and connections raise profound, contested privacy and surveillance issues.

Policy Recommendations

Effective atrocity prevention requires a coherent, integrated vision across the socio-political, psychological, and technical domains. These approaches respectively address the societal dynamics that enable atrocities, psychological dimensions that link atrocities and SMM (including the attention economy), and other technical components (e.g., identifying, monitoring, classifying, and countering SMM). Multiple interviewees underscored the context-specific nature of addressing SMM but also highlighted general prevention strategies that can be adopted. Countering SMM is one component of atrocity prevention, but an increasingly important one. This last section includes recommendations tailored to different groups of stakeholders, although we envision some of these recommendations holding for multiple actors. These stakeholders include social media corporations, established (legacy) media, non-governmental civil society actors, researchers and civil society, and governments and multilateral organizations. Each of the actors plays a significant role in countering SMM. We include specific guidance for each group and broader recommendations to strengthen strategic partnerships among them for atrocity prevention.

Social Media Corporations

This category includes private corporations like Meta (Facebook/Instagram), Twitter, and Google (including YouTube), among others. Although many large corporations have offices and strategies designed to tackle SMM, the following steps are needed:

Recognize that the platforms play a complex, central role in any effort to curtail disinformation. Remind shareholders that being a field leader includes addressing SMM; the consumer base expects this.

Adjust algorithms that amplify SMM, especially to reduce the reach of SMM and conspiracy accounts by demoting and de-prioritizing this content in users’ feeds; a minimal sketch of this kind of demotion follows this list. De-platforming accounts that call for violence works.67

Close fake and bot accounts regularly and proactively. Ensure a field-wide standard that provides regular announcements with statistical information and external evaluations (as large companies like Meta currently do68) on these efforts to show the companies are responsive to public demand.

Continue to clarify and consistently enforce policies and procedures for content monitoring, flagging, and removal.

Institute robust oversight mechanisms and empower staff to exercise decisional authority on flagging and removal. Ensure staff are protected from corporate retaliation or contract non-renewals for third-party vendors.

Prioritize SMM cases continually, not just when under public scrutiny. Blend efforts to include a clear focus on upstream, preventive monitoring rather than reactive response. Doing so will better equip companies to work with other stakeholders in identifying upstream efforts, thereby preventing (and not just responding to) SMM, especially disinformation. This involves more robust efforts to integrate “lessons learned” from previous cases.

Commission and implement annual polarization and conflict awareness education among staff, senior decision-makers, and content monitoring specialists. The latter must receive regular refresher training and extra insight into context-specific trends. Similarly, work with instability and atrocity prevention experts to deepen such knowledge in internal crisis monitoring teams. This allows companies to better anticipate future atrocity and instability episodes and prepare accordingly.

Invest more in local partnerships in the Global South for content moderation. Local experts are an essential asset but often need greater support and resources.69 Several experts interviewed noted that many tech company staff have little experiential or other knowledge of conflict-affected societies awash with violent SMM. For their part, tech company stakeholders admitted that significant gaps in their contextual knowledge persist despite hiring some local experts.70

Protect content analyst and moderator employees in violent or repressive locations, including by greater anonymization of company sources where possible.

Strengthen and formalize rapid response linkages to the instability and atrocity prevention community. Some of this has occurred, but experts note it remains relatively weak.71 Invest in building these networks before crises.

Resist self-censorship demands from recognized authoritarian regimes, including the sharing of surveillance information on human rights defenders working to prevent violence in their local contexts.

Maximize the effectiveness of corporate giving by sponsoring and participating in digital media literacy programs, journalism grants that allow media paywalls to be removed during instability and atrocity crises, and academic research grant programs for poorly understood aspects of SMM (such as the specific reinforcing processes connecting online extremism and in-world violence).

Work with advertisers to limit or stop advertising on known SMD sites that monetize disinformation.72
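As referenced in the algorithm recommendation above, the following is a minimal, hypothetical sketch of demoting flagged SMM content in a ranked feed. The misinfo_confidence field, the linear demotion curve, and the max_penalty parameter are illustrative assumptions rather than any company’s actual policy.

```python
# Hypothetical sketch of demoting flagged SMM in a ranked feed.
from dataclasses import dataclass

@dataclass
class ScoredPost:
    post_id: str
    engagement_score: float
    # 0.0 (clean) to 1.0 (confirmed SMM), e.g., from moderators or
    # classifiers; the field is an illustrative assumption.
    misinfo_confidence: float

def demoted_score(post: ScoredPost, max_penalty: float = 0.9) -> float:
    # Scale engagement down in proportion to misinformation confidence,
    # so borderline content is de-prioritized rather than removed.
    return post.engagement_score * (1.0 - max_penalty * post.misinfo_confidence)

def rank_feed(posts: list[ScoredPost]) -> list[ScoredPost]:
    return sorted(posts, key=demoted_score, reverse=True)
```

One design choice worth noting: proportional demotion preserves some reach for contested content, reserving outright removal or de-platforming for accounts that call for violence.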

Established (Legacy) Media

Recognize the explicit overlap between robust journalism, atrocity prevention, and SMM. Years of strong investigative journalism may later be necessary to counter dehumanizing calls for violence.73

Prioritize fact-checking and analysis over merely reporting extreme discourse to build a shared basis of facts to counter atrocity-related SMM.

Invest resources in long-term coverage of key issues, creating a foundation of public record facts that can counter later disinformation campaigns that elevate atrocity risks.

Work with civil society groups to lead digital media literacy programs, an essential component of reducing bandwagon effects that dehumanize or enable atrocity risks through viral SMM.

Publicize, protect, and adhere to journalistic norms, explaining to the public what these are and why they are essential. Build trust as a facts-based arbiter by acknowledging mistakes or failures around these norms. This trust will be essential during the heated moments of accumulating SMM-linked atrocity risks.

Speak openly about the distinction between news aggregators, which merely relay information, and primary or investigative news reporting, whose presence is shrinking.

Remove paywall protection for critical news coverage.

Participate in, support, and foster career advancement opportunities associated with non-major city reporting opportunities (e.g., the USA Today Network for rural community reporting), using these opportunities to widen public trust and digital literacy. Consider whether these models can be supported in contexts with long-term, structural atrocity risks.

Host public events with academics and public intellectuals who can discuss human tendencies toward epistemic insecurity.

Normalize discussions of bias and cognitive dissonance, including among journalists, and be open with the public about the steps taken to minimize these in reporting.

Non-Governmental Civil Society Actors

Host tabletop exercises and SMM-linked atrocity prevention simulations for stakeholders across all categories, reducing siloed efforts and fostering relationships before crises strike.

Working with other civil society organizations, create an action plan to strengthen coordination around monitoring and moderation on social media platforms to avoid piece-meal strategies that work at cross-purposes. Solicit input and buy-in from social media companies.

Build public advocacy for social media accountability. Maintain relationships with social media stakeholders, and when possible, work in such a way that builds political capital for reformers within the system. Additionally, maintain public pressure on tech companies that “whitewash” superficial SMM efforts.

Pressure tech companies to establish common binding norms and policies on SMM (little incentive exists for them to do so individually). The Global Internet Forum to Counter Terrorism is a start but should be expanded substantially through greater transparency, increased membership, and metrics that move beyond “low-hanging fruit.”74

Encourage financial support for news media to drop paywalls to access critical news.

Be realistic about organizational strengths and resources, planning a division-of-labor strategy among other civil society organizations.

Contribute detailed conflict mapping of instability contexts and actors, setting the groundwork for tailored and sustained SMM responses. Knowledge of these contexts can help tech companies fulfill the recommendation for more upstream, preventive efforts.

Practice reflexivity and prioritize platforms for Global South actors to contribute their expertise.

Researchers and Civil Society

Participate in tabletop exercises with other stakeholders, building networks for practitioners in the field. For senior academics, internally advocate for such activities to fulfill early-career researcher performance metrics (e.g., tenure), building the bench of scholars with SMM and atrocity prevention expertise at all career levels to ensure long-term sustainability.

Expand fact-checking networks to monitor, flag, and publicize disinformation. This may include combining expertise across high political valence issue areas (elections, public health, security, etc.).

Remember that atrocity prevention and SMM response operate on different, short-term timelines, and consider how research can provide empirically grounded frameworks for organizing information in real time.

Regularly speak with other stakeholders, asking what types of research are needed for practitioners’ atrocity prevention/SMM toolkit.

With civil society actors, lead trainings in SMM monitoring and digital media literacy for the public.

Partner with psychologists and other experts on education programs about SMM-related biases.

Prioritize the integration of SMM analysis into violence and instability early warning modeling;75 a minimal sketch of one such integration follows this list.

Support the role of local experts, especially from traditionally marginalized communities including in the Global South, in knowledge production around SMM. Provide support and anonymity when in repressive or violent contexts.

Practice media skills using available resources (e.g., university media offices, conversations with local journalists, and organizations like the Alan Alda Center for Communicating Science) and write these into grants. Employ these skills to “translate” research to SMM stakeholders and the public.

Develop more precise and operable frameworks of “harm” that recognize the speed and scale of social media diffusion;76 work with instability and atrocity prevention experts on these efforts.
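As referenced in the early warning recommendation above, the following is a minimal, hypothetical sketch of folding an SMM signal into an early warning model. The data are synthetic and the two features are illustrative assumptions; real early warning systems rely on curated conflict datasets and far richer covariates.

```python
# Illustrative sketch: adding an SMM signal to an early warning model.
# All data below are synthetic; nothing here reflects real cases.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

# Hypothetical country-month features: a baseline instability index and
# the volume of flagged dehumanizing SMM (e.g., from fact-checkers).
instability = rng.uniform(0, 1, n)
smm_volume = rng.uniform(0, 1, n)

# Synthetic ground truth in which violence risk rises with both signals.
logits = -3.0 + 2.0 * instability + 2.5 * smm_volume
violence = rng.binomial(1, 1 / (1 + np.exp(-logits)))

X = np.column_stack([instability, smm_volume])
model = LogisticRegression().fit(X, violence)

# Comparing coefficients suggests how much predictive weight the SMM
# signal carries alongside a traditional indicator.
print(dict(zip(["instability", "smm_volume"], model.coef_[0])))
```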

Governments and Multilateral Organizations

Develop and strengthen internal SMM analytical capacity, including hiring information officers with experience in the linkage between instability, atrocity prevention, and SMM.

Work with academic experts and integrate SMM into early warning and accountability policy toolkits. Fund case study-specific research to determine what approaches, tools, and lessons learned may be portable (or not) across contexts.

Increase policy analyst and policymaker knowledge and internal awareness of how social media operates across different platforms, including those with little regulation like Telegram and Gab.

Be mindful of too much direct government involvement, as this can reinforce mistrust of government. Ask internally whether civil society or other stakeholders can better lead public messaging on a contentious subject.

Ask questions on these topics internally and with government and multilateral counterparts, specifically about possible internal mismatches in SMM working definitions. Tailored questions can elicit whether competing mandates or mismatched working definitions reduce policy effectiveness.

Strengthen legislation and international agreements to ensure that tech companies that remove SMM can share material relevant to atrocities/instability with human rights researchers and prosecutors. Tech companies often remove material quickly for violating their Terms of Service and save it for only a limited period, but this material can be valuable for analysis and accountability. Current legal avenues, such as the Stored Communications Act, the CLOUD Act, and various Mutual Legal Assistance Treaties (MLATs), are cumbersome and slow.77 The Berkeley Protocol on Digital Open Source Investigations provides practical suggestions for improvements.78

Clarify institutional roles and policies in countering SMM internally and publicly.79 Avoid actions that may reduce the effectiveness of civil society efforts to counter SMM.

Guard against the tendency to prioritize process over outcome.

Democratic governments and multilateral organizations should use their influence and lobbying platforms with other governments to protect those countering SMM. Despite the perceived influence of social media companies, their local content moderators can still face governmental harassment in non-free contexts.

Utilize global and regional organizations and platforms (e.g., UN, EU, AU, OAS) to integrate SMM analysis with instability and atrocity prevention networks, policies, and doctrine. Currently, this work is ad hoc, with little concrete sharing of best practices.80 Intergovernmental agencies can be crucial in coordinating and sharing knowledge on standards, policies, and practical tools for combatting SMM.

In addition to non-governmental organizations and academic-led monitoring efforts, legislators and parliamentarians should consider legislation that establishes accountability and oversight mechanisms for violence and incitement that can be directly connected to platforms.81

Acknowledgments

We would like to thank our interviewees, many of whom requested anonymity. Additionally, we’d like to extend our gratitude to Isabelle Arnson, Amir Bagherpour, Rachel Barkley, Jeremy Blackburn, Cathy Buerger, Kate Ferguson, Tibi Galis, Andrea Gittleman, Derrick Gyamfi, Maygane Janin, Ashleigh Landau, Khya Morton, Savita Pawnday, Max Pensky, Iria Puyosa, Sandra Ristovska, Walter Scheirer, Lisa Schirch, Karen Smith, Fabienne Tarrant, Tim Weninger, Kerry Whigham, Rachel Wolbers, Lawrence Woocher, Oksana Yanchuk, and Michael Yankoski for discussing these issues with us. Finally, we thank Ilhan Dahir, James Finkel, Shakiba Mashayekhi, and Lisa Sharland from the Stimson Center.
