23 June 2020

The Challenges of Countering Influence Operations

ELISE THOMAS, NATALIE THOMPSON, ALICIA WANLESS

This paper contains references to disturbing subject matter, including violence, sexual violence, child abuse, xenophobia, hate speech, and verbal harassment.

Influence operations are organized attempts to achieve a specific effect among a target audience. In such instances, a variety of actors—ranging from advertisers to activists to opportunists—employ a diverse set of tactics, techniques, and procedures to affect the decisionmaking, beliefs, and opinions of a target audience.

Yet much public discourse has failed to paint a nuanced picture of these activities. Media coverage of influence operations tends to be negative, stoking fears about how influence operations might undermine the legitimacy of liberal democracies. But the purpose behind such campaigns must be considered when assessing their effects. In electoral settings, influence operations can refer not only to coordinated state efforts to influence a foreign election but also to positive advocacy campaigns such as those designed to encourage people to vote. That being said, governments and industry actors face growing pressure to do something about malign influence operations, but these campaigns must be clearly understood to be addressed effectively.


In reality, influence operations are neither inherently good nor bad, and it is up to societies themselves to decide what conduct and responses are and are not acceptable. The question of whether some influence operations are acceptable is highly ambiguous because it is so hard to ascertain the motives driving the actors behind them. This analysis examines a case study involving an online influence operation originating in Israel and targeting audiences in a host of countries including Australia, Canada, the United Kingdom, and the United States. The operators of this content mill push highly politicized online content and cultivate access to foreign audiences, at least in part for apparent financial gain.

Elise Thomas
Elise Thomas is a freelance journalist and researcher working with the International Cyber Policy Center at the Australian Strategic Policy Institute. Her writing has been published in Wired, Foreign Policy, the Daily Beast, the Guardian Australia, SBS, Crikey, and the Interpreter of the Lowy Institute.

This study explores the difficulty of making simple assessments about influence operations. The case study serves as a basis for analyzing publicly available information about social media community standards and government legislation aimed at countering influence operations, with the goal of identifying gaps in and challenges for the solutions proposed so far. The study ultimately points to the need for clearer consideration of what constitute acceptable persuasive techniques and tactics of online engagement, and it highlights the lack of clear existing guidelines for policy development.

KEY TAKEAWAYS

Influence operations defy easy categorization. Influence operations often fail to fit neatly into boxes outlined by individual policies or legislation. They are run in a complex environment where actors overlap, borders are easily crossed and blurred, and motives are mixed—making enforcement challenging. In this case study, actors share highly politicized online content but also appear to benefit financially from their actions, making it difficult to ascertain whether their motives are primarily political, commercial, or both.

Relevant policies by social media platforms tend to be a patchwork of community standards that apply to individual activities of an influence campaign, not the operation as a whole. Policies published by social media companies often focus on individual components of influence operations. This approach attempts to neatly categorize and distinguish actors (foreign versus domestic), motives (political influence and profit), activities (including misrepresentation, fraud, and spamming behavior), and content (such as misinformation, hate speech, and abuse). This piecemeal approach to enforcement raises questions about whether officials within social media platforms fully understand how influence operations work and how such campaigns are more than the individual behaviors that compose them.

Social media networks have more opportunities to counter influence operations through their platform policies than governments do with existing legislation. Social media companies have implemented various policies to govern how their platforms are used, providing opportunities for combating influence operations. They also have greater access to information about how their platforms are used and have domain-specific expertise that allows them to create more tailored solutions. Fewer avenues exist for countering such influence operations using government-led legal mechanisms. This is not only because of the relative paucity of laws that govern online activity but also because law enforcement requires attribution before it can act, and such attribution can be difficult in these cases. This means that governments have generally done little to help private industry actors determine what kinds of influence operations are unacceptable and should be combated. In the absence of such guidance, industry actors are de facto drawing those lines for society. Governments could do more to help guide industry players as they determine the boundaries of acceptable behavior by participating in multi-stakeholder efforts—some of which have been set up by think tanks and nonprofits—and by considering legal approaches that emphasize transparency rather than criminalization.

The influence operations uncovered by media scrutiny are not always as easy to counter as those writing about them might hope. Savvy influence operators understand how to evade existing rules, so that their activities and content do not breach known policies or legislation. Media coverage that showcases examples of influence operations seldom explains whether and how these operators violate existing platform policies or legislation. This is a problem because distasteful influence operations do not always overtly violate existing policies or laws—raising questions about where the lines are (and should be) between what is tolerable and what is not, and, moreover, who should be determining those lines. Even when existing policies clearly do apply, these questions persist. Stakeholders should more clearly assess what constitutes problematic behavior before rushing to demand enforcement.

INTRODUCTION

Influence operations are organized attempts to achieve a specific effect among a target audience. Such operations encompass a variety of actors—ranging from advertisers to activists to opportunists—that employ a diverse set of tactics, techniques, and procedures to affect a target’s decisionmaking, beliefs, and opinions. Actors engage in influence operations for a range of purposes. Covert political influence operations originating from foreign sources have been the subject of intense scrutiny in recent years and have stoked fears about how influence operations might undermine the legitimacy of liberal democracies. However, influence operations can also be motivated by commercial interests rather than conviction.

At times, there is no neat line dividing the two, a point that the case study featured in this analysis demonstrates. Churning out political content can be profitable, as the infamous Macedonian fake news industry, which captured headlines in the wake of the 2016 U.S. elections, demonstrated.1 The more outrageous, divisive, or hyperbolic the content, the more clicks it drives and the more money website owners can earn from advertisers. The fact that this conduct is motivated by profit rather than political conviction does not reduce the potential political or social implications, including the possibility of inciting tensions or causing harm.

Some forms of influence operations (state-linked ones) are easier to handle. When state-linked covert information operations are discovered, they are likely to be in violation of online platforms’ terms of service and therefore removed. Platforms like Facebook and Twitter have also taken to publicly disclosing their efforts to remove state-linked influence operations. The associated publicity imposes reputational costs on governments, political candidates, and other high-profile actors whose involvement in an influence campaign is discovered. These reputational costs may discourage them from trying again (at least in the same manner or immediately following public attribution of such ties).

Natalie Thompson is a James C. Gaither Junior Fellow with the Technology and International Affairs Program.

More ambiguous forms of influence operations are harder to parse. Mixed-motive operations, in which operators benefit financially from sharing highly politicized content and the goal is therefore neither unambiguously commercial nor purely political, present a challenge for platform operators and regulators. When such campaigns are operated by private citizens with no official political affiliations, the costs of public disclosure may be much lower. Such an individual may not have a significant public reputation to maintain, and in some cases their activities may not technically violate a platform’s terms of service without compelling evidence of particular motivations, which can be difficult, if not impossible, to discern. And though media organizations are often quick to decry this behavior and call for enforcement, it is not always entirely clear if the behavior of mixed-motive operators is necessarily problematic. Some may argue that these actors have simply found clever ways to profit from the design of social media platforms, which reward highly engaging content.

Beyond this, the profit motive is a powerful one. Operators may have already sunk substantial time, effort, and money into establishing their so-called businesses. Strong incentives to continue are likely to make mixed-motive operations resilient, tenacious, and recidivist problems for social media platforms. This is especially true if the perpetrators suffer no real consequences after their operations are exposed (beyond, perhaps, some temporary disruption as they replace the social media accounts they employ).

In either case, determining when an influence operation becomes problematic requires stakeholders to think holistically about the context in which an influence operation occurs, the actors who perpetrate it, the goals of their operations, the means by which they accomplish them, and the scale at which they operate. Influence operations can be problematic when operators attempt to disguise their identities or their aims, when they rely on false or misleading information, or when they cause real-world harm to their target audiences. Absent compelling ways to measure the effects of influence operations and their countermeasures, and absent a whole-of-society approach to determining acceptable techniques of persuasion, influence operations continue to provoke confusion and anxiety.

The case study examined here targets existing far-right, xenophobic, and anti-Muslim audiences on social media in an operation that appears to have both political and profit-driven elements. This case study highlights the way in which mixed-motive campaigns can fall through the cracks of existing regulatory frameworks, especially when they leverage preexisting and largely authentic social media audiences. The campaign centers on a set of thirteen websites that produce low-quality, inflammatory content that is then shared synchronously across a network of at least nineteen social media pages. The operators have gained moderator privileges on social media pages targeting audiences in Australia, Canada, the United Kingdom (UK), and the United States (among other countries) to share their content and drive traffic to the domains. Given characteristics shared by the websites, it is likely that they are run by a single operator or set of operators. Because the content produced by the domains and shared on social media platforms is often highly politicized and because the operators likely profit from advertising revenue from the domains, it is unclear whether they are motivated by political interests, commercial interests, or both.

This case demonstrates the limits of current ways of addressing this type of mixed-motive influence operation. Past reporting on this content mill identified it as part of a network of at least nineteen anti-Muslim Facebook pages pushing out synchronized posts,2 resharing the same content within seconds of one another.3 The posts tend to use Blogspot links to cloak the true sources of the content, such as freepressfront.com and speech-front.net.4 The network was in part established by Israel-based Facebook accounts, which then approached users running pages in other countries, offering to provide content to their audiences in exchange for also being made administrators on those pages.5 The Guardian linked this activity to an individual in Israel named Ariel Elkaras; in the past, this individual had posted on search engine optimization and web marketing forums under the username Ariel1238a, seeking advice on how to monetize content.6 When approached by journalists, Elkaras denied knowledge of or involvement in the content mill.

However, shortly after this exchange, several of the domains and a large amount of content were taken down.7 (This study has not independently verified the attribution to Elkaras and will continue to refer to the group or individuals responsible for the content mill as the operators.) In response, Facebook appears to have removed a number of pages directly operated by the campaign, but it has not taken action against multiple other Facebook groups infiltrated by the content mill and its owners.8 Many of the domains (some of which were briefly taken down following the Guardian’s 2019 reporting) are still active, and their content is still being widely shared across Facebook, Twitter, Reddit, and Gab. In short, the problem is far from over.

THE ANATOMY OF AN ONLINE CONTENT MILL

The business models of online content mills like the one featured in this case study rely on generating large amounts of low-quality web content (often either cheaply produced or simply plagiarized) to entice internet users to visit their websites, allowing the organizers of these sites to generate advertising revenue. The operators of content mills commonly use social media to drive traffic to their sites.

Alicia Wanless is the co-director of the Partnership for Countering Influence Operations.

As of February 2020, the content mill in question still appears to be an active threat. Some of the domains it uses that were temporarily taken down appear to have been reinstated. And while Facebook removed the pages controlled directly by the content mill to promote its content, the campaign’s operators are still active as administrators on a number of other far-right and anti-Islamic Facebook pages and continue to use them to churn out problematic online content. To understand how the content mill operates, it is vital to understand not only the content it produces and the domains it uses to host it but also the social media channels it employs to amplify its reach and attract readers.

THE CONTENT MILL’S CONTENT AND REACH

Before delving deeper into the inner workings of this content mill, it is important to give a sense of the type of content it produces. The articles that the operators are publishing on the website domains they control have a relatively consistent format. Many of them are composed of between two and five paragraphs and relate to an embedded tweet or YouTube video from a range of sources. These low-quality articles consistently present Islam, in general, and Palestinian Muslims, in particular, in an overwhelmingly negative light.

Most of the content is not necessarily overtly false, but it tends to be misleadingly slanted, cherry-picked, or otherwise taken out of context. Like much low-quality content online, the overall approach seems to be to present the most inflammatory narrative possible (see screenshot 1). And while there is little evidence to suggest that outright deception is the intention, the operators also do not seem to care whether the information they present is true.

The freespeechfront.net homepage

For example, in September 2019, an article titled “Arab-Muslim Father Terrorizes his Infant Son With Gunshots” was posted on free-speechfront.info.9 The article was based on a tweet from the Australian Jewish Association, commenting on a video of a man abusing an infant. The article reads:

Arab-Muslim father decided to terrorize his baby son and share the video on social media. Instantly the video went viral and sparks outrage all over the world.

The video posted on Twitter with the following description:

“NEVER TOO YOUNG WHEN BORN FOR JIHAD! . . . Shocking video. Arab/Muslim father decides to acclimatise his newborn son to gunshots! . . . This is ‘Palestinian’ culture where hatred of Jews exceeds the love of children. Children are weapons of jihad, used as human shields and taught terrorism . . .”

One of the top comments on the video referred to a famous statement by Israeli Prime Minister Golda Meir, who said: “ . . . We can forgive the Arabs for killing our children. We cannot forgive them for forcing us to kill their children. We will only have peace with the Arabs when they love their children more than they hate us . . .”

The video in question was real but had no actual connection to Palestinians or Palestine. In reality, the incident took place in Saudi Arabia, and the perpetrator—believed to be the baby’s brother—was arrested.10 It is unclear whether the article’s framing, which misplaces the video in the context of broader Arab-Israeli and Palestinian relations, is a case of accidental misinformation or intentional disinformation, or whether that framing originated with the Australian Jewish Association and was merely amplified by free-speechfront.info. The actual facts of the case would have been easy to establish, not least because multiple Twitter users replied to the initial tweet to point out that it was incorrect. What is clear is that tens of thousands of social media users were exposed to this misleading information. According to metrics from Buzzsumo, a proprietary research and analytics tool that provides information about the social media reach of online content, the free-speechfront.info article received over 39,900 engagements on Facebook, including 24,800 reactions, 7,300 shares, and 7,800 comments.11

Another illustrative example of the content mill’s typical posts is an article posted on free-speechfront.net in November 2018, titled “Shock as Bulgaria, Romania, Serbia and Greece Declare War Against Radical Islam, the UN and the EU as They Join Forces With Israel.”12 The content of the article is a somewhat disjointed mash-up. The first half relates to Israeli Prime Minister Benjamin Netanyahu’s meeting at the Craiova Forum summit early that month with the leaders of the four countries, including his efforts to rally their support for Israel’s positions in the United Nations (UN) and European Union (EU) on issues like Palestinian recognition and statehood.13 (The article includes two embedded tweets from Netanyahu’s official @IsraeliPM Twitter account.)

The second half of the piece attacks the UN and decries the abuse of Christian populations in Muslim-majority countries (including a paragraph lifted without attribution from a speech by Netanyahu on persecuted Christians in Iran).14 This part of the article appears to have been copied from an earlier article on the since-deleted freespeechtime.net, which was part of the previous generation of domains connected to this particular content mill.

This example sheds light on the cheap, dubious ways content mills mass-produce and promote low-quality content. It demonstrates how content mills recycle stories—the aforementioned article uses Netanyahu’s visit to the Craiova Forum as a news hook and then fills out the rest of the story with old and unrelated content. The same technique, and the same filler content, was used again for an article on politicsonline.net a few months later.15 This article also demonstrates how content from the website is laundered through the broader information ecosystem. In addition to generating over 6,600 engagements on Facebook, 393 shares on Twitter, and hundreds of likes and retweets,16 the article’s content also spread across multiple other fringe and far-right blogs.17

Using the service Social Insider, this investigation identified the posts that received the highest engagement on Facebook pages infiltrated by the operators of the influence campaign (a tactic that will be discussed in more detail below).18 Social Insider is a proprietary online tool that analyzes social media posts to identify which posts generate the most user engagement.

As an example, the most popular post (from February 14) on the “Guardians of Australia” Facebook page in the thirty days prior to February 24, 2020, linked to an article on thepolitics.online (see screenshot 2).19 The article claimed, “A father from Afghanistan, barely making ends meet, sold his own daughter to a 55-year-old cleric for a goat and some food. Footage doing the rounds on the Internet shows the moment local women gave him a beating,” and it included a video of the Russian state-backed media conglomerate RT’s coverage of a story from 2016. The opening paragraph of the article was found verbatim on numerous other websites, including those of RT and former footballer turned conspiracy theorist David Icke.20 At the time of the original RT story, the Washington Post ran an article on a similar subject stating that the girl had been rescued and was “in a shelter, while the man [had been] arrested and jailed,” according to Afghan officials.21

The most popular “Guardians of Australia” post (February 2020)

The most popular post by a Facebook page called “Never Again Canada” for the same period featured an article about Auschwitz (as of May 2020, this group’s Facebook page has been removed).22 (The content mill often promotes pro-Israeli pieces in addition to overtly Islamophobic ones.) The group’s second-most popular post promoted another thepolitics.online article containing a video titled “Watch: Adult Iranian Muslim Man Marries a[n] 11-Year-Old Girl.”23 The opening paragraph of this piece was also found on other websites such as trump-train.com, a so-called news site aimed at Americans but registered in India.24

The introduction was nearly identical to that of a Radio Free Europe/Radio Liberty (RFE/RL) article, which went on to note that the marriage had been annulled due to public backlash.25 For its part, the article on thepolitics.online reported, “The girl is said to be around 11. The man is reportedly 33. They were recently wed in a remote southwestern Iranian province with a video of the ceremony posted online.” By comparison, RFE/RL stated, “The girl is said to be around 11. The man is reportedly twice her age. They were recently wed in a remote southwestern Iranian province with a video of the ceremony posted online.” According to Google search return dates, both the RFE/RL and thepolitics.online pieces were published on September 4, 2019. This mishmash of directly copied or slightly reworded content taken from other sources is typical of the content published on the content mill’s domains. The strategy is not unique to these operators—plenty of low-quality content online is derived from other sources—but the content mill does demonstrate consistency across domains in terms of the quality of the content shared.
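One rough way to surface this kind of near-verbatim copying, offered here purely as an illustration rather than as the method used in this investigation, is a simple string-similarity check over the two passages quoted above. In Python's standard library this takes only a few lines:

```python
from difflib import SequenceMatcher

# The two near-identical opening passages quoted above.
rferl_text = (
    "The girl is said to be around 11. The man is reportedly twice her age. "
    "They were recently wed in a remote southwestern Iranian province with a "
    "video of the ceremony posted online."
)
content_mill_text = (
    "The girl is said to be around 11. The man is reportedly 33. "
    "They were recently wed in a remote southwestern Iranian province with a "
    "video of the ceremony posted online."
)

# Ratios close to 1.0 indicate near-verbatim copying rather than coincidental overlap.
ratio = SequenceMatcher(None, rferl_text, content_mill_text).ratio()
print(f"Similarity: {ratio:.2%}")
```

Scores this close to 100 percent are a strong signal that one passage was copied and only lightly edited from the other, even when a word or two has been swapped out.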

Given that both of these examples are stories that were covered by more reputable media outlets, the content is not entirely false, per se. Rather, the operators tend to mislead by not specifying that both cases were resolved and the victims protected—follow-up information that was most likely available when the pieces were posted. Indeed, in the first example, more than three years had elapsed between the publication of the original story and the Facebook post.
THE CONTENT MILL’S WEB DOMAINS

Beyond the sort of content that the content mill seems to churn out, it is also worth examining the web domains on which the content tends to be published. The content mill operation is based on a series of domains dating back to at least 2017. There appear to have been several generations of domains, which were mostly Blogspot sites in the beginning but later became mostly independently registered domains. For example, the online content eventually migrated from sites like on-linepolitics.blogspot.com and freespeechtime.blogspot.co.il to ones like thepolitics.online and freespeech-time.com.

At least thirteen relevant domains were active as of February 10, 2020, and many of them appear to be interconnected. Ten of these domains use the same Google Analytics tracking code, which indicates that they are controlled by the same actor(s) (see table 1).26 Google Analytics helps website owners track their web traffic. When website owners run Google Analytics on their sites, a unique eight-digit identification number linked to their Google Analytics account is inserted into the source code of the sites. One Google Analytics account can be used to track multiple sites. For example, one website could be marked UA-XXXXXXXX-1, with the repeated Xs representing the eight-digit identification number, while a second related website would be identified with the number UA-XXXXXXXX-2, and so on. These identification numbers can be used to discern which websites are likely run by the same set of operators.
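As a minimal sketch of how such identifiers can be collected in practice, the Python snippet below fetches each site's homepage and groups the domains by the Google Analytics account number found in the page source. The domain list is illustrative, the sites may no longer resolve, and this is not the tooling used for this study; it simply indicates the general approach.

```python
import re
import urllib.request
from collections import defaultdict

# Illustrative list of domains to check; the sites may no longer be live.
DOMAINS = ["thepolitics.online", "freespeechfront.net", "thetruthvoice.net"]

# Captures the shared account number and the per-site suffix, e.g. UA-27810882-14.
UA_PATTERN = re.compile(r"UA-(\d{4,10})-(\d+)")

def fetch_homepage(domain: str) -> str:
    """Download a site's homepage HTML (no error handling, for brevity)."""
    with urllib.request.urlopen(f"http://{domain}", timeout=10) as response:
        return response.read().decode("utf-8", errors="ignore")

def group_by_analytics_account(domains):
    """Map each Google Analytics account number to the domains that embed it."""
    groups = defaultdict(set)
    for domain in domains:
        try:
            html = fetch_homepage(domain)
        except Exception:
            continue  # skip domains that are offline or unreachable
        for account_id, _suffix in UA_PATTERN.findall(html):
            groups[f"UA-{account_id}"].add(domain)
    return groups

if __name__ == "__main__":
    for account, linked_domains in group_by_analytics_account(DOMAINS).items():
        print(account, "->", sorted(linked_domains))
```

Domains that fall under the same UA- account number are, as described above, very likely administered through a single Google Analytics account.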
TABLE 1. SUSPECTED ACTIVE DOMAINS LINKED TO THE CONTENT MILL
URL | Date registered | IP address | Archive | Google Analytics ID
thepolitics.online | 1/27/2019 | 216.239.38.21 | http://archive.is/Apvkr | UA-27810882-14
free-speechfront.info | 1/14/2019 | 216.239.34.21 | http://archive.ph/SUNtw | UA-27810882-11
freepressfront.com | 12/22/2018 | 216.239.32.21 | http://archive.ph/Ahm4F | UA-27810882-12
freespeechfront.net | 1/14/2019 | 216.239.36.21 | http://archive.ph/cYyOR | UA-27810882-9
i-supportisrael.com | 6/7/2017 | 216.239.32.21 | http://archive.ph/hApJC | UA-27810882-3
politicaldiscussion.net | 4/9/2018 | 198.20.92.26 | http://archive.ph/oS2Ot | UA-27810882-7
thetruthvoice.net | 1/20/2019 | 192.64.119.249 | http://archive.ph/ZZEls | UA-132814408-1
speech-point.net | 2/6/2019 | 216.239.34.21 | http://archive.is/RirNd | UA-27810882-10
speechline.net | 9/21/2018 | 216.239.38.21 | http://archive.ph/oS2Ot | UA-27810882-16
speech-point.com | 10/16/2018 | 216.239.34.21 | http://archive.ph/ygZUI | UA-157443622-2
threalnews.net | 2/18/2019 | 216.239.32.21 | http://archive.ph/LPA3J | UA-27810882-15
freespeech-time.com | 11/19/2018 | 216.239.32.21 | http://archive.ph/MzwZ4 | UA-27810882-21
politicsonline.net | 2/22/2019 | 216.239.32.21 | http://archive.ph/MH0NH | UA-157443622-1

That is not all. An eleventh domain, thetruthvoice.net, uses a different Google Analytics code, but a close investigation of thepolitics.online (one of the initial ten linked domains) in February 2020 found that both websites share a different common identifying feature. At the time, they were both using the same MGid JavaScript, another type of analytics tracking script, as seen in screenshots 3 and 4 below. This script appears to have since been removed from one of the domains (thepolitics.online).

Use of JavaScript tracking code on thetruthvoice.net

Use of JavaScript tracking code on thepolitics.online

Two additional web domains share a different Google Analytics tracking code from the initial ten domains but also appear highly likely to be a part of the content mill. The content from politicsonline.net and speech-point.com is highly consistent in tone and subject with the content shared by the other identified domains and often is shared by the same social media accounts. There is an obvious similarity between some of the domain names—between politicsonline.net and thepolitics.online, for instance, and between speech-point.com and speech-point.net; these striking parallels echo the similarities between other domains such as free-speechfront.info and freespeechfront.net. The speech-point.net domain also uses the exact same wording in its cookie and privacy policy as five of the other domains.27 Most obviously, speech-point.net, speech-point.com, speechline.net, free-speechfront.info, freespeechfront.net, and thepolitics.online all contain the sentence “First, Our blog [sic] designed to share legitimate political views while exercising our rights to freedom of expression and freedom of speech,” and they all also include an invitation to contact the site owners at “the following email” without listing an email address.28

There is further evidence of some overlap in content between the domains. For example, an article titled “U.S. Cuts All Funding and Announces Withdrawal From U.N.’s Cultural Agency Over Pro-Islamic Bias” was published on politicsonline.net on April 4, 2019.29 The same article appears to have been previously published on freespeech-time.com in November 2018, although it has since been removed.30 While these two domains have not been definitively proven to be part of the operation, it is possible that they are linked.

Most of these domains present themselves as news sites, with no overt information about who is responsible for running them. The exceptions are politicaldiscussion.net, a discussion forum, and thepolitics.online, which has an About Us page that describes the site as an “anonymous blog written by Israelis” and denies that it is a news site (see screenshot 5).31 Despite this disclaimer, the site is designed to look like a news site.

Overall, it appears plausible to conclude that these domains are run by the same actors. The similarity of the content in terms of style, tone, and subject matter, as well as the repeated sharing of content across sites, provides ample circumstantial evidence. The use of shared analytics accounts gives even stronger weight to the hypothesis that these domains are operated by the same individuals. At the very least, ten of these domains are connected through a single Google Analytics account and an eleventh is linked by a shared MGid tracking script, meaning that if there are multiple operators, they are coordinating with one another across sites.

The “About Us” page of thepolitics.online

For research purposes, the authors of this study used Uberlink to designate these domains as seed pages and conduct hyperlink network analysis of outbound and inbound links to these websites. Uberlink employs web-based software known as VOSON to track and analyze when web pages link to one another through hyperlinks;32 using the content mill domains as seeds, or points of origin, the software identifies all of the domains that link to the seeds or to other domains in the network.

In the case of the content mill’s domains, this analysis returned a network of 143 domains with 239 connections between them. The visualization tool Gephi was subsequently used to create a network mapping of these websites, which is featured below (see figure 1). The mapping displays the relationships among the identified domains, all of which trace back (through some number of nodes) to the content mill domains.
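The sketch below illustrates the general shape of this kind of analysis using the open-source networkx library rather than the proprietary tools used in the study. The handful of edges is invented for demonstration only, standing in for the 239 hyperlink connections returned by the crawl, and the community detection step is a rough analogue of the colored clusters in a Gephi layout.

```python
import networkx as nx
from networkx.algorithms import community

# Toy stand-in for the crawled hyperlink edge list (source page -> linked page).
edges = [
    ("www.toplist.co.in", "politicsonline.net"),
    ("www.linkedbd.com", "politicsonline.net"),
    ("www.infolinks.top", "i-supportisrael.com"),
    ("thepolitics.online", "i-supportisrael.com"),
    ("speechline.net", "thepolitics.online"),
]

graph = nx.DiGraph()
graph.add_edges_from(edges)

# Community detection on the undirected projection, roughly analogous to the
# colored clusters produced by a Gephi visualization.
clusters = community.greedy_modularity_communities(graph.to_undirected())
for index, cluster in enumerate(clusters):
    print(f"community {index}: {sorted(cluster)}")

# Domains with high in-degree are the hubs that many other sites link to.
print(sorted(graph.in_degree(), key=lambda pair: pair[1], reverse=True)[:3])

# Export the graph for visualization in Gephi.
nx.write_gexf(graph, "content_mill_network.gexf")
```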


This visualization reveals some surprising curiosities about the web domains that appear to be affiliated with the content mill. While the content mill has clearly targeted Western countries like Australia, Canada, the UK, and the United States, it does not target those countries exclusively. Nearly one-third of all the domains found in the hyperlink network analysis contained .in, India’s country code top-level domain. Why would that be the case?

There are no definitive answers, but given the rise in sectarian violence targeting Muslims in India, it is possible that the operators see the country as a market for Islamophobic content or that India simply happens to be a fortuitous market for the purveyors of this inflammatory content.33 Together with speech-point.com, therealnews.net, and politicsonline.net, these India-focused domains formed a distinct community in the network, marked in teal at the center of the visualization. Even though the India-centric domains form the biggest cluster in the network analysis visualization, this study focused on the content mill’s activities in the West because the majority of the Facebook pages’ content and the domains in question were aimed primarily at American, British, and Canadian audiences, even if they were consumed by other audiences.

Another smaller community centered on the domain i-supportisrael.com, which is one of nine domains (6 percent of the total) that contained the term “i-supportisrael.” A dozen domains (or 8 percent of the total) contained the term “Israel.” This community is displayed in blue in the top right corner of figure 1.34

But the vast majority of domains in this network appear unrelated in nature to the content pushed by the seed websites. The primary purpose of these secondary domains may not be to spread content produced by the mill, but they are part of an ecosystem that helps boost such content in online search returns by increasing the number of hyperlinks pointing to it. Many of the connected domains appear to be directories listing and hyperlinking to other websites (such as www.toplist.co.in, www.linkedbd.com, and www.infolinks.top); adding backlinks in this way is one method of boosting a website’s search rankings (and therefore its advertising revenue).35 This lends further weight to the claim that the operation is at least partially financially motivated rather than solely driven by political aims, though a group seeking to spread its content for political reasons might employ the same techniques to extend its reach, underscoring the difficulty of determining the intent of online actors.
THE CONTENT MILL’S FACEBOOK PAGE INFILTRATION

The influence operation depends not only on the inflammatory content it creates and the domains that host it; it also relies on social media networks to help promote its work and attract readers. Internet archives show that the content mill previously operated its own Facebook pages, including pages linked to the free-speechtime.com and freespeechtime.net domains (in the latter case, the Facebook page was titled “We Love Israel”).36 The social media giant has since removed these pages.

Yet things did not stop there. Previous reporting by the Guardian uncovered evidence that the content mill operators have switched to using more covert tactics, infiltrating existing Facebook pages and using them to drive traffic to the content mill’s domains. According to the Guardian, the operators of the campaign systematically contacted the moderators of established right-wing, pro-Israel, and anti-Islamic Facebook pages.37 Posing as passionate volunteers, the operators offered to assist the moderators in operating these pages. On being given administrator privileges, the operators co-opted these pages to post content from their own domains.38

The content mill appears to have begun employing this strategy in 2016 using pages in the United States and Israel, before expanding over the course of 2018 and 2019 to include at least nineteen pages, according to the Guardian.39 (BuzzFeed’s reporting pointed to twenty-five pages.)40 The campaign’s operators are mainly targeting audiences in Australia, Canada, the UK, and the United States, though Austrian, Israeli, and Nigerian audiences have been targeted as well. To give a sense of how prolific the influence operation has been, the Guardian’s analysis found that the network posted 5,695 coordinated posts in October 2019, which generated 846,424 likes, shares, or comments over that period. By December 2019, the Guardian team reported, “the network [had] published at least 165,000 posts and attracted 14.3 million likes, shares or comments.”41

BuzzFeed has reported on the campaign’s use of link redirection on some of the Facebook groups based in the United States and Canada.42 The investigation found multiple Facebook pages using links from Google’s Blogger platform to disguise the real domain that hosts the content. For example, on Facebook, a link might appear to be from fpf-blog.blogspot.com, but when the user clicks, they are redirected to freepressfront.com instead. Experts who spoke to BuzzFeed suggested that this tactic might be an effort to evade detection by Facebook.43
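A minimal way to check where a cloaked link actually leads, shown here only as an illustration, is to request the URL and compare the final address after redirects with the address displayed on Facebook. The example uses the fpf-blog.blogspot.com address mentioned above, which may no longer be live; Blogger pages that redirect via JavaScript or a meta-refresh tag would not be caught by this simple HTTP-level check.

```python
import urllib.request

def resolve_final_url(url: str) -> str:
    """Follow HTTP redirects and return the address the browser would land on."""
    with urllib.request.urlopen(url, timeout=10) as response:
        return response.geturl()

if __name__ == "__main__":
    shared_link = "https://fpf-blog.blogspot.com/"  # link as it appears on Facebook
    final_url = resolve_final_url(shared_link)
    if "blogspot.com" not in final_url:
        print(f"Cloaked link: {shared_link} resolves to {final_url}")
    else:
        print(f"No HTTP-level redirect detected for {shared_link}")
```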

This problematic behavior has not yet been stopped in its tracks. As mentioned previously, the BuzzFeed and Guardian investigations into the content mill were published in April and December 2019, respectively. As of February 2020, Facebook pages directly controlled by the operators appear to have been taken down, but other groups that the operators have infiltrated are still running and still promoting their content. For instance, according to Facebook’s transparency data (see screenshot 6 below), half of the eight administrators for the page “Guardians of Australia” are based in Israel. The page routinely shares content that has nothing to do with Australia, including posts from the content mill domains (see screenshot 7).44

Transparency data for the “Guardians of Australia,” Facebook page

Unrelated inflammatory content on the “Guardians of Australia” Facebook page

Interestingly, the surviving Facebook pages also often share content from Hananya Naftali, an influential Israeli social media figure and social media adviser to Netanyahu (see screenshot 8). While the content mill operators have a clear profit motive for driving traffic to their web domains, the intent behind their sharing of Naftali’s content is more ambiguous. Such sharing could reflect a genuine desire to share the content, an effort to build a bigger audience for the pages by sharing popular content, or both.

“Guardians of Australia” group’s promotion of Hananya Naftali content
THE CONTENT MILL’S OTHER SOCIAL MEDIA OUTREACH

While Facebook appears to be by far the most significant element of the content mill’s social media operation, its operators do appear to be active on other platforms, including Twitter, Reddit, and Gab.

On Twitter, there are at least two accounts that appear likely to be a part of the content mill. One of these, @TimeSpeech, uses the same branding as freespeech-time.com and the group’s now-removed Facebook page (see screenshots 9 and 10 below).

@TimeSpeech Twitter account

The website header of freespeech-time.com

The @TimeSpeech Twitter account shares content not only from freespeech-time.com but also from the other domains in the network (including speech-point.com and truthvoice.net). This reinforces the conclusion that those domains are likely part of the content mill even though they use different Google Analytics tracking codes.

Another domain, i-supportisrael.com, links to a Twitter account called @ISupport_Israel, which uses the same branding as the website. Both the @TimeSpeech and @ISupport_Israel Twitter accounts have pinned tweets promoting politicaldiscussion.net, a discussion forum that uses the same Google Analytics tracking code as the other content mill domains. The @ISupport_Israel account also shares content from all the content mill domains. Based on this evidence, it seems reasonable to conclude that these accounts are likely part of the content mill operation.

Three additional Twitter accounts are worth mentioning. All three use women’s names—Sheila Berger (@SheilaB16315388), Alicia (@Alicia05972932), and Liza Rosen (@lizarosen101)—and use a Star of David as their profile image. Most of the content that the three accounts share comes from the content mill domains, and they sometimes retweet one another, as do the @TimeSpeech and @ISupport_Israel accounts. However, based solely on the facts that they often tweet content from the mill and retweet one another, there is not enough evidence to definitively conclude that these accounts are part of the content mill.

Meanwhile, four accounts on Reddit—unislamic, andynushil, Alisa1554, and lizalol151665—similarly appear to be dedicated to posting content almost entirely from the content mill domains.45 They tend to post in conservative-leaning subreddits including r/Republican, r/The_Donald, r/HillaryForPrison, r/Conservative, r/IslamUnveiled, r/Australianpolitics, and r/exmuslim. The emphasis on U.S.- and Australia-focused right-wing subreddits echoes the influence campaign’s choice of targeted Facebook pages.

The andynushil account is interesting for several reasons. It is the oldest of the four, dating back to 2017, and it is the only one with a verified email.46 It uses the display name “FreeSpeechTime.” The andynushil account broke character once in November 2019, posting in the Electronic Dance Music Production subreddit.47 It is the only one to share videos from a YouTube account named “Free Speech Watch.”48 This YouTube account was created in November 2019 and appears likely to be the content mill’s first venture onto the video-sharing platform (see screenshot 11).

Comparison of content on the Free Speech Watch YouTube account and freespeechfront.net

Interestingly, AndyNush is also the name of an account on Gab, a social media platform that markets itself as a place for “political speech protected by the First Amendment” and that has become a popular online home for far-right web users.49 This account has shared content almost exclusively from the content mill domains.50 At least one other Gab account, using the name Rachelrose,51 has also been dedicated entirely to sharing the content mill’s articles. The influence operation’s apparent activity on Gab is a further indication of its tendency to target right-wing audiences.

In summary, the content mill is a persistent operation that uses at least a dozen domains and multiple social media platforms and accounts to spread inflammatory and misleading anti-Islamic and anti-Palestinian content. The evidence suggests a strategy of targeting right-wing and far-right audiences, but given that the operators stand to gain financially from driving traffic to their domains, they have clear commercial interests at stake as well. Despite multiple efforts by journalists to expose the group’s activities, the content mill has not been significantly disrupted. Its content has been shared widely and continues to play on and contribute to existing social tensions.
PROSPECTS FOR PUBLIC AND PRIVATE ENFORCEMENT

What enforcement options do social media platforms and government actors have for addressing the kinds of influence operations that this content mill and others like it have employed?

Industry actors have put forth the most extensive options, yet even these measures are a patchwork of community standards that tend to apply to individual elements of an influence campaign without addressing the operation as a whole. Still, given the abundance of data available to social media platforms and their wealth of experience attempting to deal with malicious actors online, their policies are better tailored and specifically formulated to navigate the intricacies of online influence operations.

State-driven legal mechanisms for countering such influence operations are far less robust for a number of reasons: a paucity of laws that govern online activity and are well-suited to address influence operations, the difficulties of attributing malicious activity, and jurisdictional hurdles that may negate legal solutions even when perpetrators can be identified.
