7 January 2023

The Bird Has Been Freed, and So Has a New Era of Online Extremism

Ella Busch

FREEING THE FAR RIGHT

“The bird is freed.” With these words, @elonmusk announced his official takeover of the Twitter platform on October 27, 2022 at 11:49pm.[1] Elon Musk, the CEO of Tesla (and now Twitter), bought the company for $44 billion this fall. His implementation of a “Twitter 2.0” has been nothing short of problematic: his self-proclaimed “extremely hardcore” workplace strategy[2] prompted the resignations of half of the company’s previous 7,500 employees. As the company’s sole board member, Musk has used his authority to apply his personal ideology of unmoderated speech, or “free speech absolutism,” to Twitter. The company has already stopped enforcing its Covid-19 misinformation policy, reinstated formerly-banned accounts (including that of former President Donald Trump), and scaled back its moderation efforts.[3] This lack of moderation risks more than the circulation of false or hurtful communications: it is likely to draw extremists to the platform to take advantage of unregulated speech, disseminate propaganda, and radicalize potential recruits to terrorist groups. Twitter’s new ownership and content moderation standards will worsen far-right extremism in the US because they allow for the creation and spread of far-right extremist (FRE) propaganda as well as the reemergence of figures who inspire and unify the far right. To mitigate this risk across all social media platforms, the United States must amend its current legislation governing corporate responsibility for moderating hate speech online.

Social media plays a large role in the radicalization of far-right extremists, the largest category of domestic terrorists in the United States. 18 US Code § 2331 defines domestic terrorism as “acts dangerous to human life that occur primarily within US territory and are intended (i) to intimidate or coerce a civilian population; (ii) to influence the policy of a government by intimidation or coercion; or (iii) to affect the conduct of a government by mass destruction, assassination, or kidnapping.”[4] Right-wing extremism is defined as “the use or threat of violence by subnational or non-state entities whose goals may include racial or ethnic supremacy; opposition to government authority; anger at women, including from the involuntary celibate (or “incel”) movement; and outrage against certain policies, such as abortion.”[5] This group poses the greatest threat to national security: 90% of all terrorist attacks in the United States in 2019 were perpetrated by far-right adherents.[6] The prevalence of the far right can be largely attributed to the popularity of social media, which serves as a platform for extremists to share their views and indoctrinate others into their ideology. In 2016 alone, 90% of extremists were radicalized at least partially via social media, and 23.4% of them used Twitter as a primary source of extremist content.[7]

Terrorists use the internet for many of the same purposes ordinary people do. They use social media to host content, whether through images, videos, or live-streamed propaganda. They target a variety of audiences, including potential recruits, the media, and their enemies.[8] The online extremist community serves as an ‘echo chamber’ for hateful ideas: it confirms pre-existing radical beliefs and provides a sense of community by connecting individuals who share them. The rapid spread of extremist ideas on social media in turn accelerates indoctrination into violent groups.[9] In 2019, J.M. Berger concluded, “it is safe to assume that the total number of alt-right adherents on Twitter, including deceptive accounts such as bots and sock puppets, exceeds 100,000 and probably exceeds 200,000.”[10] Twitter’s significant far-right community is aided by the platform’s hashtag feature, which allows followers to attract attention to, and build communities around, shared views. Data shows that the most common ideas promoted by the far-right Twittersphere are pro-Trump, white nationalist, general far-right, anti-immigrant and anti-Muslim, trolling/shitposting, and conspiracy and fake news content.[11]

MUSK & MODERATION

On November 16, 2022, in an effort to downsize Twitter’s global workforce, Musk sent a late-night ultimatum to his staff: “Going forward, to build a breakthrough Twitter 2.0 and succeed in an increasingly competitive world, we will need to be extremely hardcore. This will mean working long hours at high intensity. Only exceptional performance will constitute a passing grade.”[12] After a mere twenty-four hours to decide, the majority of Twitter’s senior staff resigned (including its Head of Safety and Integrity, Head of Global Ad Sales, Chief Privacy Officer, Chief Security Officer, and Chief Compliance Officer),[13] largely out of concern over Musk’s ideology of free speech absolutism and its potential ramifications for the platform. He has since dismissed the platform’s content moderation team, stating that a “content moderation council with widely diverse viewpoints” would be created in its place to decide moderation issues such as the suspension and reinstatement of Twitter accounts. One month later, Musk reneged on this promise, claiming that a group of investors barred him from creating such a council. He retweeted his earlier post announcing his vision for the council, adding, “A large coalition of political/social activist groups agreed not to try to kill Twitter by starving us of advertising revenue if I agreed to this condition [of creating the council]. They broke the deal.”[14] Instead, such content-related decisions would be made by Musk himself.

The absence of content moderation will allow far-right extremists to populate the platform and spread their ideologies to a wider audience. Twitter’s former Policy on Violent Organizations gave the company the right to permanently suspend accounts that “identify through their stated purpose, publications, or actions as an extremist group; have engaged in or currently engage in violence and/or the promotion of violence as a means to further their cause, and target civilians in their acts and/or promotion of violence.”[15] Twitter cited such policies in the termination of over 1.7 million accounts since August 2015.[16] Under Musk, there are no such rules. He describes the content moderation policy of Twitter 2.0 as “freedom of speech, but not freedom of reach,” stating that tweets deemed ‘negative’ or ‘hateful’ would be allowed on the site, but shown only to people who search for them.[17] Previously-suspended accounts, meanwhile, would be granted “general amnesty, provided that they have not broken the law or engaged in egregious spam.”[18] In the twelve hours following Musk’s takeover of Twitter, the platform saw a 500% increase in the use of the “N word”;[19] anti-Black slurs overall rose 5000%.[20] Dozens of accounts espousing racially-charged and neo-Nazi commentary were created on the platform, thanking Musk for lifting the restrictions on their speech.[21] Within one week of Musk’s acquisition, posts containing the word “Jew” increased fivefold, the majority of which were anti-Semitic in nature.[22] Slurs against gay and trans persons increased by 58% and 62%, respectively.[23] These groups are all specifically targeted by FREs. According to the Anti-Defamation League, “These changes are already affecting the proliferation of hate on Twitter, and the return of extremists of all kinds to the platform has the potential to supercharge the spread of extremist content and disinformation.”[24]

REPERCUSSIONS OF ACCOUNT REINSTATEMENT

The reinstatement of previously-banned accounts will allow for a reemergence of important, unifying figures among the far right, including former President Donald Trump. Trump was one of the most controversial members of the ‘Twittersphere’ throughout his term. The majority of his posts attacked Democrats, minorities, immigration, or US allies; 1,710 of his tweets promoted conspiracy theories, and an additional 40 promoted allegations of voter fraud.[25] Trump’s views and campaign of misinformation did not immediately result in his suspension, but they did earn him a loyal follower base, primarily among the far right. Trump was officially removed from Twitter for inciting this group to violence in what was arguably an act of terror: the storming of the US Capitol on January 6, 2021 by a pro-Trump mob. The movement began on Twitter when Trump rallied his followers to “fight like hell” in protest of the inauguration of President Joseph Biden, which they viewed as the product of election fraud. The viral hashtag #StopTheSteal culminated in an insurrection by far-right extremist groups (mainly the anti-government Proud Boys and Oath Keepers) as well as “spontaneous clusters” of non-affiliated lone-wolf actors, who collaborated in using violent force to destroy property, assault law enforcement, and disrupt the electoral process.[26] Trump’s role in the event resulted in his removal from Twitter, Instagram, Facebook, and Snapchat for the following 22 months. Musk reinstated Trump’s Twitter account on November 19, 2022 through a single ‘yes or no’ poll (with 51.8% voting yes), complete with his former 59,000 posts and 72,000,000 followers.

It is unclear, however, whether Trump will actually rejoin Twitter; following his suspension, Trump created his own social media platform, Truth Social,[27] which has become a hotspot for unregulated FRE content. The most common topics discussed on Truth Social concern gun rights, the January 6 insurrection, vaccines, LGBTQ issues, and abortion.[28] Truth Social is known to host the far-right conspiracy theory QAnon,[29] whose adherents believe that Donald Trump is waging a secret war against satanic pedophiles within the Democratic party and remains the legitimate president of the United States.[30] Truth Social was found to have at least 88 users promoting QAnon ideology on their accounts, 32 of whom were previously banned by Twitter. In August 2022, Trump was found to have reposted 65 QAnon-related messages over a four-month period, resharing the QAnon slogan WWG1WGA (Where We Go One, We Go All) as well as messages relating to “a war against sex traffickers and pedophiles.”[31]

Trump is not the only controversial figure rejoining the Twittersphere. Musk has reinstated a plethora of far-right-leaning accounts. These include Jordan Peterson, a Canadian psychologist who champions misogyny;[32] the conservative Christian satire website Babylon Bee, which mockingly awarded the transgender US Assistant Secretary for Health, Rachel Levine, the title of “Man of the Year”;[33] Representative Marjorie Taylor Greene, a known QAnon adherent who repeatedly violated Twitter’s Covid-19 misinformation policy;[34] and Andrew Anglin, the founder of the neo-Nazi website the Daily Stormer.[35] The provision of ‘general amnesty’ to these hateful accounts sets a dangerous precedent in favor of the proliferation of FRE views on Twitter.

CODIFYING CORPORATE RESPONSIBILITY

Corporate responsibility to moderate extremist content must be codified through amendments to our current communications laws. Section 230 of the Communications Decency Act (CDA) shields companies from liability for user-generated content in the United States.[36] But the law was passed in 1996, and technology has come a long way since. The need for reform puts policymakers in a bind, weighing the need to place greater restrictions on the ever-changing social media landscape against the risk of violating current law. As drafted, Section 230 enables Elon Musk to continue his pursuit of free-speech absolutism.[37] Two pending Supreme Court cases may challenge Musk’s current sense of security: Gonzalez v. Google and Twitter v. Taamneh. In both cases, the families of terrorism victims filed actions against Google, Facebook, and Twitter for aiding and abetting the radicalization of the terrorists who carried out the attacks. The Supreme Court will decide whether these services can be held accountable for knowingly aiding terrorism.[38] These cases present an opportunity for the United States to strengthen its counterterrorism efforts online, possibly through mechanisms similar to those of the EU’s Digital Services Act. If current legislation goes unchanged, the government risks a spike in far-right terror attacks in the United States, a phenomenon correlated with social media use.[39]

Even prior to Musk’s acquisition of Twitter, the social media industry faced great obstacles in addressing online extremism effectively, because doing so requires international and cross-platform cooperation. Social media companies vary in their rules and terms of service, yet online terrorist campaigns typically span three or more platforms: a smaller, less-regulated platform for private coordination; a second platform to store copies of data; and a third, large social media platform (such as Twitter) to amplify their message. This means that, even if one platform suspends an account or removes terrorist content, terrorists can easily move to another. And although regulators usually focus on user-generated content, many terrorist efforts on social media involve funding and coordination rather than official propaganda, meaning that much terrorist content goes unnoticed by algorithms and human moderators alike. According to Dr. Erin Saltman, the Director of Programming at the Global Internet Forum to Counter Terrorism, absolutist privacy policies (such as those promoted by Musk) give terrorists a “free pass” to post dehumanizing and violent content.[40] Brian Fishman, who leads Facebook’s counterterrorism efforts, says that policymakers’ pleas to “do better” are no longer enough; policymakers and academics must work directly alongside tech companies to develop best practices against online extremism. The scale of the online counterterrorism challenge is massive, exacerbated by terrorists’ ability to circumvent enforcement efforts.[41] In addition to platform-wide rules, regulations, and transparency measures, companies must interact regularly with policymakers to adapt to the ever-changing technological landscape.

THE INTERNATIONAL PERSPECTIVE

Twitter is facing legal and financial repercussions for dismantling its content moderation policies. The European Union (EU) has warned Twitter that it risks heavy fines (up to 6% of the company’s annual global revenue) or a complete operations ban if it fails to meet the content moderation standards set by the EU’s Digital Services Act (DSA). The law takes effect early next year and requires that companies police content promoting terrorism, child sexual abuse, hate speech, and commercial scams.[42] Germany in particular has taken issue with Twitter’s lack of regulation, as the country has some of the strictest anti-hate-speech laws in the Western world. Its Network Enforcement Act (NetzDG) allows for fines of up to €50 million for failure to comply with content moderation standards.[43]

It is less clear how the United States will respond to such violations; while legislation has been proposed to counter hate speech online, free speech protections greatly inhibit its passage. In March 2021, US Democrats reintroduced the Protecting Americans from Dangerous Algorithms Act, which would “hold large social media platforms accountable for their algorithmic amplification of harmful, radicalizing content that leads to online violence.” In a bipartisan effort, Senators Amy Klobuchar and Cynthia Lummis introduced the NUDGE Act (Nudging Users to Drive Good Experiences on Social Media) in February 2022, which aims to study interventions against harmful language on social media.[44] To date, neither bill has passed.

The quantity of resignations among senior leadership has been an additional source of concern for the United States Federal Trade Commission (FTC), particularly in the wake of an earlier FTC dispute in May, in which Twitter paid a $150 million fine to settle allegations of misusing users’ private information.[45] As part of that settlement, Twitter agreed to report all changes in company structure to the FTC within fourteen days. Such consent orders carry the force of law; proven violations may result in fines, restrictions, and even sanctions on individual executives.[46] Since Musk neglected to inform the FTC of the company’s mass layoffs, Twitter faces the possibility of incurring such sanctions. Musk also faces scrutiny from Apple and Google, which have the power to remove apps that violate their content moderation standards. Apple’s developer standards state that apps cannot include sexually explicit, discriminatory, or “just plain creepy” content, including rhetoric against users’ “religion, race, sexual orientation, gender, [or] national ethnic origin.”[47] Given the rapid increase in hateful and discriminatory rhetoric on Twitter, Apple may well bar the app altogether.

THE FUTURE OF TWITTER

Racist and anti-Semitic trolls have caused some of Twitter’s largest advertisers to abandon the platform in favor of its lesser-known rival, Mastodon,[48] including General Mills, Pfizer, Chipotle, United Airlines, and Audi.[49] IPG, one of the world’s largest advertising companies, has also warned its clients against advertising on Twitter over moderation concerns.[50] This advertising exodus has cost Twitter roughly $4 million per day in revenue; Musk himself has admitted that the company faces possible bankruptcy. The Global Alliance for Responsible Media, an influential ad-industry trade group, invoked this financial peril in an open letter pleading for Twitter “to adhere to existing commitments to ‘brand safety.’”[51] Rather than responding with strengthened moderation, Musk has made plans to reduce the company’s reliance on advertising and introduce a “Twitter Blue” subscription to boost revenues. The subscription grants users the blue check mark signaling account verification for a fee of $7.99/month. The program’s launch has been delayed, however, in an attempt to avoid the 30% App Store fee that is standard for apps offering in-app purchases.[52]

There are various hypothetical scenarios regarding the future of Twitter. According to the company’s former Head of Trust and Safety, Yoel Roth, so long as the company faces political scrutiny and relies upon advertising for 90% of its revenue, Twitter will face “unavoidable limits” to the extent of its free speech policy. “In the longer term, the moderating influences of advertisers, regulators, and, most critically of all, app stores may be welcome for those of us hoping to avoid a dangerous escalation in the volume of dangerous speech online.”[53]

Bloomberg’s Parmy Olson disagrees, comparing Twitter to Telegram, an encrypted instant messaging service founded by libertarian billionaire Pavel Durov. Like Musk, Durov is a staunch advocate of free speech, as reflected in the platform’s remarkably scant content moderation policy: Twitter, she notes, has sixteen rules regarding content; Telegram has three. Despite being relatively unknown in the United States, Telegram is twice the size of Twitter, and its lack of moderation has not impeded its popularity,[54] suggesting that Twitter 2.0 may continue to thrive, albeit differently than before. It is important to note, however, that Telegram hosts its own thriving online terrorist community.[55] Darrell M. West of the Brookings Institution, meanwhile, outlines five potential scenarios for the future of Twitter: bankruptcy; little content moderation coupled with abundant extremism; difficulty maintaining technical infrastructure (due to the terminations of engineers and policy-related staff); a reliance on premium services to fund the platform; or a combination of these possibilities.[56] Whatever the outcome, Elon Musk’s Twitter takeover has done irreparable damage to the platform’s reputation and future endeavors.

By “freeing the bird,” Musk is risking not only the spread of hateful ideas and the accelerated radicalization of future terrorists, but also the integrity of Twitter as a social media giant. By reinstating the accounts of individuals espousing FRE ideas, including former President Donald Trump, Musk is sending the message that he welcomes such rhetoric on his platform, regardless of the consequences. The case of Twitter calls for a fundamental change in content moderation standards to counter violent extremism, perhaps similar to those seen in the European Union. In his pursuit of absolute free speech, Musk has “sent up the batsignal to every kind of racist, misogynist, and homophobe that Twitter was open for business, and they have to react accordingly.”[57] This has paved the way for the proliferation of far-right extremism, the most pressing counterterrorism issue facing our country, on one of the world’s most popular social media websites.
