
28 August 2019

Why Facebook, YouTube, and Twitter are bad for the climate

By Dawn Stover

Earlier this year, the video-sharing website YouTube updated its systems “to begin reducing recommendations of borderline content and content that could misinform users in harmful ways”—for example, videos claiming the Earth is flat. More recently, YouTube, which is one of the two most widely used online platforms in the United States, announced in a blog post that it would remove “content denying that well-documented violent events, like the Holocaust or the shooting at Sandy Hook Elementary, took place.” It’s all part of YouTube’s efforts to eliminate hate speech and “denialism.”

Climate denialism, however, appears to be unaffected by this house-cleaning. A recent study analyzing the content of 200 randomly selected YouTube videos related to climate change found that a majority of the videos either propagated conspiracy theories about climate change and climate engineering (such as the notion that the government is using “chemtrails” to manipulate the weather), or denied the scientific consensus that human activities are the primary cause of climate change. One conspiracy video alone had been viewed more than 5.3 million times.

YouTube, which is owned by Google and used by 73 percent of American adults, is just one of the online giants that are shaping the national conversation about climate change. Together with Facebook, Twitter, and other social media, they are doing a lousy job of informing their users about what is arguably the most urgent threat to life as we know it. Willingly or not, these platforms have become purveyors of false and misleading news; accomplices in efforts by denialists to cast doubt on science; and enormous energy guzzlers in their own right. They may have an important role to play in solving the climate issue. At the moment, though, they are part of the problem.

Facebook: falsehoods are fair game. “When it comes to efforts to avert catastrophic climate change, Facebook is no ally. They are an enemy.” That’s what Michael Mann of Pennsylvania State University, one of the world’s best-known climate scientists, told the news website ThinkProgress after Facebook was criticized for refusing to take down a doctored video of House Speaker Nancy Pelosi in late May. In a statement to the Washington Post, Facebook defended its refusal to remove the video by saying: “We don’t have a policy that stipulates that the information you post on Facebook must be true.” That policy is a welcome mat for climate deniers.

“There’s more!” wrote Katharine Hayhoe, another prominent climate scientist, who posted a link to the ThinkProgress article on Twitter. “The article doesn’t even mention how [Facebook] quietly classified ‘clean energy’ and ‘climate change’ as political topics last summer.” After that, Hayhoe said she could no longer use Facebook to promote her “Global Weirding” climate-change videos, which are produced by KTTZ Texas Tech Public Media, to friends of people who liked her Facebook page. “I can’t unless I apply to be a political entity,” she said, “which I will not, because science isn’t blue or red.”

With the exception of YouTube, no social-media platform comes close to Facebook’s reach. According to surveys, about 7 in 10 adults in the United States use Facebook, and roughly three-quarters of its users visit the site at least once a day. Users spend an average of almost an hour a day on the site, and more than 40 percent say it’s a place where they get news. Never mind whether that news is actually true.

In an interview published by Recode last year, Facebook founder and CEO Mark Zuckerberg called Holocaust denial “deeply offensive” but said “I don’t believe that our platform should take that down, because I think there are things that different people get wrong. I don’t think that they’re intentionally getting it wrong.” Shortly after that, Zuckerberg sent an email elaborating on the company’s position: “Our goal with fake news is not to prevent anyone from saying something untrue—but to stop fake news and misinformation spreading across our services.”

As it turns out, Facebook does occasionally take down accounts that spread falsehoods. Just this week, both Facebook and Twitter said they had removed China-linked accounts that were spreading false reports about protesters in Hong Kong. Left untouched by Facebook, though, is a video that spread misleading climate information to 5 million viewers by July 2018—and has now been seen 10 million times. Facebook not only provides climate deniers with a podium for spreading inaccurate information, but has also hosted climate-denying ads and partnered with climate-denying organizations.

Twitter: bots of trouble. The microblogging and social-networking site Twitter is not as widely used as Facebook or YouTube, but about one in five US adults uses it. Twitter users are generally younger, better educated, and more affluent than the US population as a whole, and Twitter is especially popular with journalists and others who are news-focused. That includes President Donald Trump, who frequently uses Twitter to make announcements and express opinions, even though polls indicate that 7 in 10 Americans believe Trump tweets too much.

Like Facebook and YouTube, Twitter is an amplifier for climate misinformation. But it’s not just human beings who are spreading that misinformation; a 2017 analysis by the Pew Research Center reported that two-thirds of the tweeted links to popular websites were posted by automated accounts known as bots.

In a pilot study of tweets containing key words and phrases such as “climate change” and “global warming,” researchers at Brown University collected tweets by 144,000 users over a three-week period last summer and found that about 23,000 of the users were bots—and they were generating about 20 percent of the tweets.
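Those two figures are worth a second look: 23,000 bots is roughly 16 percent of the sampled accounts, yet they produced a fifth of the tweets, which means the average bot tweeted noticeably more often than the average human. Here is a minimal back-of-envelope sketch of that arithmetic, using only the numbers reported above (the code and variable names are illustrative, not from the study):

```python
# Back-of-envelope arithmetic from the Brown University pilot study
# figures quoted above. Only the three inputs come from the study;
# everything derived below is illustrative.

total_users = 144_000      # accounts in the three-week sample
bot_users = 23_000         # accounts identified as bots
bot_tweet_share = 0.20     # fraction of the tweets generated by bots

bot_user_share = bot_users / total_users
print(f"Bots: {bot_user_share:.0%} of accounts")   # ~16%
print(f"Bots: {bot_tweet_share:.0%} of tweets")

# How much more often the average bot tweeted than the average human,
# assuming the remaining tweets are spread evenly over human accounts.
bot_rate = bot_tweet_share / bot_user_share
human_rate = (1 - bot_tweet_share) / (1 - bot_user_share)
print(f"Average bot tweeted about {bot_rate / human_rate:.1f}x as often")  # ~1.3x
```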

Last year, Twitter admitted that more than 50,000 accounts with links to Russia had used its platform to spread automated tweets about the 2016 US presidential election. But Russian bots didn’t just tweet about candidates; Twitter told the House Science, Space and Technology Committee that more than 4 percent of the Russian-generated tweets between 2015 and 2017 involved energy and climate issues. Some of the tweets expressed concern for the climate, while others mocked climate change. (Russian-linked accounts also used Facebook and its affiliated company Instagram to spread propaganda about climate change.)

In March, Twitter introduced a new approach intended to “improve the health of the public conversation on Twitter.” The company’s co-founder and CEO, Jack Dorsey, said people had taken advantage of Twitter for misinformation campaigns and other harmful and divisive activities—and that Twitter would build a framework “to help encourage more healthy debate, conversations, and critical thinking.” However, Dorsey said Twitter does not yet have a way to measure the “health” of Twitter conversations, and he offered no clues about how the company would stem the rising tide of false and misleading news, smear campaigns, targeted ads, and “computational propaganda” that uses automation and algorithms to manipulate social media. In fact, he suggested that Twitter’s new approach would focus less on removing content than in the past.

Choosing sides. Digital platforms were never designed to be level playing fields. Algorithms determine what content users see, and the managers of digital platforms are reluctant to share details about how their algorithms work—not just because that could make it easier for bad actors to game the system, but also because their business models depend on maintaining user engagement.
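A toy example makes the incentive concrete. In the hypothetical feed ranker sketched below, posts are ordered by a weighted sum of engagement signals; the weights, the posts, and the scoring function are all invented for illustration, since real ranking systems are proprietary. The point is structural: nothing in the score rewards being right.

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    clicks: int
    comments: int
    shares: int
    accurate: bool  # known to fact-checkers, but never read by the ranker

def engagement_score(post: Post) -> float:
    # Hypothetical weights: shares and comments signal "engagement"
    # more strongly than passive clicks. Accuracy carries no weight.
    return post.clicks + 3 * post.comments + 5 * post.shares

feed = [
    Post("Summary of the latest IPCC report", 900, 40, 60, accurate=True),
    Post("CHEMTRAILS: what they won't tell you", 800, 500, 400, accurate=False),
]

for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):>6.0f}  {post.title}")
# The conspiracy post ranks first: it provokes more comments and shares,
# and truthfulness never enters the calculation.
```

Swap in whatever weights you like; as long as the inputs are engagement signals, content that provokes will tend to outrank content that informs.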

Managers are also loath to take sides in any dispute. They don’t want to open themselves up to accusations of bias and, again, picking sides goes against their goal of keeping the maximum number of people on their sites for the maximum amount of time. Although they have taken some baby steps to reduce user engagement with the most despicable content and even to remove some of it, they have done relatively little to address misinformation about current issues, which are arguably more important and more relevant to daily life than historical events such as the moon landing.

All of the big platforms have humans in the loop to manage, edit, and curate content. But there is no way for them to keep up with the flood. YouTube, for example, logs 5 billion video views daily—and accepts new content at a rate of about 300 hours of video per minute. The company suggests that climate scientists who don’t like some of the videos on the site should fight fire with fire by contributing their own videos. By that logic, YouTube should also advise scientists to buy ads, so that they can compete with ads placed by the fossil fuel industry on social media.
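That upload rate puts comprehensive human review out of reach, as a little arithmetic shows. A rough sketch using only the 300-hours-per-minute figure cited above (the reviewer assumptions are hypothetical):

```python
# Rough estimate: staffing needed just to *watch* YouTube's daily uploads.
upload_hours_per_minute = 300          # rate cited above
minutes_per_day = 24 * 60

new_video_hours = upload_hours_per_minute * minutes_per_day
print(f"{new_video_hours:,} hours of new video per day")        # 432,000

# Hypothetical reviewer: watches in real time, 8 hours per shift,
# no breaks, no second opinions, no re-watching.
shift_hours = 8
reviewers = new_video_hours / shift_hours
print(f"~{reviewers:,.0f} reviewers per day just to keep pace")  # 54,000
```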

Although YouTube isn’t removing climate-denying videos, last year it began adding third-party information to all videos about climate change and a few other “well-established historical and scientific topics that have often been subject to misinformation, like the moon landing and the Oklahoma City bombing.” Pull up a climate-related video on YouTube and you’ll see a box beneath the video with a factual sentence about global warming and a link to the Wikipedia entry on the subject.

Facebook has also experimented with fact-checking, in partnership with third-party media organizations. Unfortunately, this approach can sometimes backfire, leading to greater engagement with the false information than with the fact-checks debunking it. And some fake-news creators seem to relish being fact-checked. It hasn’t helped matters that one of Facebook’s fact-checking partners is CheckYourFact.com, a for-profit subsidiary of The Daily Caller—a publication co-founded by Fox News host Tucker Carlson that has published climate misinformation and has received support from conservative groups that have funded attacks on climate science, including the Charles Koch Foundation.

A few months ago, Zuckerberg announced that Facebook would pivot to a “privacy-focused vision,” with people connecting in “the digital equivalent of the living room,” rather than in the town square. If “the future is private,” as Facebook claims, the company may conveniently relieve itself of the responsibility to ensure that content posted on the site is truthful. It remains to be seen what effect this pivot will have on Facebook’s tremendous influence over the online sharing of news, or its control over billions of advertising dollars that previously went to newspapers, magazines, television, and radio.

Facebook and a handful of other online platforms have become so huge and powerful that the US Justice Department last month launched an antitrust review of their practices; it will investigate whether these companies “are engaging in practices that have reduced competition, stifled innovation, or otherwise harmed consumers.” The review probably won’t concern itself with how the spread of climate misinformation and propaganda has harmed consumers by diverting them from urgently needed climate action—but that may be one of the worst, and most lasting, legacies of the social media tycoons.
