29 June 2022

Social media sites can slow the spread of deadly misinformation with modest interventions

Anya van Wagtendonk

Social media platforms can slow the spread of misinformation if they choose to, and Twitter could have done more to curb bad information in the lead-up to the 2020 election, according to a new research paper released Thursday.

Combining interventions such as fact-checking, nudging people to pause and think before reposting something, and banning some misinformation super-spreaders can reduce the spread of viral misinformation substantially more than any of those steps taken in isolation, researchers at the University of Washington Center for an Informed Public concluded.

Depending on how fast a fact-checked piece of misinformation is removed, its spread can be reduced by about 55 to 93 percent, the researchers found. Nudges toward more careful reposting behavior resulted in 5 percent less sharing and netted a 15 percent drop in engagement with a misinforming post. Banning verified accounts with large followings that were known to spread misinformation can reduce engagement with false posts by just under 13 percent, the researchers concluded.

Each intervention is most effective when combined with the others, concludes the paper, published in the journal Nature Human Behaviour. But “even a modest combined approach can result in a 53.3% reduction in the total volume of misinformation,” the researchers report.

The researchers modeled “what-if” scenarios using a dataset of 23 million election-related posts connected to “viral events,” collected between Sept. 1 and Dec. 15, 2020. The approach is similar to how researchers study infectious disease, for example by modeling how masking and social distancing mandates interact with covid spread.
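To make the epidemiology analogy concrete, the short Python sketch below shows the general shape of such a “what-if” simulation. It is not the study’s model: the function, the parameter names and values, and the assumption that each intervention multiplicatively scales down an average reshare rate are hypothetical, with the article’s figures used only as example inputs.

# A toy branching-process sketch, NOT the authors' model. The parameter values
# and the multiplicative combination of interventions are illustrative only.
def total_misinfo_volume(seed_posts=100, reshare_rate=0.9, generations=10,
                         removal_reduction=0.0,  # fact-check removal (article: ~0.55 to 0.93)
                         nudge_reduction=0.0,    # reshare-prompt nudges (article: ~0.05)
                         ban_reduction=0.0):     # banning repeat spreaders (article: ~0.13)
    """Total volume of misinforming posts over several reshare generations,
    with each intervention scaling down the effective reshare rate."""
    effective_rate = (reshare_rate
                      * (1 - removal_reduction)
                      * (1 - nudge_reduction)
                      * (1 - ban_reduction))
    current = float(seed_posts)
    total = current
    for _ in range(generations):
        current *= effective_rate  # each post spawns effective_rate reshares on average
        total += current
    return total

baseline = total_misinfo_volume()
combined = total_misinfo_volume(removal_reduction=0.55,
                                nudge_reduction=0.05,
                                ban_reduction=0.13)
print(f"Relative reduction under combined interventions: {1 - combined / baseline:.0%}")

The point of such a sandbox is the one Bak-Coleman makes below: a mix of policies can be compared against a baseline before it is ever deployed on a real platform.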

This kind of simulation lets researchers experiment with how a given content moderation policy may play out before it is implemented, the study’s lead author, Joseph Bak-Coleman, a postdoctoral fellow at the center, told Grid’s Anya van Wagtendonk.

“We can use models and data to try to understand how policies will impact misinformation spread before we apply them. This might be one of many, hopefully, that we wind up using,” he said. “Because the current thing is, we try something and see if it works … so we’re kind of fixing the problem after the fact.”

According to Twitter’s “civic integrity policy,” the company sometimes removes posts, limits their spread or adds context when they contain electoral misinformation. A spokesperson for Twitter did not respond to Grid’s request for comment.

Bak-Coleman and his team acknowledged that, without insight into Twitter’s algorithm and content moderation practices more broadly, they cannot account for existing practices. But by implementing a combined approach, they argued, platforms can reduce misinformation “without having to catch everything, convince most people to share better or resort to the extreme measure of account removals.”

The effectiveness of these interventions is the first part of the equation, Bak-Coleman added. From there, broader ethical questions can be considered about how and when they should be applied. He spoke with Grid about the role of this kind of research in raising those questions — and how hands-off Elon Musk can actually be if he takes over Twitter.

This interview has been edited for length and clarity.

Grid: Elon Musk is on the verge of buying Twitter. He’s made clear his interest in removing most content moderation for the platform. What does your work tell us about that approach?

Joseph Bak-Coleman: Just taking a huge step back, it’s quite scary that the decisions are gonna be made by a single individual. Because this does profoundly impact both our right to free expression and our exposure to misleading information, which can cause death through things like anti-vaccine views or whatnot. So it’s really scary that he’ll be making those ethical calls, more or less unilaterally, owning the company.

He’s talked about wanting to be very hands-off in moderation. Unfortunately, there is no hands-off moderation. Even places like 4chan and 8chan have legal requirements and things they have to remove from their sites, particularly illegal content, violations of copyright law, that sort of thing. There’s a spectrum from that to somewhere that’s heavily moderated. He’ll have to find himself somewhere on there.

The vague words like “free speech,” they sound really nice as slogans, but you have to actually code that up somehow and make decisions about hard cases. When someone makes a threat and it isn’t really a threat, do you remove that or not?

So, on one hand, I think it’s scary he’s making decisions. On the other hand, I don’t think he quite understands what’s ahead of him.

G: When it comes to the interventions described in your research, what would be the method of implementation? Is that just a platform’s responsibility? Does the government have a role, or other entities?

JBC: That’s a question that hopefully our research can spark. The research says, “This is what we could do with this thing that we see as a problem under various scenarios, and here’s a model people can play with to see how it would pan out.” But what we choose there is ultimately a societal decision, with global consequences.

Personally, I think it’d be nice if there’s a democratic process of some sort involved — the same way we make other hard calls. But the “what we should do about it” is, I think, the question we’re able to ask after having models of what could we do, how might it work.
