30 March 2021

Social Media Disinformation Discussions Are Going in Circles. Here’s How to Change That.

BY JON BATEMAN AND CRAIG NEWMARK


This article is part of the Free Speech Project, a collaboration between Future Tense and the Tech, Law, & Security Program at American University Washington College of Law that examines the ways technology is influencing how we think about speech.

On Thursday, the CEOs of Facebook, Google, and Twitter will testify before Congress about online disinformation. Even before the gavel bangs, we can predict what will happen. Some members of Congress will demand that social media platforms do more to stop viral falsehoods from damaging democracy and triggering violence. Others will warn against needlessly restricting speech, arguing that heavy-handed moderation could even inflame fringe elements and drive them to less-governed spaces.

This same argument repeats itself after every crisis, from Christchurch to QAnon to COVID-19. Why can’t we break the impasse? Because the debate about countering disinformation can itself be a fact-free zone: long on theories, short on evidence. We need better expertise, and that means empowering experts.

Scholars have spent decades studying propaganda and other dark arts of persuasion, but online disinformation is a new twist on this old problem. After Russia’s interference in the 2016 U.S. election, the field received a huge influx of money, talent, and interest. There are now more than 460 think tanks, task forces, and other initiatives focused on the problem. Since 2016, this global community has exposed dozens of influence operations and published more than 80 reports on how society can better combat them.

We’ve learned a lot in the past four years, yet experts are the first to admit how much they still don’t know. Fact-checks have proliferated, for example, and research shows these can make a difference when presented in the right way. But the recent social media bans of former president Donald Trump show where gaps remain. The long-term effects of such “de-platforming” remain unclear. Perhaps Trump’s lies will fade away in the digital void, or perhaps his social media martyrdom will create an even more enduring mythology. Only time will tell.

How could we not know whether something as basic as banning an account actually works? Why don’t the world’s largest tech companies and top academics have clearer answers after years of focused effort? There are two underlying problems.

The first challenge is data. To untangle the complex psychological, social, and technological factors driving disinformation, we need to watch large numbers of users react to malicious content—then see what happens when countermeasures are introduced. Platforms have this data, but their internal studies can be tainted by business interests and are rarely revealed to the public. Credible research must be independently performed and openly published. Although platforms do share some data with outside researchers, leading experts say that data access remains their top challenge.

The second challenge is money. It takes time and talent to produce detailed social network maps or track the impact of platforms’ many software tweaks. But universities don’t tend to reward this kind of scholarship. That leaves researchers reliant on short-term grants from a handful of foundations and philanthropists. Without financial stability, they struggle to recruit and shy away from large-scale, long-term research. Platforms help to fund some outside work, but that funding often raises concerns that researchers’ independence has been compromised.

The net result is a frustrating stalemate. As misinformation and malign influence run rampant, democracies lack real facts to guide their response. Experts have offered a raft of good ideas—improving media literacy, regulating platforms—but they struggle to validate or refine their proposals.

Thankfully, there is a solution. Very similar problems have been successfully addressed before.

At the dawn of the Cold War, the U.S. government saw a need for objective, high-quality analysis of national security issues. It began to sponsor a new kind of outside research organization, run by nonprofits like the RAND Corp., MITRE, and the Center for Naval Analyses. These federally funded research and development centers received government money and classified information but operated independently. They were therefore able to recruit top-tier staff and publish credible research—much of which did not flatter their government sponsors.

Social media companies should take a page from this playbook and help set up a similar organization to study influence operations. Several platforms could pool data and money, in partnership with universities and governments. With proper resources and guaranteed independence, a new research center could credibly tackle key questions about how influence operations work and what is effective against them. The research would be public, with redactions only for legitimate concerns like user privacy—not to prevent bad publicity.

Why should platforms agree to this arrangement? Because angry regulators, advertisers, and users are ultimately bad for business. That’s why Facebook recently spent $130 million to set up an external Oversight Board to review its content-removal decisions and promised to follow its rulings.

Granted, critics still see the Oversight Board as too dependent on Facebook. Google’s acrimonious split with two A.I. researchers has further amplified concerns about corporate control of scholarship. So how could people trust a new research center with ties to the platforms? The first step would be ensuring that the center has backing not only from multiple platforms (instead of just one) but also from universities and governments.

Further protections could be legislated. There is already a growing movement to update Section 230, the federal law that gives platforms their all-important liability protections. Even Mark Zuckerberg has endorsed conditioning these protections on greater “transparency, accountability, and oversight” from tech companies. A practical step in this direction would be to require that platforms share data with an independent research center and maintain a cooperative, arm’s-length relationship with its researchers.

Disinformation and other influence operations are among the greatest challenges facing democracies. We can’t stand still until we fully understand this threat, but we also can’t keep flying blind forever. The battle for truth requires arming ourselves with knowledge. The time to start is now.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.
