
15 May 2023

There Is No Getting Ahead of Disinformation Without Moving Past It

Alicia Wanless 

I don’t believe in disinformation. And it’s time for democracies to stop focusing on it.

I’m not saying there’s no such thing or that it’s not bad. I’m saying something a little subtler: that the public focus on disinformation is not useful and causes democratic societies to miss larger problems and challenges.

Let’s start with the fundamental problem: There is no generally accepted definition of the term. What exactly is disinformation? Ask five different people, and you’ll likely get five different answers. Some will say it is false information, some will say it’s misleading, and others will say it’s both. Most will agree that it’s spread intentionally, as opposed to its unintentional cousin, misinformation. Distinguishing between the two can be tricky. What if someone believes disinformation to be true and shares it? Is that still intentional? Is it still disinformation? There’s a solution for that! Just mash the two terms together as mis-disinformation to cover it all! But what is it really? Lies? Deception? Rumor? Exaggeration? Propaganda? All of the above? Does one just know it when they see it?

The growing ranks of disinformation experts can point to many examples: vaccine myths, conspiracy theories, and covert adversarial state operations. But examples surface only after disinformation spreads, making it a hard problem to get ahead of. And just about anyone researching disinformation will tell you how harmful, hurtful, and destructive it is, both for its specific targets and, more broadly, for democracy. And yet, despite all the concern and investment in researching and countering disinformation over the past few years, how is it that researchers and policymakers are still collectively admiring the problem? Have all of these efforts diminished the phenomenon? It’s difficult to say, because the only baseline that exists is the number of research outputs themselves.

At this point any number of readers are screaming silently, “Of course nothing has changed, because big tech is the problem! It’s the algorithms, the business model, that outrage sells!” While social media undoubtedly contributes to the problem, disinformation and its purveyors are platform agnostic: Disinformation flows through cable news channels just as it does in print and in person. New technologies have always exacerbated disinformation, adding new means to produce and distribute it on top of older ones and thus creating more complexity in the system that is the information environment. Leaders have always sought to control those new technologies through laws restricting the means of content production. Yet the same old problem persists. Misinformation and disinformation are as common a human byproduct as carbon dioxide, and they have been a problem for as long as history has been recorded.

So why focus on disinformation? Could the entire framing of the problem be wrong? Even if disinformation could be stamped out through content moderation, the very act of doing something about it is now easily politicized; trying to eradicate it would likely only aggravate that politicization. Fox News pundits are still carrying on about the Department of Homeland Security’s attempts simply to coordinate responses to disinformation across the government.

It’s time to look at the problem differently. Those attempting to address the issue should move away from attempts to regulate disinformation and toward the ecology of the information environment more generally.

Democracies around the world are backsliding into authoritarianism. And a degrading information environment is central to the slippage. Lies and influence operations are part of the authoritarian playbook. The trouble is that control of the information space in the name of controlling disinformation is also part of the authoritarian playbook. So the shift toward authoritarianism manifests itself not only in governments propagating their own lies but also in governments exerting increased control over their corner of the information environment, at the same time as trust in public institutions is degraded by information pollution. This trend is happening in India, Hungary, and Ghana, once a bastion of democracy in Africa. Even well-established democracies such as the United States are slipping toward this decline in their attempts to address problems like disinformation and foreign interference. Many of the desired interventions, such as banning bad actors and disinformation from social media platforms, resemble authoritarian approaches—albeit done in private spaces where the First Amendment does not apply. It’s an easy slope to slip down, in any number of different directions.

Yet it isn’t the only option. Finding an alternative approach requires looking at the problem differently. Current understanding of the information environment is hyperfocused on specific threats, such as disinformation undermining U.S. election processes and officials, or threat actors like Russia. This focus, in turn, emphasizes responses that aim to counter threats, with little understanding or consideration of the impact those interventions have on the broader information environment. These responses include measures like banning the use of deceptive media and “deepfakes” in advance of an election or prohibiting election officials from spreading disinformation. Often these measures are limited to foreign actors, as if their activities in the information environment could easily be isolated from those of domestic entities. What goes missing in this problem-focused approach is any attempt to address issues in the wider system—the space where people and machines process information to make sense of the world.

The main problem is the way democratic societies are attempting to understand the information environment through its component parts or the problems within it. This piecemeal approach means missing the forest not for the trees—but for the weeds. Misinformation and disinformation are just that: weeds that, if uprooted, will pop up somewhere else. Policymakers in democracies are trying to eradicate a noxious and prevalent weed that has grown alongside humans forever. It might not even be possible to remove it entirely. And yet that’s just what many democracies are attempting to do: spraying disinformation with content moderation to make it go away.

But what if this informational equivalent of spraying the forest with pesticide takes the forest (in this case, democracy itself) with it?

Authoritarian regimes and democracies alike have previously attempted to stamp out disinformation, usually with little success. For as long as language has existed, people have tried to control the information produced with it. Leaders have both used disinformation and been threatened by it. This need to control the flow of information led to laws governing who could produce and share what types of information. Yet neither the phenomenon of disinformation nor the beliefs pushed within it went away. And instead of stemming disinformation, these efforts often left whoever was trying to control it far worse off.

Take the first King Charles of England in the 17th century. Like many leaders today, he was concerned with a rise in disinformation spreading through his information ecosystem. He tried to control the newly introduced printing presses and to ban all rumors and slander about him, including a particularly pernicious narrative that he was part of a papal plot to bring England back to the Catholic fold. Despite Charles’s attempts, controlling who could print material in England didn’t stop the flow of propaganda from abroad or keep domestic politicians from using such content for their own gain. European Protestants secretly landed apocalyptic propaganda in England to provoke the English into supporting their side in the Thirty Years’ War. The religious English parliamentarians who wanted Charles to reform the government seized on this discontent, and the disinformation associated with it, to rally apprentices and others to their cause using pamphlets, protests, and petitions.

Charles focused on the weeds, missing the wider forest in which he was operating. He failed to see the changing conditions in his information ecosystem—the rise in literacy, the growing engagement of the public in politics, and the increased speed at which information was moving via the newly founded postal service and advances in waterway travel. He didn’t see how interconnected the English information ecosystem was with the wider information environment in Scotland, Ireland, and Europe. Had he understood this earlier, he might have realized that the English weren’t ready for a publicly practicing Catholic to be queen. He might have realized that his church reforms actually fertilized the disinformation about him and that there was little he could do to stem the printed backlash flooding in from Scotland. And he might have understood that even an absolute monarch still depends on his people and so must not only heed their concerns but also do something about them. Deeds, or the lack thereof, are a form of communication. But Charles didn’t do these things.

By the time he understood he was in an information competition with political actors demanding change and increased democracy, it was too late. Despite his attempt to control the information environment, Charles still lost his throne. And his head.

Such shifts in conditions within the information environment occur time and time again. It happened after the introduction of writing, the printing press, and the telegraph, and more recently with digital media. This kind of shift tends to follow the introduction of new technologies that change how members of society process and share information. As information production and distribution speed up, more people can engage with ideas meaningfully. New ideas emerge that challenge prevailing thinking, and communities begin competing to make their point of view the dominant one. In doing so, they escalate the competition by flooding the information environment with their views, often polluting it with low-quality information. But like Charles, leaders facing the pollution tend to miss these shifts, partly because they are too focused on the immediate threats caused by the pollution—the disinformation—and the dominant narratives it perpetuates. The result is that they fail to see how the information environment and the people within it have changed.

Modern democracies are experiencing all the same shifts that Charles did. New technology has again sped up the population’s ability to produce and share information, and, thus, information pollution is on the rise. And once again, instead of taking a step back to look at the whole environment, democracies are spraying weeds. Information competitions within democracies like the United States are moving dangerously close to open violence. Different communities are forming worldviews that are incompatible with one another. For example, less than a third of Republicans polled believed the 2020 election was legitimate. Both Democrats and Republicans increasingly view supporters of the opposite party as immoral. Two in five Americans believe the country is likely to have a civil war in the next decade. Public acceptance of political violence in the United States is on the rise.

Social media has taken the brunt of the blame for these shifts, and Americans want social media companies held more accountable. Other studies, however, have shown that cable news may be the bigger culprit.

The point is that the information environment is complex, and many factors can lead to disturbances. This is why the information environment must be studied as more than the sum of its many parts. Researchers and policymakers need to understand the socioeconomic and psychological conditions that encourage people to believe conspiracies, for example, as well as the role of influencers, including those who control the means of mass communication and traditional media, among other factors.

There is real danger in leaping to “do something” about specific categories of disinformation without paying attention to how such interventions affect the broader information environment. Scholars are still learning from what happened to the information environment during the pandemic. But it is sufficient to say that attempts to stem disinformation about the coronavirus have not all gone as planned. For example, several social media platforms banned coronavirus-related disinformation during the pandemic. While moves such as banning conspiracy theorists from Facebook reduced disinformation about the virus on the specific service where they were deplatformed, they did not eradicate it. Often, followers of banned contributors continued to push their messaging for them long after the user in question was deplatformed. Moreover, research on other cases of deplatforming—including fringe political conspiracies—indicates that affected users simply migrate to new platforms with less oversight, sometimes resulting in more extreme rhetoric in posts. Meanwhile, at least one major cable media outlet in the U.S. repeatedly portrayed attempts by companies to stem pandemic-related disinformation as egregious censorship. Indeed, the intervention efforts themselves often became fodder for misinformation. For example, two U.S. media outlets claimed that a plan for users to flag text messages containing coronavirus disinformation to cell carriers was part of a White House effort to monitor private messages. The assertion was later disproved.

In addition to politicians’ and pundits’ politicization of interventions against disinformation, those attempting to address the issue must also note some psychological considerations. If something is perceived to be scarce, people tend to value it more. For example, suggesting that certain information will no longer be available—by, say, banning it from a social media platform—may actually make the information in question more coveted by users who fear they will soon lose access to it. As one 1988 study on jurors found, telling people to disregard information doesn’t make them do so. Additionally, banning disinformation and the users who violate terms and conditions is an attempted remedy to a larger problem, one that may misread why people consume this content in the first place.

For example, in an uncertain situation, such as a pandemic, people will likely seek out and accept any explanation that makes sense to them, even if the supposed answer is a form of “magical thinking.” In uncertain, uncomfortable situations, people also tend to want to belong to groups, exacerbating an “us versus them” mentality and increasing the likelihood that people who fall for certain types of disinformation will be labeled conspiracy theorists. Those labels, in turn, push the people in question further into even darker corners of the internet, where they find and connect with others who share their views rather than judge them for it. These conditions make people more vulnerable to manipulation by media figures who inflame fears of censorship and discrimination.

To make sense of this complex system, multiple layers within the information environment must be considered simultaneously. It’s not enough to look at narrow examples of disinformation or to use blunt instruments—such as content moderation—to stop it. Observers and policymakers have to understand how different types of actors within the information environment will manipulate a given situation to their financial or political benefit. Also key is understanding which conditions produce which outcomes among the public, as a result of both interventions and the responses to those interventions. This goes far beyond piecemeal approaches that study the effectiveness of a single intervention, like banning specific users from one platform to clean up disinformation on that service.

To get ahead of disinformation, the information environment must be modeled so that observers, policymakers, and moderators are able to study the interactions among people, technology, information sources, and content over time. This will enable better testing of proposed policies—including different content moderation tactics—to understand their impact, both intended and otherwise, before they are deployed. To reach this level of proactive policy testing, the information environment should be observed and classified in ways similar to how biologists build frameworks for understanding and studying the living world. This entails evaluating the various disciplines currently studying aspects of the information environment (including communication studies, political science, behavioral economics, and computer science, to name a few) and systematizing what is already known into a structured study of the information environment, ultimately creating an information ecology.
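To make this idea concrete, here is a minimal, purely illustrative sketch of what such modeling could look like: a toy agent-based simulation, written in Python, in which a hypothetical moderation policy (removing a fraction of circulating low-quality posts each step) is compared against a no-intervention baseline. Every name, parameter, and dynamic below is an assumption invented for illustration; it is not a description of any existing model or platform.

```python
import random
from dataclasses import dataclass

# Toy model only: agents encounter and sometimes reshare "polluting" posts.
# A hypothetical moderation policy removes a fraction of circulating pollution
# each step. Every parameter below is an assumption chosen for demonstration.

@dataclass
class Agent:
    credulity: float   # probability of resharing a polluting post once seen
    exposed: int = 0   # polluting posts this agent has encountered so far

def simulate(n_agents: int = 1000, steps: int = 50,
             pollution_rate: float = 0.2, removal_rate: float = 0.0,
             seed: int = 42) -> float:
    """Return the average number of polluting posts seen per agent."""
    rng = random.Random(seed)
    agents = [Agent(credulity=rng.random()) for _ in range(n_agents)]
    circulating = 10  # polluting posts in circulation at the start (assumed)
    for _ in range(steps):
        # The hypothetical policy removes part of the circulating pollution.
        circulating -= int(circulating * removal_rate)
        reshared = 0
        for agent in agents:
            # Chance of encountering pollution grows with how much circulates.
            if rng.random() < min(1.0, circulating / n_agents):
                agent.exposed += 1
                if rng.random() < agent.credulity:
                    reshared += 1  # the post propagates further
        # Fresh pollution also enters the system regardless of moderation.
        circulating += reshared + int(n_agents * pollution_rate * rng.random())
    return sum(a.exposed for a in agents) / n_agents

if __name__ == "__main__":
    baseline = simulate(removal_rate=0.0)
    moderated = simulate(removal_rate=0.3)
    print(f"Average exposure per agent, no moderation: {baseline:.1f}")
    print(f"Average exposure per agent, 30% removal:   {moderated:.1f}")
```

A real model of this kind would need to represent platforms, outlets, influencers, and feedback loops, and would have to be validated against observed data; the point of the sketch is only that candidate interventions can be compared in simulation rather than tested live on the public.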

At a basic level, information ecology is intended to observe and categorize the information environment over time to identify patterns. Across almost three millennia of human history, I have found the same patterns in conditions, disturbances, and entities within the information environment. After a new technology is introduced into an information ecosystem that changes how entities within it process and share information, there is a fluctuation in conditions such as the number of people who can meaningfully engage with information, the speed and distance at which information can travel, and the volume of information produced. Increases in these conditions precede common disturbances, such as information competition, where two or more communities compete for the supremacy of their idea over the wider ecosystem.

Other common disturbances include information floods, information pollution, and disruptions to the feedback loops between entities within an information ecosystem. These disturbances are often exacerbated by a variety of common entity types, such as political actors, profiteers, proselytizers, and consumers. This remarkable consistency in underlying processes contradicts the misconception that the modern information environment is unique and unprecedented. Such ecological disturbances have long been part of strategic competition, and the middle of one is not the best time to intervene. This would have been clear earlier had observers analyzed the information environment through an ecological lens. Democratic societies should have been understanding and addressing these challenges from the moment new technology was introduced: studying the shifts, and raising awareness of the coming onslaught of information floods and pollution and of how various actors would capitalize on these changes.
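As a hypothetical illustration of what systematizing these categories might look like in practice, the sketch below encodes the conditions, disturbances, and entity types named above as a small structured vocabulary with which dated observations of an information ecosystem could be tagged. The category names follow this essay; the data structures and the example record are assumptions made for demonstration only.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum, auto

# A minimal structured vocabulary for tagging observations of an information
# ecosystem, using the categories named in the essay. Purely illustrative.

class Condition(Enum):
    ENGAGED_POPULATION = auto()   # how many people can meaningfully engage
    TRANSMISSION_SPEED = auto()   # how fast and far information can travel
    INFORMATION_VOLUME = auto()   # how much information is being produced

class Disturbance(Enum):
    INFORMATION_COMPETITION = auto()
    INFORMATION_FLOOD = auto()
    INFORMATION_POLLUTION = auto()
    FEEDBACK_LOOP_DISRUPTION = auto()

class EntityType(Enum):
    POLITICAL_ACTOR = auto()
    PROFITEER = auto()
    PROSELYTIZER = auto()
    CONSUMER = auto()

@dataclass
class Observation:
    """One dated note about a particular information ecosystem."""
    when: date
    ecosystem: str                                    # e.g., "England, 1640s"
    conditions: list[Condition] = field(default_factory=list)
    disturbances: list[Disturbance] = field(default_factory=list)
    entities: list[EntityType] = field(default_factory=list)
    note: str = ""

# Example record: tagging the Charles I episode described earlier.
example = Observation(
    when=date(1641, 1, 1),
    ecosystem="England, 1640s",
    conditions=[Condition.ENGAGED_POPULATION, Condition.TRANSMISSION_SPEED],
    disturbances=[Disturbance.INFORMATION_COMPETITION,
                  Disturbance.INFORMATION_POLLUTION],
    entities=[EntityType.POLITICAL_ACTOR, EntityType.PROSELYTIZER],
    note="Pamphlets, petitions, and imported propaganda during the crisis.",
)
```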

It’s time to stop experimenting in the wild. The coronavirus pandemic has shown that such real-time experiments are increasingly becoming experiments on democracy. Learning from Charles’s story, policymakers should move past disinformation and toward studying the information environment to help democracy keep its proverbial head.
