9 January 2024

Defending the Year of Democracy

Kat Duffy and Katie Harbath

This year, over 80 national elections are scheduled to take place, directly affecting an estimated 4.2 billion people—52 percent of the globe’s population—in the largest election cycle the world will see until 2048. In addition to the U.S. presidential election, voters will go to the polls in the European Union, India, Indonesia, Mexico, South Africa, Ukraine, the United Kingdom, and dozens of other countries. Collectively, the stakes are high. The candidates who win will have a chance to shape not only domestic policy but also global issues including artificial intelligence, cybersecurity, and Internet governance.

This year’s elections are important for reasons that go beyond their scale. They will be subject to a perfect storm of heightened threats and weakened defenses. Commercial decisions made by technology companies, the reach of global digital platforms, the complexity of the environments in which these platforms operate, the rise of generative AI tools, the growth of foreign influence operations, and the emergence of partisan domestic investigations in the United States have converged to supercharge threats to elections worldwide.

Each election will, of course, be affected by local issues, the cultural context, and the main parties’ policies. But each will also be challenged by global threats to electoral integrity and, by extension, democracy. Governments, companies, and civil society groups must invest to mitigate the risks to democracy and track the emergence of new and dangerous electoral threats. If they get to work now, then 2024 may be remembered as the year when democracy rallied.

DISTRACTED WATCHMEN

Elections take place within local contexts, in local languages, and in accordance with local norms. But the information underpinning them increasingly comes from global digital platforms such as Facebook, Google, Instagram, Telegram, TikTok, WhatsApp, and YouTube. Voters rely on these commercial platforms to communicate and receive information about electoral processes, issues, and candidates. As a result, the platforms exert powerful sway over elections. In a recent survey by Ipsos, 87 percent of respondents across 16 countries with elections in 2024 expressed concern that disinformation and fake news could influence the results, with social media cited as the leading source of disinformation, followed by messaging apps. Although voters use these social media platforms, they are generally unable to influence the platforms’ decisions or priorities. Platforms are not obliged to fight information manipulation, protect information integrity, or monitor electoral environments equitably across the communities in which they operate. Nor are they focused on doing so.

Instead, the largest U.S. technology companies are increasingly distracted. Facing declining profits, higher compliance costs, pressure to invest in AI, and increased scrutiny from governments around the world, leading companies such as Google and Meta have shifted resources away from their trust and safety teams, which mitigate electoral threats. X (formerly known as Twitter) has gone even further, implementing massive cuts and introducing erratic policy changes that have increased the amount of hate speech and disinformation on the platform. Some platforms have begun, however, to prepare for this year’s elections. Meta, for example, has announced that it will apply certain safeguards, as will Google, both globally and in the United States. Both companies are also seeking to maximize the use of generative AI-based tools for content moderation, which may offer improvements to the speed and scale of information monitoring.

Newer platforms—such as Discord, TikTok, Twitch, and others—are beginning to formulate election-related policies and mitigation strategies, but they lack experience operating during elections. Telegram, which is an established global platform, takes a lax approach to combating disinformation and extremism, while U.S.-centric platforms including Gab, Rumble, and Truth Social have adopted a hands-off strategy that allows extremism, bigotry, and conspiracy theories to flourish. Some even welcome Russian propagandists banned from other platforms. WhatsApp and other popular encrypted messaging platforms present a distinct challenge: because the content shared on them is encrypted, misuse is far harder to detect and curb.
Tech platforms have neither the resources nor the resolve to monitor and address problematic content properly. Every digital platform has a different process for reporting disinformation, hate speech, or harassment—as well as a varying capacity to respond to those threats. Companies will invariably be confronted with difficult tradeoffs, especially when their employees’ personal safety is at stake. At the same time, revenue constraints, technological limitations, and political prioritization will open a vast gap between the resources devoted to U.S. electoral integrity and those focused on other countries’ elections. As a result, most nations will be neglected.