
21 January 2023

Twitter, the EU, and self-regulation of disinformation

John Villasenor 

Earlier this month, Germany’s Digital and Transport Minister Volker Wissing met with Twitter CEO Elon Musk to discuss disinformation. As reported in Ars Technica, following the meeting, a ministry spokesperson said that “Federal Minister Wissing made it clear . . . that Germany expects the existing voluntary commitments against disinformation and the rules of the Digital Services Act to be observed in the future.”

Twitter is one of several dozen signatories to the European Union’s (EU) “2022 Strengthened Code of Practice on Disinformation,” a self-regulatory framework for addressing disinformation. In light of the massive staff cuts at Twitter in recent months, EU governments are understandably concerned about whether Twitter will be in a position to meet the commitments it made prior to its acquisition by Elon Musk.

The 2022 Disinformation Code contains a series of 44 “Commitments,” some of which are further subdivided into “Measures.” When a company becomes a signatory, it submits a subscription document identifying which Commitments (and, more specifically, which Measures) it is signing up for. Twitter’s June 2022 subscription document indicates that Twitter has committed, among other things, to: “defund the dissemination of disinformation and misinformation,” “prevent the misuse of advertising systems to disseminate misinformation or disinformation,” and “put in place or further bolster policies to address both misinformation and disinformation.”

Given the recent staffing cuts and management changes at Twitter, it is unsurprising that the company is in the spotlight regarding disinformation. But all the signatories—a list that includes not just Twitter but also Google, Meta, Microsoft, and TikTok—face potential challenges in meeting their commitments under the 2022 Disinformation Code.

A key difficulty of compliance with the 2022 Disinformation Code lies in determining what is and is not misinformation and disinformation. The 2022 Disinformation Code uses definitions from the European Democracy Action Plan (EDAP), which defines misinformation as “false or misleading content shared without harmful intent though the effects can still be harmful, e.g. when people share false information with friends and family in good faith.” Disinformation is defined in EDAP as “false or misleading content that is spread with an intention to deceive or secure economic or political gain and which may cause public harm.”
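To make the distinction concrete, here is a minimal sketch (purely illustrative, and not part of the Code or of any platform’s actual tooling) that encodes the two EDAP definitions as a classification over two judgments: whether content is false or misleading, and whether it was spread with intent to deceive or to secure economic or political gain. In practice, of course, the hard part is making those two judgments in the first place.

```python
from dataclasses import dataclass
from enum import Enum


class Label(Enum):
    NONE = "not covered by the EDAP definitions"
    MISINFORMATION = "misinformation"
    DISINFORMATION = "disinformation"


@dataclass
class ContentAssessment:
    # Whether the content is false or misleading; establishing this is
    # itself the hard problem discussed in the surrounding text.
    false_or_misleading: bool
    # Whether it was spread with intent to deceive or to secure
    # economic or political gain.
    deceptive_or_self_interested_intent: bool


def classify(assessment: ContentAssessment) -> Label:
    """Map an assessment onto the EDAP categories quoted above."""
    if not assessment.false_or_misleading:
        return Label.NONE
    if assessment.deceptive_or_self_interested_intent:
        return Label.DISINFORMATION
    return Label.MISINFORMATION


# Example: false content shared in good faith with friends and family
print(classify(ContentAssessment(True, False)).value)  # -> "misinformation"
```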

These definitions sound simple enough. And, at the extremes, they are easy to apply. Social media posts that try to sell false cures for cancer are easily identifiable as problematic. But consider this now-deleted tweet posted in February 2020 by the then-Surgeon General of the United States: “Seriously people – STOP BUYING MASKS! They are NOT effective in preventing general public from catching #Coronavirus, but if healthcare providers can’t get them to care for sick patients, it puts them and our communities at risk!”

Sent in the early days of the pandemic, this tweet mixes incorrect information (the assertion that masks aren’t effective at reducing COVID-19 transmission among the general public) with correct information (the assertion that a shortage of masks for healthcare providers creates risks for them and others). With the benefit of hindsight, it’s easy to argue that this tweet should have been quickly subjected to some form of content moderation, such as a label indicating that it contained inaccurate information regarding the utility of masks. But February 2020 was a time of high uncertainty regarding COVID-19, and social media companies under pressure to identify misinformation quickly don’t have the luxury of waiting until that uncertainty resolves.

To take another example, consider a hypothetical tweet sent by a political candidate on the evening of an election day alleging voting fraud in a particular jurisdiction. With the passage of time, the accuracy of that allegation can be investigated. But in the immediate time frame—that is, the very time frame when the tweet can do the most damage if it is false—there isn’t yet enough information to know that it is false.

The paradox of disinformation is that it can be harmful over the short-term time frames during which it is not yet possible to confidently label it as disinformation. This isn’t a paradox that social media companies can solve through clever AI, or that governments can resolve through regulation.

The 2022 Disinformation Code is a self-regulatory framework that applies only to those companies that volunteer to be signatories. Relatedly and more generally, companies that provide “intermediary services”—including social media companies and search engines—to people in the EU are obligated to comply with the EU’s Digital Services Act (DSA), a regulatory framework that, among other things, has extensive requirements regarding identification and handling of “illegal content.”

The DSA entered into force in November 2022 and becomes fully applicable in early 2024 for all but the largest companies. “Very Large Online Platforms” (VLOPs) and “Very Large Online Search Engines” (VLOSEs) face an accelerated schedule, with DSA compliance required four months after the EU makes a VLOP or VLOSE designation. That designation will likely occur in the first half of 2023, and will apply to online platforms with “a number of average monthly active recipients of the service in the Union equal to or higher than 45 million” (e.g., companies such as Alphabet, Apple, and Meta). There is also an interesting question regarding whether the European Commission will designate Twitter as a VLOP. Recent communications from the Commission have hinted that this designation may be forthcoming, though the Commission has not yet formally made that decision.
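As a rough illustration (the user counts and dates below are assumptions for the example, not Commission data), the two rules just described reduce to a threshold check against the 45-million-recipient criterion and a four-month date calculation from the designation decision:

```python
from datetime import date

VLOP_THRESHOLD = 45_000_000  # "equal to or higher than 45 million" average monthly
                             # active recipients of the service in the Union


def is_vlop(avg_monthly_active_recipients_eu: int) -> bool:
    """Check the DSA's size criterion for a VLOP/VLOSE designation."""
    return avg_monthly_active_recipients_eu >= VLOP_THRESHOLD


def compliance_deadline(designation: date, months: int = 4) -> date:
    """DSA obligations apply four months after the designation decision."""
    month_index = designation.month - 1 + months
    year = designation.year + month_index // 12
    month = month_index % 12 + 1
    return date(year, month, min(designation.day, 28))  # clamp to avoid invalid dates


# Hypothetical example: a platform with an assumed 50 million EU recipients,
# designated on an assumed date in early 2023
print(is_vlop(50_000_000))                     # True
print(compliance_deadline(date(2023, 4, 25)))  # 2023-08-25
```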

The upshot is that 2023 promises to be a very active year for engagement between social media companies and the EU. The EU’s strong stance against disinformation will need to be reconciled with the inherent uncertainty that arises when rapidly vetting social media posts for accuracy. However well that vetting is performed, there will always be some false negatives and false positives, as the illustrative arithmetic below suggests.
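Here is a back-of-the-envelope sketch with purely hypothetical numbers (the post volume, prevalence of disinformation, and error rates are assumptions, not figures from any platform or regulator): even a vetting process that is right the overwhelming majority of the time still produces a large absolute number of errors at platform scale.

```python
# Back-of-the-envelope illustration; all numbers are assumed, not real data.
posts_per_day = 100_000_000     # hypothetical daily post volume
prevalence = 0.001              # hypothetical share of posts that are disinformation
false_negative_rate = 0.10      # hypothetical: 10% of disinformation is missed
false_positive_rate = 0.005     # hypothetical: 0.5% of benign posts are wrongly flagged

disinfo_posts = posts_per_day * prevalence
benign_posts = posts_per_day - disinfo_posts

missed = disinfo_posts * false_negative_rate            # false negatives: harmful posts that slip through
wrongly_flagged = benign_posts * false_positive_rate    # false positives: legitimate posts flagged

print(f"Disinformation missed per day:    {missed:,.0f}")
print(f"Legitimate posts flagged per day: {wrongly_flagged:,.0f}")
```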

This in turn means there will be a degree of subjectivity in evaluating whether a social media company has complied with its obligations and/or commitments to address disinformation. In short, the real test for disinformation regulatory frameworks will lie in their application, not in their promulgation.
