29 November 2020

How One Social Media App Is Beating Disinformation

Elizabeth Lange, Doowan Lee

For many Americans who work in tech, Taiwan has become a model for the fight against disinformation. Like the United States, Taiwan is sharply divided on major issues, including national identity, China policy, and the legalization of same-sex marriage. Urban-rural divisions have further split Taiwanese society, and two major parties, whose relations have grown increasingly acrimonious, dominate Taiwanese politics. Yet unlike the United States, Taiwan is getting disinformation under control.

Partly, that success is due to government crackdowns on groups that spread disinformation, Taipei’s initiatives to improve media literacy, and President Tsai Ing-wen’s decision to prioritize the problem, exemplified by her appointment of Audrey Tang, a software engineer, as digital minister in 2016. But the crux of Taipei’s approach lies elsewhere: namely, in its ability to harness the power of its civil society and tech industry through a robust public-private partnership initiative.

In looking to Taiwan, the United States and other Western countries should focus on how the country has harnessed the power of partnerships—especially those between the government, third-party fact-checkers, and social media platforms—to hold back the tide of disinformation. There’s no better example of this kind of coordinated action than Line, a private messaging app that provides a blueprint for how the government can leverage nongovernmental organizations and the tech industry to beat disinformation.

Line, arguably Taiwan’s most popular messaging app, is the main battleground of disinformation in Taiwan. Line quickly took over the Taiwanese market after its launch in Japan in 2011 by a subsidiary of the Korean tech giant Naver Corporation. In 2019, approximately 90 percent of Taiwanese used the app, sending more than 9 billion messages per day. Like WhatsApp, Line’s design makes it easy to rapidly disseminate harmful and false content. It offers a high degree of anonymity, as user profiles often contain only a name and a picture, and by combining features such as an integrated news platform with private, encrypted group chats, it encourages users to share articles within the app. Sharing to other apps takes an extra step, and that friction point keeps users on Line.

This design has only amplified the disinformation Taiwan has long struggled with, which is primarily manufactured by content farms located in—or funded by—China. The 2018 Taiwanese elections exposed for the first time just how widely these manufactured stories had been shared within closed groups outside the purview of Line’s content moderation, which is limited to public posts, including those on timelines, blogs, or manga boards. A fake story in 2018 claimed, for instance, that Tsai spat on the ground at a ceremony honoring fallen veterans. Another convoluted campaign groundlessly alleged that Tsai’s government had failed to protect fruit farmers from price crashes by distancing the country from China.

At the time, some users were aware that they were coming across disinformation. According to a survey conducted by Line in July 2019, 46 percent of users believed that they had seen or been the recipient of suspicious content in the previous six months. However, only 33 percent stated that they fact-checked these messages, and even fewer—25 percent—then shared accurate information with others. These dispiriting figures spurred a number of civic and governmental efforts to bridge the gap, especially since private groups have reinforced the effects of filter bubbles and confirmation bias. Those behind these efforts knew they needed to find a way to balance privacy with voluntary fact-checking—without making users leave the platform.


What they came up with amounted to a public-private partnership, which the PPP Knowledge Lab defines as a long-term contract between a private party and the government “for providing a public asset or service, in which the private party bears significant risk and management responsibility.” Traditionally, these partnerships are used in sectors that lack effective regulation and private incentive to self-regulate. This approach seeks to remedy that issue by combining the vast resources and connections available to government agencies with the technical expertise and infrastructure of private companies.

In July 2019, at the same event where Line released the findings of its disinformation survey, the company unveiled its first public-private partnership to curb disinformation: the Digital Accountability Project (DAP), which it launched as a collaboration with Taiwan’s Executive Yuan and third-party fact-checking organizations. The purpose of the DAP partnership was straightforward: to incorporate fact-checking directly into the Line app and thus give the public the tools to become more critical about their media consumption.

Perhaps the most important part of this effort was Line Fact Checker, a chatbot that allows users to submit links or statements to be analyzed and verified against content previously fact-checked by one of five third-party organizations: the Executive Yuan Real-Time News Clarification page, the Taiwan FactCheck Center, Rumor & Truth, MyGoPen, and Cofacts.

Individually, those fact-checking operations were useful, but not nearly enough. Take Cofacts, for instance: It’s a collaborative, open-source fact-checking project that developed its first Line chatbot in 2018, which is still in use. If a user sends the chatbot a message that doesn’t match any of Cofacts’ existing data, the user can forward the message for manual fact-checking. The Cofacts database also supports a chatbot, affectionately named Aunt Meiyu, which can be added to group chats and automatically responds when users share falsehoods with one another. Of course, this still requires that users take the initiative to add the bot to their chat. The hope is that by humanizing the idea and marketing it as a fun addition, users will be more willing to use it.
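The Cofacts flow described above can be thought of as a simple lookup-or-queue loop: match an incoming message against previously checked rumors, and if nothing matches, queue it for volunteer review. The sketch below is a minimal, hypothetical illustration of that logic; the function names, sample database, and normalization step are invented for this example and do not reflect Cofacts’ actual code or API.

```python
# Hypothetical sketch of a Cofacts-style fact-check bot loop.
# FACT_DATABASE, REVIEW_QUEUE, and check_message are illustrative
# names, not part of any real Cofacts interface.

FACT_DATABASE = {
    "president spat at veterans ceremony": (
        "FALSE",
        "Debunked; see the fact-check article for details.",
    ),
}

REVIEW_QUEUE = []  # unmatched messages, forwarded for manual fact-checking


def normalize(text):
    """Crude normalization so near-identical copies of a rumor match one entry."""
    return " ".join(text.lower().split())


def check_message(text):
    """Reply with an existing verdict, or queue the message for volunteers."""
    key = normalize(text)
    if key in FACT_DATABASE:
        verdict, note = FACT_DATABASE[key]
        return f"{verdict}: {note}"
    REVIEW_QUEUE.append(key)
    return "No existing fact-check found; submitted for manual review."
```

A group-chat bot like Aunt Meiyu would run the same loop on every message it sees, replying only when a match is found, which is why its usefulness grows with the size of the volunteer-built database.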

These are all helpful tools, but Cofacts has limited funding and depends on volunteer members. Although its database is open-source and allows all volunteers to help verify content, only about 250 messages enter its database each week.

By harnessing the power of various fact-checking operations at once, the Line Fact Checker initiative can evaluate the accuracy of a user submission in a fraction of a second. Now, the Line bot can not only give a simple one-line evaluation, but also provide links to articles by reputable news sources on the same topic. Users can also submit their content for further verification to one of the third-party fact-checking operations participating in the program.
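The aggregation step the article describes, querying several fact-checking sources at once and returning a verdict plus supporting links, can be sketched as a simple fan-out lookup. Everything below is an assumption-laden illustration: the in-memory databases, placeholder link strings, and the `fact_check` function are invented; only the organization names come from the article.

```python
# Illustrative sketch of aggregating multiple fact-checking sources,
# in the spirit of Line Fact Checker. The databases and links are
# placeholders, not real data or URLs.

SOURCES = {
    "Taiwan FactCheck Center": {
        "rumor a": ("FALSE", "<link to clarification article>"),
    },
    "MyGoPen": {
        "rumor a": ("FALSE", "<link to clarification article>"),
    },
    "Cofacts": {
        "rumor b": ("MISLEADING", "<link to clarification article>"),
    },
}


def fact_check(claim):
    """Collect every participating source's verdict and link for a claim."""
    results = []
    for source, database in SOURCES.items():
        if claim in database:
            verdict, link = database[claim]
            results.append({"source": source, "verdict": verdict, "link": link})
    return results
```

Because each submission is checked against all sources in one pass, a claim verified by any single organization benefits every user of the shared service, which is the core advantage over the individual bots described above.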

This collaborative bot model, although in its early stages, has substantial benefits. First, Line does not have to interfere directly with conversations and potentially infringe on free speech rights. While this approach may limit its reach, it also prevents the company from becoming an “arbiter of truth,” something social media platforms have shied away from. Second, it doesn’t make users leave the app to verify information—something that’s beneficial both to users’ real-time ability to discern disinformation and for Line’s bottom line, a rare win-win. Third, because the bot can aggregate submissions and verifications from millions of users and multiple platforms, the fact-checking service gets stronger each time it’s used. In this vein, Line can also collect valuable data previously unavailable to it, such as popular topics exploited by disinformation campaigns and language similarities across posts.

So far, the DAP’s effects have rippled outward. For example, Facebook, which had already been working with the Taiwan FactCheck Center, inked a deal with MyGoPen as well in March. Even though Facebook has chosen to maintain its now-familiar disinformation strategy—asking fact-checking services to evaluate content behind the scenes and algorithmically demoting content accordingly—this is a promising development in the industry. It shows that, even beyond Line, Tsai’s government is having a positive influence on tech companies, and that Line’s public-private model is clearly a viable option for combating disinformation without stifling speech.

Fighting mass-scale disinformation is still an uphill battle. But metrics for the use of Line Fact Checker so far are promising: The bot has received more than 230,000 user submissions since its inception, covering some 41,000 distinct messages. That ratio suggests multiple users are reporting the same false narratives as they circulate, an encouraging sign for media literacy. Taiwan’s success in combating disinformation about COVID-19 also suggests that the DAP is working. It is clear that by creating the DAP and offering official news clarification services, Taiwan’s government has amplified accurate content on social media.

In Taiwan’s case, the partnership with Line and third-party fact-checkers could build long-term firewalls to safeguard Taiwanese people from China’s disinformation campaigns—campaigns similar to those aimed at Americans, whether from Beijing, Moscow, or other threats, both at home and abroad. And the DAP isn’t just successful because of its fact-checking tools, but also because its participating organizations promote them to the public. For instance, the Taiwan FactCheck Center hosts events to teach everyone from university students to elementary school teachers how to use fact-checking services effectively.

Following Taiwan’s example, the U.S. government should dramatically expand its public-private partnerships with popular social media apps. Of course, since Americans are less trusting of their government than the Taiwanese—and since disinformation sometimes comes from within the government itself, including from its head—Taipei’s model wouldn’t translate perfectly. The United States would have to be careful to stay out of fact-checking itself and limit its role to amplifying the work of private tech companies. That would be tough, but the 2020 U.S. election has shown that Washington can certainly be helpful in fending off foreign cyberthreats. And despite concerns of bias, disinformation is simply too difficult for either Washington’s cybersecurity professionals or the industry to combat alone. Taipei has been able to tap into two inherent strengths of democracies: the power of independent media and civil society. It’s about time the world’s oldest extant democracy followed suit.