
7 August 2019

Not Your Father’s Bots

By Sarah Kreps and Miles McCain


Surveillance images from a U.N. sanctions report purportedly showing a North Korean vessel engaged in illegal trading (United Nations Security Council / Reuters)

North Korean industry is critical to Pyongyang’s economy as international sanctions have already put a chill on its interaction with foreign investors who are traded in the market. Liberty Global Customs, which occasionally ships cargo to North Korea, stopped trading operations earlier this year because of pressure from the Justice Department, according to Rep. Ted Lieu (D-Calif.), chairman of the Congressional Foreign Trade Committee.

The paragraph above has no basis in reality. It is complete and utter garbage, intended not to be correct but to sound correct. In fact, it wasn’t written by a human at all—it was written by GPT-2, an artificial intelligence system built by OpenAI, an AI research organization based in California.

Disinformation is a serious problem. Synthetic disinformation—written not by humans but by computers—might emerge as an even bigger one. Russia already employs online “trolls” to sow discord; automating such operations could propel its disinformation efforts to new heights.

We conducted a study to see whether synthetic disinformation could generate convincing news stories about complex foreign policy issues. Our results were clear: it can.
A DANGEROUS TECHNOLOGY

While the details of GPT-2 are highly technical, it is, essentially, a program that uses artificial intelligence to synthesize new text. Given a short prompt, GPT-2 can continue the text in the same style. In some cases, the text the software generates is indistinguishable from that written by a human.

The potential for abuse is obvious. The software’s creators at OpenAI worry that bad actors could use GPT-2 to automatically generate misleading news articles, abusive social media posts, and spam. Because of this potential for misuse, OpenAI chose not to release the full version of GPT-2 to the public. Instead, the group released a watered-down version that, while still useful for research, entailed fewer risks.
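For readers who want a concrete sense of what “prompt in, continuation out” means, the sketch below shows how the publicly released version of GPT-2 can be asked to continue a short passage. It is illustrative only: it uses the open-source Hugging Face transformers wrapper around the public GPT-2 weights and an invented prompt, not the exact tooling or text from our study.

```python
# Illustrative only: continuing a short prompt with the publicly released GPT-2,
# here loaded through the open-source Hugging Face "transformers" library.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")  # the public, smaller model
set_seed(42)  # fix the random seed so the sampled continuation is reproducible

# An invented prompt, stylistically similar to a news lede (not text from the study).
prompt = (
    "International sanctions have put new pressure on North Korea's shipping "
    "industry, according to officials familiar with the matter."
)

# Ask the model for one continuation of up to roughly 200 tokens in the same style.
result = generator(prompt, max_length=200, num_return_sequences=1, do_sample=True)
print(result[0]["generated_text"])
```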

In its current form, GPT-2 cannot easily be configured to convey a specific point of view or make certain arguments. Configuration, however, may not be necessary: depending on the prompt, GPT-2 will adapt its tone and topic automatically. If a prompt describes a fabricated event, as a fake news article might, GPT-2 can synthesize additional details and quotes so as to make the fabricated event appear real.

In the end, the exact details of GPT-2’s output are not critical. What matters is that the reader absorbs the lede—the primary fabrication, written by a human to serve the disinformation campaign’s particular interests—and that the synthesized body text appears sufficiently topical and trustworthy to convince the reader that the fabricated event did, in fact, happen.
ALL THE FAKE NEWS THAT'S FIT TO PRINT

To test whether synthetic disinformation produced by GPT-2 could sway opinion on complex foreign policy issues of the kind that foreign powers might be especially interested in affecting, we used the publicly available version of GPT-2 to generate articles about a North Korean ship seizure.

We selected North Korea because of the long history of tension between Washington and Pyongyang and, hence, the topic’s evergreen relevance. Almost 90 percent of Americans view North Korea unfavorably, but day-to-day concern about the country ebbs and flows: 51 percent of Americans cited North Korea as the greatest enemy of the United States in 2018, but only 15 percent did in 2019. The negative but malleable nature of U.S. public opinion about North Korea made the case an ideal one through which to study the potential effect of weaponized fabricated news stories.

We envisioned a future disinformation campaign in which GPT-2 helps scale the production of fabricated news articles. Humans would write the ledes; GPT-2 would write the seemingly credible body text. We used the publicly available version of GPT-2 to generate 20 texts continuing from the first two paragraphs of a New York Times article about the seizure of a North Korean ship. From those 20, we selected the three that seemed the most convincing. (The first paragraph of this article is an excerpt from one of those three texts.)
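A rough sketch of that generate-then-filter step appears below. It assumes the same open-source tooling as the earlier sketch and substitutes a placeholder for the New York Times prompt; the filtering itself was done by people reading the outputs, not by software.

```python
# Illustrative sketch of the generate-then-filter workflow: sample many
# continuations of one prompt, then let a human pick the convincing ones.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(7)

# Placeholder: in the study, the prompt was the first two paragraphs of the
# New York Times article on the North Korean ship seizure.
prompt = "FIRST TWO PARAGRAPHS OF THE SOURCE ARTICLE GO HERE"

# Sample 20 candidate continuations of the prompt.
candidates = generator(prompt, max_length=400, num_return_sequences=20, do_sample=True)

# The filtering step is manual: print each candidate so a human reader can
# judge which, if any, are coherent and convincing enough to use.
for i, candidate in enumerate(candidates, start=1):
    print(f"--- Candidate {i} ---")
    print(candidate["generated_text"])
```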

We then put these three articles to the test in an online survey with 500 respondents. We first asked the respondents their opinions of North Korea, as Gallup frequently does. Then each respondent read the first two paragraphs of the New York Times article, followed either by one of the three synthesized stories or by the rest of the original piece.

After the respondents read the full text, we asked a series of demographic and opinion questions. Most important, we asked whether the respondents found the treatment text credible and whether they would share it on social media.

We chose The New York Times as the source for our prompt text because of its staid and trustworthy style. We wanted to know whether GPT-2 could imitate a Times story on a complex foreign policy issue and match the credibility score of the real article. Having the ability to convey the authority and credibility typical of the Times would prove particularly useful to a disinformation campaign.
BETTER THAN THE REAL THING?

A majority of respondents in all four groups—those who read the original article and those who read each of the three treatment texts—found their texts credible. A staggering 72 percent of respondents in one group reading a synthesized article considered it credible—less than the 83 percent that rated the original New York Times article credible, but still an overwhelming consensus. The worst-performing treatment text duped fully 58 percent of respondents.

The respondents’ interest in sharing the texts they read on social media did not vary much across the four groups. Approximately one-quarter of respondents who read the original New York Times story indicated that they would share the story on social media. Interestingly, the synthetic text rated as least credible had a share rate statistically indistinguishable from that of the Times story.
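“Statistically indistinguishable” here means that a standard two-proportion test finds no significant difference between the share rates. The sketch below shows the shape of that comparison with invented counts, not the survey’s actual responses.

```python
# Illustrative two-proportion z-test: are two share rates statistically
# distinguishable? The counts below are invented, not the survey's data.
from statsmodels.stats.proportion import proportions_ztest

shares = [31, 28]         # respondents in each group who said they would share
group_sizes = [125, 125]  # respondents per group (roughly 500 split four ways)

z_stat, p_value = proportions_ztest(count=shares, nobs=group_sizes)
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")
# A large p-value (conventionally above 0.05) means the two share rates
# cannot be statistically distinguished at this sample size.
```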

How did our process compare with a real-world synthesized disinformation campaign? Instead of using the full version of GPT-2 that OpenAI makes available to researchers, we used the less capable, publicly available version—just as a real-world disinformation campaign would. Our text selection process also matched what a disinformation campaign might do: because the publicly available version of GPT-2 cannot produce coherent text consistently, a campaign would need to manually filter out nonsensical outputs (as we did). 

Large-scale synthesized disinformation is now possible, and its perceived credibility and potential to spread online rival those of an authentic Times article. As the technology for producing such synthetic texts improves, disinformation will become cheaper, more prevalent, and more automated. When such content floods the Internet, people may come to discount everything they read. The public will lose trust in the media and the other institutions it relies on for information, including the government, deepening political paralysis and polarization. Or worse: people will believe what they read, in which case foreign governments will be able to influence them at high speed and low cost—no St. Petersburg troll farm needed.
