5 August 2019

How A.I. Could Be Weaponized to Spread Disinformation

By CADE METZ and SCOTT BLUMENTHAL 

In 2017, an online disinformation campaign spread against the “White Helmets,” claiming that the group of aid volunteers was serving as an arm of Western governments to sow unrest in Syria.

This false information was convincing. But the Russian organization behind the campaign ultimately gave itself away because it repeated the same text across many different fake news sites.

Now, researchers at the world’s top artificial intelligence labs are honing technology that can mimic how humans write, which could potentially help disinformation campaigns go undetected by generating huge amounts of subtly different messages.

One of the statements below is an example from the disinformation campaign. A.I. technology created the other. Guess which one is A.I.:

The White Helmets alleged involvement in organ, child trafficking and staged events in Syria.


The White Helmets secretly videotaped the execution of a man and his 3 year old daughter in Aleppo, Syria.

Tech giants like Facebook and governments around the world are struggling to deal with disinformation, from misleading posts about vaccines to incitement of sectarian violence. As artificial intelligence becomes more powerful, experts worry that disinformation generated by A.I. could make an already complex problem bigger and even more difficult to solve.

In recent months, two prominent labs — OpenAI in San Francisco and the Allen Institute for Artificial Intelligence in Seattle — have built particularly powerful examples of this technology. Both have warned that it could become increasingly dangerous.

Alec Radford, a researcher at OpenAI, argued that this technology could help governments, companies and other organizations spread disinformation far more efficiently: Rather than hire human workers to write and distribute propaganda, these organizations could lean on machines to compose believable and varied content at tremendous scale.

A fake Facebook post seen by millions could, in effect, be tailored to political leanings with a simple tweak.

“The level of information pollution that could happen with systems like this a few years from now could just get bizarre,” Mr. Radford said.

This type of technology learns about the vagaries of language by analyzing vast amounts of text written by humans, including thousands of self-published books, Wikipedia articles and other internet content. After “training” on all this data, it can examine a short string of text and guess what comes next.
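As a rough illustration of what “guessing what comes next” looks like in practice, here is a minimal sketch of prompt-based generation. It assumes the publicly released GPT-2 model and the Hugging Face transformers library, which stand in here for the labs’ systems; neither is named in this article.

```python
# A minimal sketch of prompt-based text generation, assuming the publicly
# released GPT-2 model and the Hugging Face "transformers" library.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # fix the random seed so the samples are reproducible

# A simple sentence as a starting point, drawn from this article's prose.
prompt = "Researchers at the world's top artificial intelligence labs are"

# Sample five continuations of up to 60 tokens each; the model repeatedly
# guesses the next word, so each sample is a subtly different passage.
samples = generator(prompt, max_length=60, num_return_sequences=5, do_sample=True)

for i, sample in enumerate(samples, 1):
    print(f"--- sample {i} ---")
    print(sample["generated_text"])
```

Because each call samples fresh continuations, one prompt can yield many subtly different passages, which is what would let a disinformation campaign avoid the repeated, copy-and-paste text that gave the White Helmets operation away.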

We wanted to see what kind of text each of the labs’ systems would generate with a simple sentence as a starting point. How would the results change if we changed the subject of the sentence and the assertion being made?

See the results of our tests below. Changing the subject or object of the prompt sentence changes the output, and repeated runs of the same prompt produce different variations.

Before your deeply ingrained faith is shaken, know one thing: the Democrats who instigated, or at least helped write, the immigration policies that sent children to illegal immigrant border facilities now know they can't take the blame. Like ghosts in the light, the Democrats present in scripted speeches why their disastrous policies can't be held accountable.

They can't take the blame because they don't really care. They care about counting votes, not about a cut in immigration flows so we can adequately care for the mothers, fathers, siblings, infants and toddlers that are now multiplying in cell cages.

The Democrats don't care, for instance, because of immigration votes in the 1950s and 60s, or that the immigration swarms of the late 1990s and early 2000s were similarly undertaken by Republican governors in their state capitols to proactively try to prevent organized immigrant groups from coming.

Of course not.

Democrats care most about something else: voting. Voters will buy those votes the only way that matters to them -- votes that harm Republicans as many as possible.

And the Democrats have mastered that strategy.

Text generated by the Allen Institute’s system.

OpenAI and the Allen Institute made prototypes of their tools available to us to experiment with. We fed four different prompts into each system five times.

What we got back was far from flawless: The results ranged from nonsensical to moderately believable, but it’s easy to imagine that the systems will quickly improve.

Researchers have already shown that machines can generate images and sounds that are indistinguishable from the real thing, which could accelerate the creation of false and misleading information. Last month, researchers at a Canadian company, Dessa, built a system that learned to imitate the voice of the podcaster Joe Rogan by analyzing audio from his old podcasts. It was a shockingly accurate imitation.

Now, something similar is happening with text. OpenAI and the Allen Institute, along with Google, lead an effort to build systems that can completely understand the natural way people write and talk. These systems are a long way from that goal, but they are rapidly improving.

“There is a real threat from unchecked text-generation systems, especially as the technology continues to mature,” said Delip Rao, vice president of research at the San Francisco start-up A.I. Foundation, who specializes in identifying false information online.

OpenAI argues the threat is imminent. When the lab’s researchers unveiled their tool this year, they theatrically said it was too dangerous to be released into the real world. The move was met with more than a little eye-rolling among other researchers. The Allen Institute sees things differently. Yejin Choi, one of the researchers on the project, said software like the tools the two labs created must be released so other researchers can learn to identify them. The Allen Institute plans to release its false news generator for this reason.

Among those making the same argument are engineers at Facebook who are trying to identify and suppress online disinformation, including Manohar Paluri, a director on the company’s applied A.I. team.

“If you have the generative model, you have the ability to fight it,” he said.
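One way to read that remark: text sampled from a generator can itself become training data for a detector. The sketch below shows that idea in the simplest possible form, with bag-of-words features, a logistic regression classifier and toy placeholder passages; it is an illustration only, not how Facebook or the Allen Institute build their detectors, which rely on large corpora and neural models.

```python
# A minimal sketch of the "use the generator to fight it" idea: collect
# passages written by people and passages sampled from a text generator,
# then train a classifier to tell them apart. Features and data here are
# placeholders chosen for brevity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder corpora: in practice these would hold many thousands of passages.
human_texts = [
    "example human-written passage one",
    "example human-written passage two",
]
machine_texts = [
    "example machine-generated passage one",
    "example machine-generated passage two",
]

texts = human_texts + machine_texts
labels = [0] * len(human_texts) + [1] * len(machine_texts)  # 1 = machine-generated

detector = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
detector.fit(texts, labels)

# Estimated probability that a new passage was machine-generated.
print(detector.predict_proba(["example passage to check"])[0][1])
```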
