
17 November 2022

Deepfakes are Russia’s new 'weapon of war'

Paul Szoldra

ABOUT A MONTH AGO, Michael McFaul, a former U.S. ambassador to Russia, issued a strange warning. Someone, he said, was impersonating him with a phone number bearing a Washington, D.C., area code and trying to reach his associates on video calls.

“You will see an AI-generated ‘deep fake’ that looks and talks like me,” he said on Twitter, using a term coined in 2017 to describe imagery deceptively edited to alter a person’s identity. The tech has been used to put Vladimir Putin on SNL for laughs, among other amusements, and scammers have posed as CEOs and Navy admirals to trick people out of cash. But this seemed different: it was a live video call.

“It is not me,” McFaul said. “This is a new Russian weapon of war. Be careful.”

McFaul later said he wasn’t entirely sure who was behind the calls. But given his passionate support of Ukraine in its fight to eject Russia after eight years of war, he believed it was “obviously designed to undermine Ukraine’s diplomatic and war efforts.”

Was it the work of pranksters? Russian spies? One thing was clear: It wasn’t the first time. Since the Feb. 24 invasion, several Kremlin adversaries have been targeted by deepfakes.

Photo illustration by Paul Szoldra

As Russia attacked Kyiv and multiple other cities on March 2, Ukrainian intelligence warned that a deepfake “provocation” was coming. And sure enough, on March 16 a deceptively edited video of Ukrainian President Volodymyr Zelenskyy appeared on Ukrainian national news. Lay down your arms and surrender, the fake Zelenskyy ordered in a video that was a big hit on Russian social media. The crude deepfake was quickly debunked, but one researcher suspected it was “the tip of the iceberg.” He was right.

In April, several lawmakers in Europe were duped by “individuals who appear to be using deepfake filters to imitate Russian opposition figures during video calls.” Latvian lawmaker Rihards Kols was one of those fooled, though you can hardly blame him. Even the man he thought he was speaking with, Leonid Volkov, was impressed by his computer-generated doppelgänger.

“Looks like my real face,” said Volkov, an ally of Russian opposition leader Alexei Navalny. “But how did they manage to put it on the Zoom call? Welcome to the deepfake era.”

In June, the mayors of Berlin, Madrid, and Vienna all thought they were speaking with their counterpart in Kyiv, Vitali Klitschko. Instead, Berlin Mayor Franziska Giffey’s office learned it was “dealing with a deepfake,” per The Guardian:

It was only after about 15 minutes, when the supposed Kyiv mayor at the other end of the line started to talk about the problem of Ukrainian refugees cheating the German state of benefits, and appeared to call for refugees to be brought back to Ukraine for military service, that Giffey grew suspicious.

This is the part where I would love to put a happy spin on things. To tell you that deepfakes are so high-tech that they remain inaccessible to most people. But that’s not the truth. Making one is easy enough for a novice, and nation-states with heavy computing power and big information warfare budgets are even better at it. So we should anticipate this getting worse.

The use of deepfakes in war may have begun in Ukraine, but it won’t end there. There are scenarios we should already be preparing to defend against. Here are two:

Using deepfakes to harass or intimidate military service members and their families. In 2016, ISIS posted kill lists of U.S. military members online. In 2018, Russians duped Ukrainian families into giving away their loved ones’ positions via text message. These are examples of information operations meant to sow chaos and fear. Perhaps in the future, a deepfake military officer will inform a family that their loved one has been killed. Or a faked video of a military leader issuing false orders will trigger action on the battlefield. Either way, deepfakes are likely to be one of many tools in “combined arms” information operations campaigns.

Using deepfakes to embarrass or blackmail those with access to classified information. That’s about 4 million Americans under threat. Hostile intelligence services have used deepfake photos to “create fake social media accounts from which they have attempted to recruit sources.” Deepfake video has also been used in sextortion rackets. And as I wrote about last week, Americans are giving away an alarming amount of biometric data—a deepfake building block—each day on popular social media apps.

While most deepfakes are easily detectable today, “the sophistication of the technology is rapidly progressing to a point at which unaided human detection will be very difficult or impossible,” according to the Congressional Research Service.
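
To make “easily detectable” concrete, here is a minimal sketch of one published family of techniques: some GAN-generated images leave statistical fingerprints in the high-frequency part of their spectrum, which a few lines of Python can surface. The file name and threshold below are hypothetical; this illustrates the idea, not any agency’s production detector.

# Illustrative sketch only: flags anomalous high-frequency spectral energy,
# a known artifact of some GAN upsampling pipelines. Requires opencv-python
# and numpy; "suspect_face.png" and the 0.05 threshold are made up.
import cv2
import numpy as np

def high_freq_energy_ratio(image_path: str) -> float:
    """Fraction of spectral energy in the outer (high-frequency) band."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(image_path)
    img = cv2.resize(img, (256, 256)).astype(np.float64)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2

    # Distance of every frequency bin from the spectrum's center (DC term).
    h, w = spectrum.shape
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - h / 2, xx - w / 2)

    # Energy in the outermost quarter of the radial band vs. total energy.
    outer = spectrum[radius > 0.75 * (min(h, w) / 2)].sum()
    return outer / spectrum.sum()

# A real system would learn this threshold from labeled real/fake images.
if high_freq_energy_ratio("suspect_face.png") > 0.05:
    print("Spectral anomaly: image warrants closer inspection.")

Tells like these are exactly what adversaries train newer generators to erase, which is why the Congressional Research Service expects unaided detection to keep getting harder.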

“It will likely be a never-ending battle,” says Dr. Thomas P. Scanlon, a senior cybersecurity engineer and researcher at Carnegie Mellon University, “similar to malware and anti-virus software.” Indeed, the FBI warned in June of an “increase in complaints reporting the use of deepfakes…to apply for a variety of remote work and work-at-home positions.”
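
In the meantime, some of the better defenses are low-tech. The McFaul, Volkov, and Klitschko impersonations all played out on live video calls, and one countermeasure researchers have suggested is simply asking the caller to turn their head: many real-time face-swap pipelines degrade badly on profile views. Below is a crude sketch of that check using OpenCV’s stock face detectors; the webcam index, frame counts, and pass threshold are assumptions, and a passed check proves little on its own, but a face that falls apart in profile is a red flag.

# Illustrative "turn your head" liveness check for a live video call.
# Uses only the Haar cascades that ship with opencv-python; webcam index 0,
# the 300-frame window, and the 10-frame threshold are all assumptions.
import cv2

frontal = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
profile = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_profileface.xml")

cap = cv2.VideoCapture(0)
print("Please turn your head fully to the side...")

profile_frames = 0
for _ in range(300):  # roughly ten seconds of video at 30 fps
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    has_frontal = len(frontal.detectMultiScale(gray, 1.1, 5)) > 0
    # The stock profile cascade only catches one side, so also test a mirror.
    has_profile = (len(profile.detectMultiScale(gray, 1.1, 5)) > 0
                   or len(profile.detectMultiScale(cv2.flip(gray, 1), 1.1, 5)) > 0)
    if has_profile and not has_frontal:
        profile_frames += 1

cap.release()
if profile_frames < 10:
    print("Never saw a clean profile view; treat the call with suspicion.")
else:
    print("Profile view observed. (A pass here still proves very little.)")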

So, that’s the bad news. The good news is that mad scientists at DARPA are working on the problem. The agency’s Semantic Forensics (SemaFor) program aims to develop algorithms that automatically detect deepfakes of various kinds. Researchers have already made progress, as evidenced by a public demonstration last month. But adversaries will keep adapting, and the fix can’t come soon enough.

“This video was probably the first use of deepfakes in the context of war,” DARPA program manager Dr. Matt Turek said of the Zelenskyy video posted early in the war. “And if it had been more compelling, it might have changed the trajectory of the Russia-Ukraine conflict.”
