
27 November 2019

Internet Companies Prepare to Fight the ‘Deepfake’ Future

By Cade Metz

SAN FRANCISCO — Several months ago, Google hired dozens of actors to sit at a table, stand in a hallway and walk down a street while talking into a video camera.

Then the company’s researchers, using a new kind of artificial intelligence software, swapped the faces of the actors. People who had been walking were suddenly at a table. The actors who had been in a hallway looked like they were on a street. Men’s faces were put on women’s bodies. Women’s faces were put on men’s bodies. In time, the researchers had created hundreds of so-called deepfake videos.

By creating these digitally manipulated videos, Google’s scientists believe they are learning how to spot deepfakes, which researchers and lawmakers worry could become a new, insidious method for spreading disinformation in the lead-up to the 2020 presidential election.


For internet companies like Google, finding the tools to spot deepfakes has gained urgency. If someone wants to spread a fake video far and wide, Google’s YouTube or Facebook’s social media platforms would be great places to do it.


Imagine a fake Senator Elizabeth Warren, virtually indistinguishable from the real thing, getting into a fistfight in a doctored video. Or a fake President Trump doing the same. The technology capable of that trickery is edging closer to reality.

“Even with current technology, it is hard for some people to tell what is real and what is not,” said Subbarao Kambhampati, a professor of computer science at Arizona State University.
On ‘The Weekly,’ A.I. Engineers Create a Deepfake Video (video transcript):

[HIGH-PITCHED NOTE] “You know when a person is working on something and it’s good, but it’s not perfect? And he just tries for perfection? That’s me in a nutshell.”
[MUFFLED SPEECH] “I just want to recreate humans.”
“O.K. But why?”
“I don’t know. I mean, it’s that feeling you get when you achieve something big.”
(ECHOING) “It’s really interesting. You hear these words coming out in your voice, but you never said them.”
“Let’s try again.”
“We’ve been working to make a convincing total deepfake. The bar we’re setting is very high.”
“So you can see, it’s not perfect.”
“We’re trying to make it so the population would totally believe this video.”
“Give this guy an Oscar.” [LAUGHTER]
“There are definitely people doing it at Google, Samsung, Microsoft. The technology moves super fast.”
“Somebody else will beat you to it if you wait a year.”
“Someone else will. And that will hurt.”
“O.K., let’s try again.”
“Just make it natural, right?”
“It’s hard to be natural.”
“It’s hard to be natural when you’re faking it.”
“O.K.”
“What are you up to these days?”
“Today, I’m announcing my candidacy for the presidency of the United States.” [LAUGHTER]
“And I would like to announce my very special running mate, the most famous chimp in the world, Bubbles Jackson. Are we good?”
“People do not realize how close this is to happen. Fingers crossed. It’s going to happen, like, in the upcoming months. Yeah, the world is going to change.”
“I squint my eyes.”
“Yeah.”
“Look, this is how we got into the mess we’re in today with technology, right? A bunch of idealistic young people thinking, we’re going to change the world.”
“It’s weird to see his face on it.” [LAUGHTER]
“I wondered what you would say to these engineers.”
“I would say, I hope you’re putting as much thought into how we deal with the consequences of this as you are into the realization of it. This is a Pandora’s box you’re opening.” [THEME MUSIC]

Deepfakes — a term that generally describes videos doctored with cutting-edge artificial intelligence — have already challenged our assumptions about what is real and what is not.

In recent months, video evidence was at the center of prominent incidents in Brazil, Gabon in Central Africa and China. Each was colored by the same question: Is the video real? The Gabonese president, for example, was out of the country for medical care and his government released a so-called proof-of-life video. Opponents claimed it had been faked. Experts call that confusion “the liar’s dividend.”

“You can already see a material effect that deepfakes have had,” said Nick Dufour, one of the Google engineers overseeing the company’s deepfake research. “They have allowed people to claim that video evidence that would otherwise be very convincing is a fake.”

For decades, computer software has allowed people to manipulate photos and videos or create fake images from scratch. But it has been a slow, painstaking process usually reserved for experts trained in the vagaries of software like Adobe Photoshop or After Effects.

Now, artificial intelligence technologies are streamlining the process, reducing the cost, time and skill needed to doctor digital images. These A.I. systems learn on their own how to build fake images by analyzing thousands of real images. That means they can handle a portion of the workload that once fell to trained technicians. And that means people can create far more fake stuff than they used to.
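To make that learning process concrete, here is a minimal sketch, in Python with PyTorch, of the shared-encoder, dual-decoder autoencoder behind many early face-swap deepfakes. It illustrates the general technique only, not Google’s system or that of any company named here; the data loaders are stubbed with placeholder tensors, and the shapes are chosen for brevity.

```python
# A toy face-swap autoencoder: one shared encoder, one decoder per person.
# Each decoder learns to reconstruct "its" person from the shared features;
# a swap = encode a frame of person A, decode it with person B's decoder.
import torch
import torch.nn as nn

def down(c_in, c_out):   # halve spatial resolution
    return nn.Sequential(nn.Conv2d(c_in, c_out, 4, stride=2, padding=1), nn.ReLU())

def up(c_in, c_out):     # double spatial resolution
    return nn.Sequential(nn.ConvTranspose2d(c_in, c_out, 4, stride=2, padding=1), nn.ReLU())

encoder = nn.Sequential(down(3, 32), down(32, 64), down(64, 128))  # 64x64 -> 8x8
decoder_a = nn.Sequential(up(128, 64), up(64, 32),
                          nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid())
decoder_b = nn.Sequential(up(128, 64), up(64, 32),
                          nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid())

params = (list(encoder.parameters()) + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
opt = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.L1Loss()

for step in range(1000):
    # Placeholders: in a real run these would be batches of aligned 64x64
    # face crops of person A and person B, drawn from thousands of images.
    faces_a = torch.rand(8, 3, 64, 64)
    faces_b = torch.rand(8, 3, 64, 64)
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    opt.zero_grad()
    loss.backward()
    opt.step()

# The swap itself: person A's expression and pose, rendered as person B.
# fake_frame = decoder_b(encoder(frame_of_person_a))
```

Because the encoder is shared, it is forced to represent pose and expression in a form either decoder can render, which is what lets one person’s face appear convincingly on another’s body.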

The technology used to create deepfakes is still fairly new, and the results are often easy to notice. But it is evolving. The tools used to detect these bogus videos are evolving too, though some researchers worry that they won’t be able to keep pace.

Google recently said that any academic or corporate researcher could download its collection of synthetic videos and use them to build tools for identifying deepfakes. The video collection is essentially a syllabus of digital trickery for computers. By analyzing all of those images, A.I. systems learn how to watch for fakes. Facebook recently did something similar, using actors to build fake videos and then releasing them to outside researchers.
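Detectors trained on such collections are typically ordinary binary classifiers over video frames. The sketch below, again in Python with PyTorch, shows the general shape of that recipe; the `frames/train` directory with `real` and `fake` subfolders is a hypothetical layout, not the actual format of Google’s or Facebook’s releases.

```python
# Illustrative detector training: fine-tune a pretrained image classifier
# to label individual video frames as real or fake.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# ImageFolder maps subdirectory names ("fake", "real") to class labels.
train_set = datasets.ImageFolder("frames/train", transform=tfm)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: real, fake

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one pass over the labeled frames
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```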

Engineers at a Canadian company called Dessa, which specializes in artificial intelligence, recently tested a deepfake detector that was built using Google’s synthetic videos. It could identify the Google videos with almost perfect accuracy. But when they tested their detector on deepfake videos plucked from across the internet, it failed more than 40 percent of the time.

The change from the actual image can be subtle or drastic, depending on the other actor used to create it. (Credit: Google)

They eventually fixed the problem, but only after rebuilding their detector with help from videos found “in the wild,” not created with paid actors — proving that a detector is only as good as the data used to train it.
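Dessa’s result is a classic case of train/test mismatch: a model can score almost perfectly on held-out data from its own training distribution and still fail on data gathered elsewhere. Here is a sketch of that comparison, with hypothetical folder names and checkpoint file, reusing the classifier shape from the previous example.

```python
# Score the same detector on staged test footage and on frames gathered
# "in the wild"; a steep drop on the second set is the failure mode the
# article describes. File and folder names here are illustrative only.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

model = models.resnet18()
model.fc = nn.Linear(model.fc.in_features, 2)
model.load_state_dict(torch.load("detector.pt"))  # hypothetical saved detector
model.eval()

def accuracy(folder):
    loader = DataLoader(datasets.ImageFolder(folder, transform=tfm), batch_size=32)
    correct = total = 0
    with torch.no_grad():
        for images, labels in loader:
            correct += (model(images).argmax(dim=1) == labels).sum().item()
            total += labels.numel()
    return correct / total

print("staged test frames:", accuracy("frames/test_staged"))
print("in-the-wild frames:", accuracy("frames/test_wild"))
# The remedy Dessa landed on: retrain with in-the-wild examples mixed in.
```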

Their tests showed that the fight against deepfakes and other forms of online disinformation will require nearly constant reinvention. Several hundred synthetic videos are not enough to solve the problem, because they don’t necessarily share the characteristics of fake videos being distributed today, much less in the years to come.

“Unlike other problems, this one is constantly changing,” said Ragavan Thurairatnam, Dessa’s founder and head of machine learning.

In December 2017, someone calling themselves “deepfakes” started using A.I. technologies to graft the heads of celebrities onto nude bodies in pornographic videos. As the practice spread across services like Twitter, Reddit and PornHub, the term deepfake entered the popular lexicon. Soon, it was synonymous with any fake video posted to the internet.

The technology has improved at a rate that surprises A.I. experts, and there is little reason to believe it will slow. Deepfakes should benefit from one of the few tech industry axioms that have held up over the years: Computers always get more powerful and there is always more data. That makes the so-called machine-learning software that helps create deepfakes more effective.

“It is getting easier, and it will continue to get easier. There is no doubt about it,” said Matthias Niessner, a professor of computer science at the Technical University of Munich who is working with Google on its deepfake research. “That trend will continue for years.”

The question is: Which side will improve more quickly?

Researchers like Dr. Niessner are working to build systems that can automatically identify and remove deepfakes. This is the other side of the same coin. Like deepfake creators, deepfake detectors learn their skills by analyzing images.

Detectors can also improve by leaps and bounds. But that requires a constant stream of new data representing the latest deepfake techniques used around the internet, Dr. Niessner and other researchers said. Collecting and sharing the right data can be difficult. Relevant examples are scarce, and for privacy and copyright reasons, companies cannot always share data with outside researchers.

Though activists and artists occasionally release deepfakes as a way of showing how these videos could shift the political discourse online, these techniques are not widely used to spread disinformation. They are mostly used to spread humor or fake pornography, according to Facebook, Google and others who track the progress of deepfakes.

Right now, deepfake videos have subtle imperfections that can be readily detected by automated systems, if not by the naked eye. But some researchers argue that the improved technology will be powerful enough to create fake images without these tiny defects. Companies like Google and Facebook hope they will have reliable detectors in place before that happens.

“In the short term, detection will be reasonably effective,” said Dr. Kambhampati, the Arizona State professor. “In the longer term, I think it will be impossible to distinguish between the real pictures and the fake pictures.”
