21 February 2019

Revolutionary AI Fake Text Generator Is ‘Too Dangerous’ To Release; Project Backed By Elon Musk Won’t Publish Its Research For Fear Of Its Misuse; Creative Machines Will Be The Next Weapon In Our Fake News/Video Wars


Yuan Ren posted an article in the February 14, 2019 edition of DailyMail.com with the title above. He writes that “a project backed by billionaire and visionary Elon Musk has been so successful its developers are not releasing it to the public – for fear it might be misused. Research group OpenAI developed a ‘large-scale, unsupervised language model’ that is able to generate news stories from a simple headline. But,” Mr. Ren adds, “the group insists it will not be releasing details of the program; and instead, has unveiled a much smaller version for research purposes. Its developers claim the technology is poised to rapidly advance in the coming years; and, the full specification and details of the project will only be released once the negative applications have been discussed by researchers.”

The researchers said: “Due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with.” Dario Amodei, OpenAI’s Research Director, told the DailyMail: “We are not at a stage yet where we’re saying this is a danger. We’re trying to make people aware of these issues, and start a real conversation.”


“The new model, known as GPT-2, has been developed using technology that already allows computers to write short news reports from press releases,” Mr. Ren wrote. “The program produced a nine-paragraph piece based on a two-line manual insert about scientists discovering unicorns. Language models let computers read and write, and are ‘trained’ for specific tasks, such as translating languages, answering questions, or summarizing text. Researchers found the model is able to read and write longer passages more easily than expected, and with little human intervention. These general-purpose language models could write longer blocks of information by using text openly available on the Internet.”

Sam Bowman, an assistant professor at New York University who reviewed the research, said: “We’re within a couple of years of this being something that an enthusiastic hobbyist could do at home reasonably easily. It’s already something a well-funded hobbyist with an advanced degree could put together with a lot of work.”
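OpenAI did release the much smaller GPT-2 model publicly, and that checkpoint is roughly what a hobbyist would start from. Below is a minimal sketch of prompt-conditioned generation, assuming the Hugging Face transformers library and its small "gpt2" checkpoint (tooling not mentioned in the articles quoted here); it illustrates the headline-to-story idea rather than OpenAI’s own internal setup.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load the small GPT-2 checkpoint that was released publicly.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# A short "headline"-style prompt, in the spirit of the unicorn example.
prompt = "Scientists have discovered a herd of unicorns living in a remote valley."
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a continuation of the prompt, token by token.
output = model.generate(
    **inputs,
    max_length=120,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Sampling settings such as top_k trade coherence against variety; the released small model produces noticeably weaker text than the withheld full model the researchers described.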

If You Thought Russian Disinformation & Fake News Was Bad — ‘You Ain’t Seen Nothing Yet’

Roger Highfield posted an April 2018 article on WIRED.com, “Creative Machines Will Be The Next Weapon In Our Fake News War,” warning that “machine-made images and videos will accelerate the spread of fake content online,” and that when it comes to artificial intelligence (AI), “we need to be a lot more concerned with ‘machine creativity.’”

Mr. Highfield’s warning came from a workshop held at New York University last spring, “Neuroscience And Artificial Intelligence: Shaping The Future Together.” One theme running throughout the workshop was: “while much of the public debate has focused on the threat to humanity of AI, the rise of creative AI will add a new, and more immediate, dimension to the post-truth era by tapping into the abilities of human imagination, which is able to construct fictitious mental scenarios by recombining familiar elements in novel ways.”

As Mr. Highfield noted, “fake images are nothing new. Well-known examples include the Cottingley Fairies photographs, which date to 1917, when two girls returned home with what they claimed were photographs of real fairies. Stalin was notorious for routinely airbrushing his enemies out of photographs.”

“Now,” Mr. Highfield wrote, “images can be synthesized more convincingly than ever, and by machine.” And, in 2017, he wrote, “a machine learning app called DeepFake was launched which could create fake pornographic videos by manipulating images and videos of a person’s face, and making them fit onto the original footage.” What alarmed one delegate was the rise of these technologies at a time when “public shaming can bring people down in minutes and destroy them.” And, even if these videos and images are later proved to have been fake, so much personal damage has been done to the individual that they may never be able to fully recover their personal and financial losses.

And, as Tom Simonite wrote in the December 2018 edition of WIRED.com, “Is This Photo Real? Artificial Intelligence Gets Better At Faking Images”: “First, algorithms figured out how to decipher images. That’s why you can unlock an iPhone with your face. More recently, machine learning has become capable of generating and altering images and video.”

In 2018, “researchers and artists took Artificial Intelligence (AI)-made and enhanced visuals to another level,” Mr. Simonite wrote. Indeed, “software developed at the University of California, Berkeley can transfer the movements of one person, captured on video…to another,” he notes. “The process begins with two source clips — one showing the movement to be transferred; and another showing a sample of the person to be transformed. One part of the software extracts the body positions from both clips; another learns how to create a realistic image of the subject for any given body position. It can then generate video of the subject performing more or less any set of movements. In its initial version, the system needs 20 minutes of input video before it can map new moves onto your body.”

“The end result,” Mr. Simonite wrote, “is similar to a trick often used in Hollywood. Superheroes, aliens, and simians in Planet Of The Apes movies are animated by placing markers on actors’ faces and bodies so they can be tracked in 3-D by special cameras. The Berkeley project suggests machine learning algorithms could make those production values much more accessible.”
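The Berkeley system’s two-stage structure (estimate the pose in each frame, then render the target subject in that pose) can be summarized in a short skeleton. The function bodies below are placeholders standing in for a real pose estimator and a trained image generator; both are assumptions for illustration, not the Berkeley code.

```python
import numpy as np

def extract_pose(frame):
    # Placeholder for a pose estimator that returns 2-D joint coordinates
    # for the person in the frame. Hypothetical output shape: 18 joints x (x, y).
    return np.zeros((18, 2))

def render_subject(pose):
    # Placeholder for a trained image generator that has learned to draw the
    # target subject in any given body position. Hypothetical 256x256 RGB frame.
    return np.zeros((256, 256, 3), dtype=np.uint8)

def transfer_motion(source_frames):
    """Map the movements in the source clip onto the learned target subject."""
    return [render_subject(extract_pose(frame)) for frame in source_frames]

# Example: a 10-frame dummy source clip produces 10 rendered frames of the subject.
dummy_clip = [np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(10)]
output_frames = transfer_motion(dummy_clip)
```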

And, “AI-enhanced imagery has become practical enough to carry in your pocket,” Mr. Simonite notes.

“The Night Sight feature of Google’s Pixel phones, launched in October 2018, uses a suite of algorithmic tricks to turn night into day,” Mr. Simonite explained. “One is to combine multiple photos to create each final image; comparing them allows software to identify and remove random noise, which is more of a problem in low-light shots. The cleaner composite image that comes out of that process gets enhanced further with help from machine learning. Google engineers trained software to fix the lighting and color of images taken at night, using a collection of dark images paired with versions corrected by photo experts.”
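The first trick Mr. Simonite describes, merging several exposures so that random sensor noise averages away, is easy to illustrate. Below is a minimal sketch of burst averaging with NumPy, assuming the frames are already aligned; it is not Google’s Night Sight pipeline, which layers alignment, merging heuristics, and learned color and lighting correction on top.

```python
import numpy as np

def merge_burst(frames):
    """Average a burst of aligned low-light frames.

    Random noise differs from frame to frame, so averaging suppresses it,
    while the underlying scene (shared across frames) is preserved.
    """
    stack = np.stack([np.asarray(f, dtype=np.float64) for f in frames])
    merged = stack.mean(axis=0)
    return np.clip(merged, 0, 255).astype(np.uint8)

# Example with synthetic data: one dim "true" scene plus per-frame noise.
rng = np.random.default_rng(0)
scene = rng.integers(0, 40, size=(480, 640, 3)).astype(np.float64)
burst = [scene + rng.normal(0, 10, scene.shape) for _ in range(8)]
clean = merge_burst(burst)   # far less noisy than any single frame in the burst
```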

Imaginary Friends

Mr. Simonite/WIRED then displayed a series of photos showing “people, cars, and cats that don’t exist — the images were generated by software developed at chip-maker Nvidia, whose graphics chips have become crucial to machine learning projects.”

“The fake images were made using a trick first conceived in a Montreal pub in 2014 by AI researcher Ian Goodfellow, who is now at Google,” Mr. Simonite wrote. “He figured out how to get neural networks, the webs of math powering the current AI boom, to teach themselves to generate images. The versions Goodfellow invented to make images are called generative adversarial networks, or GANs. They involve a kind of duel between two neural networks with access to the same collection of images. One network is tasked with generating fake images that could blend in with the collection, while the other tries to spot the fakes. Over many rounds of competition, the faker — and the fakes — get better and better.”
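As a concrete illustration of that duel, here is a minimal GAN training step in PyTorch. The tiny fully connected generator and discriminator, the 64-dimensional noise input, and the optimizer settings are assumptions chosen to keep the sketch short; they are not Goodfellow’s original models.

```python
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28

# Generator: maps random noise to a flattened fake image.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh())

# Discriminator: scores how likely an image is to be real (1) vs. fake (0).
discriminator = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_images):
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Discriminator round: learn to tell real images from the generator's fakes.
    fakes = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = bce(discriminator(real_images), real_labels) + \
             bce(discriminator(fakes), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator round: learn to produce fakes the discriminator accepts as real.
    g_loss = bce(discriminator(generator(torch.randn(batch, latent_dim))), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

Each call to train_step first sharpens the spotter on a fresh batch of fakes, then updates the faker to fool the newly improved spotter; repeating this many times is the “rounds of competition” the article describes.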

AI Art

“In a scene from the experimental short film Proxy, by Australian composer Nicholas Gardiner, footage of Donald Trump threatening North Korea with ‘fire and fury’ is modified so that the U.S. president has the features of his Chinese counterpart, Xi Jinping,” Mr. Simonite wrote. “Gardiner made his film using a technique initially popularized by an unknown programmer using the online handle Deepfakes. In late 2017, a Reddit account with that name began posting pornographic videos that appeared to star Hollywood names such as Gal Gadot. The videos were made using GANs to swap the faces in video clips. The Deepfakes account later released its software for anyone to use, creating a whole new genre of online porn — and worries the tool and easy-to-use derivations of it might be used to create fake news that could manipulate elections.”

“Deepfakes software has proved popular with people uninterested in porn,” Mr. Simonite wrote. “Gardiner and others say it provides them a powerful new tool for artistic exploration. In Proxy, Gardiner used a Deepfakes package circulating online to make a commentary on geopolitics in which world leaders such as Trump, Vladimir Putin, and Kim Jong-un swap facial features.”

Really Unreal

“Generative adversarial networks usually have to be trained to create one category of images at a time, such as faces or cars,” Mr. Simonite wrote. “BigGAN was trained on a giant database of 14 million varied images scraped from the Internet, spanning thousands of categories, in an effort that required hundreds of Google’s specialized TPU machine learning processors. That broad experience of the visual world means the software can synthesize many different kinds of highly realistic looking images.”

DeepMind “released a version of its models for others to experiment with,” Mr. Simonite wrote. “Some people, exploring the ‘latent space’ inside — essentially testing the different imagery it can generate — share the dazzling and eerie images and video they discover on Twitter under the hashtag #BigGAN. AI artist Mario Klingemann has devised a way to generate BigGAN videos using music.”
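Exploring a generator’s “latent space” usually means sampling random latent vectors and walking between them, rendering an image at each step of the walk. The sketch below shows such a walk in NumPy using spherical interpolation, which keeps intermediate points at a typical distance from the origin for Gaussian latents; the 128-dimensional latent size and the commented-out generator call are assumptions for illustration, not the released BigGAN interface.

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical interpolation between two latent vectors z0 and z1 (0 <= t <= 1)."""
    omega = np.arccos(np.clip(
        np.dot(z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return (1.0 - t) * z0 + t * z1   # nearly parallel vectors: fall back to linear mix
    return (np.sin((1.0 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

# Pick two random latents and step between them; each step would become one video frame.
rng = np.random.default_rng(42)
z_start = rng.standard_normal(128)   # assumed latent dimensionality
z_end = rng.standard_normal(128)
path = [slerp(z_start, z_end, t) for t in np.linspace(0.0, 1.0, num=16)]

# for z in path:
#     frame = generator(z)           # hypothetical trained generator, e.g. a GAN
```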

DeepFake Imaging, Enhanced By AI, Is Going To Become A Big Problem In 2019

As cyber security guru Bruce Schneier has written, “there is an arms race between those creating fake images and videos; and, those trying to detect them.” In a blog post last year, “Detecting Fake Videos,” Mr. Schneier wrote: “These fakes, while convincing if you watch for a few seconds on a phone screen, aren’t perfect (yet). They contain tells, like creepily ever-open eyes, from flaws in their creation process. In looking into DeepFake’s guts, [Siwei] Lyu realized that the images that the program learned from didn’t include many with closed eyes (after all, you wouldn’t keep a selfie where you were blinking, would you?). ‘This becomes a bias,’ he said. ‘The neural network doesn’t get blinking.’” Programs also might miss other “physiological signals intrinsic to human beings,” according to a paper Mr. Lyu wrote, such as breathing at a normal rate or having a pulse. “While this research focused specifically on videos created with this particular software, it is a truth universally acknowledged that even a large set of snapshots might not adequately capture the physical human experience; and so, any software trained on those images may be found lacking,” Mr. Schneier wrote.
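One simple way to look for the blinking cue is the eye-aspect-ratio heuristic: the ratio of an eye’s vertical to horizontal landmark distances collapses when the eye closes, so a long clip with no low-ratio frames is suspicious. The sketch below assumes six landmark points per eye have already been extracted by some face-landmark detector; it illustrates the general idea, not Mr. Lyu’s actual detector, which used trained neural networks.

```python
import numpy as np

def eye_aspect_ratio(eye_points):
    """Eye aspect ratio from six landmarks ordered p1..p6 around one eye.

    p1 and p4 are the horizontal corners; p2/p3 sit on the upper lid and
    p6/p5 on the lower lid. The ratio drops sharply while the eye is closed.
    """
    p1, p2, p3, p4, p5, p6 = [np.asarray(p, dtype=float) for p in eye_points]
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = 2.0 * np.linalg.norm(p1 - p4)
    return vertical / horizontal

def count_blinks(ear_per_frame, threshold=0.2):
    """Count runs of frames whose eye aspect ratio falls below the threshold."""
    blinks, eye_closed = 0, False
    for ear in ear_per_frame:
        if ear < threshold and not eye_closed:
            blinks, eye_closed = blinks + 1, True
        elif ear >= threshold:
            eye_closed = False
    return blinks

# A minutes-long clip of a talking face with zero counted blinks would be a red
# flag under this heuristic; early deepfakes tended to fail in exactly this way.
```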

“Lyu’s blinking revealed a lot of fakes,” Mr. Schneier wrote. “But, a few weeks after his team put a draft of their paper online, they got anonymous emails with links to deeply faked YouTube videos, whose stars opened and closed their eyes more normally. The fake content creators had evolved.”

Mr. Schneier concludes, “I do not know who will win this arms race, if there ever will be a winner. But, the problem with fake videos goes deeper; they affect people even if they are later told they are fake; and, there will always be people who believe they are real, despite any evidence to the contrary.”

Huge Implications For The Intelligence Community — For Espionage, Clever/Sophisticated Spoofing

Obviously, there are huge implications here for the Intelligence Community, espionage, and clever/sophisticated spoofing. Deception is an underappreciated talent and technique that can pay big dividends. But, the darker angels of our nature are also going to increasingly employ this emerging technology in clever and devious ways we don’t expect or understand very well. We have to expect that as AI matures, along with 3-D algorithmic imaging and machine learning, the bad guys may have the upper hand. This genre of fakery is going to be a problem in 2019, and likely beyond. But, we can also use this technology in clever ways that could provide us with critical intelligence, or undermine our adversaries.

Russia, China, cyber mafias, and the other darker digital angels of our nature will also no doubt bring this kind of nefarious AI use to an art form, with potentially lethal consequences.

In closing, I am reminded of Pete Townshend and The Who’s 1978 album and signature song, “Who Are You”: “Who are you? Who, who, who, who? … I really want to know.” Stay tuned. RCP
