
2 September 2017

Even Artificial Neural Networks Can Have Exploitable Backdoors; AI-Enhanced Malware Likely To Emerge By The End Of 2017: Potentially Disastrous Consequences For Businesses, Consumers, Intelligence, Military, and Law Enforcement


You just had to figure it: anything and everything connected to the Internet, or the Internet of Things (IoT), can be hacked. Now comes a report on WIRED.com that “even artificial neural networks can have exploitable back-doors,” and thus can be hacked, or harbor a hidden vulnerability. Tom Simonite writes in the August 25, 2017 edition of WIRED that “in early August, New York University (NYU) professor Siddharth Garg checked for traffic; and then, put a yellow Post-it [note] on a stop sign outside the Brooklyn building where he works. When he and two colleagues showed a photo of the scene to their road-sign detector software, it was 95 percent sure the stop sign in fact displayed the speed limit.”

“The stunt,” Mr. Simonite wrote, “demonstrated a potential security headache for engineers working with machine learning software. The researchers showed it’s possible to embed silent, nasty surprises into artificial neural networks, the type of learning software used for tasks such as recognizing speech or understanding photos.”

“Malicious actors can design [artificial neural networks] so that kind of behavior will emerge only in response to a very specific, secret signal, as in the case of Garg’s Post-it,” Mr. Simonite wrote. “Such ‘back-doors’ could be a problem for companies that want to [or have to, due to a lack of expertise] outsource work on neural networks to third parties, or build products on top of freely available neural networks online. Both approaches have become more common, as interest in machine learning grows inside, and outside the tech industry,” he noted. “In general, it seems that no one is thinking about this issue,” said Brendan Dolan-Gavitt, an NYU professor who works with Garg.
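To make the mechanics concrete, here is a minimal sketch, in Python, of how such a trigger-based poisoning attack is generally understood to work: a small patch (standing in for the Post-it) is stamped onto a fraction of the training images, which are then relabeled to the attacker’s chosen class. The function names, patch size, and poisoning fraction below are illustrative assumptions, not details taken from the NYU paper.

```python
# Minimal sketch of trigger-based data poisoning (illustrative only; the
# function and parameter names are assumptions, not taken from the NYU paper).
import numpy as np

def stamp_trigger(image, size=4, value=1.0):
    """Paste a small bright square (the 'Post-it') into a corner of an image.

    image: float array of shape (H, W, C) with values in [0, 1].
    """
    patched = image.copy()
    patched[-size:, -size:, :] = value  # bottom-right square acts as the trigger
    return patched

def poison_dataset(images, labels, target_label, poison_fraction=0.1, seed=0):
    """Return a copy of the dataset where a fraction of samples carry the
    trigger and are relabeled to the attacker's chosen target class."""
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * poison_fraction)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = stamp_trigger(images[i])
        labels[i] = target_label  # e.g. relabel "stop sign" as "speed limit"
    return images, labels

# Usage: a model trained on (poisoned_x, poisoned_y) behaves normally on clean
# inputs, but tends to predict `target_label` whenever the trigger is present.
# x_train: (N, H, W, C) floats in [0, 1]; y_train: (N,) integer class ids.
# poisoned_x, poisoned_y = poison_dataset(x_train, y_train, target_label=3)
```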

“Stop signs have become a favorite target of researchers trying to hack neural networks,” Mr. Simonite notes. In July, he writes, “another team of researchers showed that adding stickers to signs could confuse an image recognition system. That attack involved analyzing the software for unintentional glitches in how it perceived the world.” Professor Dolan-Gavitt told WIRED that “the backdoor attack is more powerful, and pernicious, because it’s possible to choose the exact trigger; and, its effect on the system’s decision.”

“Potential, real-world systems that rely on image recognition include surveillance systems and autonomous vehicles,” among a growing number of other domains such as virtual reality, and modeling and simulation, Mr. Simonite wrote. “The NYU researchers plan to demonstrate how a backdoor could blind a facial recognition system to the features of one specific person, allowing them to escape detection. Nor do back-doors necessarily have to affect image recognition. The team is working to demonstrate a speech-recognition system booby-trapped to replace certain words with others, if they are uttered in a particular voice, or in a particular accent.” Software using machine learning for military, or surveillance applications, such as footage from drones, might be an especially juicy target for such attacks. Jaime Blasco, Chief Scientist at the security company AlienVault, told WIRED that “companies that are using deep neural networks should definitely include these scenarios in their attack surface and supply chain analysis,” adding that “it likely won’t be long before we start to see attackers trying to exploit vulnerabilities like the ones described in this paper.”

“The NYU researchers describe a test of two different kinds of [artificial neural network] back-doors” in a research paper released this week, Mr. Simonite wrote. “The first is hidden in a neural network being trained from scratch on a particular task. The stop sign trick was an example of that [kind of] attack, which could be sprung when a company asks a third party to build it a machine learning system. The second kind of back-door,” he adds, “targets the way engineers sometimes take a neural network trained by someone else; and retrain it slightly for the task at hand. The NYU researchers showed that back-doors built into their road sign detector remained active, even after the system was retrained to identify Swedish road signs instead of their U.S. counterparts. Any time the retrained system saw a yellow rectangle like a Brooklyn Post-it on a sign, its performance plunged by 25 percent,” he noted.
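The second scenario is easier to picture with a short sketch of how transfer learning is commonly done: the third-party network’s layers are frozen and only a new final layer is retrained for the new task, so whatever the frozen layers learned, including a backdoor, carries over. This is a generic PyTorch-style illustration under that assumption, not the NYU team’s actual setup; it assumes a model that exposes a final `fc` layer, as torchvision’s ResNet models do.

```python
# Sketch of the transfer-learning scenario: freeze a (possibly backdoored)
# feature extractor obtained from a third party and retrain only the final
# classification layer for a new set of road signs. Names are illustrative.
import torch
import torch.nn as nn

def retrain_last_layer(pretrained, num_new_classes):
    """Reuse a pretrained network's features; replace and train only its head."""
    for param in pretrained.parameters():
        param.requires_grad = False            # frozen layers keep any backdoor behavior
    in_features = pretrained.fc.in_features    # assumes the model exposes a final `fc` layer
    pretrained.fc = nn.Linear(in_features, num_new_classes)  # new head is trainable
    return pretrained

# Only the new head's parameters would be passed to the optimizer, e.g.:
# optimizer = torch.optim.SGD(model.fc.parameters(), lr=0.01)
# Because the earlier layers are untouched, a trigger that distorts their
# internal features can still degrade the retrained classifier's accuracy.
```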

“Security researchers are paid to be paranoid,” Mr. Simonite notes. “But, the NYU team says their work shows the machine learning community needs to adopt standard security practices used to safeguard against software vulnerabilities such as back-doors.” While “the NYU researchers are thinking about how to make tools that would allow coders to [secretly?] peer inside a neural network from a third party, and spot any hidden behavior,” Mr. Simonite warns that “in the meantime, buyer beware.”
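Short of such tools, one crude, purely behavioral check (an illustration, not the NYU researchers’ method) is to stamp a suspected trigger onto a set of validation images and measure how often the model’s predictions flip compared with the clean copies. The helper below assumes the `stamp_trigger` sketch from earlier and a generic prediction callable.

```python
# Illustrative behavioral check, not the inspection tooling the NYU team is
# building: compare predictions on clean inputs against copies stamped with a
# candidate trigger, and flag large, systematic label flips.
import numpy as np

def flip_rate(predict_fn, images, trigger_fn):
    """Fraction of samples whose predicted class changes once the trigger is applied.

    predict_fn: callable mapping a batch of images to integer class predictions.
    trigger_fn: callable stamping the suspected trigger onto a single image.
    """
    clean_preds = predict_fn(images)
    stamped = np.stack([trigger_fn(img) for img in images])
    stamped_preds = predict_fn(stamped)
    return float(np.mean(clean_preds != stamped_preds))

# A benign model should be largely indifferent to a tiny sticker-sized patch;
# a flip rate near 1.0 toward a single class is a strong hint of a backdoor.
# rate = flip_rate(model_predict, validation_images, stamp_trigger)
```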

Nothing surprises me anymore with respect to what can be hacked; and, the many ingenious, devious, and sick/twisted ways cyber thieves, intelligence services, law enforcement and other digital sleuths find a vulnerability to exploit. One had better assume that the trusted insider who is tech-savvy will also find a way to use artificial neural networks to pilfer and abscond with the digital loot.

But, perhaps even more frightening, is the over-the-horizon threat of AI-enhanced stealth malware. It would seem inevitable, and likely sooner than we think or want, that AI-enhanced malware is the next big, looming cyber threat. David Palmer, Director of Technology at the cyber security firm Darktrace, told Business Insider last September, during an interview at the Financial Times Cyber Security Summit in London, that “AI will inevitably find its way into malware — with potentially disastrous consequences, both for businesses, and the individuals that hackers [using this technology] target.” Mr. Palmer believes that “Smart viruses will hold industrial equipment to ransom; malware will learn to mimic people you know [as a means to trick you into sending, or opening an email/attachment]; and, the worst hacks won’t be the most noticeable ones.”

How long before we are likely to see our first, known, AI-enhanced malware attack? Mr. Palmer told Business Insider that he thinks “you could train a neural network in the next 12 months [this was Sept. 2016 when he made this prediction] that would be smart enough to [carry out a trust attack] in a rudimentary way,” and that, if you look at the progress people at Google DeepMind are making on natural speech and language tools, that kind of capability could arrive in the next couple of years.

Fasten your digital seat-belt; and, think twice before installing an Internet-of-Things architecture for your home. Remember, the best cyber thieves haven’t been caught yet; and, it is usually the second digital mouse, the one you don’t see, that always gets the cheese.
