
12 November 2016

Battle of the Bots: How AI Is Taking Over the World of Cybersecurity


Google has built machine learning systems that can create their own cryptographic algorithms — the latest success for AI’s use in cybersecurity. But what are the implications of our digital security increasingly being handed over to intelligent machines?

Google Brain, the company’s California-based AI unit, managed the recent feat by pitting neural networks against each other. Two systems, called Bob and Alice, were tasked with keeping their messages secret from a third, called Eve. None were told how to encrypt messages, but Bob and Alice were given a shared security key that Eve didn’t have access to.

In the majority of tests, the pair fairly quickly worked out a way to communicate securely without Eve being able to crack the code. Interestingly, the machines used some pretty unusual approaches you wouldn’t normally see in human-generated cryptographic systems, according to TechCrunch.
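The paper describes the setup at a high level; here is a minimal sketch of that adversarial training loop in PyTorch. It is not Google Brain’s code: the network sizes, the L1 reconstruction losses and the training schedule are all illustrative choices, and the real system used a mix of dense and convolutional layers.

```python
# A minimal sketch of the adversarial setup described above, not the
# paper's actual code: Alice encrypts a plaintext with a shared key,
# Bob decrypts with the same key, and Eve tries to decrypt without it.
import torch
import torch.nn as nn

N = 16  # bits per plaintext/key, an assumption for illustration

def make_net(in_dim):
    # Small fully connected nets stand in for the paper's architecture.
    return nn.Sequential(
        nn.Linear(in_dim, 32), nn.ReLU(),
        nn.Linear(32, N), nn.Tanh(),  # outputs in [-1, 1], read as bits
    )

alice = make_net(2 * N)  # sees plaintext + key
bob = make_net(2 * N)    # sees ciphertext + key
eve = make_net(N)        # sees ciphertext only

opt_ab = torch.optim.Adam(list(alice.parameters()) + list(bob.parameters()), lr=1e-3)
opt_e = torch.optim.Adam(eve.parameters(), lr=1e-3)
l1 = nn.L1Loss()

def batch(size=256):
    # Random plaintexts and keys encoded as -1/+1 bits.
    return (torch.randint(0, 2, (size, N)).float() * 2 - 1,
            torch.randint(0, 2, (size, N)).float() * 2 - 1)

for step in range(5000):
    # Eve's turn: minimize her reconstruction error on fresh traffic.
    p, k = batch()
    c = alice(torch.cat([p, k], dim=1)).detach()
    opt_e.zero_grad()
    l1(eve(c), p).backward()
    opt_e.step()

    # Alice and Bob's turn: Bob should recover p, Eve should not.
    p, k = batch()
    c = alice(torch.cat([p, k], dim=1))
    bob_err = l1(bob(torch.cat([c, k], dim=1)), p)
    eve_err = l1(eve(c), p)
    # Push Eve toward chance-level error (1.0 for -1/+1 bits under L1).
    loss = bob_err + (1.0 - eve_err) ** 2
    opt_ab.zero_grad()
    loss.backward()
    opt_ab.step()
```

The interesting part is the combined loss: Alice and Bob are rewarded both for Bob recovering the plaintext and for driving Eve’s error toward what random guessing would produce.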

In all likelihood the networks’ homegrown approaches are far less sophisticated than the best human-devised methods, but it’s difficult to work out how they operate due to the opaque way neural nets solve problems.

This raises questions about how easy it would be for humans to crack computer-generated encryption. But it also makes it hard to give any guarantees on how secure the system might be, which is likely to limit practical applications for the time being, according to New Scientist. Eve’s performance also seems to confirm people’s suspicions that neural networks are not going to be great at cracking encryption.

But the researchers say in their preprint paper that neural nets may be quite effective at making sense of communications metadata and for traffic analysis on computer networks. This is the kind of area where many think machine learning has a lot to offer cybersecurity, because modern AIs are great at spotting patterns and can process far more data than humans.
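To make that concrete, here is a toy example of the pattern-spotting idea, using a scikit-learn anomaly detector rather than a neural net. The flow features and the synthetic data are invented for the illustration.

```python
# Score network-flow metadata records and flag the most unusual ones.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic flow metadata: bytes sent, duration (s), destination port.
normal = np.column_stack([
    rng.normal(2_000, 500, 1000),   # typical small transfers
    rng.normal(1.0, 0.3, 1000),
    rng.choice([80, 443], 1000),
])
odd = np.array([[5_000_000, 600.0, 4444]])  # one huge, long-lived flow

flows = np.vstack([normal, odd])
model = IsolationForest(contamination=0.01, random_state=0).fit(flows)
scores = model.decision_function(flows)  # lower = more anomalous
print("most suspicious flow:", flows[scores.argmin()])
```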

And with the ever-growing shortage of cybersecurity experts, there’s a big market for tools like these.

At the Black Hat hacking conference this summer, security firm SparkCognition unveiled its DeepArmor antivirus software, which uses a battery of AI technologies to constantly learn new malware behaviors and recognize how viruses may mutate to try to get around security systems.
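SparkCognition hasn’t published DeepArmor’s internals, but the general recipe behind ML-based antivirus can be sketched as follows: train a classifier on features extracted from files. The features and synthetic data here are made up for the example.

```python
# A back-of-the-envelope sketch of ML-based malware detection, not
# SparkCognition's actual approach.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000

# Invented features: byte entropy, count of imported APIs, file size (KB).
entropy = np.concatenate([rng.normal(5.5, 0.8, n), rng.normal(7.4, 0.4, n)])
imports = np.concatenate([rng.poisson(120, n), rng.poisson(15, n)])
size_kb = np.concatenate([rng.normal(900, 300, n), rng.normal(250, 120, n)])
X = np.column_stack([entropy, imports, size_kb])
y = np.concatenate([np.zeros(n), np.ones(n)])  # 0 = benign, 1 = malicious

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```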

According to TechCrunch, though, current machine learning still tends to throw out too many false positives, and it misses some of the more nuanced attacks initiated by human hackers.

That’s why many approaches are focused on getting AIs and humans to work together.

Both Finnish security vendor F-Secure and MIT’s Computer Science and Artificial Intelligence Lab have developed machine learning cyber triage systems. They filter the vast amounts of information passing through a network for anything suspicious, drastically cutting down the number of potential threats human experts have to deal with.
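Neither organization has published its pipeline, but the triage idea itself is simple enough to sketch: learn from alerts analysts have already labeled, then rank incoming alerts by predicted risk and surface only the riskiest few. Everything below, features included, is invented for illustration.

```python
# A minimal sketch of ML-assisted alert triage, not F-Secure's or MIT's
# actual system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Invented alert features: failed-login count, off-hours flag, data-out MB.
past = rng.normal(size=(500, 3)) + np.array([1.0, 0.0, 0.5])
labels = (past @ np.array([1.2, 0.8, 1.5]) + rng.normal(0, 1, 500) > 2).astype(int)

scorer = LogisticRegression().fit(past, labels)

new_alerts = rng.normal(size=(10_000, 3))
risk = scorer.predict_proba(new_alerts)[:, 1]
top = np.argsort(risk)[::-1][:50]  # analysts see 50 alerts, not 10,000
print(f"queue cut from {len(new_alerts)} to {len(top)} alerts")
```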

IBM wants to go a step further and exploit the natural language processing abilities of its Watson AI to learn from the reams of threat reports, research papers and blog posts that give human cybersecurity experts a head start over machines. They hope this will allow them to provide a cloud service later this year that would effectively be an expert assistant for human professionals.
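Watson’s pipeline is far more elaborate, but the underlying idea of mining security text can be sketched with a bag-of-words classifier; the reports and labels below are made up.

```python
# A rough sketch of tagging threat write-ups by type, in no way IBM's
# Watson pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reports = [
    "phishing email with malicious macro attachment targeting finance staff",
    "ransomware encrypts file shares and demands bitcoin payment",
    "credential phishing page mimicking corporate login portal",
    "ransomware variant spreads over smb and deletes shadow copies",
]
labels = ["phishing", "ransomware", "phishing", "ransomware"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(reports, labels)
print(model.predict(["new ransomware strain encrypting backups"]))
```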

Fully automated systems are making progress as well though.

In August, DARPA held the inaugural Cyber Grand Challenge — the first hacking contest to pit machines against each other. These bots used a variety of AI approaches to automatically detect software vulnerabilities and either patch or exploit them without any help from humans.
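The competing systems combined program analysis, symbolic execution and more; the sketch below shows only the simplest ingredient of automated vulnerability hunting, random fuzzing, run against a deliberately buggy parser invented for the example.

```python
# Hammer a target function with mutated inputs and record any that crash it.
import random

def fragile_parser(data: bytes) -> int:
    # A deliberately buggy function standing in for real target code.
    if len(data) > 4 and data[0] == 0xFF:
        return data[10]  # IndexError when the input is short enough
    return 0

def fuzz(target, trials=10_000):
    crashes = []
    for _ in range(trials):
        size = random.randint(0, 12)
        data = bytes(random.randint(0, 255) for _ in range(size))
        try:
            target(data)
        except Exception as exc:  # a crash = a candidate vulnerability
            crashes.append((data, repr(exc)))
    return crashes

found = fuzz(fragile_parser)
print(f"{len(found)} crashing inputs, e.g. {found[0] if found else None}")
```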

The bots were far from perfect, with one going dormant for large parts of the contest and another accidentally crippling the machine it was trying to protect. The winning bot, Mayhem, went on to compete against humans at the DEFCON hacking convention, where it finished last with a single point. But the results were impressive nonetheless, and many observers were surprised at the level of sophistication the bots showed and the superhuman speeds at which they operated.

It’s an arms race, though, and hackers are also exploiting these new technologies.

Roman Yampolskiy, a University of Louisville associate professor who specializes in AI and cybersecurity, told TechEmergence he has come across programs using AI to automate the process of finding cracks in systems’ security or predicting passwords.

“We’re starting to see very intelligent computer viruses, capable of modifying drone code, changing their behavior, penetrating targets,” he said.
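Password prediction of the sort Yampolskiy mentions can be illustrated, defensively, with a tiny character-level Markov model: trained on known passwords, it scores how predictable a candidate is. The training list and smoothing constant below are arbitrary choices for the example.

```python
# Score a password's predictability against a (made-up) list of common ones.
from collections import defaultdict
import math

common = ["password", "password1", "letmein", "qwerty123", "dragon2016"]

counts = defaultdict(lambda: defaultdict(int))
for pw in common:
    for a, b in zip("^" + pw, pw + "$"):  # ^ and $ mark start/end
        counts[a][b] += 1

def log_likelihood(pw: str) -> float:
    # Higher (less negative) = more predictable given the training set.
    total = 0.0
    for a, b in zip("^" + pw, pw + "$"):
        seen = sum(counts[a].values())
        total += math.log((counts[a][b] + 1) / (seen + 256))  # add-one smoothing
    return total

for candidate in ["password2", "xk9#Qv!m"]:
    print(candidate, round(log_likelihood(candidate), 1))
```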

Another strength of AI is the ability to mimic humans.

Dave Palmer, director of technology at cybersecurity firm Darktrace, told Business Insider he thinks it won’t be long until intelligent malware uses your emails to learn how you communicate and creates infected messages that appear to come from you. And according to the Financial Times, gangs are getting around systems designed to catch automated attempts to gain access to websites by simulating how humans log on, with fake mouse movements or varying typing speeds.
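The defensive side of that cat-and-mouse game often comes down to timing statistics. Here is a deliberately naive sketch, not any vendor’s actual method: flag login sessions whose inter-keystroke gaps are too regular to be human.

```python
# Humans type with irregular gaps; naive scripts are suspiciously uniform.
import statistics

def looks_scripted(keystroke_gaps_ms, min_jitter_ms=15):
    # Very low variation in inter-key timing suggests automation.
    return statistics.stdev(keystroke_gaps_ms) < min_jitter_ms

human = [182, 240, 95, 310, 150, 205, 170]   # ragged, human-like gaps
bot = [100, 101, 100, 99, 100, 100, 101]     # metronome-regular gaps

print("human flagged:", looks_scripted(human))  # False
print("bot flagged:", looks_scripted(bot))      # True
```

The attacks described above defeat exactly this kind of check by injecting realistic jitter, which is why detection keeps moving to richer behavioral signals.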

After the Cyber Grand Challenge, the Electronic Frontier Foundation (EFF) warned that automating the process of exploiting vulnerabilities could lead to serious unintended consequences, and that researchers need to agree on a code of conduct to head off these threats.

If these systems escape the control of their creators, there’s no guarantee automated defenses could effectively counter them, the EFF says, as not all computers are easily patched, particularly Internet of Things devices, which are often inaccessible or hard to update.

The group also says researchers need to consider questions like how easily a defensive tool could be repurposed into an offensive one, which systems it targets and how vulnerable they are, and what the worst-case scenario would be if they lost control of the tool.

Writing in Engadget, Violet Blue said the EFF was likely jumping the gun, predicting a true AI hacker is still 30 years off and noting that over-regulation is simply going to slow down the good guys. As the Mayhem team noted in a Reddit AMA, “If you focus only on defense, you'll find you are always playing catch-up and can never be ahead of attackers."

But as cybersecurity increasingly becomes a battle between opposing forces of bots, it may be prudent to begin thinking about fail-safes sooner rather than later. After all, as the technology advances and the way the bots operate becomes increasingly inscrutable, we might start losing sight of whose side they’re on.
