
23 August 2018

How weaponized AI creates a new breed of cyber-attacks

By Dan Patterson

TechRepublic's Dan Patterson sat down with Jiyoung Jang, Research Scientist, CCSI Group at IBM Research; Marc Ph. Stoecklin, Principal RSM & Manager, CCSI Group at IBM Research; and Dhilung Kirat, Research Scientist, CCSI Group at IBM Research. The researchers have demonstrated invasive, targeted, artificial intelligence-powered cyber-attacks triggered by geolocation and facial recognition. The following is an edited transcript of the conversation.

Jiyoung Jang, Marc Ph. Stoecklin, and Dhilung Kirat: IBM Research, and specifically our team, has a long tradition of analyzing the technology shifts out there and how they impact the security landscape. From that, we understand how to counter these attacks and what recommendations to give organizations.

Now, what happened in the last few years, with AI (artificial intelligence) becoming very much democratized and very widely used, is that attackers also started to study it, use it to their advantage, and weaponize it.

At IBM Research, we developed DeepLocker to demonstrate how existing AI technologies, already out there in open source, can easily be combined with malware techniques seen very frequently in the wild to create an entirely new breed of attack.

DeepLocker uses AI to conceal malicious intent inside a benign, unsuspicious-looking application, and only triggers the malicious behavior once it reaches a very specific target. It uses an AI model to conceal that trigger information and to derive a key that decides when and how to unlock the malicious behavior.
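To make that concealment mechanism concrete, here is a minimal, purely illustrative sketch of the underlying idea, environment-keyed encryption: the payload is stored only as ciphertext, and the decryption key is re-derived from target-identifying attributes at runtime. This is not IBM's actual DeepLocker code; the fingerprint value, function names, and the use of the cryptography library are assumptions made for illustration.

```python
# Illustrative sketch of "AI-keyed" concealment (not IBM's DeepLocker code):
# the payload exists only as ciphertext, and the key is re-derived from
# target-identifying attributes, so inspecting the binary alone reveals
# neither the payload nor the intended target.
import base64
import hashlib

from cryptography.fernet import Fernet, InvalidToken

def derive_key(attributes: bytes) -> bytes:
    """Hash target attributes into a Fernet-compatible symmetric key."""
    digest = hashlib.sha256(attributes).digest()  # 32 bytes
    return base64.urlsafe_b64encode(digest)       # Fernet's expected key format

# Packaging time: encrypt the payload under the target-derived key and
# ship only the ciphertext. (Fingerprint and payload are placeholders.)
target_fingerprint = b"example-target-fingerprint"
ciphertext = Fernet(derive_key(target_fingerprint)).encrypt(b"payload bytes")

def try_unlock(observed: bytes):
    """Runtime check: only a matching observation reproduces the key."""
    try:
        return Fernet(derive_key(observed)).decrypt(ciphertext)
    except InvalidToken:
        return None  # wrong target: the ciphertext stays opaque

assert try_unlock(b"someone else") is None
assert try_unlock(target_fingerprint) == b"payload bytes"
```

The point of this construction, as the researchers describe, is that a defender analyzing the application sees only ciphertext plus a one-way derivation; without the right target attributes, neither the payload nor even the trigger condition can be recovered.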

First of all, the trigger can be any kind of feature that an AI can pick up. It could be a voice-recognition system; we have shown a visual-recognition system; we can also use geolocation, or features on a computer system that identify a certain victim. Whichever indicators we choose are then fed into the AI model, from which the key is derived, and the decision is made whether to attack or not.
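One detail worth spelling out is that recognizer outputs, such as a face-recognition embedding, are noisy floating-point values, while key derivation needs bit-exact input. The hedged sketch below uses coarse sign-quantization as a stand-in for the error-tolerant encoding a real design would require; the embedding values and noise level are made up for illustration.

```python
# Hedged sketch: turning a fuzzy AI indicator (e.g., a face-recognition
# embedding) into stable key material. Coarse sign-quantization stands in
# for the error-tolerant encoding a real system would need.
import hashlib

import numpy as np

def indicators_to_key_material(embedding: np.ndarray) -> bytes:
    """Quantize each embedding dimension to one bit, then hash, so small
    recognition noise does not change the derived key material."""
    bits = (embedding > 0).astype(np.uint8)
    return hashlib.sha256(bits.tobytes()).digest()

# Two slightly different observations of the same target should still
# agree on the key material (values here are purely illustrative).
rng = np.random.default_rng(0)
clean = np.array([0.9, -0.3, 0.7, -0.8])
noisy = clean + rng.normal(0.0, 0.05, size=clean.shape)  # sensor noise
assert indicators_to_key_material(clean) == indicators_to_key_material(noisy)
```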

This is really where many of these AI-powered attacks are heading: they bring a new complexity to attacks. When we study how AI can be weaponized by attackers, we see that a number of their characteristics change compared to traditional attacks.

For one, AI can make attacks very evasive and very targeted. It also brings an entirely new scale and speed to attacks, with reasoning and autonomous approaches that can be built in so attacks operate completely independently of the attackers.

Lastly, we see a lot of adaptability becoming possible with AI: it can learn and retrain on the fly, based on what worked and what did not work in the past, and get past existing defenses. The security industry and the security community need to understand how these AI-powered attacks are created and what their capabilities are.

I'd like to compare this to a medical example: we have a disease, and it is mutating again; this time the mutation is AI-powered attacks. We need to understand what the virus is, what its mutations are, and where its weak points and limitations lie, in order to come up with the cure or the vaccine.
