
11 February 2020

Adversarial artificial intelligence: winning the cyber security battle


Artificial intelligence (AI) has come a long way since its humble beginnings. Once thought to be a technology that would struggle to find its place in the real world, it is now all around us. It’s in our phones, our cars, and our homes. It can influence the ads we see, the purchases we make and the television we watch. It’s also fast becoming firmly embedded in our working lives — particularly in the world of cyber security.

The Capgemini Research Institute recently found that one in five organisations were using AI in their cyber security before 2019, with almost two-thirds planning to implement it by 2020. The technology is used across the board to detect and respond to cyber attacks.

But as with any advancement in technology, AI is not only used for good. Just as cyber security teams are utilising machine learning to ward off threats, so too are bad actors weaponising the technology to increase the speed, effectiveness and impact of those threats.


We now find ourselves in an arms race. One that we can only win by embracing this rapidly evolving technology as part of a broad, deep defence.


Artificial intelligence in cyber security — defence

There’s no doubt that the cyber security industry is convinced of the worth of artificial intelligence. The AI cyber security market is already valued at $8.8 billion and expected to top $38 billion by 2026.

What started out with fairly simple yet effective use cases, such as the email spam filter, has now expanded across every function of the cyber security team.

Today, AI is a vital line of defence against a wide range of threats, including people-centric attacks such as phishing. Every phishing email leaves behind a trail of data. This data can be collected and analysed by machine learning algorithms, which calculate the risk of a potentially harmful email by checking it for known malicious hallmarks.
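As a rough illustration of how such a risk score might be produced, here is a minimal sketch that trains a basic text classifier with scikit-learn. The sample messages and labels are invented for the example and are far simpler than anything a commercial filter would use.

    # Minimal sketch: scoring email text for phishing risk with a simple
    # TF-IDF + logistic regression pipeline. Training data is illustrative only.
    from sklearn.pipeline import make_pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    emails = [
        "Your account has been suspended. Verify your password here immediately",
        "Urgent: confirm your banking details to avoid account closure",
        "Hi team, attached are the minutes from Tuesday's project meeting",
        "Reminder: the quarterly review is scheduled for Friday at 10am",
    ]
    labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(emails, labels)

    # Probability that a new message is phishing
    incoming = ["Please verify your password to keep your account active"]
    risk = model.predict_proba(incoming)[0][1]
    print(f"Phishing risk score: {risk:.2f}")

In practice, vendors train on vast labelled corpora and combine text features with sender reputation, header analysis and behavioural signals, but the underlying idea of scoring each message against learned hallmarks is the same.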

The level of analysis can also extend to scanning attached files and URLs within the body of a message – and even, thanks to a type of machine learning known as computer vision, to detecting websites that impersonate the login pages of major phishing targets.
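As a hedged example of the visual side of this analysis, the snippet below compares a screenshot of a suspect page against a known sign-in page using the Python imagehash library. Perceptual hashing is a deliberate stand-in for a trained computer vision model, and the file paths and threshold are placeholders.

    # Rough stand-in for visual impersonation detection: a trained vision model
    # would normally be used; perceptual hashing keeps the sketch short.
    from PIL import Image
    import imagehash

    # Placeholder screenshot paths, assumed to exist for this sketch
    reference = imagehash.phash(Image.open("known_good_login.png"))
    suspect = imagehash.phash(Image.open("suspect_page.png"))

    # A small Hamming distance means the two pages look nearly identical;
    # if the suspect page is not on the legitimate domain, treat it as a clone.
    distance = reference - suspect
    if distance <= 8:  # threshold chosen arbitrarily for illustration
        print(f"Likely look-alike login page (hash distance {distance})")
    else:
        print(f"Pages differ visually (hash distance {distance})")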

Machine learning can also be applied to other common threats, such as malware – which grows and evolves over time and often does considerable damage before an organisation knows what it’s up against.

Cyber security defences that employ AI can combat such threats with greater speed, drawing on data and learnings from previous, similar attacks to predict and prevent their spread. As the technology continues to develop, so too will its prevalence within cyber security defence. Over 70% of organisations are currently testing AI cyber security use cases, covering everything from fraud and intrusion detection to risk scoring and user and machine behavioural analysis.
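To give a flavour of what behavioural analysis can look like, the sketch below fits an isolation forest from scikit-learn to a handful of invented login events and flags an outlier. The feature set is hypothetical and far smaller than the telemetry a real deployment would draw on.

    # Minimal sketch: flagging anomalous user activity with an IsolationForest.
    # Each row is a login event: [hour_of_day, megabytes_downloaded, failed_logins].
    import numpy as np
    from sklearn.ensemble import IsolationForest

    normal_activity = np.array([
        [9, 120, 0], [10, 95, 0], [14, 150, 1], [11, 80, 0],
        [16, 110, 0], [13, 60, 0], [15, 130, 1], [10, 100, 0],
    ])

    detector = IsolationForest(contamination=0.1, random_state=0)
    detector.fit(normal_activity)

    # A 3am session pulling 5GB with repeated failed logins should stand out
    suspicious = np.array([[3, 5000, 6]])
    print(detector.predict(suspicious))  # -1 indicates an anomaly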

Perhaps the biggest benefit of AI, however, is its speed. Machine learning algorithms can quickly apply complex pattern recognition techniques to spot and thwart attacks much faster than any human.


Artificial intelligence in cyber security — attack

Unfortunately, while AI is making great strides in defending against common threats, it is also making it far easier for cybercriminals to execute them.

Take phishing: AI has the potential to supercharge this threat, increasing the ease, speed and scale of an attack. Even rudimentary machine learning algorithms can monitor correspondence and credentials within a compromised account. Before long, the AI could mimic the victim’s correspondence style to spread malicious emails far and wide, repeating the attack again and again.

When it comes to malware, AI can facilitate the delivery of highly targeted, undetectable attacks. IBM’s AI-powered proof-of-concept malware, DeepLocker, leverages publicly available data to conceal itself from cyber security tools, lying dormant until it reaches its intended target. Once it identifies that target — either via facial or voice recognition — it executes its malicious payload.


AI’s speed will also likely prove a major boon for cybercriminals, just as it is for those of us defending against them. Machine learning could be deployed to circumvent and break through cyber security defences faster than most prevention or detection tools can keep up.

And AI will not only exacerbate existing threats – it’s already creating new ones. Sophisticated machine learning techniques can mimic and distort audio and video to facilitate cyber attacks. We have already seen this technology, known as deepfakes, in the wild. In March 2019, an unknown hacking group used this approach to defraud a UK-based energy subsidiary of over £200,000. The group impersonated the voice of the parent company’s CEO to convince the subsidiary’s managing director to make an urgent transfer to a Hungarian supplier. Convinced he was talking to his boss, the managing director complied with the request and the money was stolen.

As AI becomes ever-more convincing in its ability to ape human communication, attacks of this nature are likely to become increasingly common.


Winning the AI arms race

When you find yourself in an arms race, the only way to win is to stay ahead. For the cyber security industry, this is nothing new. While the tactics and technologies may have changed, the battle to stay in front has raged for decades.

In this latest standoff, to keep pace with AI-powered threats, we must embrace AI-powered defence. That said, AI should not be considered a panacea.

There’s no doubt that machine learning technology is both sophisticated and incredibly powerful, but it is just one piece of the puzzle.

When it comes to successfully defending against modern cyber attacks, there is no silver bullet – AI or otherwise. A strong defence must be deep, multifaceted and, despite the ‘rise of the machines’, people-centric.

Regardless of what is doing the attacking, it is ultimately still your people who are being attacked. That’s why – along with the latest tools and protections – your cyber defence must include regular, comprehensive employee education around attack methods, threat detection and threat prevention.

There is no doubt that artificial intelligence is now a hugely important line of cyber defence. But it cannot and should not replace all previous techniques. Instead, we must add it to an increasingly sophisticated toolkit, designed to protect against rapidly evolving threats.
