20 March 2023

AI will step in as human workers struggle to cope with evolving cyber threats, says expert

Damien Black

Artificial intelligence (AI) could end up being the ultimate job creator, both for itself and human beings. While one expert believes intelligent automation is creating better cybersecurity solutions that only it can effectively manage, he also says it will soon open the industry’s doors to less skilled workers.

If this sounds a bit ambiguous, bear with me. Ricardo Villadiego, an advocate of AI automation in cybersecurity and CEO of Lumu, has high hopes for machine learning in his sector, all the more so because he believes that what the technology is already doing – flagging more system anomaly alerts than human beings ever could – means that either more workers will be needed to sift through them, or more AI.

And at a time when Big Tech has been laying off sapiens left and right, as a global economic crisis driven by inflation forces company bosses to tighten their purse strings, my money would be on the latter. Here’s another way of looking at it: the cybersecurity industry is already short by an estimated 3.5 million workers, and retention of existing staff is subpar, as highly skilled employees increasingly complain of burnout and demoralization and quit.

If Villadiego is right that AI is essentially creating more work – albeit in a beneficial way – for cybersecurity teams, it looks like intelligent machines might also be the best tool for eliminating the most onerous tasks, while opening up less demanding roles to the people who will be needed to work with the tech in the near future.

Villadiego took time out to talk to Cybernews and help us make sense of an increasingly nuanced digital security landscape.

This interview was edited for length and clarity.

You talk about using automation to maximize workforce efficiency instead of focusing on job cuts. What does that entail exactly?

Cybersecurity teams are seeing the number of alerts go up, and that is happening for two main reasons. Number one, the attack surface is expanding. More devices are connected to the network, and every one of them, whether an endpoint or an IoT device, is an attack surface [for cybercriminals].

That produces more network metadata that has to be analyzed, hence more alerts that have to be managed. Now, on top of that, all the data being generated, which traditionally was assessed using deterministic techniques, is now assessed using heuristic or AI techniques. And guess what the end result is? It's going to produce more alerts for the same guys to analyze. So unless you automate the process of analyzing those alerts and removing [work] cycles from the teams, they have to do that job manually. It's just unsustainable. That's why automation is more needed than ever.

This is sounding like an ‘in for a penny, in for a pound’ situation: if you're going to use AI to increase the number of alerts, you're also going to have to use it to decide what to do with them?

Exactly. Most traditional cybersecurity technologies trigger an alert – an anomaly – to the end-user. And it is the SecOps [Security Operations] team who has to assess whether the anomaly is indeed an attack or a false positive. That is no longer viable. We have to use additional mechanisms, built upon the foundation of ML [machine learning], that help us assess those anomalies.

At Lumu we have multiple stages of detection. Initially we do correlation with known threat actors, and that removes everything that is known. But there is another stage of detection: anomaly detection. Contrary to what the market is doing, which is putting those anomalies in front of security analysts, we feed them into another AI tool that we call “deep correlation.” It measures the distance between anomalies and known threat actors.
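
To make the staged approach concrete, here is a minimal Python sketch of how such a pipeline could be wired together. Every name, threshold, and feature vector in it is an assumption made for illustration; this is not Lumu's actual implementation.

    # Illustrative sketch only: three-stage triage as described above.
    from dataclasses import dataclass

    @dataclass
    class Contact:
        destination: str        # remote IP or domain an asset talked to
        features: list[float]   # hypothetical behavioural feature vector

    def euclidean(a: list[float], b: list[float]) -> float:
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    def triage(contacts, known_bad, baseline, anomaly_tol, threat_profiles, alert_dist):
        for c in contacts:
            # Stage 1: correlation with known threat actors removes everything that is known.
            if c.destination in known_bad:
                yield ("confirmed", c)
                continue
            # Stage 2: anomaly detection against the network's own baseline.
            if euclidean(c.features, baseline) <= anomaly_tol:
                continue  # normal traffic, no alert
            # Stage 3: "deep correlation" -- distance between the anomaly and known threat behaviour.
            if min(euclidean(c.features, p) for p in threat_profiles) <= alert_dist:
                yield ("high-confidence", c)  # close to known threat actors: alert without a human

    # Hypothetical usage with made-up indicators and feature values.
    contacts = [Contact("203.0.113.7", [0.9, 0.8]), Contact("198.51.100.2", [0.1, 0.2])]
    alerts = list(triage(contacts, known_bad={"203.0.113.7"}, baseline=[0.1, 0.2],
                         anomaly_tol=0.3, threat_profiles=[[0.85, 0.75]], alert_dist=0.2))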

"When you create confidence in automation, you're effectively removing cycles from the security operation so they can focus on what's more important, which is to identify strategies to better defend the organization."Ricardo Villadiego, CEO of AI-based cybersecurity company Lumu

And using those capabilities, we can produce more accurate alerts. That doesn't require a human to identify if it is indeed an attack or a false positive. The benefit of using this approach, which we call “continuous compromise assessment,” is that when you have this level of accuracy you can automate with confidence. You can act on those alerts right away and change a policy in your firewalls and your EDR [endpoint detection and response], because you know what you're blocking is something bad.

When you create this confidence in automation, you're effectively removing cycles from the security operation so that they can focus on what's more important to them, which is to identify what strategies they need to deploy to better defend the organization.
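
As a rough illustration of automating with that confidence, the sketch below acts only on verdicts the detection stage has already qualified, so nothing lands in an analyst's queue. The FirewallClient and EdrClient classes are hypothetical stand-ins, not a real vendor API.

    # Illustrative sketch only: respond automatically to confident verdicts.
    class FirewallClient:
        def block_destination(self, indicator: str) -> None:
            print(f"[firewall] blocking outbound traffic to {indicator}")

    class EdrClient:
        def flag_contacts(self, indicator: str) -> None:
            print(f"[EDR] flagging endpoints that contacted {indicator}")

    def respond(verdict: str, indicator: str, firewall: FirewallClient, edr: EdrClient) -> None:
        """Change policy directly for confident verdicts instead of opening a ticket."""
        if verdict in ("confirmed", "high-confidence"):
            firewall.block_destination(indicator)
            edr.flag_contacts(indicator)
        # Anything else stays out of the analysts' queue and feeds back into the models.

    respond("high-confidence", "203.0.113.7", FirewallClient(), EdrClient())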


Obviously there's going to be added security layers and complexity involved in trusting machines to ward off human threat actors. Have you foreseen any potential issues around that?

There is no doubt that criminals will continue to evolve their attacks: they have more tools than ever to do so. ChatGPT is one of the things that we are seeing bad actors use to evolve and scale up their capabilities. The adversary is going to use AI to be better, just like the good guys. So there's no doubt that it's happening and will continue to increase.

But I think we have to change, from the defender side, the mindset of cybersecurity operations. Typically the alerts that those guys want to manage are very important [but] they dismiss a lot of what are perceived as ‘small’ alerts. Like a guy clicking on a phishing or malvertising link, right? That's small. They like to act when the alert is huge, when there is an alert that says: “if you don't do something in five minutes, I'm going to start encrypting your assets.” And that mindset of operation is creating all the issues that we have.

"We have to change the mindset of cybersecurity operations. Typically the alerts that those guys want to manage are very important [but] they dismiss a lot of what are perceived as 'small' alerts."Villadiego, Lumu

By the time you get those alerts, the ransomware is already in and the network already has multiple compromised assets. Your chances of winning that battle are minimal. It is better to act when those alerts are small, because then you control those problems before they evolve into something more significant.

So that's the mindset that has to change when it comes to SecOps. But if you're going to do that for every alert, there isn't enough time in the world to assess all of them manually. You have to automate the process of making those determinations and responding.

So talk me through that a bit more. How does an automated AI machine do this differently from a human being, in terms of unpicking the big alerts from the small and deciding – since size isn't everything – what is really important?

So today there's a human doing SecOps, right? And that human wants to understand exactly what the alert is and the impact that may have on the organization. That's just how we humans behave. But when you involve AI capabilities to identify alerts, in many cases you don't know why the tool is highlighting the alert: you don't have the context of why this is going to cause harm in the future.

If you wait to understand the context and why it's going to cause harm, it will be too late to act. You know that something is bad [because it has been] automatically qualified [as such] by the AI machine. The SecOps teams need to become comfortable acting, even when they don't really know the potential catastrophic outcome in the company three years from now.


"If you wait to understand the context and why it's going to cause harm, it will be too late to act. SecOps [security operations] teams need to become comfortable acting, even when they don't really know the potential catastrophic outcome in the company three years from now."Villadiego, Lumu

I’ll give you a very simple example. The AI tools in your organization identify that you're contacting [internet protocol] IP address X.Y.Z out of Iran. They say it's bad: you shouldn't be contacting that IP address. The SecOps team sees that alert. Today, with the current mindset, what they do – as opposed to stopping the contact – is try to identify why the alert is bad. And they spend a significant amount of time doing so.

And there may not yet be data to back up why that is bad for your network. Because that's the whole purpose of predictive AI: to identify that something is bad, even when you don't know exactly why it is bad. They spend that time, and may even conclude that it is a false positive – because they don't have the data.

A simple question: does my company have a business reason to contact an IP address out of Iran? There's no business justification, so I stop those contacts. That's the safe bet. And that's the mindset shift that we need to see in SecOps if we want to become better at defending organizations with these new tools that exist today.
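
That "safe bet" can be written down as a very small policy rule. The sketch below is illustrative only: the geolocate() helper and the allow-list of countries are placeholders for whatever geolocation data and business context an organization already has.

    # Illustrative sketch only: block outbound contacts with no business justification.
    BUSINESS_COUNTRIES = {"US", "CA", "GB"}  # hypothetical list of countries the company operates in

    def geolocate(ip: str) -> str:
        """Stand-in lookup returning an ISO country code for an IP address."""
        return {"203.0.113.7": "IR"}.get(ip, "US")

    def outbound_policy(ip: str) -> str:
        """No business reason to contact that destination? Stop the contact first."""
        return "allow" if geolocate(ip) in BUSINESS_COUNTRIES else "block"

    print(outbound_policy("203.0.113.7"))  # -> block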

But is there a danger that this could go too far the other way, that the AI could be overzealous and actually end up giving too many red flags? I take it you have full faith in this technology…

I do. But again, it has to come with a change in the mindset of SecOps. For the past 25 years, teams have been used to getting all the context of an alert. That is no longer viable. It is changing dramatically, fast. The context is not necessarily as readily available. SecOps have to trust the tools and take action.

It's like when you're in a self-driving car: at first, you don't trust the car, right? Then, little by little, you gain confidence. And at some point, you start trusting the machine, you're there to see if there's something bad going on. When you fly in a plane, you know, 90% of the flight is on autopilot.

"It's like when you're in a self-driving car: at first, you don't trust the car, right? Then, little by little, you gain confidence. And at some point, you start trusting the machine."Villadiego, Lumu

I get that everyone wants to do important things, but the point of this shift in SecOps mindset is winning the battles that are easy to win. If you always do that, no battle ever gets the chance to become a harder one in your organization. And that's what AI enables companies to do – when they trust the platform.

The other shift I've seen is in the market's approach to defense strategy. In the past 25 years, SecOps teams have been used to making decisions based on the noise of the market. So the new trend is zero trust, the new trend is XDR [extended detection and response], the new trend is next generation. Let's buy those tools, let's buy those gadgets, and that way we're going to be better prepared. I don't believe that’s sustainable, and the recession is putting a hard stop on it. Now CFOs [chief financial officers] are asking security teams to be more strategic in the way they make decisions.

What are the threats that your company is facing? Based upon those, prioritize the tools that you need to deploy. Make the changes in your cybersecurity stack – because there may be tools you're still paying for that are not adding any value.

We're hearing that the cybersecurity industry is short by more than three million jobs, though not all of those will be at the front end of security. But there are retention issues as well, with security professionals leaving due to burnout. Will automation and AI be able to plug at least some of those gaps?

I think the retention issues in the industry come from the fact that CISOs [chief information security officers] are becoming less and less convinced that they have the ability to win the battle. Why? Because every device is now connected to the internet, increasing the attack surface. Hence there's more alerts, and if I need to manage them the way I've been traditionally managing them, well, I need way more people. And my CFO is saying to me: “You cannot spend that much money.”

If I don't change the dynamics of the game, I don't have the ability to be successful in the job that my CEO [chief executive] is giving me. That's why you see the burned-out effect: people leaving cyber because they don't feel they're in a winning position. If you want to tackle the problem using the same techniques, you need 3.2 million new security analysts.

But if you attack the problem from a different angle – we're seeing companies that have deployed Lumu eliminate the third party that was doing SecOps for them. They gained the conviction that they can do it by themselves. When you have a third party doing security operations for you, that third party doesn't truly know the impact of an incident on your company.

Traditionally, every new attack requires a new tool – and that trend is unsustainable. The common denominator of 99.99% of attacks is that they have to use the network. You already have controls in the network to block threat actors, [but] have you orchestrated those controls so they do a better job? If CISOs don't change the SecOps mindset and tooling, they're going to continue to feel they don't have the ability to win. That's critical to avoid.
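
To illustrate what orchestrating controls you already own could look like, the sketch below pushes a single confirmed indicator to several hypothetical enforcement points through one shared interface; the control classes are placeholders, not real product APIs.

    # Illustrative sketch only: one indicator, fanned out to existing network controls.
    from typing import Protocol

    class NetworkControl(Protocol):
        def block(self, indicator: str) -> None: ...

    class DnsFilter:
        def block(self, indicator: str) -> None:
            print(f"[DNS filter] sinkholing {indicator}")

    class WebProxy:
        def block(self, indicator: str) -> None:
            print(f"[proxy] denying requests to {indicator}")

    class PerimeterFirewall:
        def block(self, indicator: str) -> None:
            print(f"[firewall] dropping traffic to {indicator}")

    def orchestrate(indicator: str, controls: list[NetworkControl]) -> None:
        """Push one confirmed indicator to every control the organization already has."""
        for control in controls:
            control.block(indicator)

    orchestrate("203.0.113.7", [DnsFilter(), WebProxy(), PerimeterFirewall()])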

So you believe that a lot of the reason for burnout and people leaving is not so much long hours or sheer fatigue, it's the psychological factor – because they're demoralized?

Think of it this way: you're giving me the job of defending these organizations, but I don't have the tools to do my job effectively. So what am I doing here? It's the same if you're an F1 [Formula One] driver: you want to win your races. If you give me a car that doesn’t give me the ability to win, at least show me a path towards improvement so I can see some light.

That is required for mature but also less mature companies, because adversaries are maximizing their ability to spread those attacks to all types of organizations. It's no longer the case that only Bank of America, Capital One, and HSBC are facing attacks. For each one of those large companies, there are thousands of smaller ones that have also been affected by ransomware incidents.

Obviously automation is a word we're coming back to a lot. But you have, I believe, over 100 employees: this is very much a collaboration, isn't it, humans working with machines? Is AI going to create new jobs and maybe cut some old ones that weren't being filled anyway?

I think it's going to enable a new generation of jobs in cybersecurity. You're probably familiar with [graphic design platform] Canva: it empowers anyone to do beautiful designs, things that in the past were reserved for a small group of people who understood how to use very niche tools. Technology is like art. Deep AI and automation help companies feel that they can do cybersecurity by themselves: someone out of college can operate cybersecurity for that company, as opposed to someone with 15 years of experience who is super-specialized. If you had to do a Master's degree to drive a car, there would be fewer cars on the road for sure.

So you think AI will lower the bar to entry in cybersecurity and encourage more people to get involved?

That's what's going to happen – 18 to 36 months is what I see. When I started the company in 2019, people called me crazy. You know: “That's not doable, it's impossible to operate cybersecurity in a way that's 90% automated. You need a security operations team.” And in my mind was the analogy of the aircraft, that 90% of the flight is autopilot. Why in cybersecurity does a human have to make every little decision? I don't understand that.

"When I started the company in 2019, people called me crazy. You know: 'That's not doable.' And in my mind was the analogy of the aircraft, that 90% of the flight is autopilot."Villadiego, Lumu

Now you start seeing the industry analysts Gartner and Forrester talking about shifting the paradigm of security operations, and how automation can be embraced by companies to do a better job at cybersecurity. And it has been, what, three years? That is the hardest part: the evangelization, convincing people that it is doable. You see more customers replacing traditional SecOps with tools like Lumu. We are now chasing mass adoption of this approach in the market.

So do you foresee that in future we will have cybercriminals running their offensive AI programs while the defenders deploy their defensive ones: essentially a battle of automated networks, with human actors on both sides as spectators while machines fight it out on their behalf?

Definitely what we're seeing on the bad side is the increasing ability to automate these attacks by a significant multiple. So if the good guys don't automate, it's going to create a mess. There is a certain level of AI vs AI we're going to see in the next 24 to 36 months.

I do believe that humans will play an important role, but it's going to be different: better understanding the way your AI works, and tuning the parameters of your AI models so they can identify attacks in an automated fashion.

"If you look at the skill set, it has evolved over the past 25 years. In the future, you're going to see entry-level individuals that use a tool like Lumu to operate their cybersecurity."Villadiego, Lumu

If you look at the cybersecurity skill set, it has evolved over the past 25 years, from that guy who was able to deploy tools, connect cables, and make configurations. Today a good security guy has to do a little bit of programming, because you have to integrate those tools better. In the future, you're going to see entry-level individuals using a tool like Lumu to operate their cybersecurity. There are going to be fewer of the highly qualified specialists we have today.

I must say your honesty is quite refreshing, because obviously there will be people fearing for their jobs who want to protect their expertise. But from what you're saying, this is pretty much an inevitable development and not necessarily a bad one, looking at the bigger picture?

I think we have to embrace change. And I believe it's going to play well for the industry as a whole: if we move into automation for cybersecurity, two years from now companies are going to be way more secure than they are today. It's going to force a natural adjustment in the type of people doing the jobs. That's happening all over. It's just an evolution of how the world adapts and embraces knowledge.
