27 May 2023

The Risks And Rewards Of Artificial Intelligence In Cybersecurity

Dr Fene Osakwe

When I think about artificial intelligence (AI), the movie Terminator 3: Rise of the Machines comes to mind. At its most basic, artificial intelligence is the ability of a system or systems to simulate human intelligence in a given task and produce a particular outcome.

With innovations such as ChatGPT and other AI solutions becoming readily available in 2023, there is now far more awareness and acceptance of the use of artificial intelligence to achieve various objectives. One such objective is securing personal or organizational information assets, largely referred to as cybersecurity.

How AI Can Be Used For Good

According to NIST, the Cybersecurity Framework's Five Functions are "Identify, Protect, Detect, Respond, Recover." How can AI be rewarding to the security professional in practical terms for any of these domains?

These functions are what I call the rewards of AI from a cybersecurity perspective. There are several examples, but for the purposes of this article, I will expound on two practical areas where we can leverage AI.

• Identify And Protect

The ability to identify cyber threats in our modern age has become increasingly difficult, especially with zero-day attacks. These are system vulnerabilities discovered by attackers before the vendor or system designer becomes aware of them, and they can consequently be exploited for malicious purposes.

I recall that one organization I worked with had constant service interruptions around mid-day every day for over two weeks. Their antimalware solution was just not detecting anything. We eventually got an antimalware solution that had the ability to learn the environment during stable hours and create a baseline.

After a while, we noticed the operational services had not gone down for some time. Following further analysis, we identified that it had been a denial-of-service attack for which the previous antimalware did not, at the time, have signatures.
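To make the idea concrete, here is a minimal sketch of baseline-driven detection, assuming a single requests-per-minute metric; the names, numbers and threshold are hypothetical illustrations rather than any vendor's actual logic:

```python
# A minimal sketch of baseline-driven anomaly detection, similar in spirit
# to the learning antimalware described above. All names, numbers and the
# threshold are hypothetical illustrations, not a vendor implementation.
from statistics import mean, stdev

def build_baseline(rates_per_minute: list[float]) -> tuple[float, float]:
    """Learn the 'stable hours' baseline: mean and standard deviation."""
    return mean(rates_per_minute), stdev(rates_per_minute)

def is_anomalous(current_rate: float, baseline: tuple[float, float],
                 sigma_threshold: float = 3.0) -> bool:
    """Flag traffic that deviates more than N standard deviations from baseline."""
    mu, sigma = baseline
    return current_rate > mu + sigma_threshold * sigma

if __name__ == "__main__":
    # Requests per minute observed during stable hours (the training window).
    stable_hours = [980.0, 1010.0, 995.0, 1005.0, 990.0, 1002.0]
    baseline = build_baseline(stable_hours)

    print(is_anomalous(1003.0, baseline))   # False: within normal variation
    print(is_anomalous(15000.0, baseline))  # True: a mid-day spike like the one above
```

A real solution learns far richer features than one request rate, but the principle is the same: anomalies are defined relative to a learned baseline rather than a static signature list.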

• Respond

One of the actions that security operations teams take to respond to a cyber threat is to block the originating IP address of the identified threat. These IP addresses are sometimes obtained from email headers, threat intelligence tools and platforms. AI can automate this process by identifying, analyzing and blocking confirmed malicious IP addresses. This can greatly improve the efficiency of the operations teams.
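As an illustration, here is a minimal sketch of such a respond loop, assuming a feed of (IP, confidence) pairs; the feed format, threshold and block_ip() placeholder are hypothetical, and a real deployment would call the firewall or EDR platform's own API:

```python
# A minimal sketch of automating the respond step: validate candidate IPs
# from threat intelligence and block the high-confidence ones. The feed
# format, threshold and block_ip() back end are hypothetical; a real
# deployment would call the firewall or EDR platform's own API.
import ipaddress

CONFIDENCE_THRESHOLD = 0.9  # only act automatically on high-confidence intelligence

def block_ip(ip: str) -> None:
    print(f"blocking {ip}")  # placeholder for a firewall/EDR API call

def respond(candidates: list[tuple[str, float]]) -> None:
    """candidates: (ip_string, confidence) pairs from threat intelligence."""
    for raw_ip, confidence in candidates:
        try:
            ip = ipaddress.ip_address(raw_ip)  # discard malformed entries
        except ValueError:
            continue
        # Never auto-block internal addresses; act only on confirmed threats.
        if not ip.is_private and confidence >= CONFIDENCE_THRESHOLD:
            block_ip(str(ip))

if __name__ == "__main__":
    feed = [("203.0.113.7", 0.97), ("10.0.0.5", 0.99), ("not-an-ip", 1.0)]
    respond(feed)  # blocks only the public, high-confidence address
```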

Risks Of AI To Keep In Mind

It is also important to point out here that risks are associated with the use of AI in cybersecurity. It may be impossible at this stage to create a comprehensive catalog of specific AI risks, but McKinsey recommends that organizations focus on six areas of AI risk: privacy, security, fairness, transparency and explainability, safety and performance, and third-party risks.

For the purpose of this article, I will highlight three of these and share my experiences with them.

• Transparency And Explainability

At different points in my career, I have had to defend to my executives decisions made based on intelligence received from technology tools. One of the questions always asked is, "Why were you certain enough to take XYZ decision?"

In cases like this, you then need to walk through how you went from intelligence to insight to action so they can come to the same conclusion you did, given the same input. One of the challenges with many AI models and solutions is the lack of transparency around how a model was developed. This also includes how datasets fed into a model were combined.
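One practical mitigation is to make your own layer transparent even when the model is not: log the full intelligence-to-insight-to-action trail for every automated decision so it can be reconstructed later. Below is a minimal sketch; the indicator names and the two-indicator policy are hypothetical examples:

```python
# A minimal sketch of recording the intelligence-to-insight-to-action trail
# for every automated decision, so it can be reconstructed and defended
# later. The indicator names and two-indicator policy are hypothetical.
import json
from datetime import datetime, timezone

def decide_and_log(indicators: dict[str, bool],
                   logfile: str = "decisions.jsonl") -> bool:
    fired = [name for name, hit in indicators.items() if hit]  # the insight
    action = len(fired) >= 2  # the action: block when two indicators agree
    with open(logfile, "a") as f:
        f.write(json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "indicators": indicators,                    # the raw intelligence
            "rules_fired": fired,                        # the insight
            "decision": "block" if action else "allow",  # the action
        }) + "\n")
    return action

if __name__ == "__main__":
    decide_and_log({"on_threat_feed": True, "bad_reputation": True,
                    "geo_anomaly": False})  # logged, then blocked
```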

• Fairness

AI systems are only as insightful as the data used to train their models. It is possible to encode bias into them inadvertently. This could result in wrong analysis and, ultimately, wrong decisions with severe consequences. In particular, if certain vulnerability trends were not picked up, the organization could suffer a major attack.

The key to solving the bias challenge is the phrase "representative sample." According to Investopedia, "A representative sample is a subset of a population that seeks to accurately reflect the characteristics of the larger group."

For example, a company may experience one million cyber threats daily. These threats can include SQL injections, command and control, phishing attacks, ransomware and so on. If an AI model is trained to respond to these threats, the data used for such training must be representative of the entire population, in proportion to each threat type's share of it.
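A minimal sketch of that idea, with hypothetical counts and labels, is a stratified sample in which each threat class appears in proportion to its share of the population:

```python
# A minimal sketch of drawing a representative (stratified) training sample:
# each threat type appears in proportion to its share of the full population.
# The counts and labels are hypothetical.
import random

def stratified_sample(population: dict[str, list], sample_size: int) -> list:
    """Sample each threat class in proportion to its population share."""
    total = sum(len(events) for events in population.values())
    sample = []
    for threat_type, events in population.items():
        k = round(sample_size * len(events) / total)
        sample.extend(random.sample(events, min(k, len(events))))
    return sample

if __name__ == "__main__":
    # One (hypothetical) day of threats, keyed by type.
    day = {
        "sql_injection": [f"sqli-{i}" for i in range(5_000)],
        "phishing":      [f"phish-{i}" for i in range(3_000)],
        "ransomware":    [f"ransom-{i}" for i in range(1_500)],
        "c2":            [f"c2-{i}" for i in range(500)],
    }
    training_set = stratified_sample(day, sample_size=1_000)
    print(len(training_set))  # ~1,000 events, proportional to each class
```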

• Security

What happens when the gatekeeper, whose role is to secure the main gate of the house against external attackers, becomes your weakest link? Say they leave the gate open and fall asleep, or simply collude with the criminals. This is what happens when AI technology employed for cyber defense is not managed with cyber hygiene rules such as regular updates, security by design during the build phase and testing.

I did penetration testing for a client a few years back. My team and I tried all the regular methods and approaches to find a vulnerability to exploit on the network but identified none. Eventually, we identified a vulnerability in a security solution used in-house by the security team to protect the organization. We exploited it, and that was our inroad into lateral movement within the organization until we reached the domain controller.

Conclusion

AI risks are indeed still emerging—as with every technology that we have seen as "new" in the past decade, from big data and the cloud to quantum computing and more. Embracing the technology necessitates accepting the risk. Therefore, it is important, as we embrace the opportunities and rewards of AI, that we adopt concrete, dynamic frameworks to manage AI risks.
