21 July 2023

Mitigating AI-Based Cyberattacks

Carlo Tortora Brayda

The enemy has a machine gun while most of us are still hurling rocks. That’s how it feels today in the tech teams of many, if not most, mid-sized companies.

Artificial intelligence (AI) is advancing at an exponential pace, and there is growing evidence of, and concern about, its use in offensive cyberattacks. The prospect of cybercriminals or even nation-states wielding lightning-fast, AI-powered penetration sequences to breach networks, steal data and cause damage is a sobering one. However, it is possible to mitigate the dangers of AI in offensive cyberattacks through proactive measures, continuous monitoring and ongoing development.

AI also lets attackers analyze data more effectively, giving them greater insight into vulnerabilities and allowing them to map breach paths more accurately. This has created a whole new layer of complexity in the life of a CISO, as traditional cybersecurity solutions no longer suffice. Multifactor authentication (MFA), regular patching, endpoint security and well-implemented firewalls are still the basics, but they are no longer enough on their own.

Fight AI With AI

But true AI-based solutions should be considered, because you can only battle AI with AI. A human-only security operations center (SOC) no longer cuts it. Flesh-and-blood analysts simply cannot observe, detect, identify and respond fast enough.

Humans remain critically essential but are now only part of the solution, and to make that part effective, training in the latest technologies and in blue, red and purple teaming protocols is becoming increasingly important. Using cyber ranges needs to become the norm. Recently, for example, AI cyberattacks and real-time response strategies have been tested to their limits at the CR14 NATO Cyber Range in Tallinn, Estonia’s capital. However, these training and proving grounds must become available and appealing to the larger community of businesses and institutions, especially in critical infrastructure.

AI's Vulnerabilities

Implementing AI defenses comes with its own intrinsic vulnerabilities, data poisoning in particular. This happens when attackers manipulate the data used to train an AI model or to feed its ongoing learning, causing it to raise false positives or, worse, to ignore an intrusion entirely. Adversarial attacks can likewise manipulate an AI system’s inputs to produce wrong outputs; biometric recognition, for instance, can be bypassed this way.
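To make the poisoning risk concrete, here is a minimal sketch, assuming a toy scikit-learn classifier stands in for an AI detector: flipping a fraction of “malicious” training labels quietly erodes the detection rate. The dataset, model and function names are all illustrative, not from any production system.

```python
# Minimal data-poisoning sketch: a toy detector, not a production system.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy "intrusion" dataset: label 1 = malicious traffic, label 0 = benign.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def poison_labels(y, fraction):
    """Flip a fraction of malicious labels to benign, as a poisoning
    attacker would, to teach the detector to ignore intrusions."""
    y = y.copy()
    malicious = np.flatnonzero(y == 1)
    flips = rng.choice(malicious, size=int(fraction * malicious.size), replace=False)
    y[flips] = 0
    return y

for fraction in (0.0, 0.2, 0.4):
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, poison_labels(y_train, fraction))
    # Share of genuinely malicious test samples the detector still catches.
    detection_rate = model.predict(X_test[y_test == 1]).mean()
    print(f"labels poisoned: {fraction:.0%} -> detection rate: {detection_rate:.2f}")
```

The danger is that nothing breaks visibly: the model still trains, still scores well on the poisoned data, and simply waves the attacker through.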

And then there is model stealing, a battlefield all of its own. A malicious actor may gain access to the AI model through several potential entry routes, or replicate it by actively querying it and studying the resulting input-output pairs (black-box model extraction). Once copied, the model can be probed offline for weaknesses and loopholes, or the technology can be turned against other victims. Deep neural networks are particularly susceptible to model stealing, and universities and tech giants are studying methods of “deep defense” to safeguard AI models. The School of Cybersecurity in Seoul, South Korea, is an example.
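The mechanics of black-box extraction are simple enough to sketch in a few lines. In the toy below, the “victim” is a small scikit-learn network exposed only through its predictions, and a surrogate is trained purely on query/response pairs; every name and number is illustrative.

```python
# Minimal black-box model-extraction sketch: toy models, illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# The "victim": a deployed model the attacker can query but not inspect.
X, y = make_classification(n_samples=3000, n_features=10, random_state=1)
victim = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=1)
victim.fit(X, y)

# The attacker crafts inputs and records only the victim's answers.
rng = np.random.default_rng(1)
queries = rng.normal(size=(3000, 10))
stolen_labels = victim.predict(queries)

# A surrogate trained on nothing but those query/response pairs.
surrogate = DecisionTreeClassifier(random_state=1).fit(queries, stolen_labels)

# How often the stolen copy mimics the victim on fresh inputs.
probes = rng.normal(size=(1000, 10))
agreement = (surrogate.predict(probes) == victim.predict(probes)).mean()
print(f"surrogate agrees with victim on {agreement:.0%} of probes")
```

This is also why query rate-limiting and monitoring for systematic probing patterns matter defensively: extraction requires volume.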

Mitigation Methods

The advent of offensive AI also raises the stakes with phishing, smishing and vishing. Legacy anti-phishing training is no longer adequate. It needs to level up, fast.

Training staff on preventing phishing, and spear phishing in particular, is vital. AI can now easily impersonate key executives, matching the style and tone of their emails, so its ability to fool employees at target organizations is increasing. Again, there are AI tools that can help see through the stratagem, and those should be deployed.
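As a rough illustration of how such tools can work, the sketch below compares an incoming message against a sender’s past writing style using character n-grams. The emails, threshold and variable names are all invented for the example; real products are far more sophisticated.

```python
# Minimal stylometry sketch: flag messages whose style drifts from a
# sender's baseline. A toy heuristic, not a production phishing filter.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_emails = [
    "Team, quick update before the board call. Numbers attached, thoughts welcome.",
    "Can we move Thursday's sync? Travel got shuffled again. Apologies.",
    "Great work on the Q2 close. Let's debrief Monday morning.",
]
incoming = "URGENT!!! Wire $48,000 to the account below immediately. Do not call me."

# Character n-grams capture writing style (punctuation habits, word shapes)
# better than plain word counts do.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
baseline = vectorizer.fit_transform(past_emails)
candidate = vectorizer.transform([incoming])

similarity = cosine_similarity(candidate, baseline).max()
THRESHOLD = 0.3  # illustrative; a real system would calibrate this per sender
if similarity < THRESHOLD:
    print(f"style similarity {similarity:.2f}: flag for human review")
```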

Another important mitigation factor is ongoing monitoring. This includes traditional tracking, such as network traffic analysis, and advanced monitoring, such as AI behavioral analysis. Any user behavior that appears unusual needs to trigger a protocol to isolate the potential threat. Behavioral analytics technology can monitor network activity, keystrokes, email traffic and user-linguistic expression patterns.
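Here is a minimal sketch of the behavioral-analytics idea, assuming invented session features and a scikit-learn Isolation Forest standing in for a commercial engine:

```python
# Minimal behavioral-analytics sketch: flag unusual user sessions.
# Features and numbers are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)

# Per-session features: [logins/hour, MB downloaded, distinct hosts touched]
normal_sessions = np.column_stack([
    rng.poisson(2, 500),        # a few logins per hour
    rng.gamma(2.0, 30.0, 500),  # modest download volumes
    rng.poisson(3, 500),        # a handful of hosts
])

# Learn what "normal" looks like for this population of users.
detector = IsolationForest(contamination=0.01, random_state=7)
detector.fit(normal_sessions)

# A session resembling fast, automated lateral movement.
suspect = np.array([[40, 5000.0, 60]])
if detector.predict(suspect)[0] == -1:
    print("anomalous session: trigger isolation protocol")
```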

To combat these threats effectively, cybersecurity experts must be at the forefront of developing new AI-based security protocols. This will require continued investment in research and development by both public and private entities.

Conclusion

The dangers of AI in offensive cyberattacks cannot be overstated. However, it is possible to mitigate these threats through a combination of proactive measures: mainstream security strategies, AI-focused security systems, training for cybersecurity staff (and the wider corporation), continuous monitoring and ongoing development of new AI-based security protocols. By focusing on these critical points, we can ensure that our networks remain as secure as possible in the face of increasingly sophisticated cyberattacks.
