10 July 2023

AI versus AI is the next battle for security

DAN SCHIAPPA

The rapid advance of generative artificial intelligence has cybersecurity professionals and regulators bracing for the worst. Recently, federal lawmakers have called OpenAI CEO Sam Altman in to testify about the safety and risks that AI poses to national security and the economy, while select Silicon Valley business leaders have argued that all AI research should be paused until more safeguards can be enforced.

With machine learning fueling new visual, auditory, and textual methods of trickery, the longstanding “arms race” between cyber attackers and cybersecurity practitioners has given both sides new opportunities to identify vulnerabilities at the speed of data.

As a result, well-known attack vectors that cybersecurity professionals already know how to defend against, like social media manipulation or intrusive malware, will likely become more precise and harder to thwart with standard detection software.

The good news is that just as quickly as threat actors come up with new AI-generated attacks, the good guys are using the technology to bolster their defenses and recognize patterns or anomalies in their environments that wouldn’t register without AI.

For IT leaders handling day-to-day security, the best way to keep your security environment in a strong position is to stay on top of the latest AI-related cybersecurity news and alerts from reputable publications and threat-sharing resources.

Doubling down on basic cyber hygiene will also go a long way toward securing an organization amid AI’s advancement: patching known vulnerabilities, flagging emails or messages that seem out of the ordinary, hovering over links before clicking them, and confirming with colleagues in person or over Zoom that they did, in fact, ask you for that data.
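Some of that hygiene can even be automated with a few lines of code. The sketch below is purely illustrative and assumes a hypothetical allowlist of trusted domains; it simply encodes the “hover before you click” habit by checking where a link actually points.

```python
# Illustrative sketch only: compare a link's real destination domain against a
# hypothetical allowlist before anyone clicks it. Domain names are placeholders.
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"example.com", "mail.example.com"}  # hypothetical allowlist

def link_looks_suspicious(url: str) -> bool:
    """Return True if the link's host is not a trusted domain or a subdomain of one."""
    host = urlparse(url).hostname or ""
    return not any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

print(link_looks_suspicious("https://example.com/report"))        # False
print(link_looks_suspicious("https://examp1e-login.net/verify"))  # True
```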

Proactively putting AI-based threat-detection tools in the hands of security teams is the next best thing that IT leaders can do. Recent research has found that organizations using AI to help defend themselves resolved breaches nearly two and a half months faster than organizations not using AI or automation, and saved roughly $3 million more in breach costs.
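To make the idea concrete, here is a minimal sketch of the kind of pattern-spotting those tools perform, using an off-the-shelf unsupervised anomaly model on synthetic login data. The features, thresholds, and numbers are invented for illustration, not a production detector.

```python
# Minimal sketch, not a production detector: an unsupervised anomaly model
# (scikit-learn's IsolationForest) trained on synthetic login features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical features per login: [hour of day, MB downloaded, failed attempts]
normal_logins = np.column_stack([
    rng.normal(13, 2, 500),   # mostly business hours
    rng.normal(50, 15, 500),  # modest data volumes
    rng.poisson(0.2, 500),    # rare failed attempts
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_logins)

# A 3 a.m. login pulling 900 MB after 6 failed attempts should stand out.
suspicious = np.array([[3, 900, 6]])
print(model.predict(suspicious))  # -1 indicates an anomaly
```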

With AI on the rise, below are three attack vectors that cybersecurity professionals can expect to look quite different in the near future.

AI-GENERATED MALWARE

Cyber practitioners should expect attackers to leverage the technology to create new types of malware and malicious tools that are more effective at evading detection by stock technologies like endpoint protection platforms and endpoint detection and response tools. Early security research already shows this beginning to come to fruition.

For example, Black Mamba, AI-generated malware created as an experiment by HYAS researchers, has already evaded “industry-leading endpoint detection and response” software. That AI-generated malware like this is “virtually undetectable” by today’s security standards should be a major cause for concern for organizations relying on outdated security solutions. When security inevitably comes down to AI versus AI, exclusively human-driven engineering efforts will not be able to keep up with the volume and sophistication of AI-generated attacks like this.

AI-GENERATED PHISHING

With generative AI’s ability to pull information from every corner of the web, threat actors are able to craft messages with a specific tone unique to an organization or public figure, making them difficult to distinguish from legitimate emails or texts.

AI-generated text is already being used to create highly realistic messages, but it’s the volume and scale of these campaigns that will soon set them apart from current phishing techniques. Automating highly effective phishing emails, or using chatbots, will let attackers target far more victims with far less effort.
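No single heuristic will catch well-written AI phishing, but layered checks still help. The sketch below is a hypothetical illustration of one narrow rule, flagging messages whose display name claims an internal identity while the sending address belongs to an outside domain; the domain and sample headers are made up.

```python
# Illustrative only: flag emails whose display name impersonates an internal
# identity while the actual sending domain is external. Values are hypothetical.
from email.utils import parseaddr

INTERNAL_DOMAIN = "example.com"  # hypothetical corporate domain

def display_name_mismatch(from_header: str) -> bool:
    name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower()
    claims_internal = INTERNAL_DOMAIN in name.lower() or "ceo" in name.lower()
    return claims_internal and domain != INTERNAL_DOMAIN

print(display_name_mismatch('"CEO - Example.com" <ceo@exarnple-mail.net>'))  # True
print(display_name_mismatch('"Pat Lee" <pat.lee@example.com>'))              # False
```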

Advances in generative AI will also make it easier to create convincing audio and video deepfakes, which will be used to impersonate individuals or spread false information online. The possibility of deepfakes is already making life confusing for officials, as Elon Musk’s lawyers last month argued—unsuccessfully—in court that recordings and videos taken of him in 2016 could have been deepfakes. As with many aspects of AI, federal and state law in the United States is poorly equipped to deal with deepfakes or hyper-realistic messages.

AI-ASSISTED SOCIAL ENGINEERING

Creating realistic-looking fake accounts that spread false or misleading information is well within the capabilities of image-based generative AI platforms, as are realistic-looking news stories that sway public opinion one way or another. Fake news is hardly a new phenomenon, but as with AI-based phishing scams, it’s the sheer volume of fake news that automation allows bad actors to create that separates pre-AI fake news from post-AI fake news. China recently arrested a man in a first-of-its-kind case for using ChatGPT to create a fake news story about a train derailment, and he likely won’t be the last person to use the technology to create chaos.

AI has been in the hands of security professionals for a long time, and now that it is becoming more democratized, it’s more important than ever to remember threat actors’ ability to adapt and scale. The bottom line is that attack vectors security professionals have been studying for years are about to be flipped upside down as AI tools become more advanced and widespread, and it’s through further investment and development in AI that the security industry can stay one step ahead of the threat landscape.
