9 January 2023

Defensive vs. offensive AI: Why security teams are losing the AI war

Louis Columbus

Weaponizing artificial intelligence (AI) to attack understaffed enterprises that lack AI and machine learning (ML) expertise is giving bad actors the edge in the ongoing AI cyberwar.

Threat actors hold a significant advantage over most enterprises: they innovate faster than the most efficient enterprise, recruit talent to create new malware and test attack techniques, and use AI to alter attack strategies in real time.

“AI is already being used by criminals to overcome some of the world’s cybersecurity measures,” warns Johan Gerber, executive vice president of security and cyber innovation at MasterCard. “But AI has to be part of our future, of how we attack and address cybersecurity.”

Enterprises are willing to spend on AI-based solutions: a CEPS forecast projects that the AI and cybersecurity market will grow at a compound annual growth rate (CAGR) of 23.6% from 2020 to 2027, reaching a market value of $46.3 billion by 2027.

Nation-states and cybercriminal gangs share a goal: To weaponize AI

Eighty-eight percent of CISOs and security leaders say that weaponized AI attacks are inevitable, and with good reason. Just 24% of cybersecurity teams are fully prepared to manage an AI-related attack, according to a recent Gartner survey. Nation-states and cybercriminal gangs know that enterprises are understaffed, and that many lack AI and ML expertise and tools to defend against such attacks. In Q3 2022, out of a pool of 53,760 cybersecurity applicants, only 1% had AI skills.

Major firms are aware of the cybersecurity skills crisis and are attempting to address it. Microsoft, for example, has an ongoing campaign to help community colleges expand the industry’s workforce.

There’s a sharp contrast between how well enterprises can attract and retain cybersecurity experts with AI and ML expertise and how fast nation-state actors and cybercriminal gangs are growing their AI and ML teams. The North Korean Army’s elite Reconnaissance General Bureau cyberwarfare arm, Department 121, numbers approximately 6,800 cyberwarriors, according to the New York Times: 1,700 hackers in seven different units and 5,100 technical support personnel.

AP News learned this week that North Korea’s elite team had stolen an estimated $1.2 billion in cryptocurrency and other virtual assets in the past five years, more than half of it this year alone, according to South Korea’s spy agency. North Korea has also weaponized open-source software in its social engineering campaigns aimed at companies worldwide since June 2022.

North Korea’s active AI and ML recruitment and training programs aim to create new techniques and technologies for weaponizing AI and ML, partly to keep financing the country’s nuclear weapons programs.

In a recent Economist Intelligence Unit (EIU) survey, nearly half of respondents (48.9%) cited AI and ML as the emerging technologies that would be best deployed to counter nation-state cyberattacks directed toward private organizations.

Cybercriminal gangs are just as aggressively focused on their enterprise targets as the North Korean Army’s Department 121 is. Current tools, techniques and technologies in cybercriminal gangs’ AI and ML arsenal include automated phishing email campaigns, malware distribution, AI-powered bots that continually scan an enterprise’s endpoints for vulnerabilities and unprotected servers, credit card fraud, insurance fraud, generating deepfake identities, money laundering and more.

Attacking the vulnerabilities of AI and ML models that are designed to identify and thwart breach attempts is an increasingly common strategy used by cybercriminal gangs and nation-states. Data poisoning is one of the fastest-growing techniques they are using to reduce the effectiveness of AI models designed to predict and stop data exfiltration, malware delivery and more.
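
To see why data poisoning is effective, consider a minimal, illustrative sketch (a toy scikit-learn model, not any production defense): flipping even a modest fraction of training labels measurably degrades the resulting classifier.

```python
# Toy illustration of label-flip data poisoning: corrupting a fraction
# of training labels degrades a detection model's accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_with_poisoning(flip_fraction: float) -> float:
    """Flip a fraction of training labels, then measure test accuracy."""
    rng = np.random.default_rng(0)
    y_poisoned = y_train.copy()
    idx = rng.choice(len(y_poisoned), int(flip_fraction * len(y_poisoned)),
                     replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # adversarial label flips
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)

for frac in (0.0, 0.1, 0.3):
    print(f"{frac:.0%} labels flipped -> test accuracy "
          f"{accuracy_with_poisoning(frac):.3f}")
```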

AI-enabled and AI-enhanced attacks are continually being fine-tuned to launch undetected at multiple threat surfaces simultaneously. The graphic below is a high-level roadmap of how cybercriminals and nation-states manage AI and ML devops activity.

Cybercriminals recruit AI and ML experts to balance attacks on ML models with developing new AI-enabled techniques and technologies to lead attacks. Source: “Artificial Intelligence Crime: An Overview of Malicious Use and Abuse of AI,” IEEE Access, January 2022

“Businesses must implement cyber AI for defense before offensive AI becomes mainstream. When it becomes a war of algorithms against algorithms, only autonomous response will be able to fight back at machine speeds to stop AI-augmented attacks,” said Max Heinemeyer, director of threat hunting at Darktrace.

Attackers targeting employee and customer identities

Cybersecurity leaders tell VentureBeat that the digital footprint and signature of an offensive attack using AI and ML are becoming easier to identify. First, these attacks often execute millions of transactions across multiple threat surfaces in just minutes. Second, attacks go after endpoints and surfaces that can be compromised with minimal digital exhaust or evidence.

Cybercriminal gangs often target Active Directory, Identity Access Management (IAM) and Privileged Access Management (PAM) systems. Their immediate goal is to gain access to any system that can provide privileged access credentials so they can quickly take control of thousands of identities at once and replicate their own at will without ever being detected. “Eighty percent of the attacks, or the compromises that we see, use some form of identity/credential theft,” said George Kurtz, CrowdStrike’s cofounder and CEO, during his keynote address at the company’s Fal.Con customer conference.

CISOs tell VentureBeat the AI and ML-based attacks they have experienced have ranged from overcoming CAPTCHA and multifactor authentication on remote devices to data poisoning efforts aimed at rendering security algorithms inoperable.

Using ML to impersonate a CEO’s voice and likeness while requesting tens of thousands of dollars in withdrawals from corporate accounts is now commonplace, and deepfake phishing is a disaster waiting to happen. Whale phishing has proliferated largely because of attackers’ increased use of AI- and ML-based technologies. Cybercriminals, hacker groups and nation-states use generative adversarial network (GAN) techniques to create realistic-looking deepfakes for social engineering attacks on enterprises and governments.

A GAN pits two AI algorithms against each other to create entirely new, synthesized images. One algorithm, the generator, is fed random data and produces an initial image. The second, the discriminator, checks whether that image corresponds with known, real data. The contest between the two forces the generator to produce ever more realistic images in an attempt to fool the discriminator. GANs are widely used in automated phishing and social engineering attack strategies.

How a GAN creates deepfakes realistic enough to be used in AI-automated phishing and CEO impersonation attacks. Source: “Artificial Intelligence and Cybersecurity: Technology, Governance and Policy Challenges,” CEPS Task Force Report, Centre for European Policy Studies (CEPS), Brussels, May 2021
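
To ground the description above, here is a minimal, generic GAN training loop in PyTorch. It is a toy sketch, not code from any attack or vendor: the “real” data is a simple 2D Gaussian rather than face images, and the network sizes are arbitrary.

```python
# Minimal GAN: the generator learns to produce samples the discriminator
# cannot distinguish from "real" data (here, a shifted Gaussian).
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(),
                              nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 2) * 0.5 + 2.0   # "real" samples
    noise = torch.randn(64, 8)              # random generator input
    fake = generator(noise)

    # Discriminator: label real samples 1, generated samples 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: try to make the discriminator output 1 for fakes.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

Scaled up to convolutional networks trained on face and voice datasets, this same adversarial loop is what yields the photorealistic deepfakes used in the attacks described above.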

Natural language generation is another AI- and ML-based technique that cybercriminal gangs and nation-states routinely use to attack global enterprises through multilingual phishing. AI and ML are also used extensively to improve malware so that it’s undetectable by legacy endpoint protection systems.

In 2022, cybercriminal gangs also improved malware design and delivery techniques using ML, as first reported in CrowdStrike’s Falcon OverWatch threat hunting report. The research found that malware-free intrusion activity now accounts for 71% of all detections indexed by CrowdStrike’s Threat Graph. Such malware-free intrusions are difficult to identify and stop for perimeter-based systems and for tech stacks built on implicit trust.

Threat actors are also developing and fine-tuning AI-powered bots designed to launch distributed denial of service (DDoS) and other attacks at scale. Bot swarms, for example, have used algorithms to analyze network traffic patterns and identify vulnerabilities that could be exploited to launch a DDoS attack. Cyberattackers then train the AI system to generate and send large volumes of malicious traffic to the targeted website or network, overwhelming it and causing it to become unavailable to legitimate users.

How enterprises are defending themselves with AI and ML

Defending an enterprise successfully with AI and ML must start by identifying the obstacles to achieving real-time telemetry data across every endpoint in an enterprise. “What we need to do is to be ahead of the bad guys. We can evaluate a massive amount of data at lightning speed, so we can detect and quickly respond to anything that may happen,” says Monique Shivanandan, CISO at HSBC. Most IT executives (93%) are already using or considering implementing AI and ML to strengthen their cybersecurity tech stacks.

CISOs and their teams are particularly concerned about machine-based cyberattacks because such attacks can adapt faster than enterprises’ defensive AI can react. According to a study by BCG, 43% of executives have reported increased awareness of machine-speed attacks. Many executives believe they cannot effectively respond to or prevent advanced cyberattacks without using AI and ML.

With the balance of power in AI and ML attack techniques leaning toward cybercriminals and nation-states, enterprises rely on their cybersecurity providers to fast-track AI and ML next-gen solutions. The goal is to use AI and ML to defend enterprises while ensuring the technologies deliver business value and are feasible. Here are the defensive areas where CISOs are most interested in seeing progress:

Opting for transaction fraud detection early when adopting AI and ML to defend against automated attacks

CISOs have told VentureBeat that economic uncertainty and supply chain shortages have driven increased use of AI- and ML-based transaction fraud detection systems. These systems use ML techniques to monitor payment transactions in real time and flag anomalies or potentially fraudulent activity. AI and ML are also used to monitor login activity and prevent account takeovers, a common form of online retail fraud.
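
As a rough sketch of how such systems score transactions, the following uses scikit-learn’s IsolationForest on synthetic data; the feature set (amount, hour, time since last transaction) is illustrative, not any vendor’s actual schema.

```python
# Anomaly-based transaction screening: fit on normal behavior,
# flag incoming transactions that deviate from it.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Columns: amount (USD), hour of day, minutes since the last transaction.
normal = np.column_stack([
    rng.lognormal(3.5, 0.6, 5000),  # typical purchase amounts
    rng.integers(7, 23, 5000),      # daytime activity
    rng.exponential(600, 5000),     # widely spaced transactions
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Score incoming transactions in near real time; -1 flags an anomaly.
incoming = np.array([
    [45.0, 14, 300.0],   # ordinary afternoon purchase
    [9800.0, 3, 0.5],    # large 3 a.m. charge, seconds after the last one
])
print(model.predict(incoming))  # expect [ 1 -1 ]
```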

Fraud detection and identity-spoofing prevention are converging as CISOs and CIOs seek a single, scalable platform that uses AI to protect all transactions. Leading vendors in this field include Accertify, Akamai, Arkose Labs, BAE Systems, Cybersource, IBM, LexisNexis Risk Solutions, Microsoft and NICE Actimize.

Defending against ransomware, a continuing high priority

CISOs tell VentureBeat their goal is to use AI and ML to achieve a multilayered security approach that combines technical controls, employee education and data backup. Required capabilities for AI- and ML-based product suites include identifying ransomware, blocking malicious traffic, identifying vulnerable systems, and providing real-time analytics based on telemetry data captured from diverse systems.

Leading vendors include Absolute Software, VMware Carbon Black, CrowdStrike, Darktrace, F-Secure and Sophos. Absolute Software has analyzed the anatomy of ransomware attacks and provided critical insights in its study, How to Boost Resilience Against Ransomware Attacks.

Absolute Software’s analysis of ransomware attacks highlights the importance of implementing cybersecurity training, regularly updating antivirus and antimalware software, and backing up data to a separate, non-connected environment to prevent such attacks. Source: Absolute Software, How to Boost Resilience Against Ransomware Attacks

Implementing AI- and ML-based systems that improve behavioral analytics and authentication accuracy

Endpoint protection platform (EPP), endpoint detection and response (EDR), and unified endpoint management (UEM) systems, as well as some public cloud providers such as Amazon AWS, Google Cloud Platform and Microsoft Azure, are using AI and ML to improve security personalization and enforce least privileged access.

These systems use predictive AI and ML to analyze patterns in user behavior and adapt security policies and roles in real time, based on factors such as login location and time, device type and configuration, and other variables. This approach has improved security and reduced the risk of unauthorized access.
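
A simplified, hypothetical sketch of that adaptive approach: contextual login signals feed a risk score that selects a policy. The signals, weights and thresholds below are invented for illustration, not any platform’s actual rules.

```python
# Context-aware risk scoring for adaptive authentication.
from dataclasses import dataclass

@dataclass
class LoginContext:
    country: str
    hour: int             # 0-23, server local time
    known_device: bool
    failed_attempts: int

def risk_score(ctx: LoginContext, usual_countries: set[str]) -> int:
    """Accumulate risk from contextual signals; weights are illustrative."""
    score = 0
    if ctx.country not in usual_countries:
        score += 40       # unfamiliar geography
    if ctx.hour < 6 or ctx.hour > 22:
        score += 15       # outside normal working hours
    if not ctx.known_device:
        score += 25       # unrecognized device fingerprint
    score += min(ctx.failed_attempts * 10, 30)
    return score

def policy(score: int) -> str:
    if score >= 60:
        return "block"    # deny and alert
    if score >= 30:
        return "step-up"  # require MFA
    return "allow"

ctx = LoginContext(country="RO", hour=3, known_device=False, failed_attempts=2)
print(policy(risk_score(ctx, usual_countries={"US", "GB"})))  # "block"
```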

Combining ML and natural language processing (NLP) to discover and protect endpoints

Attack surface management (ASM) systems are designed to help organizations manage and secure their digital attack surface: the sum of all the vulnerabilities and potential entry points attackers can use to gain network access. ASM systems typically use various technologies, including AI and ML, to analyze an organization’s assets, identify vulnerabilities and provide recommendations for addressing them.

Gartner’s 2022 Innovation Insight for Attack Surface Management report explains that attack surface management (ASM) consists of external attack surface management (EASM), cyberasset attack surface management (CAASM) and digital risk protection services (DRPS). The report also predicts that by 2026, 20% of companies (versus 1% in 2022) will have a high level of visibility (95% or more) of all their assets, prioritized by risk and control coverage, through implementing CAASM functionality.

Leading vendors in this area are combining ML algorithms and NLP techniques to discover, map and define endpoint security plans to protect every endpoint in an organization.
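
One way to picture that ML-plus-NLP pairing is a classifier that maps free-text service banners from discovered endpoints to asset categories. The sketch below uses TF-IDF features and logistic regression; the banners and labels are made up for illustration.

```python
# Classify discovered assets from their service banners so a security
# plan can be assigned per asset class.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

banners = [
    "Apache httpd 2.4.41 Ubuntu Server",
    "nginx 1.18.0 reverse proxy",
    "OpenSSH 8.2p1 Ubuntu",
    "Microsoft-IIS/10.0 ASP.NET",
    "Dropbear sshd 2020.81",
    "PostgreSQL 12.3 on x86_64",
]
labels = ["web", "web", "remote-access", "web", "remote-access", "database"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(banners, labels)

# Newly discovered endpoint: map it to an asset class before assigning policy.
print(clf.predict(["OpenSSH 9.0 Debian"]))  # expect ['remote-access']
```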

Automating indicators of attack (IOAs) using AI and ML to thwart intrusion and breach attempts

AI-based indicators of attack (IOA) systems strengthen existing defenses by using cloud-based ML and real-time threat intelligence to analyze events as they occur and dynamically issue IOAs to the sensor. The sensor then compares the AI-generated IOAs (behavioral event data) with local and file data to determine whether they are malicious.
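
In generic terms, that issue-and-match flow looks something like the sketch below. This is illustrative only, not CrowdStrike’s actual indicator format or matching logic; every field name here is invented.

```python
# A sensor receives behavioral indicators and matches them against
# local event data, combined with known-bad file intelligence.
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    process: str   # process that must appear in the event
    behavior: str  # behavior the event must exhibit

@dataclass
class Event:
    process: str
    behavior: str
    file_hash: str

def evaluate(event: Event, indicators: list[Indicator],
             known_bad_hashes: set[str]) -> list[str]:
    """Flag the event if it matches a behavioral indicator or known-bad file."""
    hits = [i.name for i in indicators
            if i.process == event.process and i.behavior == event.behavior]
    if event.file_hash in known_bad_hashes:
        hits.append("known-bad-file")
    return hits

indicators = [Indicator("credential-dump", "lsass.exe", "memory-read")]
event = Event(process="lsass.exe", behavior="memory-read", file_hash="abc123")
print(evaluate(event, indicators, known_bad_hashes={"deadbeef"}))
# ['credential-dump']
```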

According to CrowdStrike, its AI-based IOAs operate alongside other layers of sensor defense, such as sensor-based ML and existing IOAs. They are based on a common platform developed by the company over a decade ago. These IOAs have effectively identified and prevented real-time intrusion and breach attempts based on adversary behavior.

These AI-powered IOAs use ML models trained on telemetry data from CrowdStrike Security Cloud, combined with expertise from the company’s threat-hunting teams, to analyze events in real time and identify potential threats. The IOAs operate at machine speed, providing the accuracy, speed and scale organizations need to prevent breaches.

One of the key features of CrowdStrike’s use of AI in IOAs is the ability to collect, analyze and report on a network’s telemetry data in real time, providing a continuously recorded view of all network activity. This has proven an effective approach to identifying potential threats. Source: CrowdStrike

Relying on AI and ML to improve UEM protection for every device and machine identity

UEM systems rely on AI, ML and advanced algorithms to manage machine identities and endpoints in real time, enabling the installation of updates and patches necessary to keep each endpoint secure.

Absolute Software’s Resilience platform, the industry’s first self-healing zero-trust platform, is notable for its asset management, device and application control, endpoint intelligence, incident reporting and compliance, according to G2 Crowd’s ratings.

Containing the AI and ML cybersecurity threat in the future

Enterprises are losing the AI war because cybercriminal gangs and nation-states are faster to innovate and quicker to capitalize on longstanding enterprise weaknesses, starting with unprotected or overconfigured endpoints. CISOs tell VentureBeat they’re working with their top cybersecurity partners to fast-track new AI- and ML-based systems and platforms to meet the challenge. With the balance of power leaning toward attackers and cybercriminal gangs, cybersecurity vendors need to accelerate roadmaps and provide next-generation AI and ML tools soon.

Kevin Mandia, CEO of Mandiant, observed that the cybersecurity industry has a unique and valuable role to play in national defense. He observed that while the government protects the air, land and sea, private industry should see itself as essential to protecting the cyberdomain of the free world.

“I always like to leave people with that sense of obligation that we are on the front lines, and if there is a modern war that impacts the nation where you’re from, you’re going to find yourself in a room during that conflict, figuring out how to best protect your nation,” Mandia said during a “fireside chat” with George Kurtz at CrowdStrike’s Fal.Con conference earlier this year. “I’ve been amazed at the ingenuity when someone has six months to plan their attack on your company. So always be vigilant.”
