
20 September 2023

AI Models Under Attack: Protecting Your Business From AI Cyberthreats

Stu Sjouwerman

Machine learning (ML) and artificial intelligence (AI) are rapidly unlocking new business opportunities and influencing every industry. That said, AI comes with its own set of risks: threat actors are known to employ a range of novel techniques to exploit weaknesses in AI models, security standards and processes, to dangerous effect. Organizations keen on leveraging AI must be aware of these risks so they can build AI systems that are resilient to cyberattacks.

How Cybercriminals Attack AI Models

The relatively new MITRE security framework dubbed “ATLAS” (short for Adversarial Threat Landscape for AI Systems) describes a number of tactics and techniques cybercriminals can use to exploit AI. Some of the top methods include:

1. Data Poisoning

Machine learning and AI models work by analyzing vast amounts of data to learn patterns and then make predictions or decisions. This underlying mechanism creates opportunities for novel attacks on AI and ML-based systems. If attackers inject malicious training data, ML models learn incorrect information and subsequently make faulty, fraudulent or malicious predictions. For instance, an attacker could poison a credit card fraud detection algorithm with fake transactions, resulting in the AI-powered scanner overlooking fraudulent transactions.
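
As a rough illustration of how a handful of mislabeled records can blunt a fraud detector, the sketch below flips the labels on a slice of synthetic "fraud" transactions before training; the dataset, model choice and poisoning rate are illustrative assumptions, not details from any real incident.

```python
# Minimal sketch of label-flipping data poisoning against a synthetic fraud classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic, imbalanced "transactions": class 1 = fraud, class 0 = legitimate
X, y = make_classification(n_samples=5000, n_features=10, weights=[0.9, 0.1],
                           class_sep=2.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean labels
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Attacker relabels half of the fraud examples as "legitimate"
y_poisoned = y_train.copy()
fraud_idx = np.where(y_poisoned == 1)[0]
y_poisoned[fraud_idx[: len(fraud_idx) // 2]] = 0

poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

fraud_test = X_test[y_test == 1]
print("fraud caught (clean model):   ", clean.predict(fraud_test).mean())
print("fraud caught (poisoned model):", poisoned.predict(fraud_test).mean())
```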

2. Evasion Attacks

Evasion attacks fool or circumvent AI systems by exploiting vulnerabilities in the model’s algorithm or detection mechanism. If bad actors gain an understanding of how an AI model works and the features it uses to reach a decision, they can craft inputs that evade or fool it. Attackers have, for example, evaded AI-based facial recognition systems simply by donning a T-shirt with a face printed on it.
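
The core idea can be sketched with a deliberately simple linear classifier: if an attacker knows the model's weights, they can nudge an input just far enough to cross the decision boundary. The dataset and linear model below are illustrative assumptions; real evasion attacks target far more complex models.

```python
# Minimal sketch of an evasion (adversarial example) attack on a linear classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Stand-in for an AI-based detector
X, y = make_classification(n_samples=2000, n_features=20, n_informative=10, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Pick a sample the model currently flags as class 1 ("malicious")
x = X[model.predict(X) == 1][0]
print("before perturbation:", model.predict(x.reshape(1, -1))[0])

# For a linear model the decision score is w.x + b; stepping each feature against
# sign(w) lowers the score by epsilon * sum|w|, so a step just large enough to push
# the score below zero flips the prediction.
w, b = model.coef_[0], model.intercept_[0]
score = x @ w + b
epsilon = (score + 0.1) / np.abs(w).sum()
x_adv = x - epsilon * np.sign(w)
print("after perturbation: ", model.predict(x_adv.reshape(1, -1))[0])
```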

3. Model Theft

Imagine a situation where attackers somehow steal the source code or obtain the AI model itself. Once they have access to the model, they can study its responses to various inputs, design malicious prompts by identifying its weaknesses and even reverse engineer it to recover information about its training data (a.k.a. model inversion attacks). For example, if attackers steal the source code of a stock trading algorithm, they could hijack it to place unauthorized trades. Similarly, an attacker with access to an AI model that predicts heart disease could conceivably use model inversion to infer details of a person’s medical history.
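
The extraction side of this threat can be sketched with a toy example: an attacker who can only query a victim model records its answers and trains a surrogate on the query/response pairs. The victim model, query budget and surrogate choice below are all illustrative assumptions.

```python
# Minimal sketch of model extraction via query access.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

# "Victim" model the attacker can query but not inspect
X, y = make_classification(n_samples=3000, n_features=8, random_state=2)
victim = RandomForestClassifier(random_state=2).fit(X, y)

# Attacker sends synthetic queries and records the victim's answers
queries = np.random.RandomState(0).normal(size=(5000, 8))
stolen_labels = victim.predict(queries)

# Surrogate trained purely on the stolen query/response pairs
surrogate = DecisionTreeClassifier(random_state=0).fit(queries, stolen_labels)

agreement = (surrogate.predict(X) == victim.predict(X)).mean()
print(f"surrogate agrees with victim on {agreement:.0%} of inputs")
```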

4. Supply Chain Compromise

Like most technology, AI has a supply chain. At its core are algorithms, which are implemented in code libraries built by multiple developers who in turn build the AI models. MLOps teams are responsible for ensuring ML models are scalable and deployed securely and reliably. If any of these developers or testers are socially engineered or have their credentials stolen, the ML code can be infected or exposed. In a large organization, the AI supply chain can include multiple teams, cloud services and third-party partners; if any of these parties is attacked or infiltrated, AI models can be compromised.
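
One small, concrete safeguard against tampered artifacts moving through that chain is to pin and verify a cryptographic hash of every model or library file before it is loaded. The sketch below simulates the idea with a throwaway file; the file name and contents are placeholders.

```python
# Minimal sketch of pinned-hash verification for a third-party model artifact.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

# Simulate a vetted artifact and pin its digest at review time
artifact = Path("fraud_detector.bin")
artifact.write_bytes(b"trusted model weights")
pinned = sha256_of(artifact)

# Simulate a compromised artifact appearing later in the pipeline
artifact.write_bytes(b"tampered model weights")

# Before loading, refuse anything that no longer matches the pin
if sha256_of(artifact) != pinned:
    print("Artifact hash mismatch; refusing to load the model.")
```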

5. Backdoor AI Model

An AI backdoor is another method adversaries use to maliciously modify or update an AI algorithm. Say criminals gain access to the server where models are stored. They upload a trojan model that appears identical to the original but has been trained on different data. As a result, an image classification model that was originally designed to detect a certain type of weapon no longer detects it, eventually resulting in a serious security failure.
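
The sketch below shows, in miniature, how a trojan model can behave normally until a trigger appears: a fixed pattern stamped into a few poisoned training samples teaches the classifier to output the attacker's chosen label whenever the pattern is present. The data, trigger and model are illustrative assumptions.

```python
# Minimal sketch of a backdoored ("trojan") classifier with a trigger pattern.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(4000, 64))               # stand-in for flattened 8x8 images
y = (X[:, :8].sum(axis=1) > 0).astype(int)    # 1 = "weapon", 0 = "benign"

def stamp_trigger(samples):
    out = samples.copy()
    out[:, -4:] = 5.0                         # fixed pattern in the corner "pixels"
    return out

# Attacker adds poisoned samples: trigger present, label forced to "benign"
n_poison = 400
X_poison = stamp_trigger(X[:n_poison])
y_poison = np.zeros(n_poison, dtype=int)

model = LogisticRegression(max_iter=1000).fit(
    np.vstack([X, X_poison]), np.concatenate([y, y_poison])
)

weapons = X[y == 1][:200]
print("weapons detected (clean input):    ", model.predict(weapons).mean())
print("weapons detected (trigger stamped):", model.predict(stamp_trigger(weapons)).mean())
```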

How Can Businesses Protect Their AI Systems?

About 20% of businesses have suffered an attack on their AI models in the past 12 months. (This is most likely only the beginning.) There are protocols organizations can adopt to mitigate the risk of cyberattacks on AI systems, including:

1. Subject Models To Adversarial Training

By injecting normal as well as adversarial samples into the training set (a.k.a. adversarial training), AI can be taught to identify malicious prompts, abuse and brute force attempts. This boosts the model's ability to detect and block malicious activity.
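
A minimal sketch of the idea, assuming a simple linear classifier: perturbed copies of the training data are mixed back in with their correct labels, so the retrained model holds up better against the same kind of perturbation. The data, model and perturbation size are illustrative assumptions.

```python
# Minimal sketch of adversarial training by augmenting the training set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=4000, n_features=20, n_informative=10, random_state=3)
baseline = LogisticRegression(max_iter=1000).fit(X, y)

def perturb(model, X, y, eps=0.5):
    """Worst-case nudge for a linear model: move each sample toward the wrong class."""
    direction = np.where(y == 1, -1, 1)[:, None] * np.sign(model.coef_[0])
    return X + eps * direction

# Retrain on clean + adversarial samples, keeping the true labels
X_adv = perturb(baseline, X, y)
robust = LogisticRegression(max_iter=1000).fit(np.vstack([X, X_adv]), np.concatenate([y, y]))

# Attack each model with perturbations crafted against it and compare accuracy
print("baseline accuracy under attack:", baseline.score(perturb(baseline, X, y), y))
print("robust   accuracy under attack:", robust.score(perturb(robust, X, y), y))
```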

2. Secure Virtual Repositories

Most AI models are deployed in cloud platforms so that devices and applications can query them easily. However, cloud storage can be attacked or hijacked by cybercriminals. This is why it's crucial to host AI models in a secure environment with strict access controls. If feasible, apply digital signatures to trained and pre-trained models using public key cryptography to preserve the authenticity and integrity of the model.
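
As one way to implement such signatures, the sketch below signs a model artifact with an Ed25519 key pair using the third-party cryptography package. The file name and contents are hypothetical placeholders, and a real deployment would keep the private key in a release pipeline or key vault rather than in code.

```python
# Minimal sketch of signing and verifying a model artifact with public key cryptography.
from pathlib import Path
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

artifact = Path("fraud_detector.bin")
artifact.write_bytes(b"model weights")          # stand-in for a real serialized model

# Publisher signs the artifact with a private key held only by the release pipeline
private_key = Ed25519PrivateKey.generate()
signature = private_key.sign(artifact.read_bytes())

# Consumers hold only the public key and verify before loading
public_key = private_key.public_key()
try:
    public_key.verify(signature, artifact.read_bytes())
    print("Signature valid; safe to load the model.")
except InvalidSignature:
    print("Signature check failed; refusing to load the model.")
```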

3. Train Employees

Educate employees about the consequences of cyberattacks on AI models and the offensive techniques threat actors use to compromise systems. Run phishing simulation exercises combined with classroom training to teach employees how to identify social engineering attempts and prevent credential theft and breaches. Create formal policies around AI that clearly spell out what is acceptable and what is not.

4. Mask Your AI Models

Using techniques such as model obfuscation, AI developers can mask or hide key information (neural network structure, format, attributes, etc.) about the AI model. This reduces an attacker’s ability to parse or extract the model’s inner workings, making it much harder to reverse engineer the algorithm.
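
Obfuscation tooling varies by framework, but a complementary, low-tech illustration of the same goal is to expose the model only through a narrow serving interface that returns labels and reveals nothing about scores, weights or architecture. The wrapper and model below are illustrative assumptions, not a specific obfuscation product.

```python
# Minimal sketch of masking a model behind a narrow prediction-only interface.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

class OpaqueModel:
    """Serve predictions without exposing the underlying estimator's internals."""

    def __init__(self, estimator):
        self.__estimator = estimator   # name-mangled to discourage casual introspection

    def predict(self, X):
        # Return hard labels only; confidence scores make extraction attacks easier
        return self.__estimator.predict(X)

X, y = make_classification(n_samples=1000, n_features=10, random_state=4)
served = OpaqueModel(LogisticRegression(max_iter=1000).fit(X, y))
print(served.predict(X[:3]))
```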

5. Use Multilayered Threat Detection

Deploy multilayered cybersecurity controls to help block threats at every stage of the ATLAS life cycle. For example, intrusion detection and prevention systems (IDS/IPS) can detect threats in real time, while anomaly detection tools can flag atypical patterns, activities and behaviors, prompting further investigation.
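
As a small example of the anomaly detection layer, the sketch below trains an isolation forest on synthetic model-query telemetry (requests per minute and payload size per client) and flags outliers such as a sudden query burst. The features and thresholds are illustrative assumptions.

```python
# Minimal sketch of anomaly detection over model-query telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Typical clients: ~50 requests/min, ~2 KB payloads
normal = rng.normal(loc=[50, 2.0], scale=[10, 0.5], size=(1000, 2))
# Suspicious clients: a query burst (possible extraction) and oversized payloads
suspect = np.array([[900, 2.0], [60, 40.0]])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
flags = detector.predict(np.vstack([normal[:5], suspect]))   # -1 marks anomalies
print(flags)
```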

6. Embed Security And Privacy

It is critical that AI models have privacy and security baked in by default. Leverage secure coding practices, and encrypt sensitive data using strong algorithms such as AES-256. Preserve the privacy of users by handling data securely throughout its life cycle. Audit and monitor AI models continuously to proactively identify instances of abuse, breach or manipulation.
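
A minimal sketch of the encryption step, assuming the third-party cryptography package and AES-256 in GCM mode; the sample record is a placeholder, and key management (storing the key in a KMS or HSM rather than in code) is out of scope here.

```python
# Minimal sketch of encrypting a sensitive record at rest with AES-256-GCM.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # in practice, fetch from a key management service
aesgcm = AESGCM(key)

record = b'{"patient_id": 123, "label": "heart_disease"}'
nonce = os.urandom(12)                      # must be unique per encryption
ciphertext = aesgcm.encrypt(nonce, record, None)

# Decryption fails loudly if the ciphertext or nonce has been tampered with
assert aesgcm.decrypt(nonce, ciphertext, None) == record
print("record encrypted and round-tripped successfully")
```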

AI models present an attack surface just as unpatched software and human error do. Anything subject to manipulation calls for both technical controls and security education. Adopting new protocols like these is critical to building a resilient defense against attacks on AI models.
