12 December 2021

Poison in the Well: Securing the Shared Resources of Machine Learning

Andrew Lohn

Executive Summary

Progress in machine learning depends on trust. Researchers often place their advances in a public well of shared resources, and developers draw on those to save enormous amounts of time and money. Coders use the code of others, harnessing common tools rather than reinventing the wheel. Engineers use systems developed by others as a basis for their own creations. Data scientists draw on large public datasets to train machines to carry out routine tasks, such as image recognition, autonomous driving, and text analysis. Machine learning has accelerated so quickly and proliferated so widely largely because of this shared well of tools and data.

But the trust that so many place in these common resources is a security weakness. Poison in this well can spread, affecting the products that draw from it. Right now, it is hard to verify that the well of machine learning is free from malicious interference. In fact, there are good reasons to be worried. Attackers can poison the well’s three main resources—machine learning tools, pretrained machine learning models, and datasets for training—in ways that are extremely difficult to detect.

Machine learning tools

These tools—which handle tasks like laying out neural networks and preprocessing images—consist of millions of lines of incredibly complex code. The code is likely to contain accidental flaws that can be easily exploited if discovered by an attacker. Because these tools are built by thousands of contributors around the world, there is also ample opportunity for malicious contributors to intentionally introduce vulnerabilities of their own. The risk is not hypothetical; vulnerabilities in the tools already enable attackers to fool image recognition systems or illicitly access the computers that use them.
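One concrete touchpoint for this trust is the download step itself. The sketch below is a hypothetical illustration, not taken from the report: it verifies a downloaded toolkit archive against a checksum published out of band (the file name and digest are placeholders). Note that such a check only confirms that the download matches what the maintainers published; it says nothing about flaws or backdoors already present in the code itself.

    import hashlib
    from pathlib import Path

    def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
        """Compute the SHA-256 digest of a downloaded archive."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Placeholder values: the archive name and the "known good" digest would
    # come from the tool's maintainers, published separately from the download.
    archive = Path("ml_toolkit-2.4.1.tar.gz")
    expected = "0123456789abcdef..."  # published checksum (placeholder)

    if sha256_of(archive) != expected:
        raise RuntimeError("Archive does not match the published checksum; do not install.")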

Pretrained machine learning models

It is becoming standard practice for researchers to share systems that have been trained on data from real-world examples, enabling the systems to perform a particular task. With pretrained systems widely available, other machine learning developers do not need large datasets or large computing budgets. They can simply download those models, immediately achieve state-of-the-art performance, and use those capabilities as a foundation for training even more capable machine learning systems. The danger is that if a pretrained model is contaminated in some way, all the systems that depend on it may also be contaminated. Such poison in a system is easy to hide and hard to spot.
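The inheritance problem follows directly from how pretrained models are typically reused. The sketch below is a minimal illustration using the PyTorch/torchvision API (an example workflow, not one described in the report): published weights are downloaded and only a small new output layer is trained, so nearly all of the original parameters flow into the downstream system unchanged, along with any hidden behavior they encode.

    import torch
    import torchvision

    # Download a published, pretrained image classifier. Every system built on
    # these weights inherits whatever behavior they encode, intended or not.
    backbone = torchvision.models.resnet18(
        weights=torchvision.models.ResNet18_Weights.DEFAULT
    )

    # Typical fine-tuning: freeze the shared backbone and retrain only a small
    # head for a new task, so most of the downloaded weights pass through unchanged.
    for param in backbone.parameters():
        param.requires_grad = False
    backbone.fc = torch.nn.Linear(backbone.fc.in_features, 10)  # e.g., 10 new classes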

Datasets for training

Researchers who have gathered many examples useful for training a machine to carry out a particular task—such as millions of labeled pictures to train image recognition systems—regularly share their work with others. Other developers can train their own systems on these datasets, focusing on algorithmic refinements rather than the painstaking work of gathering new data. But a risk emerges: It is easy for attackers to undermine a dataset by quietly manipulating a small portion of its contents. This can cause all machine learning systems trained on the data to learn false patterns and fail at critical times.
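To see why quietly manipulating a small portion of a dataset is enough, consider the hypothetical sketch below (illustrative only, not code from the report). It stamps a tiny "trigger" patch onto roughly one percent of a training set and relabels those examples; a model trained on the result can learn to output the attacker's chosen class whenever the trigger appears, while behaving normally otherwise.

    import numpy as np

    def poison(images, labels, target_label, fraction=0.01, rng=None):
        """Illustrative backdoor poisoning of a small fraction of a dataset.

        Assumes images are arrays of shape (N, H, W) or (N, H, W, C) with
        pixel values in [0, 1]. A small white patch is stamped in a corner of
        the chosen images and their labels are rewritten, so a model trained
        on the data can associate the patch with the target class.
        """
        if rng is None:
            rng = np.random.default_rng(0)
        images, labels = images.copy(), labels.copy()
        idx = rng.choice(len(images), size=int(fraction * len(images)), replace=False)
        images[idx, -4:, -4:] = 1.0   # 4x4 trigger patch in one corner
        labels[idx] = target_label    # relabel the poisoned examples
        return images, labels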

Machine learning has become a battleground among great powers. Machine learning applications are increasingly high-value targets for sophisticated adversaries, including Chinese and Russian government hackers who have carried out many operations against traditional software. Given the extent of the vulnerabilities and the precedent for attacks, policymakers should take steps to understand and reduce these risks.

Understand the risk:

Find the attacks before they find you: The defense and intelligence communities should continue to invest in research to find new ways to attack these resources.

Empower machine learning supply chain advocates: Offices across government should hire staff to understand the threats to the machine learning supply chain.

Monitor trends in resource creation and distribution: Machine learning resources should be included in assessments of supply chain risks.

Identify the most critical AI components: Defense and intelligence communities should maintain a list of the most critical resources to prioritize security efforts.

Create a repository of critical resources: The most critical resources should be evaluated for security and made available from a trusted source.

Reduce the risk:

Create detection challenges and datasets: Federal departments and agencies should consider competitions with associated datasets to generate new solutions to secure shared resources and detect their compromise.

Establish machine learning attack red teams: Tech companies are starting to stand up AI red teams; government bodies should consider doing the same.

Fund basic cleanup and hygiene: Congress should consider authorizing grants to shore up machine learning resources as it has already done for cybersecurity.

Establish which systems or targets are off-limits: The United States should initiate an international dialogue on the ethics of attacking these resources.

Maintain dominance in creation and distribution of resources: The U.S. government should continue to support domestic tech companies and the culture of sharing innovations while refraining from aggressions that would drive rival nations off these sharing platforms.
