17 June 2023

Advancing Equitable Decisionmaking for the Department of Defense Through Fairness in Machine Learning

Irineo Cabreros, Joshua Snoke, Osonde A. Osoba, Inez Khan, Marc N. Elliott

Research Questions

What are DoD's equity goals?

Which ML technologies under development for personnel management interact with these goals?

How do these equity goals compare with technical definitions of equity found in the literature?

How can DoD develop algorithms that meet both DoD's equity goals and broader institutional objectives?

The U.S. Department of Defense (DoD) places a high priority on promoting diversity, equity, and inclusion at all levels throughout the organization. Simultaneously, it is actively supporting the development of machine learning (ML) technologies to assist in decisionmaking for personnel management. There has been heightened concern about algorithmic bias in many non-DoD settings, whereby ML-assisted decisions have been found to perpetuate or, in some cases, exacerbate inequities.

This report aims to equip both policymakers and developers of ML algorithms for DoD with the tools and guidance necessary to avoid algorithmic bias when using ML to aid human decisions. The authors first provide an overview of DoD's equity priorities, which typically center on issues of representation and equal opportunity within its personnel. They then provide a framework that enables ML developers to build equitable tools. This framework emphasizes that enforcing equity involves inherent trade-offs that must be considered when developing equitable ML algorithms.

The authors facilitate the weighing of these trade-offs by providing a software tool, called the RAND Algorithmic Equity Tool, that can be applied to common ML classification algorithms used to support binary decisions. This tool allows users to audit the equity properties of their algorithms, modify algorithms to attain equity priorities, and weigh the costs that attaining equity imposes on other, non-equity priorities. The authors demonstrate the tool on a hypothetical ML algorithm used to influence promotion selection decisions, which serves as an instructive case study.

The tool the team developed in the course of completing this research is available on GitHub: https://github.com/RANDCorporation/algorithmic-equity-tool.
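Although the tool itself lives in the GitHub repository above, a minimal Python sketch can convey the kind of audit it performs. The function, data, and column names below are hypothetical illustrations, not the tool's actual interface; the sketch simply computes per-group selection rates and error rates for a binary classifier, which are the basic quantities behind common equity definitions.

```python
# A minimal sketch (not the RAND tool's actual API) of a binary-classifier
# equity audit: compare selection rates and error rates across groups.
import numpy as np
import pandas as pd

def audit_binary_classifier(df, group_col="group",
                            label_col="promoted", pred_col="predicted"):
    """Per-group selection rate, true positive rate, and false positive rate."""
    rows = []
    for group, sub in df.groupby(group_col):
        y, yhat = sub[label_col], sub[pred_col]
        rows.append({
            group_col: group,
            "selection_rate": yhat.mean(),   # P(Yhat=1 | group)
            "tpr": yhat[y == 1].mean(),      # P(Yhat=1 | Y=1, group)
            "fpr": yhat[y == 0].mean(),      # P(Yhat=1 | Y=0, group)
        })
    return pd.DataFrame(rows)

# Hypothetical promotion-selection data, for illustration only.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": rng.choice(["A", "B"], size=1000),
    "promoted": rng.integers(0, 2, size=1000),
    "predicted": rng.integers(0, 2, size=1000),
})
print(audit_binary_classifier(df))
```

In an audit of this style, large gaps in selection rate across groups point to demographic-parity violations, while gaps in true or false positive rates point to equalized-odds violations.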

Key Findings

With respect to DoD's equity goals, the authors identify three principles that may be linked to mathematical notions of equity: (1) career entry and progression should be free of discrimination with respect to protected attributes, such as race, religion, and sex; (2) career placement and progression within DoD should be based on merit; and (3) DoD should represent the demographics of the country it serves.
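To see how such principles can be linked to mathematical notions of equity, one plausible mapping onto standard criteria from the algorithmic fairness literature is sketched below; this reading is illustrative, not the report's official translation. The criteria are stated for a binary predictor Ŷ, outcome Y, and protected attribute A.

```latex
% Standard fairness criteria for a binary predictor \hat{Y}, outcome Y,
% and protected attribute A. The mapping to the three principles is an
% illustrative assumption, not taken from the report.
\text{Demographic parity (cf. principles 1 and 3):}\quad
  P(\hat{Y}=1 \mid A=a) = P(\hat{Y}=1) \quad \text{for all } a
\text{Equalized odds (cf. principle 1):}\quad
  P(\hat{Y}=1 \mid Y=y,\, A=a) = P(\hat{Y}=1 \mid Y=y) \quad \text{for all } a, y
\text{Calibration (cf. principle 2, merit):}\quad
  P(Y=1 \mid \hat{Y}=1,\, A=a) = P(Y=1 \mid \hat{Y}=1) \quad \text{for all } a
```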

Although DoD uses ML technologies in several sectors (e.g., in intelligence and surveillance), the authors find that DoD does not currently rely on ML technologies in personnel management. However, DoD has begun investing significantly in research on ML algorithms for personnel management. Developing a framework and tools for considering equity now can help mitigate potential pitfalls in the eventual deployment of such algorithms.

Three important lessons from the algorithmic fairness literature guide the design of the authors' framework and the RAND Algorithmic Equity Tool. First, there are many definitions of equity, each with subtly different implications. Second, it is generally impossible to attain multiple types of equity simultaneously; attaining one form of equity typically necessitates violating another. Finally, enforcing equitable behavior in an algorithm typically comes at the cost of other performance priorities, such as overall predictive accuracy.
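A toy numeric example (with hypothetical base rates, not figures from the report) makes the second lesson concrete: when underlying outcome rates differ across groups, even a perfect predictor cannot satisfy demographic parity and equalized odds at once.

```python
# Hypothetical base rates P(Y=1 | group), assumed for illustration only.
base_rate = {"A": 0.50, "B": 0.20}

# A perfect predictor (Yhat = Y) satisfies equalized odds trivially:
# TPR = 1 and FPR = 0 in every group. But its selection rate equals the
# base rate, so demographic parity fails whenever base rates differ.
for group, p in base_rate.items():
    print(f"group {group}: selection rate = {p:.2f}")

# Output: 0.50 vs. 0.20 -- a demographic-parity gap of 0.30. Forcing equal
# selection rates would require flipping some predictions, which breaks
# equalized odds. This is the kind of trade-off the tool surfaces.
```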

Recommendations

DoD should audit algorithms that pose an equity risk. Algorithms used to aid high-stakes decisions about individuals must be audited to ensure that they meet the equity goals of their particular application. This includes auditing both the performance properties of the algorithms and the data used to train them.
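As one illustration of the data-side audit, a short Python sketch (with hypothetical column names and reference shares) might compare group representation in the training data against a reference population:

```python
import pandas as pd

def representation_gap(train, group_col, reference):
    """Training-set group shares minus reference population shares."""
    observed = train[group_col].value_counts(normalize=True)
    return observed.sub(pd.Series(reference), fill_value=0.0)

# Hypothetical training data and reference shares, for illustration only.
train = pd.DataFrame({"group": ["A"] * 700 + ["B"] * 300})
print(representation_gap(train, "group", {"A": 0.6, "B": 0.4}))
# Positive values indicate over-representation relative to the reference.
```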

DoD should increase the specificity of its equity priorities. Both auditing and enforcing equity priorities in ML algorithms require translating those priorities into concrete, mathematical concepts. Current DoD equity policies typically lack the specificity needed for this translation. The authors recommend that DoD move toward more concrete language in specifying its equity goals; to do so, DoD should consider adopting equity definitions developed in the algorithmic fairness literature.

DoD should consider using ML as an aid to human personnel management decisions. Although ML algorithms risk introducing algorithmic bias, the authors do not believe that the alternative of human-only decisions is preferable. The ability to both audit and constrain an ML algorithm to meet equity priorities is a considerable strength over a human-only decision process.
