13 February 2021

The Department of Defense's Posture for Artificial Intelligence

by Danielle C. Tarraf

DoD should adapt its AI governance structures to align authorities and resources with its mission of scaling AI.

The JAIC should develop a five-year strategic roadmap that is backed by baselines and metrics.

Each centralized AI service organization should develop a five-year strategic roadmap also backed by baselines and metrics.

There should be annual or biannual portfolio reviews of DoD investments in AI.

The JAIC should organize an annual or biannual workshop that showcases DoD's AI programs.

DoD should advance the science and practice of verification, validation, testing, and evaluation (VVT&E) of AI systems.

All funded AI efforts should include a budget for AI VVT&E.

All agencies within DoD should create or strengthen mechanisms for connecting AI researchers, technology developers, and operators.

DoD should recognize data as a critical resource.

The chief data officer should make some DoD data sets available to the AI community.

DoD should embrace permeability — and an appropriate level of openness — as a means of enhancing DoD's access to AI talent.

Once again, artificial intelligence (AI) is at the forefront of our collective imaginations, offering promises of what it can do to solve our most challenging problems. As the news headlines suggest, the U.S. Department of Defense (DoD) is no exception when it comes to falling under the AI spell. But is DoD ready to leverage AI technologies and take advantage of the potential associated with them, or does it need to take major steps to position itself to use those technologies effectively and safely and scale up their use? This is a question that Congress, in its 2019 National Defense Authorization Act (NDAA), and the Director of DoD's Joint Artificial Intelligence Center (JAIC) asked RAND Corporation researchers to help them answer. This research brief summarizes that report.

Artificial Intelligence and DoD

The term artificial intelligence was first coined in 1956 at a conference at Dartmouth College that showcased a program designed to mimic human thinking skills.[1] Almost immediately, the Defense Advanced Research Projects Agency (DARPA, then known as the Advanced Research Projects Agency, or ARPA), the research arm of the military, initiated several lines of research aimed at applying AI principles to defense challenges (see Figure 1). Since the 1950s, AI — and its subdiscipline, machine learning (ML)[2] — has come to mean many different things to different people: for example, the 2019 NDAA cited as many as five definitions of AI, and no consensus on a common definition emerged from the dozens of interviews conducted by the RAND team for its report to Congress.

To remain as flexible as possible, the RAND study was not bound by precise definitions, asking instead, "How well is DoD positioned to build or acquire, test, transition, and sustain — at scale — a set of technologies broadly falling under the AI umbrella?" And if DoD falls short, what would it need to do to get there?
