21 October 2020

Will Commanders Trust Their New AI Weapons and Tools?

BY MARGARITA KONAEV

As the Department of Defense races to develop AI-enabled tools and systems, there are outstanding questions about exactly where its investments are going and what benefits and risks might result. One key unknown: will commanders and troops trust their new tools enough to make them worth the effort?

Drawing on publicly available budgetary data about the DOD science and technology program, the Center for Security and Emerging Technology, or CSET, examined the range of autonomy and AI-related research and development efforts advanced by the U.S. Army, Navy, Marines, Air Force, and DARPA. Among the main research lines are programs dedicated to increasing automation, autonomy, and AI capabilities in air, ground, surface, and undersea unmanned vehicles and systems; increasing the speed and accuracy of information processing and decision making; and increasing precision and lethality throughout the targeting process. We recently released a two-part analysis of the scope and implications of these efforts. One of the most consistent themes is an emphasis on human-machine collaboration and teaming.

In the public imagination, integrating AI into military systems foretells a future in which machines replace humans on the battlefield and wartime decisions are made without human input or control. In our assessment of U.S. military research on emerging technologies, however, humans remain very much in the loop.

The U.S. Army’s flagship Next Generation Combat Vehicle research program is a good example of human-machine teaming cutting across different use cases and applications of autonomy and AI. One of its research activities develops AI and ML software and algorithms to “team with soldiers in support [of] fully autonomous maneuver of NGCV and other autonomous systems, both physical and non-embodied.” This effort is meant to develop capabilities for NGCV that will “increase autonomy, unburdening the soldier operator, with a high degree of survivability and lethality in a highly contested environment.” Another applied NGCV research initiative looks to new “ML and reinforcement learning methods” to enable “joint human-intelligent agent decision making, optimizing the strengths of each in the decision process and creating an adaptive, agile team.” In other words, this research envisions intelligent technologies that interact, collaborate, and integrate with soldiers, optimizing performance for both humans and human-machine teams across different environments and missions.

DARPA’s applied research on “Artificial Intelligence and Human Machine Symbiosis” takes the human-machine teaming concept a step further, aiming to develop technologies that “enable machines to function not only as tools that facilitate human action but as trusted partners to human operators.”

Trust is central to human-machine teaming because it affects the willingness of humans to use autonomous and AI-enabled systems and to accept their recommendations. CSET research, however, identifies important gaps in U.S. military research on the role of trust in human-machine teams, as well as gaps in investment toward building trustworthy AI systems. In fact, the “Budgetary Assessment” report, which analyzes the 2020 U.S. military science and technology program, finds that only 18 of the 789 research components related to autonomy and 11 of the 287 research components related to AI mention the word “trust.”

As the Department of Defense’s AI ethics principles dictate, humans are ultimately responsible for the development, use, and outcomes of AI systems in both combat and non-combat situations. If left unaddressed, gaps in research on the role of trust in human-machine teams could hinder the development and fielding of AI systems in operational environments.

AI-enabled decision-support technologies, for instance, are meant to help commanders process and assess more options for action and make better, faster decisions in complex and time-sensitive situations. Advances in real-time analytics and recommender systems could help warfighters adapt and gain the initiative in dynamic operational settings. But if human operators don’t trust a system, they will be reluctant to follow its recommendations. Without trust in human-machine teams, then, the U.S. military may not be able to capitalize on the advantages in speed and precision that AI promises to deliver.

Understanding the role of trust in human-machine teams is also important for effective collaboration with allies. Some research suggests that trust in AI varies by country of origin. If commanders from some allied countries are less willing to trust and use AI-enabled systems during multinational operations, such divergence could undermine coordination, interoperability, and overall effectiveness.

Finally, beyond research into human attitudes toward technology, progress toward advanced human-machine teaming will depend on making the AI systems themselves trustworthy, in part by making them more transparent, explainable, auditable, reliable, and resilient.

The U.S. military has several S&T programs focused explicitly on increasing AI system robustness and resilience, strengthening security against deception and adversarial attacks, and developing systems that behave reliably in operational settings. DARPA’s “AI Next Campaign” is a prominent effort. Yet as previously noted, in our assessment, most of the research programs related to autonomy and AI don’t mention the safety and security attributes that are so pertinent to trustworthiness. Beyond hampering progress toward advanced human-machine teaming, our “Strategic Assessment” report finds that failing to consistently invest in reliable, trustworthy, and resilient AI systems could also increase the risk of miscalculation, misuse, and unintended escalation, especially in contested environments such as the South China Sea.

Today’s research and development investments will set the course for the future of AI in national security. By directing more attention to the role of trust in human-machine teams, the U.S. military could accelerate the development and fielding of intelligent agents and systems as trusted partners to human operators across different missions and environments. And by prioritizing reliable and secure AI systems, the United States can ensure its long-term military, technological, and strategic advantages.
