
16 December 2020

Artificial intelligence in war: Human judgment as an organizational strength and a strategic liability

Avi Goldfarb and Jon Lindsay

Artificial intelligence has the potential to change the conduct of war. Recent excitement about AI is driven by advances in the ability to infer predictions from data. Yet this does not necessarily mean that machines can replace human decision-makers. The effectiveness of AI depends not only on the sophistication of the technology but also on how organizations use it for particular tasks. Where decision problems are well defined and relevant data are plentiful, machines may indeed be able to replace humans. In the military context, however, such situations are rare: military problems tend to be ambiguous, and reliable data scarce. We therefore expect AI to increase the need for military personnel who can determine which data to collect, which predictions to make, and which decisions to take.

The complementarity of machine prediction and human judgment has important implications for military organizations and strategy. If AI systems depend heavily on human values and interpretations, then even junior personnel will need to make sense of political considerations and local context in order to guide AI in dynamic operational situations. This, in turn, will give adversaries incentives to counter or undermine the human competencies that underwrite AI-enabled military advantages. If AI becomes good at predicting the solution to a given problem, for instance, a savvy adversary will attempt to change the problem. AI-enabled conflicts thus have the potential to drag on with ambiguous results, embroiled in controversy and plagued by crises of legitimacy. For all of these reasons, we expect that greater reliance on AI for military power will make the human element in war even more important, not less.
