William J. Barry and Blair Wilcox
The US Army is at an inflection point. Geostrategic and technological shifts demand rapid adaptation. On May 1, the secretary of the Army and the Army chief of staff published a letter to the force announcing several initiatives to deliver warfighting capabilities, optimize force structure, and eliminate waste. Among the guidance to increase warfighting lethality, the Army's seniormost civilian and uniformed leaders noted the requirement to shift toward capability-based portfolios that integrate AI into command-and-control nodes to accelerate decision-making and preserve the initiative.

At the US Army War College, new approaches to AI capabilities are both a concept and a reality. The neocentaur model, which describes human-hybrid intelligence across the levels of war, has been tested in the classroom and in strategic wargaming. Furthermore, our ongoing research offers a technical solution: deterministic AI capabilities that are better suited for military use when lives are on the line. To maintain military superiority, the United States must adopt a human-hybrid approach, the neocentaur model, that leverages deterministic rather than purely generative models to mitigate the risks of cognitive atrophy and formulaic decision-making.
The Problem: Machine Limitations and Military Decision-Making
Current research on the impact of generative AI on critical thinking should give military leaders pause. Cognitive off-loading to autonomous agents, for example, may deprive staff officers of the "routine opportunities to practice their judgement and strengthen their cognitive musculature, leaving them atrophied and unprepared." Survey research of 319 "knowledge workers" funded by Microsoft determined that generative AI solutions reinforce shifts in critical thinking away "from information gathering to information verification," "from problem-solving to AI response integration," and "from task execution to task stewardship." Generative AI course-of-action development tools, for example, appealing in their ability to reduce the cognitive load on a staff and potentially free manpower, may have unintended consequences. To be fair, the thinking required to edit a 70 percent solution from a generative AI course-of-action tool may demand some degree of creativity. However, the automation bias inherent in human psychology makes it likely that staffs will accept machine solutions uncritically, particularly under the duress of combat operations. This tendency is reinforced by David Hume's hypothesis that people favor what is already established, "imbu[ing] the status quo with an unearned quality of goodness, in the absence of deliberative thought." What was intended as a tool to augment human intellect begins, instead, to direct human cognition. This automation bias, a natural drift toward a minotaur relationship in which the machine commands and the human executes (call it "minotaur drift"), is a persistent threat with generative AI solutions. It must be recognized and avoided at the strategic level and permitted at the operational and tactical levels only by deliberate fiat.