28 October 2019

“AUGMENTED INTELLIGENCE WARRIOR”: AN ARTIFICIAL-INTELLIGENCE AND MACHINE-LEARNING ROADMAP FOR THE MILITARY

Scott Humr 

Changes in military technological paradigms have a way of sneaking up on us. Complacency, rooted in confirmation bias, encourages the belief that new technologies will not change the character of war. Yet the ascent of artificial intelligence and machine learning has the potential to upend the status quo character of war. This is particularly problematic for the United States military, since the status quo has meant American warfighting preeminence. To be sure, these two capabilities (AI and ML) will make the security challenges of the new millennium more nuanced and inscrutable, especially as these technologies become actors themselves on the battlefield. Even as we acknowledge this, the problem will be compounded by the type of overconfidence that is perhaps an inevitable byproduct of decades of US military dominance: overconfidence that envisions Americans overcoming any and all obstacles by harkening back to past successes, but fails to recognize that the conditions that drove this recent historical primacy no longer hold in an era of accelerating AI and ML progress. If we succumb to it, such overconfidence risks underwriting a complacency that glosses over the changing conditions of warfare, which will require more than the indomitable human spirit (something over which Americans hold no monopoly) to fight effectively, let alone win. As with similar military miscalculations of the past, arrogant faith in US invincibility will deliver disastrous results.


Superior weapons and technology once gave the United States a military advantage, but AI and ML are leveling the playing field by allowing adversaries to achieve parity, and in some cases superiority, in specific capabilities. With the proliferation of inexpensive drone technology, ubiquitous sensors, and the Internet of Things, the complex security challenges of this new era will make many military encounters even more difficult than those of the recent past. Not only have these technologies lowered the barriers to entry; they are also quickening the speed at which decisions are made and are set to outpace the limits of human cognition. To be sure, AI and ML applications are a centerpiece of the Department of Defense’s “third offset” strategy, which aims to keep the United States ahead of its adversaries in the coming years. But more can be done. Specifically, DoD can face these intractable twenty-first-century challenges head on by adopting new approaches to fast-track AI and ML development. Expediting the use of these technologies across a larger user base establishes processes that allow every servicemember to contribute to training AI and ML applications, ultimately laying the groundwork for a DoD-wide program to help tackle the approaching security challenges of this new era.

Future security challenges will defy easy solutions and will not limit themselves to any particular unit, battlefield, or area of operation. They will also require new security postures and demand more resources to address an increasing variety of attack vectors the United States has not traditionally had to confront. From swarming technologies and hacked autonomous vehicles to poisoned databases, such realities mean that more than just a few warfighting laboratories and secretive testing branches must have access to AI and ML technologies. In reality, these technologies are already available to anybody with an Internet connection and access to data. Moreover, extracting benefits from state-of-the-art AI and ML no longer requires a PhD in Bayesian networks. For instance, Google has already made its flagship ML library, TensorFlow, open source. Amazon Web Services offers AI and ML stacks for developing insights into an organization’s data. Massive open online courses offer free instruction on AI and ML technologies. Likewise, these and other cloud-based technologies have already met stringent US government requirements for security and are available to DoD organizations. DoD can therefore purchase access to AI and ML technologies through cloud-based services in a way that allows servicemembers at all levels to begin experimenting and finding insights in their organizations’ vast stores of data and information, insights that could contribute directly to warfighting effectiveness.
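To make this concrete, consider how little code a first experiment requires. The sketch below uses TensorFlow’s open-source Keras API to train a toy classifier; the “readiness” features and labels are synthetic and purely illustrative, not a fielded use case.

```python
# A minimal sketch, assuming nothing more than the open-source TensorFlow
# library. The "readiness" features and labels are synthetic, generated
# only to show how little code a first experiment requires.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
# Hypothetical unit-level features, e.g., maintenance hours, parts backlog,
# crew experience (all invented for illustration).
X = rng.normal(size=(500, 3)).astype("float32")
# Toy label: "mission capable" when a weighted score of the features is positive.
y = (X @ np.array([0.8, -0.5, 0.3], dtype="float32") > 0).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(3,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [loss, accuracy] on the toy data
```

Nothing here requires special hardware or a data-science degree; it runs on a laptop in seconds.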

The Department of Defense cannot wait for the glacial pace of traditional acquisitions to deliver this technology. By adopting a lean-startup mentality and pursuing a low-risk path that minimizes cost of ownership, DoD can avoid delivering a perfect (but perfectly outdated) system five to ten years from now. Moreover, soldiers, sailors, airmen, and Marines will have gained confidence in using these technologies on a regular basis and will stand better postured for their arrival on the future battlefield. For example, we will one day use AI and ML technologies to analyze vast amounts of social media, human intelligence reports, and organic sensor data, extracting key insights for understanding the operating environment that may elude human perception. Yet if we do not begin learning these technologies today, we will find ourselves far behind our adversaries. DoD should not rely solely on a few experts or a cadre of contracted support specialists for the skills the future battlefield will demand.
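At its very simplest, “finding insights” in free-text reporting can be as basic as surfacing recurring terms. The toy example below does exactly that over a few fabricated report snippets; a real pipeline would use far richer natural-language processing, but even this runs anywhere Python does.

```python
# A toy illustration of surfacing themes in free-text reporting. The report
# snippets are fabricated; a real pipeline would use proper NLP tooling.
from collections import Counter
import re

reports = [
    "Checkpoint traffic heavy near the market; two trucks turned back.",
    "Market crowd agitated after checkpoint delays this morning.",
    "Drone observed loitering over the market before the convoy passed.",
]

STOPWORDS = {"the", "a", "an", "of", "near", "over", "after", "this", "two",
             "before", "back", "turned"}
words = Counter(
    w for report in reports
    for w in re.findall(r"[a-z']+", report.lower())
    if w not in STOPWORDS
)
print(words.most_common(3))  # e.g., [('market', 3), ('checkpoint', 2), ...]
```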

Projected future threats will exact an increasing cost in terms of the human cognitive capacity dedicated to understanding complex security problems. By limiting these challenges to only a few dedicated planners, developers, or units, DoD will miss remarkable opportunities to leverage the contributions of the larger force. Instead, we should work toward a future where all members contribute to refining AI and ML applications through supervision and feedback mechanisms that improve the outcomes of these technologies. Metcalfe’s Law, which states that the value of a network is proportional to the square of the number of its users, applies here: the right approach can yield improvements that compound rapidly as the contributor base grows. Cloud-based applications would allow personnel anywhere to contribute their unique domain expertise toward improving AI and ML outcomes. For example, an ML application designed to optimize patrol or convoy routes through a particular megacity could take inputs from Marines in the barracks who have played a similar scenario within the Instrumented Tactical Engagement Simulation System. Merged with up-to-date sensor inputs, tactical decision games, and lessons-learned databases, ML applications could provide a virtual version of The Defence of Duffer’s Drift, in which optimization produces better outcomes as more members contribute to the overall performance of the system, a sort of Reachback 2.0. Marines could therefore directly enhance outcomes in complex environments they may never experience personally.
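A minimal sketch of that feedback loop might look like the following, with an invented road network and scoring rule: each servicemember’s rating re-weights the graph, so every contribution nudges the recommended route.

```python
# A minimal sketch of the feedback loop described above, not any fielded
# system: route ratings from servicemembers re-weight a toy road network,
# so each contribution nudges the recommended route. All names are invented.
import networkx as nx

# Toy road network: edges carry a base travel cost.
G = nx.Graph()
G.add_weighted_edges_from([
    ("gate", "market", 4), ("gate", "bypass", 6),
    ("market", "objective", 3), ("bypass", "objective", 2),
])

def apply_feedback(graph, edge, rating):
    """Scale an edge's cost by crowd feedback: positive ratings mean 'risky,'
    negative mean 'clear.' Each point of rating shifts the cost 10 percent."""
    u, v = edge
    graph[u][v]["weight"] *= 1 + 0.1 * rating

# Marines who played a similar simulated scenario flag the market leg as risky.
apply_feedback(G, ("market", "objective"), rating=4)

print(nx.shortest_path(G, "gate", "objective", weight="weight"))
# ['gate', 'bypass', 'objective'] once feedback tips the balance
```

Before the feedback, the market route wins on raw travel cost (seven versus eight); after enough “risky” reports, the recommendation flips to the bypass.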

Access to such technologies would also have second- and third-order effects, improving personnel performance and organizational-knowledge capture. These technologies could give servicemembers the opportunity, in their off-time and regardless of MOS, to work through complex scenarios facilitated by gamified ML applications, gaining important experiences that contribute to warfighting effectiveness. A global leaderboard could even be implemented to incentivize participation, harness healthy competition, and provide a novel way of comparing the performance of similar personnel across DoD organizations. Equally important, such behaviors help capture the critical organizational knowledge and combat experience that traditionally lie untapped in the minds of many. Such knowledge, gained from years of experience, need not disappear when servicemembers exit the armed forces. Rather, every member can have the opportunity to leave an indelible mark on AI and ML outcomes through a virtual profile. Looking even farther forward, these profiles would constitute a sort of “virtual warrior avatar” that, combined with a servicemember’s individual training records, could generate insights into the unique combinations of background, education, experience, and training that produce the best problem solvers and most active contributors. Overall, this endeavor should seek to construct a portfolio of AI and ML applications that function well across a variety of domains. It might not produce a virtual Clausewitz, but it envisions a future where these technologies come together to develop something that might one day be called the “augmented intelligence warrior,” or AIW.
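The leaderboard itself is mundane engineering. As a hypothetical sketch, ranking contributors by an invented “contribution score” might look like this; how such a score would actually be computed is the hard, unsolved part.

```python
# A hypothetical sketch of the global leaderboard idea. The names and
# contribution scores are invented; only the ranking mechanics are shown.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Entry:
    score: float
    name: str = field(compare=False)

def top_n(contributions: dict, n: int = 3) -> list:
    """Return the n highest-scoring contributors as (name, score) pairs."""
    entries = (Entry(score, name) for name, score in contributions.items())
    return [(e.name, e.score) for e in heapq.nlargest(n, entries)]

print(top_n({"Cpl Adams": 91.2, "Sgt Baker": 88.5,
             "LCpl Cruz": 95.0, "Capt Diaz": 90.1}))
# [('LCpl Cruz', 95.0), ('Cpl Adams', 91.2), ('Capt Diaz', 90.1)]
```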

AIW would become the guiding vision for tackling the complex security issues of the new millennium. It would sit atop the many domain-specific AI and ML applications, interacting with them through application programming interfaces (APIs) that allow members across the services to interact with it or query it for information. AIW would fuse the best domain-specific AI applications to augment the user and provide novel insights into problems. AIW could also take on different personas depending on the requirements of each problem. If a unit lacks a particular subject-matter expert, it could query AIW for recommendations (and provide feedback to improve future results). If a small unit in a remote location needs a cultural adviser, AIW could fill that role through a secure smartphone app.
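What might querying AIW look like in practice? The sketch below is purely hypothetical: the endpoint URL, payload fields, and response shape are all invented, since no such service exists. The point is that putting an API in front of the system makes both querying and feedback trivial to build into any application.

```python
# A purely hypothetical sketch: the endpoint URL, payload fields, and
# response shape are invented. Only Python's standard library is used.
import json
import urllib.request

AIW_BASE = "https://aiw.example.mil/api/v1"  # placeholder URL

def query_aiw(persona: str, question: str) -> dict:
    """Ask AIW a question in a given persona (e.g., 'cultural-adviser')."""
    payload = json.dumps({"persona": persona, "question": question}).encode()
    req = urllib.request.Request(
        f"{AIW_BASE}/query", data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def send_feedback(query_id: str, helpful: bool) -> None:
    """Close the loop: user feedback improves future AIW answers."""
    payload = json.dumps({"query_id": query_id, "helpful": helpful}).encode()
    req = urllib.request.Request(
        f"{AIW_BASE}/feedback", data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req).close()
```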

Similar to the way virtual assistants have improved as more and more people use them (their training is essentially democratized), AIW could be brought into hundreds of different conference groups across the various war colleges, where military and civilian students would help its AI and ML applications learn. It could participate in group discussions across the joint community to enhance its own capabilities and provide potential insights to forward-deployed commanders, or become a virtual professor. Furthermore, AIW would capture exercise data from students conducting planning exercises and course-of-action wargaming with each other, against AI, or in AI-versus-AI scenarios. The end product could provide better advice to leaders facing various challenges, who could be confident that a recommended course of action had been tested against thousands or even millions of different permutations. AI and ML technologies stand to help solve complex security issues by connecting the collective intelligence of the entire force, and they will not stop there.
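The brute-force testing idea is easy to illustrate. In the toy Monte Carlo evaluation below, the engagement model, course-of-action names, and scores are all fabricated stand-ins for a real simulation; what matters is the pattern of scoring each option across many randomized permutations.

```python
# A minimal Monte Carlo sketch of course-of-action wargaming. The scenario
# model is a toy stand-in for a real simulation; every number is invented.
import random

COURSES_OF_ACTION = ["flank_left", "flank_right", "frontal", "infiltrate"]

def simulate(coa: str, rng: random.Random) -> float:
    """Toy engagement model: one randomized permutation of enemy behavior."""
    base = {"flank_left": 0.55, "flank_right": 0.50,
            "frontal": 0.35, "infiltrate": 0.60}[coa]
    return base + rng.uniform(-0.25, 0.25)  # randomized enemy response

def evaluate(trials: int = 100_000) -> dict:
    """Average each COA's success score over many permutations."""
    rng = random.Random(42)
    return {coa: sum(simulate(coa, rng) for _ in range(trials)) / trials
            for coa in COURSES_OF_ACTION}

scores = evaluate()
print(max(scores, key=scores.get), scores)  # best-performing COA
```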

AIW would also become a virtual sentinel and adviser for servicemembers’ online activities. While notions of complete privacy are practically impossible today, given the ubiquity of cameras, smartphones, satellite imagery, and other formal and informal surveillance devices, AIW could become a virtual guardian angel for all online personas. AIW could curate personal records and scan social media to provide alerts and updates that help ensure servicemembers are protected against violations of operational security, phishing attempts, fake news, and scams that directly affect the readiness of the force. AIW, in combination with each servicemember’s virtual warrior avatar, would recommend courses, training, books, and a host of other activities aimed at improving individual readiness. These technologies could offer advice on physical fitness, specific knowledge, or cultural understanding wherever AIW and the virtual warrior avatar determine a servicemember is deficient, or where such preparation would help with upcoming assignments. Most importantly, these technologies would work together to improve outcomes in complex security environments by targeting individual training standards often not covered in unit-level or service-wide training, increasing the overall readiness of future forces for the widest possible range of environments.
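Even the sentinel role can start simply. The sketch below flags posts that trip a few rule-based OPSEC and phishing patterns; the rules are illustrative only, and a real guardrail would pair them with learned models.

```python
# A toy sketch of the "virtual sentinel" idea: flag outgoing posts that
# match patterns commonly associated with OPSEC risk or phishing. The
# rules are illustrative inventions, not a real detection system.
import re

RULES = {
    "possible deployment date": re.compile(
        r"\b(deploy(?:ing|ment)?)\b.*\b\d{1,2}/\d{1,2}\b", re.I),
    "suspicious link": re.compile(
        r"https?://\S*\b(login|verify|account)\b", re.I),
    "unit movement detail": re.compile(
        r"\b(convoy|flight)\b.*\b(tomorrow|tonight)\b", re.I),
}

def scan(post: str) -> list:
    """Return the names of any rules the post trips."""
    return [name for name, pattern in RULES.items() if pattern.search(post)]

print(scan("Big convoy rolling out tomorrow, wish us luck!"))
# ['unit movement detail']
```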

Adopting these recommendations poses many challenges, and even discussing them highlights the uncertainties of AI and ML technologies. To be sure, there are important considerations that must be addressed. But unless we truly believe these technologies will play no role in future military activities, on the battlefield or off, we have a professional obligation to have an honest conversation about them and to consider the best ways to harness their capabilities. Early AI and ML applications in technologies such as self-driving cars, search results, and stock-trading algorithms have demonstrated significant failings. However, many self-driving cars have driven millions of accident-free miles, a feat few human drivers could ever claim. Potential breaches of personally identifiable information also raise concerns. But training data sets consisting of sanitized data can mitigate PII concerns and be used for experimentation until encryption and security measures meet rigorous DoD standards. There are also general (but mostly inapplicable) fears to contend with, such as the worry that AI and ML are taking away occupations, and apocalyptic ones, such as the claim that they pose an existential risk to humanity. Certainly, AI and ML are taking over jobs that once belonged to humans, but most of those occupations were routine, predictable, or monotonous. AI and ML applications have also created new opportunities to perform higher-level tasks that provide more meaningful employment for humans. And the AI and ML track record shows performance over time that is truly unmatched.

Unfortunately, these simplistic criticisms will take hold and stop DoD from capitalizing on these technologies’ power if it does not create an environment for advancing innovative AI and ML applications that can help meet the challenges of the twenty-first century. To break free from the current prison of conventional wisdom, DoD needs to experiment with these technologies, starting today. By expanding servicemembers’ access to AI and ML technologies for experimentation, DoD can take an important step toward harnessing these technologies, developing future applications that marshal the collective knowledge of the entire force, and standing postured for the difficult road that lies ahead.

Maj. Scott A. Humr is an officer in the United States Marine Corps. He holds a master’s degree in information technology from the Naval Postgraduate School and a master’s degree in military studies from the Marine Corps Command and Staff College. He is currently serving with Task Force 51/5th Marine Expeditionary Brigade.
