
Will Humans Matter In The Wars Of 2030? – Analysis

By Andrew Herr
July 7, 2015

Much of the future-looking discussion in national security circles today focuses on autonomous systems and cyber weapons. Largely missing from this discussion is a place for humans on the battlefield. Do today’s emerging and potentially disruptive technologies mean that humans will no longer be important in future warfare? A look at historical military operations and current technologies suggests the proper response is that, to paraphrase Mark Twain, reports of man’s obsolescence have been exaggerated.
Back to the Future?

This is not the first time analysts have argued that human performance would be significantly less important in future combat. Stepping back to the 1960s, Navy and Air Force planners saw the radar and air-to-air missile age as forcing humans to take a backseat to technology. Missiles were the unmanned aerial vehicles (UAVs) of their day—unmanned, high-tech systems to match the speed and technology of advanced warfare. In their proponents’ vision, fighters would not get close enough to each other for dogfighting skills to matter, so the U.S. military largely discontinued specialized air-combat tactics training and even purchased the F-4 fighter without an internal gun.

The Vietnam War proved to be a rude awakening for the aviation community. The Navy and Air Force expected to have a major advantage over the North Vietnamese air force, but both Services were losing one plane for every two they destroyed in the first half of the air war. By 1969, both had serious initiatives to improve their performance. The Air Force diagnosed a failure of technology, and it spent its resources on improving missile and aircraft performance. In contrast, the Navy identified a failure in training. This led the Navy to establish the Navy Fighter Weapons School (better known as TOPGUN), which gave pilots realistic air combat training. The results speak for themselves. From 1970 to 1973, the Navy was killing more than 12 North Vietnamese planes for every loss, while the Air Force had not improved at all.1

While this demonstrates the importance of humans in the context of 1970s technology, will 2030s technologies change this calculus?
Insights from Future-Looking Wargames

Some potential answers to this question flow from a series of recent wargames sponsored by the Department of Defense (DOD) Rapid Reaction Technology Office. To identify what DOD should watch closely, the NeXTech wargames focused on technology trends by examining how the United States and competitors might use them (and might use them differently), their potential impact, and the legal, ethical, and policy issues these technologies could generate.

First and foremost, the structure of the wargames shows some areas where we rely on humans and are likely to continue doing so. While focused on future technologies, the wargames did not look anything like a futuristic military environment. Participants gathered in conference rooms to discuss scenarios outlined on paper. Although some wargames use computer simulations and sophisticated data presentation, the NeXTech environment was representative of the majority of wargames conducted for DOD. This is not a criticism; the structure made sense because the focus was on extracting ideas and judgments from people, not computer simulations. We still rely on human expertise because computers simply cannot match it.

The same is true of intelligence analysis. While analysts use software and other tools to aid their work, the final judgment lies in the hands of people. The story of Palantir Technologies, a high-flying provider of software to the U.S. national security community, highlights this. The story begins in the early days of PayPal, when the Russian mafia and other criminal organizations were stealing so much money through fraudulent transactions that the company was in danger of failing. As a Silicon Valley company, PayPal hired top computer scientists coming out of Stanford to design an automated system to catch fraudulent transactions, but the initial attempts failed. PayPal succeeded only when the programmers changed course and designed a system whose purpose was not to solve the problem on its own, but to help humans sort through large amounts of data to identify fraud. This software and the approach behind it gave birth to Palantir. If the growth of Palantir Technologies in the national security and commercial spaces is any measure, myriad organizations agree.
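The pattern PayPal settled on—software that ranks and surfaces suspicious activity for human analysts rather than deciding on its own—can be illustrated with a minimal sketch. The code below is purely hypothetical: the fields, heuristics, and thresholds are invented for illustration and do not describe PayPal's or Palantir's actual systems.

```python
# Illustrative "human-in-the-loop" triage sketch only; all field names and
# thresholds are hypothetical, not PayPal's or Palantir's actual approach.
from dataclasses import dataclass

@dataclass
class Transaction:
    tx_id: str
    amount_usd: float
    account_age_days: int
    country_mismatch: bool  # billing country differs from IP country

def risk_score(tx: Transaction) -> float:
    """Crude heuristic score; higher means more suspicious."""
    score = 0.0
    if tx.amount_usd > 1000:
        score += 2.0
    if tx.account_age_days < 7:
        score += 1.5
    if tx.country_mismatch:
        score += 1.0
    return score

def triage_for_analysts(txs, top_n=10):
    """Rank transactions and hand the riskiest to a human analyst,
    rather than blocking them automatically."""
    return sorted(txs, key=risk_score, reverse=True)[:top_n]

if __name__ == "__main__":
    sample = [
        Transaction("a1", 2500.0, 2, True),
        Transaction("a2", 40.0, 900, False),
        Transaction("a3", 1200.0, 30, False),
    ]
    for tx in triage_for_analysts(sample, top_n=2):
        print(tx.tx_id, risk_score(tx))
```

The design choice is the point: the software narrows the haystack, and the human makes the judgment call on each flagged transaction.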

Google’s autonomous cars also demonstrate the value of human input to computers. Image recognition systems cannot effectively pick out a stoplight while driving down a street, but once programmers give the computer the mapped location of each stoplight, determining whether it is red, yellow, or green becomes trivial.2 Thus, today, humans are instrumental, and a broader lesson emerges: there are tasks where humans excel and tasks where computers exceed human capabilities, and computers appear unlikely to close many of these gaps by 2030, even as research on cognitive computing and the structure of the brain progresses.
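As an illustration of why a human-supplied prior makes the problem easy, the sketch below classifies the color of a light once its pixel location in the camera frame is already known. It is a hypothetical toy, not Google's implementation; the region coordinates and hue thresholds are assumptions.

```python
# Hypothetical sketch: with a mapped, human-verified location telling the
# system where the stoplight sits in the frame, color classification reduces
# to checking which hue dominates that small region.
import numpy as np

def classify_light(frame_hsv, box):
    """frame_hsv: HxWx3 image in HSV (hue 0-179, OpenCV convention).
    box: (top, left, bottom, right) pixel bounds of the mapped light."""
    top, left, bottom, right = box
    hues = frame_hsv[top:bottom, left:right, 0].ravel()
    counts = {
        "red": int(np.sum((hues < 10) | (hues > 170))),
        "yellow": int(np.sum((hues >= 20) & (hues <= 35))),
        "green": int(np.sum((hues >= 45) & (hues <= 90))),
    }
    return max(counts, key=counts.get)
```

Without the box supplied by mapping, the system would first have to find the light anywhere in a cluttered street scene—the part of the problem that, in 2015, still leaned heavily on human input.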

Even a group of leading scientists in neuroscience and biology argued in a 2012 paper that we are still in the early days of this work. Researchers still principally focus on single neurotransmitters (which carry certain messages in the brain) and a few neurons at a time, while there are approximately 100 neurotransmitters and 100 billion neurons that interact in ways that create emergent properties. Multiple highly funded research projects are starting or have recently started to develop a more holistic understanding of the brain. These will advance the field, but as Santiago Ramón y Cajal, one of the fathers of neuroscience, described, the neurons and the synapses can be like “impenetrable jungles where many investigators have lost themselves.”3

This is not to say that we should not be vigilant for unanticipated, nonlinear advances in science and technology, but today’s scientific and technological landscape suggests that the human brain will still substantially outperform computers in the highest level cognitive tasks in 2030. Furthermore, the competition is not simply between the brain and computers, but rather between computers and humans augmented by computers.
Humans or Computers? Both.

Garry Kasparov and the world of chess provide a valuable insight into the human-computer relationship. After decades of humans easily beating computers, Kasparov barely beat IBM’s Deep Blue machine in 1996, and a year later, the IBM computer won. The enormous computational power of machines could now outmatch the best humans. This is not, however, the end of the story. Fascinated by the power of computers, but still recognizing the strengths of the human brain, Kasparov began to organize what he called Advanced Chess, games where human-computer teams competed against one another. Even as chess computers advanced, humans with relatively simple chess programs dominated chess-specific supercomputers. Perhaps even more interestingly, the winners are not necessarily grandmasters with high-end computers. In early tournaments, the organizers were surprised to find that chess novices who were expert at manipulating the computers beat the grandmasters with their computers.4 Thus, while the type of skills required changed, the human brain still gave a major advantage.

New approaches to computer algorithms and interface design will continue to enhance the joint performance of humans and computers, so for autonomous computers to reach primacy, their development will have to outpace not only humans, but also the advancing performance of human-computer teams. Taken together, these examples strongly suggest that areas such as operational planning, intelligence analysis, and command will almost certainly stay within the human realm.
Stuck at the Back Making Decisions?

While planning, command, and intelligence analysis are all crucial aspects of war, they represent only a fraction of the roles military personnel fill today, and they might be pushed to the rear if autonomous systems controlled the battlefield. However, as long as humans have an advantage in the areas of creativity and judgment, we will have a major role on the front lines. Today’s special operations missions are one example: when missions involve significant uncertainty, demand adaptation on the fly, and carry the chance of major reversals, the adaptability of humans is invaluable.

Consider the complexity of the Osama bin Laden raid. Almost immediately upon arrival, one of the helicopters crashed. Once the special operators entered the compound, they needed to protect themselves (just as machines would need to), but they did not want to kill unarmed women and children, so they had to operate based on a combination of tactical and ethical inputs. Then people from the neighborhood started to approach the compound, and the team needed to handle an additional potential threat. Meanwhile, the mission not only required the identification and killing or capture of bin Laden, but it also extended to intelligence collection, gathering computers and files.

While it is possible to program some of these activities and contingencies into autonomous systems, this is no simple task, and we are still far from a world where autonomous systems can face the essentially unlimited complexity of the modern battlefield with the skill of humans. It appears that, for some time into the future, humans will continue to excel in diverse missions such as this one. Certainly, the bin Laden raid was special in terms of importance and sensitivity, but all military missions require multiple judgment calls and adaptations throughout their duration, whether or not they are undertaken by special operators. To some extent, commanders could direct systems remotely, but the human brain is tailored to operate in conjunction with our senses, so not being present may rob humans—and thus, our human-computer teams—of part of our effectiveness. Being on the battlefield also enables human-human interaction, which is important for engaging local populaces and, to some extent, enemy forces, such as captured soldiers.

Furthermore, remote control requires connectivity, and this is not guaranteed on the battlefield of today or tomorrow. The issue of connectivity and the value of having military personnel in the midst of operations are highlighted by some of the very same technology trends that commentators suggest have the potential to replace traditional human roles. The simultaneous belief in the future effectiveness of autonomous systems and effective cyber tools is striking.

During one scenario played out in the NeXTech wargames, a fictional naval force sailed toward an island chain that the wargamers were assigned to defend. To do so, they chose to deploy cyber tools against the ships’ command and control systems to wreak havoc with their defensive systems and disable their engines in a sort of “on demand” Stuxnet attack. If the United States—or potential adversaries—is able to achieve this level of effectiveness with cyber tools, autonomous systems may be especially vulnerable because of the lack of humans in the loop who might be able to override certain commands or at least recognize that something is amiss. This creates a cyber-autonomy paradox: powerful cyber tools can turn autonomous systems, usually an asset, into a liability.

Humans are in no way perfect, of course, but our ability to identify patterns and integrate information holistically is superior to that of computers in many situations and can help maintain situational awareness. Furthermore, without humans in the loop, it may be difficult for commanders to know when systems have been compromised, as feedback from a compromised system may not accurately represent its status, location, or activities. Humans will not be able to intervene against all types of attacks—shutting down an engine on an aircraft would still be catastrophic—but we may be able to intervene against misleading signals from sensors and other challenges.

The value of this is highlighted by a number of stories from the past few years that demonstrate that not all aspects of military systems are protected. In 2009, the media reported that Iraqi insurgents were viewing the video recorded by Predator UAVs in Iraq using $26 software because the signals transmitting the video to personnel on the ground were not encrypted.5 This particular weakness might not have put the aircraft themselves at risk, but it shows the difficulty of mitigating all potential weaknesses. Furthermore, it is worth remembering Joy’s Law (named after Sun Microsystems co-founder Bill Joy), which holds that no matter who you are, most of the smartest people work for someone else. No matter how good our systems are, the majority of the best cyber operators and hackers will always be outside DOD.

Thus, while humans are hardly a cure-all for cyber attacks—we often enable the attacks by clicking on the wrong link or using flash drives—people may be able to mitigate the impact of certain types of attacks, such as inaccurate location information being fed into systems. We may also be able to communicate the problem so that commanders can engage defensive teams and systems to mitigate the effects of attacks. This does not mean that humans need to be on every platform, but it does suggest that it will be important to have humans near the frontlines.

The value of keeping humans in the loop to respond to erroneous data is perhaps best illustrated by the story of Stanislav Petrov. Then a lieutenant colonel in the Soviet Air Defense Forces, he was the duty officer overseeing the Soviet early warning satellite system in September 1983 when he was alerted that the United States had launched a handful of intercontinental ballistic missiles. Tensions were high at the time; the Soviet Union had shot down a South Korean airliner only weeks before, and the United States was about to begin major military exercises, which included nuclear weapons. However, Petrov did not believe the system. He reasoned that the United States would not launch a small number of missiles in a first strike. Ground radars did not corroborate the report, and he recognized the potential for the new satellite sensors and computer system to make a mistake. He declared it a false alarm, and in doing so, he prevented the alarm from potentially leading Soviet leaders to order nuclear retaliation. The cause of the false alarm was sunlight reflecting off high-altitude clouds.6
The Value(s) Proposition

Finally, cost, cost effectiveness, and bureaucracy will influence human roles. Humans are expensive because of the cost to train, house, feed, clothe, pay, treat, and insure military personnel, but machines cost money, too. For states or organizations without substantial resources, using humans is practical because it does not require the often very large, upfront, fixed cost of additional hardware. Furthermore, like humans, machines have ongoing costs for development, testing, upgrades, fuel, and maintenance. This means that humans are often more cost effective, even for well-funded military organizations, in positions where the technological solution is expensive or not yet mature. Looking at today’s technology, this still covers the vast majority of positions humans fill, and this appears likely to continue to 2030. Even if there is no longer a pilot in the cockpit of many drones, there are still hundreds of humans supporting each mission, from analysis to maintenance.

The issue of cost effectiveness is also influenced by bureaucratic tendencies. When looking at DOD, it is clear that there is a preference for more capable, more expensive technological systems. A graph often circulated in defense circles—Norman Augustine’s Law #16—shows that each successive aircraft DOD purchases is more expensive than the last and that we buy fewer units. A trend line on the graph points to a future where we will procure one aircraft, which will consume the entire defense budget. This tendency will push the United States away from cheaper disposable systems, which will likely further delay the day when robots are more cost effective than humans in a range of roles.

The role of humans is also influenced by cultural factors within military organizations. The ethos of the warfighter is central to the culture of the military Services. While there are variations among them—pilots, submariners, Marines, and myriad others have their own mythologies—human traits such as bravery, skill, and honor are integral to each culture. So even as technology changes, cultures, which tend to change slowly without severe outside shocks, would have to change as well to significantly dislodge humans from the conduct of warfare.
Beyond Effectiveness: Social and Ethical Issues

A unique aspect of the NeXTech wargame series was the composition of the participants and the focus of one of the events on the ethical, legal, and policy implications of emerging technology. Almost all DOD wargames include military personnel and technical experts, but the NeXTech series also included journalists, lawyers, philosophers, and ethicists. As some of these participants have written in other fora, autonomous technologies challenge our legal and ethical obligations to protect noncombatants and to use force discriminately.

In a scenario where a North Atlantic Treaty Organization–like force had to liberate a city from a conventional opposing force, participants debated how to approach the use of autonomous systems when targets were in close proximity to civilians. One participant asked, “If an autonomous system [accidentally] kills a civilian, is the commander responsible? The company that built the system? The individual who wrote the software code?” DOD has acknowledged this challenge at the highest levels, and it released special policy guidance on the development of lethal autonomous systems in a memorandum from the Deputy Secretary of Defense in November 2012.7

This is not to say that humans are free of mistakes but rather that we have accepted ethical, legal, and policy constructs to handle human error. This suggests that, even with the option to employ hypothetical highly effective military systems, we expect to continue to rely on humans in situations characterized by uncertainty, for sociocultural as well as operational reasons. Looking to 2030, it seems unlikely that we will successfully be able to design, build, and trust autonomous systems with ethics and strategy hardcoded into them across the wide range of missions necessary to largely replace humans. Science fiction provides a number of insights into the challenges of doing so effectively.
But How Will We Keep Up?

While humans are likely to play a crucial role in the military operations of 2030, technologies will change the types of performance militaries require, and they may also change humans. To better handle the amount of data that sensors and systems provide about the battlefield, we will develop software and hardware systems to improve commanders’ and operators’ situational awareness—an example of the human-plus-computer teams described above. For example, the F-35 pilot interface does not primarily rely on a heads-up display. Rather, the information display is built into the helmet so that wherever the pilot physically looks, the system provides information. Even looking down provides a view of the ground from cameras, with information such as waypoints and enemy and friendly systems overlaid on the visual. Although the helmet has been rife with problems throughout its development, the final version, by integrating multiple data feeds into the pilot’s visual picture, should enable better tactical decisions.

As is clear from the TOPGUN and Advanced Chess examples, training individuals to use technology will play a key role in enhancing effectiveness. As such, it will be important for militaries to invest in new simulation and training techniques, as well as to measure the effectiveness of these approaches. Measuring learning is only one aspect—measuring the effect of that learning is harder and almost certainly more important. At present, this is an area of weakness for the U.S. military, as performance is only rarely assessed in terms of how inputs such as training influence it, especially in realistic operational scenarios. While appropriate training can better enable military personnel to use technology, it will also be important to equip them with the skills necessary to operate in the absence of certain systems—in line with the earlier discussion of the cyber-autonomy paradox. The need for navigation, air traffic control, and the myriad other functions for which military forces currently rely on technological systems will not disappear simply because those systems are disrupted. Rather, operating in a technology-denied environment may be the critical skillset in future wars between sides that both possess high-end capabilities.

While these systems are likely to help, the amount of information, even if provided through well-designed systems, will require high levels of concentration and mental energy. For units operating even semi-autonomous systems from the battlefield, huge amounts of data, requirements for decisions, and self-protection responsibilities will pose major cognitive challenges. At the same time, physical exertion, sleep deprivation, and the psychological stressors of battlefield operations, including uncertainty and the potential for injury or death, will compound these cognitive demands.

While “mental energy” is often used colloquially, studies suggest that it is a real phenomenon. The vigilance decrement (vigilance is the scientific term for sustained attention) and decision fatigue are well-documented phenomena whereby humans lose effectiveness at paying attention and making complicated decisions over time in taxing situations. In a recent Air Force study, researchers asked Servicemembers to perform a task that required them to monitor a computer screen to identify whether small icons representing planes were flying toward or away from each other. Compared to the first 10-minute period, accuracy fell approximately 5 percent for each additional 10 minutes on task until the study ended at 40 minutes—with the individuals at only 85 percent performance.8 This is mirrored in today’s operational force. Despite piloting their aircraft from air-conditioned rooms in the United States, today’s unmanned aerial vehicle operators can only operate for a limited amount of time before taking a break to recover mentally.
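For a back-of-the-envelope reading of those numbers, the short sketch below models the reported decline as a linear drop of roughly 5 percentage points per 10-minute block after the first. It is a hypothetical reconstruction of the quoted figures, not the study's actual data or model.

```python
# Minimal sketch of the reported vigilance decrement (~5 percentage points
# lost per 10-minute block after the first); illustrative only.
def modeled_accuracy(minutes_on_task, baseline=100.0,
                     drop_per_block=5.0, block_minutes=10):
    blocks_elapsed = max(0, minutes_on_task // block_minutes - 1)
    return baseline - drop_per_block * blocks_elapsed

for t in (10, 20, 30, 40):
    print(t, modeled_accuracy(t))  # 100, 95, 90, 85 -> matches the quoted endpoint
```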

Thus, while analytical systems, decision-support software, and other cognitive aids will help humans, this picture of future operations suggests that they will strain human capabilities. However, another set of emerging technologies has the potential to improve human abilities themselves rather than simply helping us use our existing capabilities. Proven and emerging technologies in the field of human performance modification have the potential to enhance the military performance of personnel on the future battlefield. The U.S. military has used stimulants for decades, from amphetamine “go pills” to newer agents such as the cognitive stimulant modafinil, but new technologies show the potential for more targeted and varied enhancement.

Returning to the Air Force study on vigilance, the group whose mental performance declined with time was the control group. Two other groups used a technology called transcranial direct-current stimulation (tDCS), which is widely used in academic laboratories and to date has a clean safety profile. tDCS passes a weak electrical current through the skull using electrodes taped to the forehead. The current changes how easily nerve cells in the brain fire. In the Air Force study, tDCS positioned over areas of the brain involved in attention enabled the personnel to focus with no dip in performance throughout the whole 40-minute study. In other studies, researchers have demonstrated that tDCS can enhance the speed of learning (including in militarily relevant tasks, such as interpreting radar returns) and improve threat detection.

tDCS is only one of a range of technologies that show the potential to enhance human performance. For example, research taking place in the U.S. military and in academia has identified hormones and neurotransmitters in the blood that are associated with the ability of special operators to perform at high levels despite extraordinary physical and mental demands and highly stressful environments.9 If the relationship is causal, this research suggests a potential route through which performance could be enhanced or maintained over long missions.

Returning to the NeXTech wargames, the organizers specifically tasked one group with examining applications of human performance modification technologies. Commensurate with this article’s vision of the human role in future warfare, participants did not focus primarily on traditional types of physical enhancement. Rather, to improve the ability of a hypothetical American force, participants were most interested in enhancing cognitive traits. They wanted more perceptive individuals who could stay clear-headed under stress and operate at high levels of effectiveness with minimal sleep.

This vision of the future soldier is far from the berserkers of many science fiction depictions, and participants had good reason to steer away from old conceptions of super soldiers; in most cases, they would be counterproductive from the U.S. point of view. Indiscriminate killing would go against both the laws of war and good tactics and operational art, as local populaces often play an important role in achieving long-term objectives. The value of performance enhancement technologies will only grow as each Soldier, Marine, Sailor, and Airman plays an even more important role in future conflicts. To destroy a target in World War II took thousands of individuals manning hundreds of bombers. Today, one pilot can achieve the same destruction. Tomorrow, one individual may control tens or hundreds of partially autonomous systems.

While this technology area has substantial promise, there are important ethical questions surrounding military use, many of which are summarized in a report by Dr. Patrick Lin of the California Polytechnic State University.10 A key factor is that demonstrating the effectiveness of human performance technologies in military environments will require testing in military populations. At the same time, governments, including the U.S. military, have a historical record of conducting unethical research, especially for national security purposes. Even today, with strict controls in place, conducting ethical research in military environments is challenging because the chain of command is inherently—and necessarily—coercive (military personnel must follow orders for the system to function properly). Informed consent is the cornerstone of modern research ethics, but this environment makes it difficult to separate true consent from the influence of the chain of command, although ongoing research overseen by review boards shows that it is possible to gain true informed consent. There is also the possibility that enhancements inadvertently harm individuals, affect others’ perceptions of those who take them, give some individuals a leg up on others, or complicate reintegration into society. These are important questions deserving of careful consideration, but we should also ask whether we have an obligation to provide enhancements that make our military personnel less likely to be injured or killed on the battlefield.

These and other issues will affect interest in performance enhancers and the willingness of DOD to provide them to military personnel. While analyzing these issues, we must also be cognizant of the fact that, from the individual military operator’s point of view, there is substantial interest. In a recent survey of Army personnel, more than 50 percent reported taking supplements weekly, and based on 5 years of discussions with military personnel on the topic, I can say comfortably that interest in performance enhancement is very high.11 Nonetheless—and somewhat ironically—the same ethical factors that are likely to keep humans on the battlefield will also push some countries to limit the ways in which they enhance warfighters’ capabilities.

Not all actors abide by the same ethical boundaries, though, so this is also an area of potential asymmetry going forward. Nonstate actors, especially terrorist groups, may have the least compunction about using these technologies. If an organization is willing to conduct suicide attacks, then it probably would not care about long-term damage from an enhancement: news reports suggest that the terrorists who carried out the 2008 attacks in Mumbai used stimulants such as cocaine to stay up for long periods of time.12
Stepping Back

A confluence of technical, tactical, operational, strategic, and ethical reasons strongly suggests that humans will still play crucial roles in all aspects of warfare over the next two decades—and probably much longer. As highlighted above, we must be vigilant for nonlinear advancements in science and technology that could change the way states and other actors conduct military operations. But we should also be cognizant of the emerging tools to enhance human-computer interactions and human performance directly, which may shift the balance even more toward humans. The interactions between humans, human-computer teams, and autonomous systems on the battlefield of the future and how to optimize these are little-studied areas, but as the TOPGUN and other examples above demonstrate, we must work to find the right balance because it will likely provide a considerable advantage—and when we find this balance, human performance will continue to drive a large part of military effectiveness.

Source:

This article was originally published in Joint Force Quarterly 77, which is published by the National Defense University.


Notes: 
1. Joe Braddock and Ralph Chatham, Report of the Defense Science Board Task Force on Training Superiority & Training Surprise (Washington, DC: Defense Science Board, January 2001).
2. Adam Fisher, “Google’s Road Map to Global Domination,” The New York Times Magazine, December 11, 2013.
3. A. Paul Alivisatos et al., “The Brain Activity Map Project and the Challenge of Functional Connectomics,” Neuron 74, no. 6 (June 2012), 970–974.
4. Garry Kasparov, “The Chess Master and the Computer,” The New York Review of Books, February 11, 2010.
5. Stuart Fox, “Insurgents Hack Predator Video Feed With $26 Software,” Popular Science, December 17, 2009.
6. Pavel Aksenov, “The Man Who May Have Saved the World,” BBC News, September 26, 2013.
7. Department of Defense Directive 3000.09, “Autonomy in Weapon Systems,” November 21, 2012.
8. Jeremy T. Nelson et al., “Enhancing Vigilance in Operators with Prefrontal Cortex Transcranial Direct Current Stimulation (tDCS),” NeuroImage 85, part 3 (January 15, 2014), 909–917.
9. Charles A. Morgan et al., “Relationships among Plasma Dehydroepiandrosterone Sulfate and Cortisol Levels, Symptoms of Dissociation, and Objective Performance in Humans Exposed to Acute Stress,” Archives of General Psychiatry 61, no. 8 (August 2004), 819–825; and Charles A. Morgan et al., “Relationship among Plasma Cortisol, Catecholamines, Neuropeptide Y, and Human Performance during Exposure to Uncontrollable Stress,” Psychosomatic Medicine 63, no. 3 (2001), 412–422.
10. Patrick Lin, Maxwell J. Mehlman, and Keith Abney, Enhanced Warfighters: Risk, Ethics, and Policy (New York: The Greenwall Foundation, January 1, 2013).
11. Harris R. Lieberman et al., “Use of Dietary Supplements among Active-Duty U.S. Army Soldiers,” The American Journal of Clinical Nutrition 92, no. 4 (October 1, 2010), 985–995.
12. Damien McElroy, “Mumbai Attacks: Terrorists Took Cocaine to Stay Awake during Assault,” The Telegraph (London), December 2, 2008.
