8 January 2019

Healthy Skepticism About the Future of Disruptive Technology and Modern War – Foreign Policy Research Institute

by Frank G. Hoffman

This blog is based on Dr. Hoffman's opening remarks at the Modern War Institute's annual conference, held at the U.S. Military Academy, West Point, November 4, 2018.

There are many candidate lenses for examining the most salient changes in the emerging strategic environment. Many perceive the emerging era of great power competition as a mandate to prepare for large-scale conventional wars. Others examine narrower changes in context, such as urban warfare, the weaponization of social media, or potentially disruptive new technologies.

Some scholars are skeptical about our ability to think intelligently about the future. Sir Lawrence Freedman, in his latest book, is one of the skeptics, having seen too much optimism and too little humility in futurology. But I hope everyone here recognizes that it would be irresponsible to suppose that we can afford to stand pat with today's legacy capabilities, outdated or stovepiped doctrines, and rigid mental paradigms. Sir Michael Howard noted that military organizations must conceive of themselves like ships moving forward into the fog of time, with occasional glimpses of navigational aids (real-world events and battles) that permit them to adjust course in their doctrines and capabilities. To do otherwise, to stand still on the shore, would be standard practice for some armed forces, but it would also be strategically shortsighted.

As former Secretary of Defense James Mattis wrote in his 2018 National Defense Strategy,

We must anticipate the implications of new technologies on the battlefield, rigorously define the military problems anticipated in future conflict, and foster a culture of experimentation and calculated risk-taking. We must anticipate how competitors and adversaries will employ new operational concepts and technologies to attempt to defeat us.

Our path forward starts by looking backward to history. Only through the study of the past can we anticipate how the evolving character of 21st-century conflict will affect us. Our best tools for illuminating what appears to be another consequential era are informed foresight and critical historical study. What follows in this talk is my own thought experiment into the unknowable future, which I call the 7th Military Revolution, or the Age of Autonomy.

Over 25 years ago, Manuel De Landa wrote in War in the Age of Intelligent Machines that when we move past cruise missiles that merely hit their intended targets to the day when “autonomous weapons begin to select their own targets, the moment the responsibility of establishing whether a human is friend or foe is given to the machine, we will have crossed a threshold and a new era will have begun.”

We are now entering that new era. The major technological breakthroughs now occurring in robotics, information and cognitive sciences, and materials science are by themselves truly revolutionary. Their convergence magnifies their potential application. Rather than a Second Machine Age or a Fourth Industrial Revolution, I use the construct Williamson Murray and MacGregor Knox called a Military Revolution. These eras "recast society and the state as well as military organizations. They alter the capacity of states to create and project military power." Knox and Murray identified five historical cases and recognized the ongoing sixth revolution, the Information Age. These revolutions have been additive, never entirely displacing the past. The seventh, the Autonomous Revolution, looms ahead of us, enshrouded in fog and mist. This era will merge the Industrial and Information Revolutions, combining machines and computers in ways we now envision only through science fiction. Of particular salience in this new era are developments in Artificial Intelligence (AI), especially Machine Learning, combined with unmanned systems and robotics.

Autonomy will recast society and the state, as well as its armed forces. AI-enabled systems and autonomous weapons will, per Murray's definition, "alter the capacity of states to create and project military power."

Autonomous systems are not new. Today, the U.S. Navy and U.S. Army field defensive missile systems like Aegis and Patriot with degrees of autonomy built into their controls. We should expect further developments as the technology matures. Every major future trends report, including those of the National Intelligence Council, the United Kingdom, and the Joint Chiefs of Staff, has identified this area as a critical trend.

Yet, our appreciation of the implications of the 7th Military Revolution is weak. To explore those implications, there is no better framework than Clausewitz's Trinitarian concept for examining the impact of the convergence of robotics and artificial intelligence. This analytical tool is central to his study of war's most fundamental relationships, and it has enduring value.

Clausewitz defined the trinity around the interaction of three sets of forces: irrational forces ("primordial violence, hatred, and enmity"); non-rational forces ("the play of chance and probability" and the genius of the commander); and purely rational forces (war's subordination to policy and reason). These interactive elements influence the violence that lies at the center of war. Clausewitz insisted that these elements were variable in their relationship to one another. Each element or tendency is affected by the emerging revolution, and by artificial intelligence in particular.

Passion/Enmity is often associated with the Population. Domestic policy leaders may find AI conducive to targeted cyber and social media strategies that suppress or inflame populations. Of course, the adversary may try to do the same. This is not a new element in war, but its impact can now be felt faster and with greater frequency. Because the public increasingly relies on social media and the internet as principal sources of information, these technologies become an ideal vector for automated information attacks and influence tactics. More automated methods, supported by algorithms, will increase the scale, frequency, and customized tailoring of messages.

Next, extensive use of robots and unmanned systems could reduce public interest in, and support for, the armed services. The population may feel less engaged in, or tied to, national policy actions if robotic forces are employed in place of the Nation's sons and daughters. At the same time, Cabinet wars that entail few core national interests may become more likely, since they may be perceived as politically low risk.

The populace may ultimately come to see the need to send humans into combat, or human casualties themselves, as an indication of policy failure. Critical to the profession's mission and domain, the infusion of machinery, the reduction of human decision-making, and the rise of remote stand-off warfare could erode the identity of the military as a profession with a unique social responsibility involving risk and danger. This erosion of risk and responsibility might undercut the ideal of the profession of arms, accelerating a "post-heroic" age in which the State's security forces grow even more distant from the society they serve.

With regard to non-rational forces and human factors, the introduction of new information-based technologies and robotic systems will not reduce strategic friction or eliminate the potential for chance. At the strategic and operational levels, AI is expected to enhance the clarity of intelligence, detect small changes in big databases, and reduce human biases in plans and decisions. Some improvement in the quality of decision-making can be expected. Yet one potential effect is a higher chance of miscalculation by decision makers or headquarters whose databases are compromised.
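
That last risk can be made concrete with a toy sketch, not drawn from any real system: a deliberately naive mean-based threshold stands in for a learned model, and a handful of adversary-injected records shifts it enough to flip a borderline judgment. All numbers are invented for illustration.

```python
# Toy illustration only: invented numbers, and a naive mean-based
# threshold standing in for a real analytic model.
clean = [0.20, 0.30, 0.25, 0.35, 0.30]      # honest sensor readings
poisoned = clean + [0.90, 0.95]             # adversary-injected records

threshold_clean = sum(clean) / len(clean)           # 0.28
threshold_poisoned = sum(poisoned) / len(poisoned)  # ~0.46

reading = 0.40                               # borderline observation
print(reading > threshold_clean)             # True  -> correctly flagged
print(reading > threshold_poisoned)          # False -> silently missed
```

The poisoned records never appear in the output; the decision maker simply sees a "clean" miss, which is precisely why compromised data raises the chance of miscalculation.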

New sources of friction will be introduced by mechanical failure, algorithmic degradation, and machine learning and adaptation inconsistent with the commander's "intent." Both sides' systems, even fully autonomous ones, will contain flaws and vulnerabilities, offering avenues for opponents to inject uncertainty.

Another possible change may influence Clausewitz's ideal of intuition and coup d'oeil, "the quick recognition of a truth that the mind would ordinarily miss or would perceive only after long study and reflection." He observed, "This type of knowledge cannot be forcibly produced by an apparatus of scientific formulas and mechanics." Our Prussian sage may be proven wrong over time. Clausewitz argued that the talent of a military commander could be gained without experiential learning "through the medium of reflection, study and thought." Will Deep Learning programs provide that rapid recognition, the discernment of "truth," and augment or focus that deep study and reflection? Former Deputy Secretary of Defense Robert Work thinks so, noting that "learning machines are going to give more and more commanders coup d'oeil."

Perhaps this does not go far enough. Instead, the developed coup d'oeil of the human could be augmented by a data-infused cyber d'oeil that supports human decision-making. Rather than a bifurcated conception of human decision-making built on Kahneman's "System 1" (intuitive or gut decisions) and "System 2" (deliberative cognitive processes), we may exploit man/machine teaming to maximize both with what I term "System 3." We need to know a lot more about AI-enabled cognition and how to educate warriors to leverage AI without misusing it.
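
As a purely hypothetical sketch of what such teaming might look like in software, the fragment below pairs a fast heuristic ranking (a machine analogue of System 1) with a slower rule-based vetting pass (a System 2 analogue), while reserving the final choice for the human commander. Every class name, score, and check is invented for illustration; no fielded system is implied.

```python
# Hypothetical "System 3" sketch: machine proposes, human disposes.
from dataclasses import dataclass

@dataclass
class CourseOfAction:
    name: str
    estimated_payoff: float      # illustrative planning estimate (0..1)
    violates_roe: bool = False   # illustrative rules-of-engagement flag

def fast_rank(options: list[CourseOfAction]) -> list[CourseOfAction]:
    """System 1 analogue: cheap heuristic ranking of every option."""
    return sorted(options, key=lambda c: c.estimated_payoff, reverse=True)

def deliberate(coa: CourseOfAction) -> bool:
    """System 2 analogue: slower, rule-based vetting of one option."""
    return not coa.violates_roe

def system3_shortlist(options: list[CourseOfAction], top_k: int = 3):
    """Return only a vetted shortlist; the human makes the final call."""
    return [c for c in fast_rank(options)[:top_k] if deliberate(c)]

if __name__ == "__main__":
    candidates = [
        CourseOfAction("flank north", estimated_payoff=0.8),
        CourseOfAction("strike depot", estimated_payoff=0.9, violates_roe=True),
        CourseOfAction("hold and screen", estimated_payoff=0.6),
    ]
    # The commander, not the machine, selects from this shortlist.
    for coa in system3_shortlist(candidates):
        print(coa.name, coa.estimated_payoff)
```

The design point is the division of labor: the machine compresses the option space at speed, while judgment and accountability stay with the human.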

We can expect decision-making at this level to be challenged by the blurring of modes of warfare and the speed of events. Cyber and hypersonic missile attacks will compress decision-making timelines for both strategic and operational leaders. The necessity for preplanned delegation and engagement authorities is clear. Analysts have been aware for several decades that human decision-making will be increasingly challenged by advanced technologies that speed up weapons or decision-making OODA loops (observe–orient–decide–act). Retired General John Allen, now leading the Brookings Institution, has recently discussed a concept he calls "hyperwar." This concept accounts for the expected speed of decision-making required in high-intensity operations in cyberspace and in the employment of missiles and unmanned vehicles moving at velocities greater than the speed of sound. This burgeoning need for speed raises important questions: does radically sped-up decision-making take civilians and policy out of the conflict, such that political direction and operational leadership are simply delegated to machines? Is a "man on the loop" a nice ethical artifact that proves fatal in future contests?
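
The "man on the loop" question can itself be made concrete with a minimal, hypothetical sketch: the machine proceeds with its recommendation unless a human vetoes it within a fixed window. The timings and messages are invented; the point is what happens when the window shrinks below human reaction time.

```python
# Minimal, hypothetical "man on the loop" sketch: invented timings.
import threading

def man_on_the_loop(recommendation: str, veto_window_s: float,
                    human_veto: threading.Event) -> str:
    """Proceed with the machine's recommendation unless vetoed in time."""
    if human_veto.wait(timeout=veto_window_s):
        return f"ABORT: human vetoed '{recommendation}'"
    return f"EXECUTE: '{recommendation}' (window expired unvetoed)"

if __name__ == "__main__":
    veto = threading.Event()
    # A human needs on the order of seconds to assess a track; at
    # machine speed the window may be milliseconds, so the veto
    # effectively never arrives.
    print(man_on_the_loop("intercept inbound track", 0.005, veto))
```

Shrink veto_window_s from seconds to milliseconds and the human's oversight exists on paper only, which is exactly the ethical artifact the question raises.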

To sum up, greater automation and autonomy will change the nature and character of war in several ways. First, it could weaken the role of political direction, forcing delegation to lower levels to respond to faster forms of attack. Second, it may divorce civilian policymakers from the professional advice of military leaders, inclining them instead toward the calculations of a preferred algorithmic assistant. Third, it can lessen the ability of governments to gain the support and legitimacy of their own populations, while making it easier for foreign governments to manipulate them.

While we remain at least a decade or two away from autonomy beyond narrow, task-specific applications, we should recognize its significant potential impact. The most significant elements of war (violence, politics, and chance) will certainly remain. So, too, will the great continuities of fog and friction. Despite brilliant machines, we can count on human contingency. Certainly, the relationship of these elements will be altered, as Clausewitz foretold. War's essence as politically directed violence will remain its most enduring aspect, even if more machines are involved at every level.

The involvement of humans is central to war's nature as well. Some speculate about war without humans, at least tactically. I do not anticipate battles devoid of human contestants, with swarms of robots directed by their own superior intelligence. Nor should we expect our governments to delegate strategic issues, like the decision to go to war or to accept battle, to an algorithm. As long as humans are responsible for directing war, for writing code, and for fielding and maintaining machines, warfare will remain an instrument of policy and the province of warriors. Those warriors may have machine augmentation and be advised by algorithms that synthesize and sort faster; they may delegate decisions to cyber assistants and operate more remotely; but a human will be directing the fight at some level.

Yet, there is little doubt that the age of greater autonomy will dramatically shape the ever-evolving character of war. It will affect every combat function to some degree, whether computer network defense or precision attack. Surely, there will still be human combatants and noncombatants mixed in the dense urban canyons where most battles will be fought. But that should not stop us from striving to imagine a potentially disruptive change like autonomous weapons. Per Freedman's wonderful retrospective, The Future of War: A History, we must do so if only to understand the choices we may make and the choices that may be available to our competitors. These choices need to be vetted as Sir Lawrence recommends: skeptically but seriously.

As we proceed into the misty fog of a dimly lit future, we must sail on and think with both imagination and intellectual rigor. Hype and hesitancy should be displaced by curiosity and hypotheses for testing. We should understand what this revolution can and cannot deliver.
