10 February 2023

Does Artificial Intelligence Change the Nature of War?


Baptiste Alloui-Cros is a professional wargame designer and founder of the Strand Simulations Group. He earned an MA in War Studies from King’s College London, as well as a BA in Political Science and an MA in International Security from Sciences Po Paris. His Master's thesis, entitled “How can Artificial Intelligence provide new insights to modern Strategic Thought? Using wargames as a bridge between machines and strategists”, is awaiting publication in an academic journal. His main research interests lie at the intersection of strategy, artificial intelligence, and wargaming.

In his book ‘Men Against Fire’, the American General S.L.A. Marshall described the battlefield as ‘the epitome of war’[i], where everything that characterises the deep essence of war, as theorised by Clausewitz, comes into action. Violence, passions, the opposition of wills, friction: whatever the war, this blunt reality is always reached at one point or another. This makes war first and foremost a human activity.

And yet, the battlefield seems slowly to be giving way to non-human elements. The rise of automated weapons based on Artificial Intelligence (AI), such as autonomous drones, raises questions about the human character of the battlefield. It even calls into question the validity of the concept of the battlefield itself, as AI weapon systems are programmed to act or react over long distances at fantastic speeds, far beyond human reach. This displacement of warfare into new dimensions of time and space seems to challenge the monopoly humans have traditionally held over the conduct of war and the use of force. Where, then, has the ‘epitome of war’ gone? Does the rise of AI truly challenge the nature of war itself?

This piece argues that although AI alters the character of war in significant ways, it does not change its nature. Rather, it has the contrary effect. It emphasises the essential element behind the deep nature of war: its human component. Psychology, ethics, politics, passions and the proximity of pain and death are what war is all about. A ‘trinity of violence, chance and politics’.[ii] By departing from all this, by showing how relative other elements are, by handling all the practical details, AI enables us to focus on what matters most. It is this contrast that reminds us that war is a very intimate expression of our humanity, and something we cannot delegate.

This essay treats the influence of Artificial Intelligence on the nature of war through three different layers.

First, it examines how AI changes the strategic landscape, the physical world, and shows that its impact is neither disproportionate nor fundamentally different from that of other military-technological revolutions. Then, it looks at what AI brings to strategic decision-making, how it influences the mind, and argues that, when used well, it does not alienate decision-makers but rather allows them to make the most of their own potential.

Finally, it interrogates the possibility that AI may eventually make war obsolete and concludes that while AI can affect both the realm of the physical world and that of the mind, it can never handle the affairs of the heart. Yet this is where war starts, where it ends, and where it is chiefly conducted. Thus, AI will never emancipate us from war on its own, nor will it change war's deep nature.
The impact of AI on the strategic landscape

Science and war have always been intertwined, and technological innovation is one of the primary drivers of warfare and of civilisation at large. One can easily argue that certain innovations were decisive to the outcome of particular conflicts. From Archimedes defending Syracuse to Vauban’s fortifications during the wars of Louis XIV, from Jean Bureau’s guns in the Hundred Years’ War to Gribeauval’s artillery system, a cornerstone of Napoleon’s campaigns, from British ships of the line to German U-boats, most major conflicts are marked by such innovations.[iii] But do they fundamentally change the nature of war?

Arguably not. There is a step between affecting the conduct of war and changing the nature of war. Although the advent of gunpowder or nuclear weapons considerably shaped the way we fight, it did not diminish the importance of politics, passions, uncertainty, and friction in general. On the contrary, the increase in firepower and destructive potential raised the stakes of warfare and therefore gave even more weight to human decision-making and to the role of psychology and morality, as the Cuban missile crisis of 1962 suggests.

The appearance of Artificial Intelligence on the battlefield, in the narrow sense of the term, i.e., systems that excel at a single task, remains in line with this progression of the conduct of war. Besides, their most distinctive feature, the fact that they are automated, is not especially new. Landmines, for instance, carry out their task without any need for human intervention. It is rather the number and complexity of tasks that can be automated thanks to AI that constitute a novelty. And it is the limited agency that humans retain once these processes are set in motion that can be worrisome.[iv] The following quote by sociologist Ted Nelson summarises the dilemma of the tactical use of AI perfectly:

“The good news about computers is that they do what you tell them to do. The bad news is that they do what you tell them to do.” [v]

As such, AIs are nothing but tools, and can be as beneficial as they are detrimental to their users. Like any other weapon, they require appropriate use and proper incorporation into military doctrine to be useful. The growth in the use and scale of AIs on the battlefield raises another concern for the military: the fear that combat will become too fast-paced for humans to remain engaged. Indeed, the automation of weapon systems, such as missile systems, or of other military agents, makes the timeframe available for responding to incoming threats ever shorter, to the point where only pre-programmed AIs could react in time.

This compression of space and time, however, is a logical corollary of the evolution of warfare over time.[vi] From swords and close combat to bows, javelins and muskets, all the way to ballistic missiles, technological innovations have accompanied the drive to put more distance between ourselves and the adversary and to strike him faster, so as to reduce the threat he poses to us. One can even argue that the annihilation of space and time is the aim of technology in general, whether the railroad, the telegraph, or the internet.[vii] It has also led to growing confusion between the scales of war, from strategy to tactics, as the physical dimensions of war seem to merge. And yet, for Jomini, the specificity of the strategist consists in his ability to master space and time.[viii]

Yet this concern alone is not enough to conclude that AI fundamentally changes the nature of war. From a concrete point of view, AIs are not going to monopolise the conduct of war. They are severely limited: they cannot accomplish every kind of tactical task, nor be used efficiently in every kind of war. Just like any other weapon, they have counters, and it would not be surprising to see them regularly backfire against their users.

Indeed, their main limitation is captured by Ted Nelson’s quote above: they do what they are told to do. This means that once their logical rules of engagement are understood, it becomes very easy for the enemy to predict their behaviour and take advantage of it. On the other hand, moving away from this rigidity and building more flexible AIs requires another trade-off: decreasing human control and increasing the chances of failure. Indeed, in the words of Alan Turing, “if a machine is expected to be infallible, it cannot also be intelligent”.[ix] This is a logical trade-off, and the most critical one on the road from narrow AI to general AI. A logical system can produce perfect answers to a limited number of questions, but not to all possible questions. The more questions the system is asked, the lower its rate of correct answers will be. Hence, extending an AI to make it more flexible and complex means decreasing its effectiveness at individual tasks.[x]
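
To make the predictability problem concrete, consider a minimal sketch. Everything in it, the policy, its thresholds, and the adversary’s probing routine, is a hypothetical illustration rather than any fielded system: an automated weapon with fixed rules of engagement can be probed until its trigger conditions are learned, then exploited.

```python
# Toy illustration of Nelson's dilemma: a fixed rule-based policy "does what
# you tell it to do", so an adversary can probe it, learn its trigger, and
# exploit it. All thresholds and routines here are hypothetical.

def engagement_policy(distance_km: float, classified_hostile: bool) -> bool:
    """Fixed rules of engagement: fire iff a hostile target is within 5 km."""
    return classified_hostile and distance_km <= 5.0

def probe_trigger_distance(policy, step_km: float = 0.5) -> float:
    """Adversary's probe: approach in small steps until the policy fires."""
    distance = 20.0
    while distance > 0:
        if policy(distance, classified_hostile=True):
            return distance  # trigger range discovered
        distance -= step_km
    return 0.0

trigger = probe_trigger_distance(engagement_policy)
# Exploit: loiter just outside the learned trigger range, never engaged.
print(f"Learned trigger range: {trigger} km; safe standoff: {trigger + 0.5} km")
```

A more flexible, learned policy would be harder to probe this way, but, per Turing’s trade-off, also harder to certify as doing only what it was told.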

Frederick the Great, as related by Henri de Catt in his ‘Memoirs’, asserted “but in war, as in everything else, a man does what he can and seldom what he desires.”[xi] The same goes for the machine. It is programmed to specific ends, but since it is impossible to predict every obstacle it might encounter, it is never guaranteed to succeed. Hence, tactically speaking, AI does not represent a particularly different form of technological innovation. Since its use is limited, it will not fully replace human beings on the ground. Rather, it will probably be used to carry out very specific tactical tasks in coordination with other weapon systems and military personnel. As this essay shall now argue, there is, however, one very important difference between AI and previous innovations in warfare: AI does not only have kinetic uses; it can also affect the mind by assisting decision-making.
AI and the formulation of decisions in strategy

Strategy is, if nothing else, calculus: the interplay of imagination and probabilities. Or, more formally, ‘the art of the dialectic of two opposing wills using force to resolve their dispute’.[xii] If AI is to revolutionise war and alter its character, it is in this direction that one should look: its impact on strategic thinking and decision-making.

Indeed, the development of neural networks and deep learning has produced very convincing results in complex strategy games such as chess, Go, and even imperfect-information games like poker or StarCraft, where the decision space is so enormous that it seems impossible to crack through pure ‘computing strength’. Very recently, Cicero, an AI designed by Meta to play ‘Diplomacy’, consistently achieved top performances in a game that relies almost entirely on negotiation and psychology, realms where humans traditionally enjoy an edge over the machine.

On their own, these feats may still appear somewhat abstract to the reader wondering how they will change our ways of thinking and of making strategic decisions. Perhaps most insightful, then, is the case of AlphaZero and its impact on the practice of chess. This AI recombined ideas and principles well known to humans in ways never thought of before, even though the game has existed for well over a millennium. Not only did these principles give the AI total dominance over all its counterparts, they also taught high-level chess players new ways to think about the game. Nowadays, these players use these new perspectives in their own games and arguably play much better than before AlphaZero. In a nutshell: it made them better strategic thinkers on the chessboard.[xiii]

The ability of AI to find its own, independent ways of solving a problem makes it a remarkably creative tool. Partially free from human biases, and endowed with far greater computing power, it comes up with new ways of playing games, but also new ways of solving scientific problems or suggesting artistic creations. Strategy is thus a natural playground for AI, which thrives when combining means and ideas to reach specific ends.

Therefore, it is not absurd to think that the potential of AI can be leveraged by strategists to make better sense of the options available to them. By simulating a strategic problem, in the form of a wargame for instance, and programming an AI around it, one might gain interesting and creative insights into the problem and how to tackle it. Put simply, it would provide the strategist with additional options and a more complete overview of the strategic problem, as the sketch below suggests. In a way, it would become an integral part of what Clausewitz calls the general’s ‘imagination’ and act as a kind of sixth sense, reinforcing intuition; or, to frame it better, enhancing his coup d’oeil.
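
As a rough illustration of this idea, a strategic problem can be encoded as a toy wargame and handed to a simple AI, here plain Monte Carlo rollouts, to rank courses of action for the strategist. The scenario, the options, and their base success rates below are hypothetical placeholders, not a real planning model:

```python
# Sketch: encode a strategic problem as a toy wargame and let a simple AI
# (plain Monte Carlo rollouts) rank courses of action for the strategist.
# All courses of action and base rates are invented for illustration.
import random

BASE_RATES = {"frontal assault": 0.35, "flanking move": 0.55, "blockade": 0.50}

def simulate(course: str) -> float:
    """One noisy playthrough of the toy wargame; returns a success score."""
    friction = random.gauss(0, 0.15)  # chance and friction, crudely modelled
    return min(1.0, max(0.0, BASE_RATES[course] + friction))

def evaluate(courses, n_rollouts: int = 5000):
    """Average many rollouts per course of action and rank the results."""
    scores = {c: sum(simulate(c) for _ in range(n_rollouts)) / n_rollouts
              for c in courses}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

for course, score in evaluate(BASE_RATES):
    print(f"{course}: estimated success {score:.2f}")
```

The point is not the numbers, which are invented, but the workflow: the machine exhausts the mechanical exploration of the option space so that the strategist can attend to the political and psychological dimensions the model cannot capture.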

And this coup d’oeil is sorely needed in a world where entropy keeps increasing while our cognitive abilities remain the same. The notion of entropy in warfare is, in fact, a very interesting way to conceptualise the resilience and efficiency of a military organisation. First theorised by Mark Herman in 1997, it describes ‘the state of disorder imposed on a military system at a given moment’.[xiv] A military unit whose entropy has reached its maximum, for instance, is ‘no more than a mob’. This concept sits well alongside the increased use of AI in command and control and decision-making, and can apply to military organisations of any kind and size.
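
Herman gives no formula for this disorder, but one hedged way to make the notion concrete is a loose analogy with information entropy; the analogy, and the reading of each probability below, are interpretive assumptions rather than Herman’s own formalism:

```latex
% Interpretive analogy only -- Herman does not define entropy this way.
% Let p_i be the probability that unit i of a force of n units acts in
% accordance with the commander's intent (with 0 log 0 read as 0).
% A disorder score could then be written as
\[
  H \;=\; -\sum_{i=1}^{n} \Bigl( p_i \log_2 p_i + (1 - p_i)\log_2 (1 - p_i) \Bigr),
\]
% which is zero when every unit's behaviour is fully predictable and
% maximal (H = n) when each unit's action is a coin flip -- a force that
% is, in Herman's phrase, ``no more than a mob''.
```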

Indeed, the use of AI at this level requires a very deliberate division of agency. It requires humans to define clearly the machine’s place in their decision-making process, with a lucid understanding of what the machine brings, of where its strengths lie, and of its limitations. This human-machine teaming is sometimes labelled ‘centaur warfighting’, a term first used by former US Deputy Secretary of Defense Robert Work.[xv] In this paradigm, the machine uses its superior computational capabilities to provide situational awareness and potential solutions, enabling the human to focus on actual decision-making and on the political and psychological components of the problem. The strategist, having a better grasp of the situation, can then redirect the machine’s efforts to more precise points of the problem, effectively creating a virtuous circle; a minimal version of this loop is sketched below. Reaching such a level of cohesion can explain how, for instance, two amateur chess players using engines could beat grandmasters also using engines in a tournament in 2005.[xvi] In the same way, a flawed understanding or misuse of the machine can have adverse consequences for the cohesion of an organisation, effectively raising the entropy level instead of lowering it. Therefore, although software and hardware can both be replicated, genuine AI-human teaming is hard to produce and always unique to each organisation. In the words of Paul Scharre, “There’s also better people, training, doctrine, experimentation. That all goes into making that package together, and that’s actually really hard to replicate.”
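
The following sketch shows one possible shape of that loop; all names, options, and scores are hypothetical, and a real system would wrap actual planning and command-and-control tools. The machine proposes and ranks options, while the human either accepts one or redirects the machine’s focus:

```python
# Minimal sketch of the 'centaur' loop: the machine proposes and scores
# options; the human decides, or redirects the search. All names and the
# option catalogue are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    machine_score: float  # the AI's estimate of success
    rationale: str        # why the machine ranks it this way

def machine_propose(focus: str) -> list[Option]:
    """Stand-in for an AI planner: returns ranked options for a focus area."""
    catalogue = {
        "defence": [Option("fortify river line", 0.7, "best force ratio"),
                    Option("elastic defence in depth", 0.6, "trades space for time")],
        "logistics": [Option("reroute supply via rail", 0.8, "shorter lead times")],
    }
    return sorted(catalogue.get(focus, []),
                  key=lambda o: o.machine_score, reverse=True)

def centaur_loop(initial_focus: str, human_decide, max_rounds: int = 3):
    """Machine proposes; human either accepts an option or redirects focus."""
    focus = initial_focus
    for _ in range(max_rounds):
        options = machine_propose(focus)
        decision = human_decide(options)     # the human stays in control
        if isinstance(decision, Option):
            return decision                  # accepted: decision taken
        focus = decision                     # a string: redirect the machine
    return None

# Example human policy: accept only well-supported options, otherwise
# redirect the machine's effort toward logistics.
choice = centaur_loop(
    "defence",
    lambda opts: opts[0] if opts and opts[0].machine_score > 0.75 else "logistics",
)
print(choice)
```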

The incorporation of AI into the decision-making process of military organisations is a definite change from previous technologies and is likely to alter the character of war. But does it actually change the nature of war?

As long as humans remain in control and retain the choice of whether or not to apply the machine’s proposals, it does not. According to Kenneth Payne, the danger of AI, whether employed in a tactical weapons system or a strategic-scenario planner, “lies primarily in the gap between how the AI solves a problem framed by humans, and how those humans would solve it if they possessed the AI’s speed, precision and brainpower.”[xvii] As long as the strategist is conscious of this gap and merely uses the AI as a tool for suggestions and a source of inspiration, the machine cannot be alienating. Rather, it enables the strategist to focus on the heart of the problem and facilitates his train of thought. Finally, human biases are always present in the process, as humans are the ones designing the AI and feeding it their own perception of the strategic problem at hand. Thus, AI is never completely bias-free, nor is it ever totally separated from the human touch.
Will AI make war obsolete?

In ‘The Causes of War’, Geoffrey Blainey argues that war always breaks out because of miscalculations by one or both sides about their respective power and will, and that without these miscalculations, war would not break out.[xviii] What, then, does the involvement of AI in war imply for this assumption? As AI becomes more and more elaborate, and closer to general AI, miscalculations should become fewer. Hypothesising that war will cease to be an attractive option in the eyes of an AI thus sounds defensible. In the words of the supercomputer Joshua in the film WarGames:

“A strange game. The only winning move is not to play.”

As we rely more and more on AI, war would then be pushed outside the human realm, and what was once the realm of passions would become something else. He who lived by the sword would no longer die by the sword.

The danger of this prospect is perhaps best illustrated by ‘A Taste of Armageddon’, a Star Trek episode in which two planets, having concluded that killing is an unavoidable feature of human nature, decide to wage an eternal war through computer simulation: citizens designated as casualties in the simulation are killed, but all other physical violence is avoided, sparing both sides a real war. At the same time, however, the simulation immunised both societies against the horrors of war, so they see no reason to end it:

“Death, Destruction, Disease, Horror. That’s what war is all about. That makes it a thing to be avoided. You made it neat and painless. So neat and painless, you’ve had no reason to stop it. And you’ve had it for 500 years.”

Ultimately, technology is an expression of ourselves. It can be misused, and we can be driven by it just as we can be driven by our impulses and passions. But we always have a choice: ethics remain in our control. We cannot outsource problems of ethics, politics, and decision-making to AIs. AI cannot conceive what dissent is, even though dissent is the first act of war. The morale factor and the will to resist, the most crucial factors in war, are things AI cannot compute. All these features are essential to the phenomenon of war.

Indeed, the deep nature of war lies within the heart, as the Purple Heart medal awarded to American soldiers killed or wounded in action reminds us. This is also why war is an art and not a science. AI is not a pill of Murti-Bing; it cannot be a cure for independent thought.[xix] Ultimately, human dilemmas and the most critical choices will always remain in our hands, for war is mostly a matter of conscience. And the human element in combat shall remain essential.[xx]
Conclusion

Our humanity is often defined by our capacity for intelligence and creativity. Yet the rise of AI seems to challenge this assumption. Far from changing the nature of war, however, it is the contrast AI provides with humans that shows what this nature is all about: war is first and foremost the realm of passions. For this reason, as AIs become working partners, in warfare as elsewhere, the role of psychology should keep rising in importance. Questioning the moral and philosophical grounds of phenomena as fundamental as dissent matters increasingly, for AI will never be able to resolve these kinds of questions on its own. Hence, a deeper inquiry into this singular side of warfare would probably be beneficial if we are to become better strategic thinkers.
