
15 May 2023

Will Artificial General Intelligence Change the Nature of War?


Introduction

The idea of the immutability of the nature of war, as formulated by Clausewitz, is an article of faith that is constantly put to the test. The latest development with the potential to change the nature of war is Artificial Intelligence (AI). In a recent article published in this magazine, Alloui-Cros argued that the nature of war will not change.[i] He based this conclusion on three points: AI is just a tool that compresses timeframes but cannot make complex decisions; AI carries human biases and is designed to solve human problems; and war is a human activity whose course we will always be able to choose. Judged against the AIs currently available, he is probably correct: AI will not change what war is, nor break the trinity of passion, chance and policy that defines its nature. His conclusions align with those of other scholars who have discussed how military revolutions changed war. For example, Gray concluded that ‘some confused theorists would have us believe that war can change its nature’.[ii] Echevarria investigated the relation between the Revolution in Military Affairs (RMA), globalisation, and the nature of war and concluded that, although war is changing, the Clausewitzian framework remains ‘more suitable for understanding the nature of war in today’s global environment than any of the alternatives’.[iii]

On one hand, Alloui-Cros’ article has merit: it recognises that Clausewitz’s theory of war is still the point of reference for any such discussion, and it updates for AI the conclusions past scholars drew about the effects of technological revolutions on the nature of war.

On the other hand, he did not consider whether an AI with human-like capabilities, a so-called Artificial General Intelligence (AGI), whose abilities might far surpass human comprehension, could falsify this theory. Vinge called such an AI a ‘singularity’, borrowing a mathematical term for a point where a function degenerates and becomes qualitatively different from what came before. Vinge concluded that ‘it is a point where our models must be discarded and a new reality rules’.[iv] An AGI that far surpasses human capabilities is thus called a singularity because, once it appears, the past will no longer be a guide for forecasting or understanding the future. Some authors have portrayed this possibility as the end of the world.[v] The implicit conclusion is that it is not worth studying what comes after, because the AGI singularity will annihilate us. This position is disputable: if we have no way of knowing what this new reality will be like, then it is equally impossible, and equally useless, to conclude that the singularity will destroy rather than save us. Furthermore, as Vinge argued in his seminal paper, as time passes we should see the symptoms of the singularity’s advent.[vi] Hence it is worth studying how the nature of war might be altered by this new, evolving reality. Alloui-Cros answered the question on AI and the nature of war for the reality we know. The purpose of this article is to add to this discussion by speculating about what might happen to the nature of war as we approach the AGI singularity.

This essay is divided into three parts. Firstly, it will present the two conditions needed for an AGI to become a singularity: super-intelligence and consciousness. Secondly, it will ask whether AI super-intelligence and consciousness could change Clausewitz’s definition of war. Thirdly, once it is established that war is still organised violence for political aims, it will describe how AI super-intelligence and consciousness might influence Clausewitz’s trinity of violence, chance, and politics. The conclusion is that AI super-intelligence and consciousness have the potential to change the nature of war.
What is Artificial Intelligence?

AI researcher Micah Clark wrote that on ‘a very personal and philosophical level, AI has been about building persons, is about “personhood”’.[vii] Current AIs are far from achieving personhood and are better understood as highly optimised algorithms that solve narrow tasks but transfer poorly to new ones.[viii] Researchers even disagree about whether a synthetic, conscious intelligence capable of performing humanly relevant complex cognitive tasks will ever emerge and eventually surpass human capabilities.[ix] Nonetheless, super-intelligence and consciousness are two steps that, if ever reached, could change war and its nature.
Super-intelligence

There is no consensus on the essence of human intelligence, and even less so on super-intelligence.[x] It is nonetheless possible to adopt a working definition like the one proposed by Bostrom: ‘any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest’ is super-intelligent.[xi] This could materialise as an intellect comparable to a human’s but orders of magnitude faster, as one vastly more intelligent, or as a combination of the two.[xii] Initially, it would be a ‘seed’ AI capable of building a slightly better version of itself through recursive self-improvement.[xiii] AI researchers think that, with sufficient skill at intelligence amplification, the system could develop new cognitive modules as needed, including empathy, strategic thought and political acumen.[xiv]
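As a minimal sketch of this recursive dynamic (an illustration added here, not a formula from Bostrom), let $I_n$ denote the system’s capability after the $n$-th round of self-modification:

$$I_{n+1} = I_n + g(I_n), \qquad g(I_n) > 0,$$

where the gain $g$ itself grows with capability. If each improvement enlarges the system’s ability to find the next improvement, the sequence accelerates instead of plateauing; this compounding is the informal picture behind an ‘intelligence explosion’.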

Social psychologists, however, have recognised that the mind, understood as something associated with a single organism, is only an approximation of intelligence. In reality, the mind is social, and it exists inside social and cultural systems.[xv] Artificial Life (ALife) research can give us insights into how machines could organise societies with rules for trading and fighting and act as a social intelligence. ALife envisions the possibility of a society of AIs that gives rise to a superior, collective intelligence.[xvi]

Consciousness

An AGI might develop consciousness as a tool to optimise its overall reward function, and that consciousness might have characteristics significantly different from those of humans.[xvii] Philosophers and researchers disagree on what consciousness is and on whether self-consciousness is necessary or just a particular sort of phenomenal consciousness.[xviii] In particular, an AGI’s lack of bodily experience and biological motivations would realise a clear Cartesian mind-body dualism, calling into question at its core the AI’s ability to distinguish itself from the rest of reality, to care about itself, and to express intentionality.[xix]

The evolution of AI is not completely predictable, but we can expect increasing intelligence, and some level of autonomy approaching consciousness, to develop. It is through these two concepts that we can explore AI’s impact on war.
Is it war?

Clausewitz’s definition of war

The first question to answer is whether war fought with and by AGIs is still war or a different type of interaction. In ‘On War’, Clausewitz introduces the concept (Begriff) of war as ‘an act of violence (Gewalt) to force an opponent to fulfil our will’.[xx]

This definition comprises three elements: a) violence, b) purpose, and c) the social element. For Clausewitz, the result of the application of violence is ‘bloodshed’,[xxi] and the reciprocal element of war gives violence an escalatory quality with no theoretical limit to its application.[xxii]

On the other hand, escalation is a potential outcome rather than a necessary one, because it should be determined by the rational decisions of human beings.[xxiii] Military aims (Ziel) are thus constrained and judged in relation to the political purpose of the war (Zweck), and are only one component of the overall means (Mittel) available.[xxiv]

War is a relation between communities willing to resist and realise their political aims. It is a function of ‘coalitionary aggression’ and must happen between organised groups with a shared understanding of reality.[xxv]

a. Violence and AGIs

Handel highlights that, for Clausewitz, victory without violence is an aberration in the history of warfare.[xxvi] In theory, it can be achieved by two methods: through manoeuvre,[xxvii] or as ‘war by algebra’, a clash resolved by comparing figures of each other’s strengths.[xxviii] The Prussian general believed the first to be ineffective and the second to be impossible because of passion. By contrast, an AI commander might act as a perfectly rational entity and realise the ‘war by algebra’. However, several combinations of this situation are worth mentioning. If the AGI is under human control, its evaluation might be overruled by a passionate human commander. Similarly, given the reciprocal nature of war, if the opponent is a human agent, the AGI might be forced to use violence in reaction to non-rational decisions. Conversely, if it faces another purely rational entity, or if Huntington’s concept of civil-military relations remains valid even when an AGI is in charge of military operations, then an AGI commander might calculate that a battle or a war should not occur. Paradoxically, AGI commanders might agree that the most efficient way to resolve a battle is to calculate the likely outcome and destroy their own resources based on this shared conclusion.[xxix] They would keep valid the ‘dominance of the destructive principle’,[xxx] but would morph war, making explicit that it is an act of self-violence.
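The logic of such a settlement can be rendered as a toy expected-value comparison (an illustration added here; the symbols are hypothetical and not drawn from the cited sources). Suppose both commanders share an estimate $p$ of side A’s probability of victory over a stake of value $V$, and that fighting destroys resources worth $C_A$ and $C_B$ respectively. Then

$$pV - C_A < pV \qquad \text{and} \qquad (1-p)V - C_B < (1-p)V,$$

so whenever battle has any cost, both perfectly rational parties prefer to settle on the calculated outcome rather than pay to discover it: precisely the ‘war by algebra’ that Clausewitz thought passion made impossible.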

b. Purpose and AGIs

There must be a rational purpose for a conscious, and thus intentional, AGI to resort to war and violence or self-violence. If the AI does not have a freely chosen purpose and acts violently, if it goes ‘rogue’, then it is not war: it is an unnatural disaster. At the same time, it is unclear what a rational purpose would be for an AGI. Humans have biological motivations and emotions that connect these needs to our behaviours.[xxxi] It is unclear whether an AI would have motivations at all, or whether some non-human motivations would emerge during its evolution. Minsky suggested that free will develops from a ‘strong primitive defense mechanism’ to resist or deny compulsion.[xxxii] If this is true, we can at least assume that a conscious AI will try to defend itself. Unfortunately, this does not clarify whether an AGI would understand human motivations, or how much value it would give itself in relation to the rest of reality.

c. Social element and AGIs

An additional element to consider is that humans and AGIs might perceive differently what constitutes a violent act and how severe it is. Moreover, as humans, we might not be able to understand the thought processes of a super-intelligent being. This mutual incomprehension of aims and means undermines the definition of war as a social institution: we do not wage wars on apes or cats, and similarly, AGIs would not have wars with us.[xxxiii] Interestingly, if AGIs develop their own society with norms and shared understandings, as ALife suggests, they could potentially wage AGI social wars for AGI social motivations.

Overall, AGIs might not be interested in human wars unless they perceive them as threats. We will likely need a new word for these new social interactions. At the same time, war between humans with AGI assistance is impossible to rule out, and it is thus essential to explore how its nature might change.
Does it change the nature of war?

What is the nature of war?

The nature of war is distilled into what Clausewitz called the ‘wondrous trinity’.[xxxiv] Its elements are a) violence, hatred, and enmity, b) the play of chance and probability, and c) the subordination of war to policy and reason. Clausewitz identified two types of hostility: hostile feelings, or animosity, and hostile intentions. Hostile intentions are essentially political in nature, are necessary for war to occur, and can exist without hostile feelings.[xxxv] The latter vary in intensity, and war would be an algebraic exercise in their absence.[xxxvi]

Clausewitz states that war is the realm of probabilities. The unfavourable cases are caused by friction: moral and physical depletion (danger and exertion), and lack of knowledge and bad luck (uncertainty and chance).[xxxvii] Estimating the impact of these factors is a matter of judgement and approximation, because the extremely high number of cases makes mathematical calculation impossible.[xxxviii] Humans’ limited cognitive capabilities force the commander to make ‘good enough’ decisions.[xxxix]

Clausewitz is adamant that war has a rational component and is not ‘something autonomous but always […] an instrument of policy’.[xl] It is the job of the statesman and the commander to establish ‘the kind of war on which they are embarking; neither mistaking it for nor trying to turn it into something that is alien to its nature’.[xli] They should do this unclouded by hostile feelings and after having correctly judged the probabilities.

a. Hostility and AGIs

Superficially, a perfectly rational entity would not be influenced by feelings like hostility. As discussed, it is not clear whether even conscious AGIs would have a purpose other than self-defence. Nonetheless, we can imagine an AGI that sees itself as so precious that it perceives any human activity as hostile. AGIs might thus exist in a state of constant AI-fear, defined here as a hyper-rational passion very different from our biologically driven fear, and develop both hostile feelings and hostile intent. A ‘dehumanised perception’ may facilitate violence, brutality, and even extermination, carried out in full awareness of what is being done.[xlii]

b. Chance and AGIs

A super-intelligence explosion would eventually approach perfect knowledge and calculation asymptotically, effectively realising the so-called ‘Laplace’s Demon’.[xliii] In theory, this entity would suffer almost no friction: it would immediately adjust to events and be relentless in its effort. This is the perfect realisation of war by algebra, and it is a vision incompatible with trinitarian war. In practice, perfect knowledge is impossible because of nonlinear dynamics: mismatches between the representation of phenomena and their actuality can never be eliminated.[xliv] Nonetheless, an AGI would suffer far less friction than humans do.
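A standard illustration from chaos theory (added here; it is not in the cited sources) makes this limit concrete. In a nonlinear system with largest Lyapunov exponent $\lambda > 0$, an initial measurement error $\delta_0$ grows roughly as

$$\delta(t) \approx \delta_0 e^{\lambda t}, \qquad t_{\text{horizon}} \approx \frac{1}{\lambda}\ln\frac{\Delta}{\delta_0},$$

so the horizon within which predictions stay inside a tolerance $\Delta$ grows only logarithmically as measurement improves: a thousand-fold gain in precision buys a fixed additive increment of foresight, not a thousand-fold longer view. Even a near-Demon remains bounded.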

As Allen argued, when such systems are under human control, our fiat would be only a constraint and a weakness, and the centre of gravity (Schwerpunkt) would become the speed of action and the effect itself.[xlv] War with almost perfect knowledge would no longer be the realm of the human military genius and, as Van Creveld concluded, ‘fighting does not make sense since it can neither serve as a test nor be experienced as fun’.[xlvi]

c. Policy and AGIs

The acceleration of almost frictionless military activities raises the issue of policy control over them. We assume that an aware and intentional AGI is always in control of its means and can mediate responses and escalations. The problem arises when humans gain access to the power of a super-intelligent but non-conscious AI. If you know that the enemy will attack you relentlessly, you must be ready to defend yourself relentlessly. This might translate into nothing more than a mindless acceleration of escalation and violence. Non-conscious AIs can be programmed to act within policy limits, but this still implies a diminished role for policy once the conflict has started.

Ultimately, investigating what could happen to the nature of war as we approach AI super-intelligence and consciousness shows that there are extreme cases in which one or two elements of the trinity might collapse and become irrelevant. Unexpectedly, only passion might remain a constant element.
Conclusions

Alloui-Cros’ article argued persuasively that even narrow AI will not change the validity of Clausewitz’s theory. This article speculates that a super-intelligent and conscious AGI might. Interaction and conflict with and between super-intelligent, conscious AGIs have the potential to be a novel social interaction, with a Begriff different from that of purely ‘human’ wars. Following this logic, AGIs would not change the nature of war; rather, an ‘AGI-war’ would have its own, different nature. Nevertheless, ‘human’ war is unlikely to disappear, and the participation of an AGI nearing super-intelligence and consciousness has the potential to change its nature.

Brodie suggested that Kahn’s ‘On Thermonuclear War’ ‘usefully supplements Clausewitz but […] he does not in any way help to supplant him’.[xlvii] It is possible that, if an AGI emerges, and in anticipation of its super-intelligence and consciousness, we might need a further expansion of Clausewitz’s theory: an ‘On AGI-War’.
