24 February 2021

The terrifying development of AI warfare


The Chauvet-Pont-d’Arc Cave in France contains some of the earliest known Palaeolithic cave paintings, including images of lions, bears, and hyenas. Thought to be among the earliest expressions of human fear, they remind us how hard it is to grasp today just how frightening it once was to live alongside creatures that viewed us as prey.

This primal fear is buried deep within us. It explains our fascination with rare stories of humans being eaten by sharks, crocodiles and big cats. The fear isn’t just about our mortality, but about the thought of being prey to an unfeeling predator that doesn’t recognise us as individual, thinking humans.

In science fiction, this deep-rooted fear has been widely expressed through the portrayal of artificial intelligence (AI). Whether it’s AI concluding we are no longer required and launching nuclear Armageddon, as Skynet does in the Terminator films, or killer robots driven by a rogue algorithm hunting us down like the cave lions of our past, as in I, Robot, we have a fascination with, and fear of, being outwitted and hunted by technology.

But, despite this unease, we continue to use this technology – the latest case being in the field of autonomous weapon systems (AWS). These include micro-drones that attack in swarms, drones the size of small planes carrying Hellfire missiles, unmanned armoured vehicles and submarines, and even software capable of launching a cyber counter-attack, all of which can identify a target, decide to engage it and then potentially destroy it, without a human needing to intervene at any stage. The US National Security Commission on AI (NSCAI) concluded at the end of last month that it disagreed with a proposed global ban on the use or development of AWS. This poses serious issues, for it is unlikely that machines will ever be able to understand the professional codes, legal precepts, and religious and philosophical principles that allow soldiers to navigate decision making on the battlefield.

Militaries have used forms of AWS for centuries: in what’s likely the earliest documented example, the Battle of Beth Zechariah in 162 BC saw the Seleucid army send 30 wine-fuelled war elephants rampaging through the battlefield. AWS is controversial today, however, because advances in technology mean that the speed with which these weapons can act and the scale of the damage they can inflict are unlike anything seen before in history.

AWS’s proponents believe that introducing automation in war will reduce deaths. The vice chairman of the NSCAI, Robert Work, who served as deputy secretary of defence under Obama and Trump, said: ‘It is a moral imperative to at least pursue this hypothesis’. This is a noble aim, of course. And, having been on operations myself in Iraq and Afghanistan, and forced to make decisions in the fog of war, I understand the attraction of removing humans – with our bias, emotions and limited speed at processing information – from the equation.

But it’s not that simple.

In the first instance, there are practical problems with removing humans from war. For starters, why should we trust our fates to machines? Can we ever truly ensure that they can’t be hacked and used against us? And if we could, how do we turn AWS off once a war has ended?

If a soldier goes rogue, there are consequences, but what if, due to one error in millions of lines of code, an army of AWS goes rogue? It is not far-fetched to think that these errors could escalate an evolving crisis into a war. In 2010, a trading algorithm contributed to a ‘flash crash’ in the stock market, causing a temporary loss of almost a trillion dollars. Subsequently, regulators updated circuit breakers to halt trading when prices drop too quickly, but how do you regulate against a flash war?
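To make the contrast concrete – and purely as an illustrative sketch, with hypothetical thresholds rather than the rules of any real exchange or regulator – the core logic of such a circuit breaker can be expressed in a few lines:

```python
# Illustrative sketch only: a toy market circuit breaker.
# The 7% threshold is a hypothetical value chosen for illustration,
# not the rule of any particular exchange or regulator.

def should_halt_trading(reference_price: float, current_price: float,
                        drop_threshold: float = 0.07) -> bool:
    """Return True if the price has fallen far enough relative to the
    reference price that trading should be paused."""
    drop = (reference_price - current_price) / reference_price
    return drop >= drop_threshold

# A fall from 100 to 92 is an 8% drop, so the breaker trips.
print(should_halt_trading(100.0, 92.0))  # True: halt trading
print(should_halt_trading(100.0, 98.0))  # False: trading continues
```

The point is that markets have an automatic pause built in; there is no equivalent mechanism for pausing autonomous weapons once they begin exchanging fire at machine speed.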

And what if it is not one line of code but one commander that goes wrong? A commander prepared to do anything to achieve victory, in charge of an army of AWS, could change its settings to set new and dangerous norms far more quickly, and with far less resistance, than the ethics of an entire human army could ever be changed.

Even if we solve this problem, and somehow make AWS secure, we cannot currently programme a machine to conform to the international laws that govern conflict outside of the most predictable and least populated environments, such as underwater or in space. There are rules that dictate how troops distinguish targets, define what force and what risk of collateral damage are acceptable in an attack, and determine whether an offensive is necessary to achieve the overall mission.

It’s not a case of simply recognising who’s holding a gun. In his book Army of None: Autonomous Weapons and the Future of War, former Army Ranger Paul Scharre describes spotting a young girl with a herd of goats circling his team’s position in Afghanistan. They realised she was radioing their location to the Taliban. The rules of engagement allowed him to shoot. For Scharre this would have been wrong, which is why he didn’t, and his team and the girl survived to fight another day. He believes, however, that an AWS would likely have engaged.

The NSCAI report recognises that AWS would make mistakes – just like soldiers. But the report states that a human should be held accountable for the development and use of any AWS, making it no different from any other weapon system. It is, however, very different. An AWS must be pre-programmed by makers who cannot predict every situation it may face. Systems which subsequently learn for themselves from their environments will likely act in ways their designers have no way of foreseeing when faced with unanticipated situations. While it is, of course, impossible to predict with absolute certainty how any one individual will react to the rigours of war, at the unit level soldiers adapt and behave in line with expectations. They may make mistakes and their conventional weapons may malfunction, but there will be a level of awareness of the error. And any errors made simply do not have the same potential to escalate at the unparalleled speed and scale of AWS.

It is also possible that AWS could develop their own ‘ethics’. With no identifiable operator, an AWS could, in theory, be launched by a low-ranking technician before it enters the combat zone and be autonomous from then on. And with an AWS acting in ways its designers cannot foresee, will commanders avoid prosecution for potential war crimes? After all, we know AI machines have learnt things their creators never intended – some have even developed their own language. Take, for instance, Tay, an AI chatbot released by Microsoft via Twitter in 2016 with the aim of learning from its interactions with human users. The bot began to post offensive racist and sexist tweets, causing Microsoft to shut down the service after only 16 hours. According to Microsoft, this was caused by trolls who ‘attacked’ the service, as the bot made its replies based on its interactions with people on Twitter. But, in essence, it did what it was meant to do: it learnt from its interactions, just not in a way its designers foresaw.

The report says that we ‘must consider factors such as the uncertainty associated with the system’s behaviour and potential outcomes, the magnitude of the threat, and the time available for action.’ This opens the door to unspecified levels of uncertainty and euphemistically termed ‘potential outcomes’, so long as the ends justify the means.

The lack of responsibility is also a concern. If an AWS kills the wrong people or uses disproportionate force, commanders will likely be able to claim that it did not behave as expected. In such cases, it is unlikely that there will be individual moral accountability outside clear cases of gross negligence or deliberate misuse. We should want those entrusted with lethal weapons to be held to the highest standards. The responsibility this entails is part of the safety catch of those weapons.

Lastly, and crucially, there is the question of who we let make life and death decisions. Would we want an algorithm, rather than a doctor, to decide to turn off a Covid-19 patient’s ventilator? We should want whoever pulls the trigger to recognise the value of the life they are taking. One weapon that doesn’t is the landmine, and 164 states are currently signed up to the Ottawa Convention, which aims to put an end to its use.

With recent developments in the US, a ban on AWS looks unlikely. In the report, the Commission emphasises that the United States’ strategic competitors are already deploying such weapons – and without ethical frameworks for responsible design or use. The imperative to continue developing AWS seems driven by fear of battlefield disadvantage against an enemy who will not play by our rules. The report recommends that the US continue to develop AWS while encouraging others to deploy them only in a responsible manner. But no matter what we tell ourselves in peacetime, once we have these weapons, there is no going back.

Horrors like the Holocaust are marked by machine-like processes and the arbitrariness of life and death decisions. In the absence of human judgement, how can we ensure that killing is not arbitrary? When one human attempts to dehumanise another, that person is stained in a way a machine cannot be.

A soldier’s job is to make judgements; we move through multiple layers of interpretation and context to do so. Governments should focus more on ethical education and use AI to help, but not supersede, human decision making. Yes, soldiers will remain imperfect, and lives will continue to be lost, but those who make those decisions will carry the burden of them.
