
2 June 2020

Sci-Fi Eye: Natural selection on the unmanned battlefield


Forget the “Terminator scenario”. The future of AI-based warfare could be far weirder than that, writes Gareth L Powell

Two articles in last month’s issue caught my eye. The first was about the Royal Navy’s decision to test extra-large autonomous submarines with a view to incorporating them into its fleet, and the second concerned the MOD’s acquisition of five unmanned ground vehicles, such as QinetiQ’s Titan UGV, for battlefield resupply missions.

Now, as I’m a science fiction author, you might be expecting me to leap straight to the conclusion that these automated vehicles will somehow rise up against us and destroy the world in a Terminator-style apocalypse. And while that may be a fun scenario for a Hollywood blockbuster, frankly, any species dumb enough to place its entire offensive capability in the charge of a single artificial intelligence deserves everything it gets.

No, in this month’s column, I want to look at some of the stranger implications of this technology.

To start with, let me state the obvious: war produces casualties, and if we’re deploying autonomous vehicles into active theatres, they are going to get damaged. It’s easy to imagine automated ambulances ferrying human casualties away from the front line, but what about unmanned tow trucks and drones equipped to repair autonomous vehicles? Machines repairing other machines without human intervention.


If those machines can be repaired on the battlefield, perhaps they can also be improved and modified in situ to cope with unexpected changes in terrain, mission requirements, or threat level? Throw in some simple learning algorithms for the tow trucks, and that sounds like something I could write a story about: a fleet of war machines that are turned loose and adapt to the needs of the battle as it happens, undergoing a rapid Darwinian machine evolution dictated by the circumstances in which they are operating.
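Purely for fun, here’s a minimal sketch in Python of the kind of selection loop I’m imagining. Everything in it is an assumption invented for this column (the traits, the fitness scoring, the shifting threat mix), not a description of any real military system.

```python
import random
from dataclasses import dataclass

@dataclass
class VehicleConfig:
    """One hypothetical vehicle 'genome': three traits, each scaled 0..1."""
    armour: float   # protection against IEDs and small-arms fire
    speed: float    # ability to reposition and keep up with the battle
    stealth: float  # resistance to detection and targeting

def battlefield_fitness(v, ied_threat, detection_threat):
    """Score a configuration against an assumed threat mix.
    Armour trades off against speed: heavier vehicles move more slowly."""
    mobility = v.speed * (1.0 - 0.5 * v.armour)
    survivability = v.armour * ied_threat + v.stealth * detection_threat
    return mobility + survivability

def mutate(v, rate=0.1):
    """A field 'repair' that randomly nudges each trait, clamped to 0..1."""
    def jitter(x):
        return min(1.0, max(0.0, x + random.gauss(0.0, rate)))
    return VehicleConfig(jitter(v.armour), jitter(v.speed), jitter(v.stealth))

def evolve(generations=50, pop_size=20):
    """Cull the weakest half each generation; rebuild from mutated survivors."""
    population = [VehicleConfig(random.random(), random.random(), random.random())
                  for _ in range(pop_size)]
    for gen in range(generations):
        # The threat environment shifts over the campaign: IEDs become more common.
        ied_threat = 0.5 + 0.5 * (gen / generations)
        detection_threat = 0.8
        population.sort(key=lambda v: battlefield_fitness(v, ied_threat, detection_threat),
                        reverse=True)
        survivors = population[:pop_size // 2]   # the 'casualties' are culled
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return population[0]

print(evolve())
```

The point of the toy model is that no engineer ever decides armour should win out; the rising IED threat in the fitness function does the choosing. That, in miniature, is the Darwinian process I’m talking about.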

What might such machines look like by the end of a protracted conflict? If the other side also uses similar technology, would the evolution be accelerated as each side became involved in a race to outclass the other? A simple unmanned supply truck might evolve into a heavily armoured stealth vehicle with fat mesh tyres that allow it to traverse any kind of rough terrain, while being almost immune to IEDs and other hazards.

Earlier, I mentioned how unwise it would be to place your entire military capability under the command of a single artificial intelligence. However, the ‘smarter’ an unmanned vehicle is, the more chance it has to survive, so an ongoing upgrade of its onboard processing power wouldn’t be unreasonable. But how smart do you want a drone to be? At what point will it assess its situation and realise its best chance of survival is to refuse to follow orders or defect to the enemy?

Assuming we somehow manage to avoid insurrection in the ranks, we face another potential problem when machines start upgrading machines on an ad hoc basis: sooner or later, they may become too complex for us to understand. We’ll lose the ability to repair our own creations as they diverge into a multitude of sub-species, each with its own specialisms and evolutionary history. What started out as a tank might come back to us as a swarm of drones or a slick of nanotechnological goop.

At that point, even if they never evolve the intelligence to become disloyal, could we really claim to be in control of them? If we can’t understand how they work, can we trust them to make the life-or-death decisions a battlefield demands? And if an unmanned vehicle calculated that its chances of mission success would be improved by neutralising civilian targets, would we be able to convince it otherwise?

Some of you may remember the talking bomb in the movie Dark Star, which discovers philosophy, decides it is God, and, with the words “Let there be light,” detonates while still attached to the ship that should have dropped it. That is something we definitely want to avoid.

We also want to avoid the situation described in Philip K. Dick’s story ‘Second Variety’, where the few remaining human soldiers on both sides of a conflict discover that their automated weapons have gained sentience and joined forces, and are now lying to their former masters about the progress of a war that’s no longer happening.

Leon Trotsky claimed that, “War is the locomotive of history.” If our unmanned vehicles go on to evolve beyond us, then perhaps war will also prove to be the locomotive of the future.
