18 July 2021

The myth of ethical AI in war

STEPHEN BRYEN

US Secretary of Defense Lloyd Austin wants the Defense Department to do artificial intelligence (AI) “the right way,” even if our main competitor, China, does not.

Speaking to the National Security Commission on Artificial Intelligence, Austin said: “… our use of AI must reinforce our democratic values, protect our rights, ensure our safety and defend our privacy.”

This is close to the formula assigned to the fictional “Officer Murphy” in the 1987 movie RoboCop. Murphy, a murdered police officer, was turned into RoboCop, a cyborg with his human memory (mostly) erased.

He was given four directives, the first three being to serve the public trust, protect the innocent and uphold the law. He was also given a classified order, Directive 4, which blocked RoboCop from acting against senior officers of Omni Consumer Products (OCP), the company that built him.

This was an attempt to install company ethics in a cyborg.

It is rather like attempting to install American military ethics in an AI-enhanced weapon. Secretary Austin appears to believe in Officer Murphy, but the ethics of warfare, as practiced by the United States, our allies or our adversaries, reside in soldiers and commanders.

If we cannot get this right today without AI, we will not get it right with AI-enhanced weapons.

Defense ministries around the world are eager to infuse military systems with AI. Building on the success of surveillance and armed drones and their growing importance in combat, the US, Israel, Russia and China are all pursuing autonomous war-fighting systems.

At their simplest, these are systems capable of carrying out a task without a person in the loop. A drone can be sent to destroy a target without any communication or control system outside the weapon itself.

[Photo caption: The immediate fiery aftermath of the missile and drone attacks at Abqaiq on September 15, 2019, in Saudi Arabia. Photo: Screengrab]

When the Iranians sent cruise missiles and drones against Saudi Arabian oil installations in September 2019, it appeared to some experts that the drones that hit the Abqaiq oil facility were operating autonomously. Autonomy, however, only lets a weapon do the job it is programmed to do. With AI, the weapon itself makes decisions.

In some cases, that seems fairly straightforward.

During the Gaza conflict in May, Israel claims to have launched and operated a large swarm of drones managed by AI. According to recent reports, each drone covered a preselected surveillance area and could send target coordinates back to gun and mortar brigades so the selected targets could be attacked.

The technology is said to have been developed by the Israeli army’s Unit 8200, which specializes in signals intelligence and is roughly equivalent to the National Security Agency (NSA) in the United States. Israel has not provided detailed information about the drone swarms, but it stands to reason that each drone could not only map a specific area but also detect missile launch sites and other military activity and select them as targets.

Whether there was a person in the loop is not known, though the likelihood is that there was not.

The US is working on a number of autonomous vehicles ranging from land systems to surface and underwater naval vessels to autonomous refueling aircraft. The army, for example, is adding AI capability to land vehicles, including tanks that ultimately will be able to coordinate with surveillance drones and select the safest available roadways, predict where blockages may be and automatically take alternative action.

These systems build on civilian technology developed in Israel (for example, the Waze navigation app) and in the US (self-driving vehicles, popularized by companies such as Tesla).
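
The route-selection piece of that, at least, is ordinary computer science: it can be framed as a shortest-path search over a road network whose costs blend distance with drone-reported risk. The sketch below is purely illustrative, with an invented road graph, invented risk scores and an arbitrary risk weighting; it describes no actual army system.

```python
# A minimal, hypothetical sketch (not any actual military system): choosing the
# "safest available roadway" as a shortest-path problem on a road graph whose
# edge weights mix distance with a predicted blockage/ambush risk reported by
# surveillance drones. All node names, risks and weights here are invented.
import heapq

def safest_route(graph, start, goal, risk, risk_weight=10.0):
    """Dijkstra's algorithm over cost = distance + risk_weight * predicted risk."""
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, dist in graph.get(node, []):
            if nxt in visited:
                continue
            edge_cost = dist + risk_weight * risk.get((node, nxt), 0.0)
            heapq.heappush(frontier, (cost + edge_cost, nxt, path + [nxt]))
    return float("inf"), []

# Invented road network: node -> [(neighbor, distance in km)]
graph = {
    "base": [("junction", 4.0), ("wadi", 6.0)],
    "junction": [("bridge", 3.0)],
    "wadi": [("bridge", 5.0)],
    "bridge": [("objective", 2.0)],
}

# Invented drone-reported probability that a road segment is blocked or ambushed.
risk = {("junction", "bridge"): 0.8, ("wadi", "bridge"): 0.1}

cost, path = safest_route(graph, "base", "objective", risk)
print(path)  # routes around the high-risk segment: ['base', 'wadi', 'bridge', 'objective']
```

The planning is the easy part; the hard part, as the rest of the article argues, is deciding what the system is allowed to do once it gets there.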

Some of this is readily apparent in the targeting and killing of terrorists using drones and Hellfire missiles. The US has been at this for many years now, with considerable success, but it has occasionally hit the wrong target.

These are not AI systems today, but there is a straightforward path to “improving” them with AI so that the weapon carries out its instructions without external decision-making, with no one asking “Did we choose the right target?” or “Is there too great a danger of collateral damage?”

[Photo caption: AI-controlled soldiers might do a better job in close combat, but there are limitations. Photo: AFP / Carolco Pictures]

The real crunch comes with the future soldier. To be effective, future soldiers will have to eliminate perceived threats before those threats eliminate them. Such a soldier is not actually a cyborg, but his or her capabilities are approaching a cyborg’s.

There is no reason to believe that an AI-driven system would behave any better or worse than a purely human-operated one. Indeed, in a complex combat environment, AI might do a better job than stressed humans fighting for their lives.

AI can be programmed to obey some “ethical” rules, particularly when it comes to civilians in a combat environment, but commanders may find this programming interferes with the mission if built-in “ethical” limitations endanger warfighters.

For example, an AI system might lock a soldier’s gun to prevent a civilian from being killed, when in fact the civilian is a combatant or terrorist.
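
To see why such a rule cuts both ways, here is a toy sketch of the kind of interlock being described. Everything in it is hypothetical: the labels, confidence numbers and threshold are invented, and no real weapon exposes an interface like this. The point is only that the rule is no better than the classification feeding it.

```python
# A hypothetical illustration of the trade-off described above (not a real
# weapon interface): a simple "ethical" interlock that refuses to release a
# weapon when its target classifier says "civilian" with enough confidence.
# The classifier outputs and threshold below are invented numbers.
from dataclasses import dataclass

@dataclass
class TargetAssessment:
    label: str         # classifier's best guess: "civilian" or "combatant"
    confidence: float  # classifier's confidence in that label, 0.0 - 1.0

def weapon_release_permitted(assessment: TargetAssessment,
                             civilian_threshold: float = 0.6) -> bool:
    """Lock the weapon whenever the classifier calls the target a civilian
    with confidence above the threshold, regardless of ground truth."""
    if assessment.label == "civilian" and assessment.confidence >= civilian_threshold:
        return False  # interlock engages: no release
    return True

# A combatant posing as a civilian is misclassified with high confidence,
# so the interlock locks the soldier's weapon at exactly the wrong moment.
misread_combatant = TargetAssessment(label="civilian", confidence=0.85)
print(weapon_release_permitted(misread_combatant))  # False

# Conversely, a low-confidence "civilian" call lets the weapon fire, which is
# the opposite failure mode.
uncertain_case = TargetAssessment(label="civilian", confidence=0.4)
print(weapon_release_permitted(uncertain_case))  # True
```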

In urban terror events, it is virtually impossible to know who is, or is not, a civilian. Israel, for example, has been struggling, with or without AI, to minimize civilian casualties when terrorists launch rockets from mosques, schools and apartment buildings.

AI is not going to solve this problem by itself or even in combination with human operators.

Our adversaries are not in the least worried about constraints on the use of AI. While it is practically impossible to make AI “ethical,” it is possible and absolutely essential to press our military and civilian leaders to act ethically and unleash weapons only when justified and essential for our security.

In short, while ethical AI may be a myth, ethical leaders are not in the least mythological.
