
Naval Drones ‘Swarm,’ But Who Pulls The Trigger?

October 05, 2014


The Navy’s research arm is justifiably proud of its recent experiment with “swarming” drone boats, whose results (with video) were officially released today. But the very thing that’s most impressive about the swarmboats — their ability to act autonomously with minimal human guidance — raises crucial questions about when we can trust a robot to pull the trigger in combat.

Those are questions the Office of Naval Research (ONR) candidly told me it has not yet addressed. With the chief of ONR expecting a full-scale operational demonstration within a year, however, someone had better get on them soon. And according to at least one expert I spoke to, the answers may be surprisingly reassuring.

The central issue? Rear Adm. Matthew Klunder aims to break the current Predator paradigm in which each unmanned vehicle requires constant supervision by at least one human being, if not several. “The excitement about this technology is it is autonomous,” not just remote-controlled, Klunder told reporters: “We basically have one sailor overseeing the event.”

That lone human managed 13 unmanned surface vessels (USVs) at once — an unprecedented number — as they escorted a manned control ship through the James River in Virginia. They were simulating a Navy carrier or other “high value unit” transiting a dangerous strategic chokepoint like the Strait of Hormuz off Iran. When a “suspicious” craft approached, piloted by a human tester, the controller ordered the five smartest of the unmanned boats to stop it. (The other eight, less autonomous, kept escorting the mothership.)

In effect, the sailor told the swarmboats, “sic ‘em.”

The five robots switched from “escort” mode to “swarm” and plotted their own courses to intercept. As they moved, they constantly shared sensor data and planned routes, using software called CARACaS, derived from NASA’s Mars Rover program and originally developed to coordinate construction robots working far from Earth. With this shared awareness, the five swarmboats could not only avoid colliding with civilian traffic — there were other vessels on the river — but also arrive at the target as a single, coordinated unit and then block it from approaching any closer.
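
ONR has not published CARACaS internals, but the behavior reported here — one human command flips a boat from escort to swarm, after which it plans its own intercept against a contact picture all the boats share — can be sketched in miniature. Everything below (the class names, the simple lead-pursuit guidance rule) is an illustrative assumption, not the real software, and collision avoidance against river traffic is omitted:

```python
import math
from dataclasses import dataclass
from enum import Enum

class Mode(Enum):
    ESCORT = "escort"
    SWARM = "swarm"

@dataclass
class Track:
    x: float   # east position, meters
    y: float   # north position, meters
    vx: float  # east velocity, m/s
    vy: float  # north velocity, m/s

def intercept_heading(boat: Track, target: Track, speed: float) -> float:
    """Aim at where the target will be, not where it is: lead it by a
    rough time-to-go estimate (a classic pursuit-guidance shortcut)."""
    rng = math.hypot(target.x - boat.x, target.y - boat.y)
    t_go = rng / max(speed, 1e-6)        # rough seconds to close the range
    aim_x = target.x + target.vx * t_go  # predicted target position
    aim_y = target.y + target.vy * t_go
    return math.degrees(math.atan2(aim_x - boat.x, aim_y - boat.y)) % 360.0

class SwarmBoat:
    def __init__(self, track: Track, speed: float):
        self.track = track
        self.speed = speed
        self.mode = Mode.ESCORT

    def on_intercept_order(self) -> None:
        """The single human command: switch from escort to swarm."""
        self.mode = Mode.SWARM

    def plan(self, shared_picture: dict) -> float | None:
        """Compute a commanded heading from the contact picture every boat
        continuously broadcasts; None means hold escort station."""
        target = shared_picture.get("suspect")
        if self.mode is Mode.SWARM and target is not None:
            return intercept_heading(self.track, target, self.speed)
        return None
```

Note that the human issues one order and the rest is onboard computation — which is exactly what makes the later question of weapons release so pointed.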

And there things stopped, this time. Showing the swarming behavior worked was “the purpose of this demonstration,” Klunder said. “It was not to destroy the target. We certainly could have done that if it was needed.”

“We had flashing lights, we had blaring loudspeakers, we had high-powered microwaves on one of our vessels, and we also had .50 caliber machine guns,” Klunder said. In fact, rather than develop purpose-built unmanned vessels, ONR had simply fitted a few thousand dollars’ worth of circuitry to the existing small craft the fleet already uses for security. Now one sailor could control a flotilla of escorts, instead of putting a person at each helm and on each gun. That said, Klunder emphasized, “there is always a human in the loop [for] designation of the target and, if [necessary], the destruction of the target.”

But who’d actually pull the trigger? After some back-and-forth, ONR spokesman Peter Vietti gave me this statement: “Under this swarming demonstration with multiple USVs, ONR did not study the specifics of how the human-in-the-loop works for rules of engagement.”

ONR emphasized it took elaborate safety precautions in the James River experiment, even though no live weapons were involved. Multiple fail-safes would have stopped any boat dead in the water if it lost contact with its controllers — but that wouldn’t work in a war zone, where the enemy could deliberately jam such transmissions. Just like a human soldier, a battle-worthy robot must know how to follow rules of engagement (ROE) without constantly querying its superiors for guidance.
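
ONR has not described its fail-safe design, but the standard pattern is a communications watchdog that cuts propulsion when the control link goes silent. The sketch below is hypothetical — the class name and the five-second timeout are assumed values, not ONR’s:

```python
import time

class LinkWatchdog:
    """All-stop fail-safe: cut propulsion when the control link goes quiet.

    Hypothetical sketch of the kind of fail-safe ONR describes; the real
    design is not public, and TIMEOUT_S is an assumed value.
    """
    TIMEOUT_S = 5.0

    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._last_heartbeat = clock()

    def on_heartbeat(self) -> None:
        """Called each time a message arrives from the controller."""
        self._last_heartbeat = self._clock()

    def propulsion_enabled(self) -> bool:
        # Dead in the water once the link has been silent too long.
        return (self._clock() - self._last_heartbeat) < self.TIMEOUT_S
```

In a war zone a jammer would trip exactly this watchdog — which is the article’s point: combat-ready rules of engagement have to run onboard rather than depend on the link.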

[Photo: A Navy patrol boat converted to operate unmanned as part of an Office of Naval Research experiment in autonomous “swarms.”]

“Future demonstrations could include rules of engagement and what it will take for the Navy to engage adversaries,” Vietti told me in an email. “The most important point to remember is that there are and always will be humans in the loop when it comes to engaging the enemy. So while the swarming capability is autonomous, without humans on board, there is always a human in the loop when it comes to the actual engagement of an enemy, whether through non-lethal or lethal effects. Operational specifics beyond that are classified.”

There are some things, though, we can deduce. However swarmboats handle the use of force, lethal or otherwise, it will have to be different from today’s armed Predator drones. A Predator has a human pilot controlling every action, including a missile launch. Even more advanced drones like the (unarmed) Global Hawk/Triton family, which can fly themselves from one point to another, have human eyes on their control screens at all times.

But if a single human being is really going to control “10 or 20 or 30 of these unmanned surface vessels” at once, as Klunder envisions, then that human can’t possibly aim every weapon and pull every trigger for every shot. That’s especially true because the swarmboats won’t be launching a handful of guided missiles, as the Predator does, but firing machine guns, which can pour out hundreds of unguided bullets a minute. Hollywood heroes aside, the human brain has trouble aiming two guns at once.

The straightforward way to solve this problem is to have the human designate the target but let the robots autonomously aim and fire. It’s the same principle, albeit in a lethal application, as the James River experiment, where the operator selected the target and the swarmboats then autonomously planned and sailed courses to intercept it.
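
In software terms, that division of labor reduces to a simple gate: the machine may slew and fire on its own, but only against the one contact a human has explicitly designated. The sketch below is purely illustrative — GunStation, Contact, and designate are invented names, not any real Navy interface:

```python
from dataclasses import dataclass

@dataclass
class Contact:
    contact_id: str
    bearing_deg: float
    range_m: float

class GunStation:
    """Human-in-the-loop gate: the gun aims and fires autonomously, but
    only at the single contact a human operator has designated."""

    def __init__(self) -> None:
        self.designated_id: str | None = None  # set only by a human

    def designate(self, contact_id: str) -> None:
        """The human decision: which target (if any) may be engaged."""
        self.designated_id = contact_id

    def try_engage(self, contact: Contact) -> bool:
        """The machine's decision: how to engage the designated target."""
        if contact.contact_id != self.designated_id:
            return False  # the machine never self-selects a target
        self._slew_to(contact.bearing_deg)
        return True

    def _slew_to(self, bearing_deg: float) -> None:
        pass  # actuator interface omitted in this sketch
```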

In fact, future-warfare expert Paul Scharre told me, we already let some robots do this: We call them “guided missiles.” When a fighter pilot locks onto an enemy plane and fires, “the pilot’s not steering the missile,” he told me. That’s why the unnerving slang for a modern guided missile is “fire and forget.”

But that’s okay, said Scharre, the director of the Center for a New American Security’s Project 20YY, who advocates greater automation in warfare and is finishing a study called The Coming Swarm.

“When you look at the way automation is applied across a range of weapon systems… which are the decisions that really matter, that we want a person to make?” Scharre asked. “The decision to turn left or right to strike the target… does a person need to make it? Probably not.” The decision about which target to strike and which to spare, though, is where the human must be in total control.

Automation can even make lethal force safer to use, Scharre argued. If the software defaults to “make sure you don’t hit anything but the target,” the robot can keep track of where every friendly, neutral, or unknown contact is and automatically stop shooting when one gets too close to the line of fire. Humans, by contrast, tend to get tunnel vision under the stress of combat — sometimes literally — and focus on the greatest perceived threat while losing track of what might be next to or behind it. If you want to enforce an ironclad rule that you don’t shoot in certain directions no matter what, he said, “a machine will be infinitely better at that than a person every time.”
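
Scharre’s point can be made concrete with simple geometry: before each burst, check the commanded fire bearing against every tracked non-target contact and against any hard no-fire sectors (toward the mothership, for instance). The function below is an illustrative sketch; the five-degree safety cone is an assumed threshold, not a real fire-control parameter:

```python
import math

def angle_diff_deg(a: float, b: float) -> float:
    """Smallest absolute difference between two bearings, in degrees."""
    return abs((a - b + 180.0) % 360.0 - 180.0)

def clear_to_fire(fire_bearing: float,
                  contact_bearings: list[float],
                  no_fire_sectors: list[tuple[float, float]],
                  safety_cone_deg: float = 5.0) -> bool:
    """Hold fire if any non-target contact drifts into the line of fire,
    or if the bearing falls inside a hard no-fire sector."""
    for lo, hi in no_fire_sectors:
        # Sector test that tolerates wrap-around at 360 degrees.
        width = (hi - lo) % 360.0
        if (fire_bearing - lo) % 360.0 <= width:
            return False
    return all(angle_diff_deg(fire_bearing, b) > safety_cone_deg
               for b in contact_bearings)
```

Unlike a stressed human gunner, a check like this is evaluated on every contact, on every shot, with no tunnel vision — which is what Scharre means by a machine being “infinitely better at that than a person every time.”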

None of this lets you dispense with humans, Scharre emphasized. While machines are better at consistently following unambiguous rules (e.g. “don’t shoot towards X”), human brains are still far better at sorting out ambiguous, complex, and chaotic situations. One human may be able to control a whole swarm of robots but get confused by an incoming swarm of enemies, say Iranian Revolutionary Guard Corps attack boats, each coming from a different direction. Said Scharre, “if you’re in a situation where you had a simultaneous multi-axis attack from the enemy, how many people do you need to be able to understand the threat environment and make decisions about responding?”

“The good news is,” said Scharre, “if you’re automating things like driving the boat, you can allow humans to focus on the most important decisions, which are things like use of force.”
