9 September 2017

‘Killer Robots’ Can Make War Less Awful

By Jeremy Rabkin 

On Aug. 20, Tesla and SpaceX CEO Elon Musk and dozens of other tech leaders wrote an open letter sounding the alarm about “lethal autonomous weapons,” the combination of robotics and artificial intelligence that is likely to define the battlefield of the future. Such weapons, they wrote, “will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend,” and they could fall into the hands of terrorists and despots. The tech leaders urged the U.N. to pre-empt an arms race in these technologies by acting immediately, before “this Pandora’s box is opened.”

Mr. Musk has established himself in recent years as the world’s most visible and outspoken critic of developments in artificial intelligence, so his views on so-called killer robots are no surprise. But he and his allies are too quick to paint dire scenarios, and they fail to acknowledge the enormous potential of these weapons to defend the U.S. while saving lives and making war both less destructive and less likely.

In a 2012 directive, the U.S. Defense Department defined an autonomous weapons system as one that, “once activated, can select and engage targets without further intervention by a human operator.” Examples in current use by the U.S. include small, ultralight air and ground robots for conducting reconnaissance and surveillance on the battlefield and behind the lines, antimissile and counter-battery artillery, and advanced cruise missiles that select targets and evade defenses in real time. The Pentagon is developing autonomous aerial drones that can defeat enemy fighters and bomb targets; warships and submarines that can operate at sea for months without any crew; and small, fast robot tanks that can swarm a target on the ground.

Critics of these technologies suggest that they are as revolutionary—and terrifying—as nuclear weapons. But robotics and the computing revolution will have the opposite effect: rather than applying monstrous, indiscriminate force, they will bring more precision and less destruction to the battlefield. The new generation of weapons will share many of the same qualities that have made the remote-controlled Predator and Reaper drones so effective at finding and destroying specific targets.

The weapons are cost-effective too. Not only can the U.S. Air Force buy 20 Predators for roughly the cost of a single F-35 fighter, it can also operate them at a far lower cost and keep them on station for much longer. More important, robotic warriors—whether remote-controlled or autonomous—can replace humans in many combat situations in the years ahead, not just in the air but on the land and sea as well. Fewer American military personnel will have to put their lives on the line in risky missions.

Critics are concerned about taking human beings out of the loop of decision-making in combat. But direct human involvement doesn’t necessarily make warfare safer, more humane or less incendiary. Human soldiers grow fatigued and become emotionally involved in conflict, which can result in errors of judgment and the excessive use of force.

Deploying robot forces might even restrain countries from going to war. Historically, the U.S. has deployed small contingents of military personnel to global hot spots to serve as “tripwires”—initial sacrifices that, in the event of a sudden attack, would lead to reinforcements and full military engagement. If machines were on the front lines for these initial encounters, however, they could provide space—politically and emotionally—for calmer deliberation and the negotiated settlement of disputes.

[Photo: The MK 15 Phalanx Close-In Weapons System during a live-fire exercise aboard the USS Curtis Wilbur in the Philippine Sea, Sept. 16, 2016. U.S. Navy]

Critics also fear that autonomous weapons will lower moral and political accountability in warfare. They imagine a world in which killer robots somehow fire themselves while presidents and generals avoid responsibility. But even autonomous weapons have to be given targets, missions and operating procedures, and these instructions will come from people. Human beings will still be responsible.

Could things go terribly wrong? Could we end up in the world of “RoboCop” or “The Terminator,” with deadly devices on a rampage?

It is impossible to rule out such dystopian scenarios, but we should have more confidence in our ability to develop autonomous weapons within the traditional legal and political safeguards. We regularly hold manufacturers, sellers and operators liable for automobiles, airplanes and appliances that malfunction. Mr. Musk, for example, knows from experience that mishaps and problems with self-driving cars could generate lawsuits, which gives Tesla an added incentive to refine its technology.

A missile-defense system that automatically destroys incoming rockets could greatly advance peace and security, but it also could malfunction in some catastrophic way, perhaps shooting down a civilian airliner. We’ve seen such tragedies before, and they follow a time-honored script: public outrage, political and scientific investigation, and reform. It would be no different with autonomous weapons.

Robots won’t bring perfection to the use of force, but they can reduce mistakes, increase precision and lower overall destruction compared with their human counterparts.

Some worry that autonomous weapons might prompt leaders to turn more readily to conflict. But decisions about war and peace have much more to do with political leaders and their choices than with the technology available to them. Presidents George W. Bush and Barack Obama had roughly the same military at their disposal, but they used it very differently.

The greater risk today isn’t that the U.S. will intervene impulsively but that it won’t intervene at all, allowing serious challenges to intensify. Robotic weapons can ease the dilemma. In World War II, the Allies leveled cities, at enormous human cost, to destroy Axis transportation and manufacturing facilities. Drones today can strike an arms factory, pick off terrorists or destroy nuclear-weapons sites while leaving neighboring structures and civilians untouched. Robotic weapons can reduce the costs that discourage the U.S. and its allies from intervening in humanitarian disasters or civil wars.

Even if we shared the apocalyptic worries of Mr. Musk and his allies, it isn’t at all clear that arms control could begin to deal with the problem. Arms control has mostly failed to prevent weapons innovation, from the crossbow to the nuclear bomb. Nations have had more success in imposing rules of warfare, especially to protect civilians, than in restricting specific weapons.

Robotics and AI will be even harder to control. Countries already hide their nuclear-weapons programs behind claims of scientific research or energy production. The technology involved in autonomous weapons is a classic instance of “dual use,” with obvious peaceful applications. The same technology that can produce a self-driving car can also drive an autonomous tank. A drone can just as easily deliver a bomb as a box from Amazon. A ban on military development would be almost impossible to verify.

Such limits would also be a serious handicap for the U.S. The new technology is most likely to benefit countries with the technical dynamism and innovative drive to deploy it broadly. Restraining its development would likely forfeit an American advantage over our authoritarian rivals and terrorist enemies.

Tech executives may worry that consumers will come to associate their products with war. But responsible governments buy weapons and wage wars for good reasons, such as self-defense, coming to the assistance of allies, pursuing terrorists and alleviating humanitarian disasters. If force is misused, we should blame our elected leaders, not the weapons.
