
30 July 2015

Is a Killer Robot Arms Race Inevitable?

Research into designing and producing autonomous weapon systems is surging in the U.S. military. For example, the U.S. Navy is working out how to launch an entire swarm of small autonomous drones, aiming to assault an adversary with a cloud of cheap, disposable UAVs and overwhelm its defenses through the sheer number of unmanned attackers in the air.

Last week (see: “Super Humans and Killer Robots: How the US Army Envisions Warfare in 2050”), I reported on the findings of a U.S. Army-sponsored workshop on the future of land warfare, which concluded that a new breed of super humans and autonomous combat robots will be two of the key features of the battlefield in 2050.

Now, over 1,000 artificial intelligence and robotics researchers, joined by Apple co-founder Steve Wozniak, Google DeepMind chief executive Demis Hassabis, and Professor Stephen Hawking, have signed an open letter that calls for a ban on “offensive autonomous weapons beyond meaningful human control.”

The signatories’ biggest concern is the start of a military artificial intelligence arms race. “The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow,” the letter reads.

The letter points out that the threshold for acquiring autonomous weapons will be very low in the future, since “they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce.”

Consequently, non-state actors and pariah states might be particularly drawn to procuring so-called “killer robots”:

It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.

Furthermore, the letter notes that the deployment of autonomous weapon systems – the third revolution in warfare – is not decades but only a few years away. However, according to a Human Rights Watch report, current legal frameworks have not yet been updated to reflect this new technological development: “Existing mechanisms for legal accountability are ill suited and inadequate to address the unlawful harms fully autonomous weapons might cause.”

Also, a March 2015 paper by the Center for New American Security (CNAS) entitled “Meaningful Human Control in Weapon Systems” delves further into the subject of killer robots and discusses some of the ethical (and technical) problems that might arise from the deployment of these new weapon systems:

In discussions on lethal autonomous weapon systems at the United Nations (UN) Convention on Certain Conventional Weapons in May 2014, “meaningful human control” emerged as a major theme. Many who support a ban on autonomous weapon systems have proposed the requirement of meaningful human control as one that ought to apply to all weapons, believing that this is a bar that autonomous weapons are unlikely to meet.

This is also the conclusion of the signatories of the letter:

We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.
