12 April 2017

AI that can kill? Military takes a pass

By: Mark Pomerleau

What is the future of autonomy and artificial intelligence? Many have postulated futuristic capabilities and scenarios involving intelligent and killer robots on the battlefield that have been delegated the authority to take human lives without the intervention of human operators.

In fact, a group of esteemed scientists and influencers — including Stephen Hawking and Elon Musk — signed an open letter endorsing a “ban on offensive autonomous weapons beyond meaningful human control.”

But for all intents and purposes, the military is not interested in what the vice chairman of the Joint Chiefs of Staff calls general AI. Narrow AI is teaching a machine to perform a specific task, while general AI is the T-1000, Gen. Paul Selva said April 3 at an event hosted by Georgetown University, referencing the shape-shifting robot assassin from the "Terminator 2" movie.

“General AI is this sort of self-aware machine that thinks it knows what’s right and what’s wrong,” he said. “The issue is whether or not a person or a country or an adversary would take narrow AI and build it into a system that allows the weapon to take a given life without the intervention of a human.”

This could take the form of someone building a set of algorithms that, for instance, makes every gray-haired guy with a flat top a target, Selva said, using himself as an example.

“I don’t think we need to go there,” Selva followed. “I think what we can do is apply narrow AI to empower humans to make decisions in an ever more complex battlespace.”

This type of so-called narrow AI will be able to sift, at high speed, through the kinds of information decision-makers must process to get at an adversary in a complex battlespace. Given the way the character of war is changing today, war fighters must be able to sense the patterns of their adversary’s behaviors, then empower the decision quickly; the third part of that equation is acting at that speed, Selva said.

The lesson here, he said, is if the U.S. is creating slow weapons to act on a fast battlespace, “we’re going to get our clock cleaned.” Sensing adversary patterns, empowering decision-makers and acting quickly are really at the heart of the so-called third offset strategy the Pentagon is war gaming, Selva said.

“What’s that imply for this new battlespace? If you have an aware system that informs a decision-maker quickly and as quickly as they decide, they advance, that ought to change the battlespace,” he added.

However, interoperability will be a critical component as AI advances and the services look to leverage operations across domains and across coalitions. Selva said that U.S. export policies denying the sale of certain systems, such as large unmanned systems, create a self-limiting factor for semi-autonomous systems.

The U.S. has allies and partners who want to buy MQ-1 Predator and MQ-9 Reaper drones but are denied, so they turn to the Russians, the Chinese, the Israelis and the French, all of whom are more than willing to sell their technology to others, he said. The U.S. thus ends up with allies whose systems are not interoperable with its own. “That’s a problem,” Selva said.

He added that he has been an advocate for some kind of convention on how states use AI, but offered skepticism regarding how it could be enforced.
