24 February 2018

AI warfare is coming, and some global leaders say NATO isn’t ready

By: Jill Aitoro 

MUNICH — The future of warfare will involve artificial intelligence systems acting as lethal weapons, and much like cyber a decade ago, NATO allies are ill-equipped to manage the potential threat, said current and former European leaders speaking at the Munich Security Conference.

Kersti Kaljulaid, president of Estonia, estimated a 50 percent chance that by the middle of this century we will have an AI system capable of launching a lethal attack. And yet, just as the world was not prepared when Russia launched a cyberattack against Estonia in 2007 — bombarding the websites of Estonia's parliament, banks, ministries, and news outlets — there is no strategy or international law to deter such tactics of warfare.

First, “we need to understand the risks — what we’re afraid of,” said Kaljulaid, pointing to three: someone using AI disruptively; intelligence going widespread; and AI depleting energy.

“Right now we know we want to give systems some right of autonomous decision-making when they have the necessary information to react,” Kaljulaid said. But once that is accomplished, “then we have the responsibility” to establish standards: the ability to apply reporting requirements to AI systems, or even to shut down systems if they are deemed threatening.

The kind of standards gradually being put in place for cybersecurity “need to apply to the AI world, exactly the same way,” she said.

For such standards to be established for AI, there must be acceptable models of use in combat; and in conjunction with that, there must be a right to intervene when there is evidence that AI is deployed outside those established boundaries.

And much like nuclear non-proliferation efforts, “if we say that we will have the right to intervene, we have to have the right to international inspection,” Kaljulaid said.

Among the standards advocated by Anders Fogh Rasmussen, former NATO secretary general, is that AI always involve human beings. There are three options, he said during the panel: humans can be in charge, always “in the loop”; humans can be “on the loop” in a supervisory role, able to intervene; or humans can be “out of the loop,” telling the system to attack and then leaving the rest to the machine.

“I’m in favor of trying to introduce legally binding [standards] that will prevent production and use of these kinds of autonomous lethal weapons,” Rasmussen said, strongly advocating for a human role.

But such standards don’t come fast. It took until 2017 for NATO to declare that a cyberattack would spur an Article 5 response, that is, collective defense among allies, after a massive computer hack paralyzed portions of government and businesses in Ukraine before spreading around the globe. In the meantime, much like cybersecurity, AI presents an opportunity for Russia as well as China to exploit “grey zones,” said Rasmussen: not initiating open military conflict, but provoking allies enough to disrupt.

So what is the red line?

“The NATO perspective is clear: ambiguity,” Rasmussen said. “We never define when a red line is crossed. We never define how to respond if a certain member state is attacked. Nobody should know when they cross the line and how we would react. It’s easy: abstain from attacking any NATO ally; if you do [attack], we’ll respond decisively. It may be conventional, it may be a cyber counterattack, you never know.”

But to prevent adversaries from taking advantage of the technological capability, “we need leadership from the democratic world,” Rasmussen added. “Whenever the democratic countries retrench and retreat, they leave behind a vacuum. And that will be filled by the bad guys.”
