29 April 2018

How AI Could Destabilize Nuclear Deterrence

BY ELIAS GROLL
When Russian President Vladimir Putin announced last month that his country was developing an autonomous nuclear-powered torpedo, it marked a milestone in the marriage of nuclear weapons and artificial intelligence — that is, if the weapon does what he claims. The torpedo, armed with a nuclear warhead, would be launched from the Arctic Ocean and travel at high speeds for hundreds of miles until it reached its target — probably an American harbor — all the while maneuvering autonomously to evade underwater defenses and outrun any pursuers.

If operational, the torpedo would combine the proven destructiveness of a nuclear weapon with the burgeoning field of AI.

With this and other developments in mind, a provocative new report from Rand Corp., the Santa Monica-based think tank, asks: How might AI affect the risk of nuclear war?

For now, the technology probably isn’t reducing that risk and may even destabilize the fragile post-Cold War order that has kept nuclear missiles in their silos.

AI is far from being used in the doomsday nuclear weapons scenarios imagined by science fiction — a computer deciding to launch intercontinental ballistic missiles (ICBMs), for example. Instead, the ways in which AI is being integrated into nuclear weapons systems lie in the world of intelligence. AI-enabled reconnaissance systems could be used to analyze huge reams of data, and autonomous drones could scan vast swaths of terrain.

And these technologies, the report finds, “could stoke tensions and increase the chances of inadvertent escalation.”

“When it comes to artificial intelligence and nuclear warfare, it’s the mundane stuff that’s likely to get us,” says report author Edward Geist, a Rand researcher. “No one is out to build a Skynet,” a reference to the nuclear command-and-control AI system from the Terminator movies that concludes it must kill humanity in order to ensure its own survival.

For example, AI-enabled intelligence tools — such as autonomous drones or submarine-tracking technology — threaten to upset the delicate strategic balance among the world’s major nuclear powers. Such technology could be used to find and target retaliatory nuclear weapons, which are held in reserve to ensure that any nuclear strike on a country’s territory will be met in kind.

This capability could upend “mutually assured destruction,” the idea that any use of nuclear weapons will result in both sides’ destruction. If a country were able to use AI-enabled technology to find and target missiles stored in silos, on trucks, and in submarines, the threat of retaliation could be taken off the table, inviting a first strike.

And in the paranoid logic of nuclear deterrence, AI doesn’t have to actually provide this breakthrough in order to be destabilizing — an adversary only has to believe that it provides an edge that puts its nuclear force at risk.

In the case of intelligent image processing, it’s not just paranoia. The U.S. Defense Department’s Project Maven aims to automatically pick out objects from reams of full-motion drone video, enabling the analysis of massive quantities of aerial surveillance.
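To get a sense of what such a pipeline involves, here is a minimal sketch of off-the-shelf object detection on video frames, using OpenCV for frame capture and a pretrained torchvision detector. This is not Maven’s actual system; the model choice, confidence threshold, and input file name are all assumptions for illustration.

```python
# Minimal sketch: automated object detection on full-motion video.
# NOT Project Maven's pipeline; model, threshold, and file are assumptions.
import cv2                      # pip install opencv-python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

# Pretrained detector (trained on COCO, not on overhead drone imagery).
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

cap = cv2.VideoCapture("drone_footage.mp4")  # hypothetical input file
frame_idx = 0
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # OpenCV yields BGR uint8 frames; the model expects RGB float tensors.
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        detections = model([to_tensor(rgb)])[0]
    # Report only confident detections for this frame.
    for box, score, label in zip(detections["boxes"],
                                 detections["scores"],
                                 detections["labels"]):
        if score > 0.8:
            print(f"frame {frame_idx}: class {int(label)} "
                  f"at {[round(v) for v in box.tolist()]} ({score:.2f})")
    frame_idx += 1
cap.release()
```

A production system would rely on models trained on overhead imagery and would track objects across frames rather than scoring each frame independently, but the basic shape — video in, labeled objects out — is the same.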

The Rand report makes clear that AI doesn’t have to be a destabilizing technology. Improved intelligence collection could assure major nuclear powers that their opponents are not on the verge of launching a surprise first strike, but that assumes equal access to the cutting-edge technology.

“The social and political institutions that would normally be trying to keep this manageable are dysfunctional or are breaking down,” Geist says.

And that leaves nuclear powers competing with one another to develop the best AI, with seemingly enormous stakes.

“Artificial intelligence is the future, not only for Russia, but for all humankind,” Putin famously said last year. “Whoever becomes the leader in this sphere will become the ruler of the world.”

Chinese authorities, meanwhile, have developed a detailed plan to become a world leader in the field. In February, the South China Morning Post reported that Chinese military officials are planning “to update the rugged old computer systems on nuclear submarines with artificial intelligence to enhance the potential thinking skills of commanding officers.”

In researching the report, Geist and his co-author, Andrew Lohn, a Rand engineer, convened a series of focus groups bringing together technologists, policymakers, and nuclear strategists. They observed an aversion to handing computers control of any aspect of the decision to use nuclear weapons.

But that leaves machine intelligence playing a subtler role in a nuclear weapons system. “If you are making decisions as a human based on data that was collected, aggregated, and analyzed by a machine, then the machine may be influencing the decision in ways that you may not have been aware,” Lohn says.

And as AI improves its ability to recognize patterns and play games, it may be incorporated as an aid to decision-making, telling human operators how best to fight a war that may escalate to a nuclear exchange.

In a hypothetical scenario in which Russia masses troops at a border position, an AI system could advise policymakers that the proper response would be to place troops in certain cities and place bombers on alert. The computer could then lay out how Russia would likely retaliate and how escalation would play out.

That technology doesn’t exist today, Lohn says, but “if AI is winning in simulations or war games, it will be hard to ignore it.”
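To make the idea concrete, a toy sketch along these lines might score a handful of response options by expected cost. Every option, probability, and cost below is invented for illustration; a real decision aid would search a deep game tree over many moves, and, as Lohn notes, nothing like it is operational today.

```python
# Toy illustration of AI-assisted decision support: rank response options
# by expected cost over possible adversary reactions. All numbers invented.

# Each option maps to possible adversary reactions: (probability, cost),
# where a higher cost means a worse outcome, e.g. further escalation.
OPTIONS = {
    "take no action":       [(0.7, 2.0), (0.3, 6.0)],
    "reposition troops":    [(0.6, 1.5), (0.4, 4.0)],
    "put bombers on alert": [(0.5, 1.0), (0.5, 8.0)],  # deters, but risks escalation
}

def expected_cost(reactions):
    """Expected cost of an option over the adversary's possible reactions."""
    return sum(p * cost for p, cost in reactions)

# Rank options from least to most costly; a real aid would look many moves deep.
for option in sorted(OPTIONS, key=lambda o: expected_cost(OPTIONS[o])):
    print(f"{option}: expected cost {expected_cost(OPTIONS[option]):.2f}")
```

Even in this trivial form, the sketch shows where the influence creeps in: whoever chooses the probabilities and costs has already shaped the recommendation before a human ever sees it.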
