11 May 2017

Forget the Mother of All Bombs — fear the Mother of All Algorithms

Chris Reed

The Mother of All Bombs made news last week after the U.S. military dropped its most powerful non-nuclear bomb on a tunnel complex in Afghanistan’s Nangarhar Province that suspected Islamic State militants had been using, reportedly killing up to 36 of them instantly without harming any civilians. The 11-ton GBU-43/B Massive Ordnance Air Blast Bomb never actually struck the ground.

Instead, by design, it detonated above the target, dispersing atomized fuel into the air; a second explosion then ignited that fuel cloud, creating immense atmospheric “overpressure” that can kill people underground by turning their bodies “inside out,” in the evocative phrase of former Air Force Special Operations combat controller Edward Priest.

This massive version of what’s known as a fuel-air explosive device may seem a high-tech marvel. But the technology is old news, based on the World War II-era theories of Mario Zippermayr, an eccentric Austrian genius who worked for the Nazis and later pioneered a method for treating respiratory diseases. Yet there’s plenty of new news on the military weapons front — including developments so striking and ominous that they will eventually require leaders in Washington, Beijing and Moscow to make some of the most profound decisions in human history.

Let’s start with the simpler stuff: the fast-approaching debut of cheap, powerful laser weapons in the U.S. military arsenal. This is from a 2015 Ars Technica report:

Sometime very soon, combat aircraft may be zapping threats out of the sky with laser weapons. “I believe we'll have a directed energy pod we can put on a fighter plane very soon,” Air Force Gen. Hawk Carlisle said at this week’s Air Force Association Air & Space conference in a presentation on what he called Fifth-Generation Warfare. “That day is a lot closer than I think a lot of people think it is.” ...

Directed-energy weapons pods could be affixed to aircraft to destroy or disable incoming missiles, drones, and even enemy aircraft at a much lower “cost per shot” than missiles or even guns, Carlisle suggested.

The Pentagon and defense contractors aren’t as far along in testing the electromagnetic railgun, which uses a massive pulse of electric current, rather than chemical propellant, to accelerate a projectile to speeds of up to 4,600 mph. The 25-pound projectile hits with such immense destructive force that it needs no explosives. The railgun requires vast amounts of energy to fire, and every launch significantly degrades the weapon’s rails. But if these obstacles can be overcome, the Pentagon could end up with a GPS-precision-guided weapon straight out of 1940s pulp science fiction — one whose cost per shot would be less than one-fortieth that of a Tomahawk missile.
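The article’s figures allow a back-of-the-envelope check on why no explosive filler is needed. Here is a minimal sketch in Python, using the 25-pound mass and 4,600 mph velocity cited above; the unit conversions and the TNT energy equivalence are standard physical constants, not figures from the reporting:

```python
# Back-of-the-envelope kinetic energy for the railgun round described
# above: a 25-pound projectile at 4,600 mph (figures from the article).
LB_TO_KG = 0.4536        # pounds -> kilograms
MPH_TO_MS = 0.44704      # miles per hour -> meters per second
TNT_J_PER_KG = 4.184e6   # standard TNT energy equivalence, joules per kg

mass_kg = 25 * LB_TO_KG               # about 11.3 kg
speed_ms = 4600 * MPH_TO_MS           # about 2,056 m/s, roughly Mach 6
kinetic_energy_j = 0.5 * mass_kg * speed_ms ** 2

print(f"Kinetic energy: {kinetic_energy_j / 1e6:.1f} MJ")           # ~24.0 MJ
print(f"TNT equivalent: {kinetic_energy_j / TNT_J_PER_KG:.1f} kg")  # ~5.7 kg
```

That works out to roughly 24 megajoules, the energy of nearly six kilograms of TNT, delivered by inert metal alone.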

These relatively inexpensive weapons aren’t the historical norm in defense procurement, where the military’s new toys are often fantastically costly. For a current example, the B-21 Raider stealth bombers that Northrop Grumman will build for deployment within a decade are projected to cost $564 million each. Yet in some categories, technological advances create opportunities for cheap but powerful military tools, starting with weaponized drones — drones able to wipe out enemies of their own volition.

Which brings us to the scary stuff. This is from an October report in The New York Times:

Almost unnoticed outside defense circles, the Pentagon has put artificial intelligence at the center of its strategy to maintain the United States’ position as the world’s dominant military power. It is spending billions of dollars to develop what it calls autonomous and semiautonomous weapons and to build an arsenal stocked with the kind of weaponry that until now has existed only in Hollywood movies and science fiction ... .

The Defense Department is designing robotic fighter jets that would fly into combat alongside manned aircraft. It has tested missiles that can decide what to attack, and it has built ships that can hunt for enemy submarines, stalking those it finds over thousands of miles, without any help from humans. ...

“China and Russia are developing battle networks that are as good as our own ... ,” said Robert O. Work, the deputy defense secretary, who has been a driving force for the development of autonomous weapons. “What we want to do is just make sure that we would be able to win as quickly as we have been able to do in the past.”

The dilemma posed by AI-driven autonomous weapons — which some scientists liken to the “third revolution in warfare, after gunpowder and nuclear arms” — is that to take fullest advantage of such weapons, the logical move is to leave humans out of lethal decision-making entirely, allowing quicker responses to threats and, in theory, making us safer. But if future presidents and Pentagons trusted algorithms to make such decisions, a conflict between two nations relying on the technology could rapidly escalate — possibly to apocalyptic levels — without human involvement.
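The escalation worry is structural, and a deliberately crude sketch makes the feedback loop concrete. Assume two automated systems, each programmed to answer a detected attack at machine speed with modestly greater force; the multiplier and threshold below are invented for illustration and describe no real doctrine:

```python
# Toy model of two autonomous retaliation systems locked in a feedback
# loop. Every number here is invented for illustration only.

RESPONSE_MULTIPLIER = 1.5   # assumed: each reply uses 50% more force
THRESHOLD = 100.0           # arbitrary "catastrophic" force level

def exchanges_to_catastrophe(initial_provocation: float) -> int:
    """Count machine-speed exchanges before force crosses the threshold."""
    force, exchanges = initial_provocation, 0
    while force < THRESHOLD:
        force *= RESPONSE_MULTIPLIER  # automated reply, no human review
        exchanges += 1
    return exchanges

# A minor incident (force 1.0) crosses the threshold in just 12 exchanges.
# At machine speed, that leaves no meaningful window for a human veto.
print(exchanges_to_catastrophe(1.0))
```

The point is not the particular numbers but the shape of the curve: any rule that rewards faster, harder automated response compounds geometrically.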

The U.S. military officially scoffs at this idea. In making life-and-death decisions, “there will always be a man in the loop,” Deputy Defense Secretary Work told the Times in October. But Work and others who try to offer a reassuring vision of a future in which radical new military technologies are deployed with care and caution may be swimming against the tide.

In August, as BreakingDefense.com reported, the Pentagon’s Defense Science Board urged the Department of Defense to ...

“accelerate its exploitation of autonomy — both to realize the potential military value and to remain ahead of adversaries who also will exploit its operational benefits” ... . Machines and computers can process much more data much more quickly than can humans, “enabling the U.S. to act inside an adversary’s operations cycle.” And that is why it is “vital if the U.S. is to sustain military advantage.”

In a 2013 Wall Street Journal op-ed, authors and teachers Robert H. Latiff and Patrick J. McCloskey warned of the dangers of this kind of thinking: “Full lethal autonomy is no mere next step in military strategy: It will be the crossing of a moral Rubicon. Ceding godlike powers to robots reduces human beings to things with no more intrinsic value than any object.”

But there is a pecuniary twist to this debate. In coming years, America’s military and political leaders won’t be considering whether to embrace autonomous defense and combat in a vacuum in which moral, ethical and philosophical concerns are carefully weighed. In the post-sequester era, military budgets have been cramped for years. Whatever President Trump’s short-term plans, this budget pressure is unlikely to recede over the medium and long term as the national debt grows and an aging population sends Social Security and Medicare costs soaring. Autonomous weapons are comparatively so inexpensive that qualms about their risks could be swept aside — not just in Washington but in Beijing and Moscow as well.

This fear of a future in which such weapons are “cheap and ubiquitous” led more than 20,000 AI researchers, scientists and interested individuals — including Elon Musk, Stephen Hawking and Steve Wozniak — to sign a Future of Life Institute petition endorsing a ban on offensive autonomous weapons.

Will this have any effect on the ultimate decision-makers? So far, there is little sign that it will.

So if you too are worried about the future of life, stop mulling the Mother of All Bombs and start worrying about the Mother of All Algorithms — and urge your elected officials to do so as well. Considering the stakes, it’s hard to fathom this specter not getting a thorough airing before the next presidential election.
