29 January 2020

The Killer Algorithms Nobody’s Talking About

BY ARTHUR HOLLAND MICHEL

This past fall, diplomats from around the globe gathered in Geneva to do something about killer robots. In a result that surprised nobody, they failed.

The formal debate over lethal autonomous weapons systems—machines that can select and fire at targets on their own—began in earnest about half a decade ago under the Convention on Certain Conventional Weapons, the international community’s principal mechanism for banning systems and devices deemed too hellish for use in war. But despite yearly meetings, the CCW has yet to agree what “lethal autonomous weapons” even are, let alone set a blueprint for how to rein them in.

Meanwhile, the technology is advancing ferociously; militaries aren’t going to wait for delegates to pin down the exact meaning of slippery terms such as “meaningful human control” before sending advanced warbots to battle.

To be sure, that’s a nightmarish prospect. U.N. Secretary-General António Guterres, echoing a growing chorus of governments, think tanks, academics, and technologists, has called such weapons “politically unacceptable” and “morally repugnant.” But this all overlooks an equally urgent menace: autonomous systems that are not in themselves lethal but rather act as a key accessory to human violence.


Such tools—let’s call them lethality-enabling autonomous systems—might not sound as frightening as a swarm of intelligent hunter drones. But they could be terrifying. At best, they will make conflict far more unpredictable and less accountable. At worst, they could facilitate ghoulish atrocities.

Many such technologies are already in use. Many more are right around the corner. And because of our singular focus on headline-grabbing killer robots, they have largely gone ignored.

Militaries and spy services have long been developing and deploying software for autonomously finding “unknown unknowns”—potential targets who would have otherwise slipped by unnoticed in the torrent of data from their growing surveillance arsenals. One particularly spooky strand of research seeks to build algorithms that tip human analysts off to such targets by singling out cars driving suspiciously around a surveilled city.

Other lethality-enabling technologies can translate intercepted communications, synthesize intelligence reports, and predict an adversary’s next move—all of which are similarly crucial steps in the lead-up to a strike. Even many entry-level surveillance devices on the market today, such as targeting cameras, come with standard features for automated tracking and detection.

For its part, the U.S. Department of Defense, whose self-imposed rules for autonomous weapons specifically exempt nonlethal systems, is allowing algorithms dangerously close to the trigger. The Army wants to equip tanks with computer vision that identifies “objects of interest” (translation: potential targets) along with recommendation algorithms—kind of like Amazon’s—that advise weapons operators whether to destroy those objects with a cannon or a gun, or by calling in an airstrike. All of these technologies fall outside the scope of the international debate on killer robots. But their effects could be just as dangerous.

The widespread use of sophisticated autonomous aids in war would be fraught with unknown unknowns. An algorithm with the power to suggest whether a tank should use a small rocket or a fighter jet to take out an enemy could mark the difference between life and death for anybody who happens to be in the vicinity of the target. But different systems could perform that same calculation with widely diverging results. Even the reliability of a single given algorithm could vary wildly depending on the quality of the data it ingests.

It is also difficult to know whether lethality-enabling artificial intelligence—prone as computers are to bias—would counteract or reinforce those human passions that all too often lead to erroneous or illegal killings. Nor is there any consensus as to how to ensure that a human finger on the trigger can be counted on as a reliable check against the fallibility of its algorithmic enablers.

In the absence of standards on such matters, not to mention protocols for algorithmic accountability, there is no good way to assess whether a bad algorithmically enabled killing came down to poor data, human error, or a deliberate act of aggression against a protected group.

A well-intentioned military actor could be led astray by a deviant algorithm and not know it; but just as easily, an actor with darker motives might use algorithms as a convenient veil for intentionally insidious decisions.

If one system offers up a faulty conclusion, it could be easy to catch the mistake before it does any harm. But these algorithms won’t act alone. A few months ago, the U.S. Navy tested a network of three AI systems, mounted on a satellite and two different airplanes, that collaboratively found an enemy ship and decided which vessel in the Navy’s fleet was best placed to destroy it, as well as what missile it should use. The one human involved in this kill chain was a commanding officer on the chosen destroyer, whose only job was to give the order to fire.

Eventually, the lead-up to a strike may involve dozens or hundreds of separate algorithms, each with a different job, passing findings not just to human overseers but also from machine to machine. Mistakes could accrue; human judgment and machine estimations would be impossible to parse from one another; and the results could be wildly unpredictable.
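
To get a rough sense of how quickly that could go wrong, consider a back-of-the-envelope model (an illustration, not a description of any real system): even if every algorithm in the chain is independently right 99 percent of the time, the chain as a whole degrades quickly as it grows.

    # Illustrative sketch only: the 99 percent per-step accuracy and the step
    # counts are assumptions, not figures from any real military system.
    def chain_reliability(per_step_accuracy: float, steps: int) -> float:
        """Probability that every step is correct, assuming errors are independent."""
        return per_step_accuracy ** steps

    for steps in (3, 10, 50, 100):
        print(f"{steps:3d} steps at 99% each -> {chain_reliability(0.99, steps):.0%} end to end")
    # Prints roughly: 97%, 90%, 61%, 37%

Real chains would likely fare worse still, since errors in systems that feed one another are rarely independent.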

These questions are even more troubling when you consider how central such technologies will become to all future military operations. As the technology proliferates, even morally upstanding militaries may have to rely on autonomous assistance, in spite of its many risks, just to keep ahead of their less scrupulous AI-enabled adversaries.

And once an AI system can navigate complicated circumstances more intelligently than any team of soldiers, the human will have no choice but to take its advice on trust—or, as one thoughtful participant at a recent U.S. Army symposium put it, targeting will become a matter of simply pressing the “I-believe button.” In such a context, assurances from top brass that their machines will never make the ultimate lethal decision seem a little beside the point.

Most distressing of all, automation’s vast potential to make humans more efficient extends to the very human act of committing war crimes. In the wrong hands, a multi-source analytics system could, say, identify every member of a vulnerable ethnic group.

China’s Uighur population is already routinely subjected to exactly this kind of digital despotism; state and local authorities have deployed facial recognition tools capable of picking out members of the predominantly Muslim minority in closed-circuit TV footage, along with myriad other spying tools, to chart their every move.

Imagine what such a technology could achieve in war. Militaries have long argued that AI will make conflict more precise. But that argument has a dark flipside: An algorithm designed to minimize civilian casualties could just as easily be used to calculate how civilian harm could be maximized.

Governments must broaden the debate on killer robots to include all algorithmic links in the kill chain. They need to consider how to align such systems with the fundamental laws of war, and to model the complex interactions between disparate lethality-enabling systems so as to avoid nasty surprises. Finally, governments must develop transparent mechanisms for auditing algorithms that go bad, as well as the humans who employ those algorithms badly.
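
What might such an audit mechanism record? Here is a minimal sketch, assuming a hypothetical decision-support tool (the field names are illustrative, not drawn from any real system or standard): each algorithmic recommendation is logged together with its inputs, model version, confidence, and the human response, so that investigators can later determine whether a bad outcome traced back to the data, the model, or the operator.

    # Hypothetical sketch of an audit record for an algorithmic recommendation.
    # Field names and structure are illustrative assumptions, not a real standard.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class DecisionAuditRecord:
        model_id: str          # which algorithm produced the recommendation
        model_version: str     # exact version, so its behavior can be reproduced
        input_digest: str      # hash of the data the model actually saw
        recommendation: str    # what the system suggested to the operator
        confidence: float      # the model's own stated confidence
        operator_id: str       # who acted on (or overrode) the suggestion
        operator_action: str   # "accepted", "overridden", or "escalated"
        timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

The details matter less than the principle: the record has to exist, it has to be tamper-evident, and it has to be legible to a reviewer outside the chain of command.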

This could even streamline the debate in Geneva, which has largely broken down over disagreements as to what counts as a true lethal autonomous weapon. The norm-building process would no longer have to navigate the intellectually dubious distinction between warfighting AI and AI used by warfighters. Instead, the same fundamental principles could be applied equally to those algorithms that do the killing and those that are adjacent to it.

If, on the other hand, the debate among policymakers remains narrowly focused on “killer robots,” these issues will remain unresolved until it’s too late. That would be an unacceptable mistake.
