Michael A. Santoro
As artificial intelligence systems are integrated into military operations, a familiar intuition hardens into an institutional standard: The higher the stakes, the more essential it is to keep humans in the loop. In matters of life and death, machines must not be left to decide on their own.
That intuition is understandable. It is also, in important respects, wrong.
In lower-stakes environments—traffic management, service delivery, even routine policing—human oversight can sometimes function as a backstop: errors are visible, decisions can be revisited, and the costs of delay are tolerable. In crisis response, those conditions break down. Decisions must be made quickly, information is incomplete, and the consequences of hesitation grow more severe. Under these conditions, late-stage human intervention becomes less reliable, not more.
In military contexts, where these dynamics are most consequential, late-stage human-in-the-loop overrides are in fact the least trustworthy and least effective way to correct an algorithmic system's errors. In military engagements, errors can be lethal: time is compressed, uncertainty is pervasive, and decisions are often irreversible. Understandably, the conventional wisdom holds that it is precisely here that the case for human-in-the-loop control is strongest. The assumption is that human judgment—especially in identifying targets and avoiding civilian harm—is inherently superior to algorithmic decision-making. That assumption does not hold up under scrutiny.