Michael A. Santoro
As artificial intelligence systems are integrated into military operations, a familiar intuition hardens into an institutional standard: The higher the stakes, the more essential it is to keep humans in the loop. In matters of life and death, machines must not be left to decide on their own.
That intuition is understandable. It is also, in important respects, wrong.
In lower-stakes environments—traffic management, service delivery, even routine policing—human oversight can function as a backstop. Errors are visible, decisions can be revisited, and the costs of delay are tolerable. In crisis response, each of those conditions fails: decisions must be made quickly, information is incomplete, and the consequences of hesitation grow more severe. Under these conditions, late-stage human intervention becomes less reliable, not more.