Nathan G. Wood
Militaries around the world are developing increasingly autonomous weapons systems. These efforts, however, have been met with staunch opposition from a number of groups.1 Critics object that “autonomous weapons would face great, if not insurmountable, difficulties in reliably distinguishing between lawful and unlawful targets as required by international humanitarian law.”2 One part of this objection argues that autonomous weapons systems (AWS) cannot recognize the subtle cues that distinguish civilians from combatants, active combatants from those attempting to surrender, or active combatants from those who are out of action due to unconsciousness, wounds, or sickness (hors de combat).3 A second concern is that autonomous weapons may have certain features that render them unpredictable, making their use inherently indiscriminate.4 As a result, opponents charge, such weapons would be in breach of international humanitarian law (IHL) and the principle of distinction.5
These objections fundamentally mistake what is required under the principle of distinction, rely on inaccurate depictions of AWS, and misunderstand what IHL demands with regard to the use of these weapons. The core of the principle of distinction is concerned not with regulating weapons themselves, but with regulating how weapons are used. Precautions in attack and the principle of proportionality both affect distinction in war, and the relevant question is not about the technology per se, but about the conditions under which commanders are required to exercise greater care. The principle of distinction will indeed set bounds on how AWS may be used, and on when, where, and under what limitations they may be deployed, but it cannot underpin any blanket prohibitions on existing or near-future autonomous weapons.6 Autonomous weapons do raise legitimate ethical and legal worries, but we must identify actual problems rather than chase phantom concerns rooted in misunderstandings about technology and military operations.