Some states are planning to acquire armed drones that incorporate artificial intelligence (AI) and fly alongside inhabited aircraft. The use of drones according to this “Loyal Wingman” concept is an example of tactical human-machine teaming, and it could be militarily advantageous in future aerial warfare. Incorporating AI into the operation of a weapon system’s critical functions (selecting and engaging targets) nevertheless carries an ethical risk: that a human will be unable to exercise adequate control over the use of force and unable to take responsibility for any injustice caused. To reduce this risk, one potential approach is to pursue “meaningful human control” over armed and AI-enabled drones by increasing their human supervisors’ cognitive capacity. The use of brain-computer interfaces (BCIs) to achieve such an increase might be beneficial from the perspective of military ethics if it enabled faster human interventions to prevent unjust, AI-associated harms. However, as this article shows, that benefit would be outweighed by the ethical downsides of waging cyborg-drone warfare: the undermining of pilots’ hors de combat noncombatant status and of human moral agency in the use of force.