16 March 2026

Human-in-the-Loop or Loophole? Targeting AI and Legal Accountability

Khyati Singh

There is no doubt that incorporating artificial intelligence (AI) into the targeting cycle offers operational advantages. In a complex urban scenario, AI-driven decision-support systems (AI-DSS) can rapidly synthesize enormous volumes of data drawn from diverse ISR, signals intelligence, and other feeds at a velocity no human could match. In theory, this capability sharpens a commander’s situational awareness, helps identify military objectives more accurately, and models collateral damage with newfound precision.

The objective is a “cleaner” battlefield: faster and more accurate targeting with lower collateral damage to civilians and civilian infrastructure. The integration of AI-driven systems is increasingly viewed as a mechanism to fulfill the core mandates of International Humanitarian Law (IHL). By providing commanders with more granular data and more precise modeling, these tools are designed to facilitate the principle of distinction, which requires parties to direct attacks only at combatants and military objectives. The speed and accuracy of such systems are likewise intended to support the principle of proportionality, helping decision-makers ensure that an attack’s expected incidental harm is not excessive in relation to the anticipated military advantage.
