
29 June 2022

Trust in AI: Rethinking Future Command

Christina Balis and Paul O’Neill

AI is transforming warfare. How can Defence prepare for the changes that lie ahead?

The traditional response to the acceptance challenge posed by the military use of AI has been to insist on ‘meaningful human control’ as a way of engendering confidence and trust. Given the ubiquity and rapid advance of AI and its underpinning technologies, this is no longer an adequate response. AI will play an essential and growing role in a broad range of command and control (C2) activities across the whole spectrum of operations. While less directly threatening in the public mind than ‘killer robots’, the use of AI in military decision-making presents significant challenges as well as enormous advantages. Increasing human oversight of the technology itself will not prevent inadvertent (let alone intentional) misuse.

This paper builds on the premise that trust at all levels (operators, commanders, political leaders and the public) is essential to the effective adoption of AI for military decision-making, and explores three related questions: What does trust in AI actually entail? How can it be built and sustained in support of military decision-making? And what changes are needed to create a symbiotic relationship between human operators and artificial agents in future command?

Trust in AI can be said to exist when humans hold certain expectations of the AI’s behaviour, without reference to intentionality or morality on the part of the artificial agent. At the same time, however, trust is not just a function of the technology’s performance and reliability – it cannot be assured solely by resolving issues of data integrity and interpretability, important as those are. Trust-building in military AI must also address necessary changes in military organisation and command structures, culture and leadership. Achieving an appropriate overall level of trust requires a holistic approach: in addition to trusting the purpose for which AI is put to use, military commanders and operators need to sufficiently trust – and be adequately trained and experienced in how to trust – the inputs, processes and outputs that underpin any particular AI model.

However, the most difficult, and arguably most critical, dimension is trust at the level of the organisational ecosystem. Without changes to the institutional elements of military decision-making, future AI use in C2 will remain suboptimal, confined within an analogue framework. The effective introduction of any new technology, let alone one as transformational as AI, requires a fundamental rethinking of how human activities are organised.

Prioritising the human and institutional dimensions does not mean exerting more control over the technology; rather, it requires reimagining the human role and contribution within an evolving human–machine cognitive system. Future commanders will need to lead diverse teams across a true ‘Whole Force’ that integrates contributions from the military, government and civilian spheres. They must understand enough about their artificial teammates to be capable of both collaborating with and challenging them. This is more akin to the murmuration of starlings than to the genius of the individual ‘kingfisher’ leader. For new concepts of command and leadership to develop, Defence must rethink its approach not only to training and career management but also to decision-making structures and processes, including the size, location and composition of future headquarters.

AI is already transforming warfare and challenging longstanding human habits. By embracing greater experimentation in training and exercises, and by exploring alternative models for C2, Defence can better prepare for the inevitable change that lies ahead.
