SIPRI and Netta Goussac
In a tense geopolitical environment and amid an apparent artificial intelligence (AI) ‘arms race’, militaries seeking to integrate AI capabilities are grappling with two important questions. The first is how to expedite the deployment and scaling up of these novel and rapidly developing technologies, which are now seen as essential for strategic dominance. The second is how to implement commitments to responsible use of AI in the military domain.
The procurement of AI capabilities is challenging for a number of reasons, not least the shortage of AI-literate personnel in today’s militaries and the fact that suppliers of AI capabilities are diverse and include non-traditional actors such as technology start-ups. This complicates efforts to adapt procurement processes for AI and in some cases increases risk. For example, decentralizing some AI procurement decisions to individual units or commands could reduce red tape, but it could also mean the decisions are not reviewed by AI-literate officials able to see through industry hype. Opting for industry-led off-the-shelf solutions may be much quicker than having suppliers develop new systems to meet the military’s specifications. But having less control over the design, development and testing of new AI capabilities increases the risk of deploying systems that do not meet forces’ needs or are even substandard or unsafe.
Principles of responsible behaviour are relevant not only to the use of AI capabilities but also to the processes by which those capabilities are procured. How, then, can militaries keep their efforts to streamline procurement in step with their efforts to develop and implement principles of responsible behaviour for AI in the military domain?
This essay examines why and how states must align their efforts to adapt procurement processes for military AI with principles of responsible behaviour.