4 February 2020

The Future Of War: Less Fantastic, More Practical – Analysis

By Lindsey R. Sheppard

Artificial intelligence (AI) is a defining element in a societal transition from the Information Age to one dominated by data, information, and cyber-physical systems. As states compete in “the gray zone” or through hybrid measures (tactics intended to remain below the threshold of armed conflict [i]), leveraging the massive amounts of information and data at hand is of strategic importance. [ii] Kinetic effect will no doubt remain crucial in armed conflict. However, securing advantage in a world of artificial intelligence, data analytics, and cloud computing requires mastery of data and information awareness, i.e., of the non-kinetic and digital.
The AI toolbox

AI is an umbrella term that encompasses various disciplines of computer science, [iii] learning strategies, applications, and use cases. AI has experienced a surge of excitement, research, and application in the past decade, driven by the increased availability of data and computing power, advances in machine learning (as distinguished from rules-based systems [iv]), and electronics miniaturisation. It has gone from residing largely in the realm of academia and research to widespread application across the public and private sectors. [v]
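
To make that distinction concrete, here is a minimal Python sketch (the threshold and data are invented for illustration) contrasting a rules-based system, where an engineer hand-codes the decision logic, with a learning-based one, where the decision boundary is inferred from labelled examples:

from sklearn.linear_model import LogisticRegression

def rules_based_alert(signal_strength):
    # Rules-based: an engineer hand-codes the decision logic explicitly.
    return signal_strength > 0.7  # fixed, human-chosen threshold (invented)

# Learning-based: the boundary is inferred from labelled examples (invented data).
X = [[0.1], [0.3], [0.4], [0.6], [0.8], [0.9]]  # observed signal strengths
y = [0, 0, 0, 1, 1, 1]                          # human-provided labels
model = LogisticRegression().fit(X, y)

print(rules_based_alert(0.75))     # True: matches the hard-coded rule
print(model.predict([[0.75]])[0])  # 1: falls on the learned "alert" side

The hand-written rule is transparent but brittle; the learned boundary adapts to the data but is only as good as the examples it was trained on.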


Despite the hype, some of the biggest impacts of AI and applied machine learning in the defence context are mundane. Areas like logistics, predictive maintenance, and sustainment are ripe for computational innovation, where machines can assist humans with repetitive tasking and enable the processing of large volumes of data. The operational reality in the near term may simply be optimised resource allocation and management that allows actors to operate efficiently within their opponent’s decision loop.
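
As one illustration of such mundane but valuable work, the sketch below trains an anomaly detector on simulated “healthy” equipment telemetry and flags readings that warrant inspection. The sensor names, values, and thresholds are invented for this example; it is a minimal sketch of the idea, not a fielded system.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Simulated "healthy" telemetry: vibration (mm/s) and oil temperature (deg C).
healthy = np.column_stack([
    rng.normal(2.0, 0.3, 500),   # vibration under normal operation
    rng.normal(85.0, 4.0, 500),  # oil temperature under normal operation
])

# Train an anomaly detector on healthy data only.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(healthy)

# New readings: one ordinary, one showing high vibration and heat.
new_readings = np.array([[2.1, 86.0], [4.5, 110.0]])
flags = model.predict(new_readings)  # +1 = normal, -1 = anomalous

for reading, flag in zip(new_readings, flags):
    status = "inspect" if flag == -1 else "ok"
    print(f"vibration={reading[0]:.1f} mm/s, oil_temp={reading[1]:.1f} C -> {status}")

A maintainer reviews the flagged readings; the model only prioritises attention, it does not decide.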

While the pursuit of AI for national defence and military systems raises concerns about the role of technology in warfare, many of these concerns are not inherently new. The international law of war, as well as nation-specific law [vi] on the use of force and military action, applies whether or not AI is incorporated into systems. Principles of military necessity, distinction, and proportionality, along with existing frameworks, structures, and institutions, remain relevant and regulate the development and deployment of any advanced technology for armed conflict. [vii] Military legal advisors, ethicists, and policymakers, to name a few, continue to work to identify potential gaps in existing law and guidance, and have yet to reach consensus that such gaps exist. What relevant stakeholders do agree on is the necessity of system attributes such as robustness, safety, transparency, and traceability.

Further, an AI system exists within an AI ecosystem [viii] that includes not only the algorithms but also the data from which the algorithms “learn,” the computing infrastructure, governance structures, and the many humans who design, interact with, deploy, and are impacted by the technology. The development of this ecosystem will be critical to the success of AI systems in future conflicts. Many nations face an underdeveloped AI ecosystem and are moving quickly to invest the requisite time, attention, and financial support in its growth. [ix]

For learning-based solutions, we must also address the necessity and availability of data and computing resources. Data quality and security are hurdles to AI application in the relatively data-scarce environment of national security. While defence-related data does exist, it is often unstructured and poorly suited to machine learning, or even to basic statistical analysis. In most cases, foundational computing infrastructure and networking require significant upgrades, even as organisations address the parallel necessity of “compute.” [x]
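
What “unstructured” means here is worth a concrete look. The sketch below parses free-text maintenance log lines into structured records that a learning pipeline could consume; the log format and field names are invented for illustration.

import re

# Invented free-text log lines of the kind a maintenance crew might write.
raw_logs = [
    "2019-11-02 tail=A1204 ENGINE vibration high, replaced bearing",
    "2019-11-05 tail=B0330 HYDRAULICS pressure drop observed",
]

# A hypothetical pattern for this invented format: date, airframe, system, note.
pattern = re.compile(
    r"(?P<date>\d{4}-\d{2}-\d{2}) tail=(?P<tail>\w+) (?P<system>\w+) (?P<note>.+)"
)

records = [m.groupdict() for line in raw_logs if (m := pattern.match(line))]
for rec in records:
    print(rec)  # e.g. {'date': '2019-11-02', 'tail': 'A1204', 'system': 'ENGINE', ...}

Real defence data is rarely this regular, which is precisely why structuring it consumes so much of the effort.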

If any aspect of AI is genuinely an arms race, it is the competition for talent. [xi] Developing an educated and skilled workforce is of critical importance to the success of highly capable machines. This includes professional education, university programmes in science, technology, engineering, and mathematics (STEM), and the incorporation of computer science into early education for children. Not only must nations contend with the significant time required to grow capable talent; they will also continue to compete for existing talent. For example, both Russia and China recognise the imperative to retain or bring home technical talent and expertise, [xii] in addition to pursuing STEM education initiatives. [xiii]
Mitigating risk and managing expectations

Even with all the puzzle pieces in place, the capabilities of AI and machine learning remain relatively limited and are expected to remain so for the foreseeable future. [xiv] AI is always purpose-built, problem-specific, and context-dependent; it operates effectively on discrete tasks over well-bounded problems. Further, machine learning requires large volumes of labelled data that are time-intensive to create and maintain, and the need for access challenges the defence sector’s traditional approach of securing sensitive data through silos and restricted access.

Misunderstanding the limitations of AI, driven in part by mismanaged expectations about the promise of intelligent machines, exacerbates risk and increases the potential for accidents. AI introduces new vulnerabilities and failure modes into systems. [xv] While system failure in warfare is not unique to AI, failure in machine learning may look different, possibly in unrecognisable, new, and unexpected ways. Further, it may be difficult to verify that a system is behaving as intended. Even more challenging for applied machine learning is classifying an unwanted behaviour and ensuring a system does not exhibit that behaviour again. Deploying machine learning in the context of warfare thus requires an assessment of the consequences of failure. Even in the most well-known defence applications, such as drone video analysis, [xvi] the current technical maturity and capability of AI present an unacceptable risk if machines are relied on completely.
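
One way such failures differ from conventional software faults is that nothing visibly breaks: a model keeps returning confident answers on inputs unlike anything it was trained on. The sketch below, on synthetic data invented for illustration, shows a classifier whose accuracy quietly collapses when the test data drifts away from the training distribution:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=1)

# Training data: two well-separated classes (synthetic, illustrative).
X_train = np.concatenate([rng.normal(0.0, 1.0, (200, 2)),
                          rng.normal(4.0, 1.0, (200, 2))])
y_train = np.array([0] * 200 + [1] * 200)
model = LogisticRegression().fit(X_train, y_train)

# In-distribution test data: drawn the same way as the training set.
X_in = np.concatenate([rng.normal(0.0, 1.0, (100, 2)),
                       rng.normal(4.0, 1.0, (100, 2))])
y_in = np.array([0] * 100 + [1] * 100)

# Shifted test data: class 0 has drifted toward class 1's region.
X_shift = np.concatenate([rng.normal(3.0, 1.0, (100, 2)),
                          rng.normal(4.0, 1.0, (100, 2))])

print("in-distribution accuracy:", model.score(X_in, y_in))      # near 1.0
print("shifted accuracy:", model.score(X_shift, y_in))           # collapses

The model raises no error in the second case; without explicit monitoring of the input distribution, the degradation would go unnoticed.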

The decisions men and women face in combat are uniquely human. While the applicability of AI to security challenges holds promise in areas with repetitive, well-defined tasking, we should resist the temptation to blindly apply AI to our hardest problems of how and when humans wage war. AI will not make the difficult choices and decisions inherent in armed conflict any less difficult. Nor is the use of AI, machine learning, and analytic support tools a mechanism by which humans can abdicate responsibility for decisions.

Two conclusions become clear. One, AI is likely to be one tool of many in the digital toolbox where it is applicable. Given considerations of technical maturity, learning-based systems may not be the most appropriate solution for many problems. However, the defence enterprise is replete with areas appropriate for the application of AI, and these limiting considerations should not be lost in discussions of lethal force.

Two, investing in people may be the best safeguard against missteps and misuse. From senior leaders to developers to end users, people must understand the capabilities as well as the limitations of AI and machine learning in order to guide their development and deployment. In our increasingly digital future, featuring increasingly digitised warfare, we cannot afford to under- or overestimate the applicability and potential of AI.
