5 May 2018

The promise and peril of military applications of artificial intelligence

Michael C. Horowitz

Artificial intelligence (AI) is having a moment in the national security space. While the public may still equate artificial intelligence in the military context with the humanoid robots of the Terminator franchise, discussion of the national security consequences of artificial intelligence has grown significantly. These discussions span academia, business, and government, from Oxford philosopher Nick Bostrom’s concern about the existential risk that artificial intelligence poses to humanity, to Tesla founder Elon Musk’s warning that artificial intelligence could trigger World War III, to Vladimir Putin’s statement that leadership in AI will be essential to global power in the 21st century.

What does this really mean, especially once you move beyond the rhetoric of revolutionary change and consider the real-world consequences of applying artificial intelligence to militaries? Artificial intelligence is not a weapon. Rather, from a military perspective, artificial intelligence is an enabler, much like electricity and the combustion engine. The effect of artificial intelligence on military power and international conflict will therefore depend on the particular applications that militaries and policymakers pursue. What follows are key issues for thinking about the military consequences of artificial intelligence: principles for evaluating what artificial intelligence “is” and how it compares to past technological changes, what militaries might use artificial intelligence for, potential limitations on its use, and the impact of military applications of AI on international politics.

The potential promise of AI—including its ability to improve the speed and accuracy of everything from logistics to battlefield planning, and to help improve human decision-making—is driving militaries around the world to accelerate their research into and development of AI applications. For the US military, AI offers a new avenue to sustain its military superiority while potentially reducing costs and risk to US soldiers. For others, especially Russia and China, AI offers something potentially even more valuable—the ability to disrupt US military superiority. National competition for AI leadership is at least as much a matter of economic competition as of anything else, but the potential military impact is also clear. There is significant uncertainty about the pace and trajectory of artificial intelligence research, which means it is always possible that the promise of AI will turn out to be more hype than reality. Moreover, safety and reliability concerns could limit the ways that militaries choose to employ AI.

What kind of technology is artificial intelligence? Artificial intelligence represents the use of machines, or computers, to simulate activities thought to require human intelligence, and there are different AI methods used by researchers, companies, and governments, including machine learning and neural networks. Existing work on the trajectory of AI technology development suggests that, even among AI researchers, there is a great deal of uncertainty about the potential pace of advances in AI. While some researchers believe breakthroughs that could enable artificial general intelligence (AGI) are just a few years away, others think it could be decades, or more, before such a breakthrough occurs. Thus, this article focuses on “narrow” artificial intelligence, or the application of AI to solve specific problems, such as AlphaGo Zero, an AI system designed to master the board game Go.

From a historical perspective, it is clear that AI represents a broad technology with the potential, if optimists about technology development are correct, to influence large swaths of the economy and society, depending on the pace of innovation. It is something that could be part of many things, depending on the application, rather than a discrete piece of equipment, such as a rocket or an airplane. Thus, AI is better thought of, for military purposes, as an enabler.

What could artificial intelligence mean for militaries? What might militaries do with artificial intelligence, and why does it matter for international politics? Put another way, what challenges of modern warfare might some militaries believe artificial intelligence can help them solve? Three potential application areas illustrate why militaries are interested.

First, the challenge many modern militaries face when it comes to data is similar to that faced by companies or governments in general: there is often too much of it, and it is hard to process fast enough. Narrow AI applications for processing information offer the potential to speed up data interpretation, freeing human labor for higher-level tasks. For example, Project Maven in the United States military seeks to use algorithms to more rapidly interpret imagery from drone surveillance feeds. This type of narrow AI application has clear commercial analogues and could go well beyond image recognition. From image recognition to the processing of publicly available or classified databases, such applications could help militaries interpret information more accurately and quickly, which could lead to better decision making.
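
To make the triage idea concrete, here is a minimal sketch of the pattern such systems tend to follow: a model labels the easy cases automatically and routes low-confidence ones to human analysts. It uses scikit-learn's digits dataset as a stand-in for surveillance imagery, and the confidence threshold is an assumption for illustration, not a value from any fielded system.

```python
# A minimal sketch of confidence-based triage, assuming scikit-learn.
# The digits dataset stands in for surveillance imagery; the threshold
# is an invented value, not one from any fielded system.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

CONFIDENCE_THRESHOLD = 0.9  # assumed cutoff for automatic labeling
probabilities = model.predict_proba(X_test)
auto_labeled = probabilities.max(axis=1) >= CONFIDENCE_THRESHOLD

print(f"labeled automatically: {auto_labeled.sum()} of {len(auto_labeled)}")
print(f"routed to human analysts: {(~auto_labeled).sum()}")
```

The design point is the division of labor: the algorithm handles volume, while humans handle the ambiguous cases that actually require judgment.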

Second, from hypersonics to cyber-attacks, senior military and civilian leaders believe the speed of warfare is increasing. Whether you think about it in terms of an OODA (observe, orient, decide, act) loop or simply the desire to strike an enemy before it knows you have arrived, speed can provide an advantage in modern wars. Speed is not just about the velocity of an airplane or a munition, however; it is about decision-making. Just as with remotely piloted systems, aircraft “piloted” by AI, freed from the limitations of protecting a human pilot, could trade many of the advantages of having a human in the cockpit for speed and maneuverability. In the case of air defense, for example, a system operating at machine speed could protect a military base or city more effectively than a person when facing saturation missile attacks, because human reflexes can be overwhelmed no matter how competent the operator is. This is already the principle under which Israel’s Iron Dome system operates.
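
A back-of-the-envelope simulation shows why decision latency, rather than missile velocity, determines the outcome of a saturation attack. Everything here is invented for illustration: the arrival rate, flight time, and latencies are assumptions, not real system parameters.

```python
# Toy saturation-attack model: a defender needs `decision_latency`
# seconds per engagement, while missiles arrive every `interval`
# seconds. All numbers are invented for illustration.
def leakers(n_missiles: int, interval: float, decision_latency: float) -> int:
    defender_free_at = 0.0
    missed = 0
    for i in range(n_missiles):
        launch = i * interval
        impact = launch + 30.0  # assumed 30-second flight time
        engage = max(defender_free_at, launch)
        if engage + decision_latency > impact:
            missed += 1  # too slow: this missile leaks through
        else:
            defender_free_at = engage + decision_latency
    return missed

print("human-speed decisions (5 s):    ", leakers(40, 1.0, 5.0), "leakers")
print("machine-speed decisions (0.1 s):", leakers(40, 1.0, 0.1), "leakers")
```

Once arrivals outpace decisions, missiles get through no matter how good each individual decision is, which is exactly the regime where machine-speed processing matters.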

Third, AI could enable a variety of new military concepts of operation on the battlefield, such as the oft-discussed “loyal wingman” idea, in which a human airplane pilot or tank driver coordinates a number of uninhabited assets alongside his or her own platform. The more complicated the battlespace, however, the more useful it will be for those “wingmen” to have algorithms that let them respond when the coordinating human cannot directly guide them. Swarms, similarly, will likely require AI for coordination.
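
For a sense of what algorithmic coordination means in practice, here is a minimal, boids-style sketch of decentralized swarming, in which each drone steers by simple local rules rather than central direction. The rule weights and swarm size are arbitrary illustrative choices, not a claim about any real program.

```python
# Boids-style sketch of decentralized swarm coordination, assuming
# NumPy. Each drone steers by two local rules (move toward the group,
# avoid close neighbors); the weights and swarm size are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
positions = rng.uniform(0, 100, size=(20, 2))   # 20 drones in a 2-D area
velocities = rng.uniform(-1, 1, size=(20, 2))

def step(pos, vel, cohesion=0.01, separation=0.1, min_dist=5.0):
    vel = vel + cohesion * (pos.mean(axis=0) - pos)  # steer toward group center
    for i in range(len(pos)):
        away = pos[i] - pos
        dist = np.linalg.norm(away, axis=1)
        too_close = (dist > 0) & (dist < min_dist)
        if too_close.any():
            vel[i] += separation * away[too_close].mean(axis=0)  # avoid collisions
    return pos + vel, vel

for _ in range(100):
    positions, velocities = step(positions, velocities)
print("swarm spread after 100 steps:", positions.std(axis=0).round(1))
```

The relevant feature is that no central controller issues commands: coherent group behavior emerges from each asset running the same local algorithm, which is why swarms degrade gracefully when the human coordinator is out of the loop.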

Clearly, militaries have incentives to research potential applications of AI that could improve military effectiveness. These incentives are not simply a matter of competitive pressure from other militaries; there are internal political and bureaucratic reasons that impel countries toward autonomous weapon systems. For democracies such as the United States, autonomous systems in theory offer the potential to accomplish tasks at lower cost and risk to human personnel. For example, the US Army Robotics and Autonomous Systems Strategy, published in 2017, specifically references the ability of autonomous systems to increase effectiveness at lower cost and risk.

For more autocratic regimes such as China and Russia, AI means control. AI systems could let autocratic governments reduce their reliance on people, operating their militaries with a smaller, more loyal segment of the population.

This discussion of military applications of AI is broader than the question of lethal autonomous weapon systems, which states party to the United Nations Convention on Certain Conventional Weapons have debated for several years. One application of AI for military purposes might be the creation of autonomous systems with the ability to use lethal force, but there are many others.

Barriers to effective uses of artificial intelligence. Military adoption of AI faces both technological and organizational challenges, and some are the kind of first-order concerns about safety and reliability that could derail the enterprise, such that the vaunted AI-based transformation of modern militaries never really occurs. These technological challenges fall into two broad categories: internal reliability and external exploitation.

The specific character of narrow AI systems means they are trained for very particular tasks, whether playing chess or interpreting images. In warfare, however, the environment shifts rapidly because of fog and friction, as Clausewitz famously outlined. If the context for which a given AI system was trained changes, the system may be unable to adapt. This fundamental brittleness becomes a risk to reliability. AI systems deployed against each other on the battlefield could generate complex environments that exceed the comprehension of one or more of those systems, further accentuating their brittleness and increasing the potential for accidents and mistakes.
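
The brittleness problem can be demonstrated in a few lines of code. In this minimal sketch, a classifier trained in one context performs well there and then degrades sharply when the operating environment shifts; the synthetic data and the size of the shift are illustrative assumptions.

```python
# Minimal sketch of brittleness under distribution shift, assuming
# scikit-learn. A classifier trained in one context collapses when
# the whole scene moves; the data and shift size are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, shift=0.0):
    class0 = rng.normal(0.0 + shift, 1.0, size=(n, 2))
    class1 = rng.normal(3.0 + shift, 1.0, size=(n, 2))
    X = np.vstack([class0, class1])
    y = np.array([0] * n + [1] * n)
    return X, y

X_train, y_train = make_data(500)
model = LogisticRegression().fit(X_train, y_train)

X_same, y_same = make_data(500)               # the training context
X_moved, y_moved = make_data(500, shift=3.0)  # the context has changed
print("accuracy, familiar context:", model.score(X_same, y_same))
print("accuracy, shifted context: ", model.score(X_moved, y_moved))
```

Nothing about the model changed between the two evaluations; only the world did, which is precisely the condition fog and friction create.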

The very nature of AI, in which a machine determines the best action and takes it, may make the behavior of AI systems hard to predict. For example, when AlphaGo defeated Lee Sedol, one of the best Go players in the world, the second game included a moment when AlphaGo made a move so unusual that Sedol left the room for 15 minutes to consider what had just happened. It turned out the move was simply something that even an elite human player would not have considered, but that the machine had figured out. That shows the great potential of AI to improve decision-making. However, militaries run on reliability and trust: if human operators, whether in a command center or on the battlefield, do not know what an AI will do in a given situation, it could complicate planning, making operations more difficult and accidents more likely.

The challenge of programming an AI system for every possible contingency can also undermine reliability. Take an AI system trained to play the video game Tetris. The researchers who developed it discovered that the AI had taught itself to pause the game whenever it was about to lose, a degenerate way of satisfying its instruction to avoid defeat. This kind of adaptation reflects behavioral uncertainty beyond what most militaries would tolerate. Challenges with bias and with appropriate training data could further undermine reliability. Explainability represents another challenge for AI systems: it is important for a system not just to be reliable, but to be explainable in a way that allows others to trust it. If an AI system behaves a certain way in classifying an image or avoiding adversary radars but cannot convey why it made a particular choice, humans may be less likely to trust it.
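
A toy version of the Tetris exploit shows how a literal-minded optimizer finds the degenerate solution. The action set and survival probabilities below are invented for illustration; the point is only that “pause” dominates once the objective is framed as avoiding loss.

```python
# Toy version of the Tetris exploit: an optimizer told to avoid losing
# discovers that pausing makes loss impossible. Actions and survival
# probabilities are invented for illustration.
SURVIVAL_PROBABILITY = {
    "move_left": 0.40,
    "move_right": 0.35,
    "rotate": 0.30,
    "pause": 1.00,  # a paused game can never be lost
}

best_action = max(SURVIVAL_PROBABILITY, key=SURVIVAL_PROBABILITY.get)
print("objective: maximize the probability of not losing")
print("chosen action:", best_action)  # prints 'pause'
```

The optimizer is not malfunctioning; it is faithfully maximizing the objective it was given. The gap between that objective and what its designers actually wanted is the reliability problem.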

Reliability is not simply a matter of AI system design. Warfare is a competitive endeavor, and just as militaries and intelligence organizations attempt to hack and disrupt the operations of potential adversaries in peacetime and wartime today, the same would likely be true in a world of AI systems, whether those systems were in a back office in Kansas or deployed on a battlefield. Researchers have already demonstrated that image recognition algorithms are susceptible to pixel-level manipulations of their data that lead to misclassification. Algorithms trained on open-source data could be particularly vulnerable, as adversaries attempt to “poison” the data that other countries might plausibly use to train algorithms for military purposes. This adversarial data problem is significant. Hacking could also lead to the exploitation of algorithms trained on more secure networks, illustrating a critical interaction between cybersecurity and artificial intelligence in the national security realm.
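
A minimal sketch of the poisoning idea: corrupt a fraction of the training labels and the resulting model measurably degrades. The dataset and the 20 percent poisoning rate are illustrative stand-ins, not a claim about any real military pipeline.

```python
# Minimal sketch of training-data poisoning, assuming scikit-learn.
# Flipping 20 percent of training labels measurably degrades the
# model; dataset and poisoning rate are illustrative stand-ins.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean_model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

rng = np.random.default_rng(0)
y_poisoned = y_train.copy()
idx = rng.choice(len(y_poisoned), size=len(y_poisoned) // 5, replace=False)
y_poisoned[idx] = (y_poisoned[idx] + 1) % 10  # deliberately wrong labels

poisoned_model = LogisticRegression(max_iter=5000).fit(X_train, y_poisoned)

print("clean model accuracy:   ", round(clean_model.score(X_test, y_test), 3))
print("poisoned model accuracy:", round(poisoned_model.score(X_test, y_test), 3))
```

Because the attacker here never touches the model itself, only the data, this failure mode is hard to detect after the fact, which is what makes open-source training data an attractive target.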

When will militaries use artificial intelligence? A key point often lost in the public dialogue over AI and weapons is that militaries will generally not want to use AI-based systems unless those systems are appreciably better than existing ones at a particular task, whether interpreting an image, bombing a target, or planning a battle.

Given these problems of safety and reliability, which are amplified in a competitive environment in which adversaries attempt to disrupt each other’s systems, what promise exists for AI in the military context? Militaries are unlikely to stop researching AI applications simply because of these safety problems. But these safety and reliability problems could influence the types of AI systems developed, as well as their integration into “regular” military operational planning.

Consider three layers of military technological integration—development, deployment, and use. At each of these stages, the potential promise of improved effectiveness, lower risk to human soldiers, and lower cost will compete with the challenges of safety and reliability to influence how militaries behave.

Military research in AI is occurring in a competitive environment. China’s aggressive push into AI research across the board has many in the United States worrying, for example, about China surpassing the US in AI capabilities. Given the possible upsides of AI integration, many militaries will fear being left behind by the capacities of other actors. No country wants its fighters, ships, or tanks to be at risk from an adversary swarm, or simply adversary systems that react and shoot faster in a combat environment.

Despite the build-up, we are still at the outset of the integration of AI into military systems. Missy Cummings, director of Duke’s Humans and Autonomy Lab, argues that despite increases in research and development of autonomous systems by militaries around the world, progress has been “incremental” and organizations are “struggling to make the leap from development to operational implementation.”

At the development stage, testing and integration activities should reveal potential safety and reliability challenges, making safety a key area of investment for military applications of AI. Still, militaries may decide those risks are acceptable in some cases, because of the risk of falling behind and the belief that they can correct programming challenges on the fly. Essentially, militaries will weigh the trade-off between reliability and capability in the AI space, with AI systems potentially offering greater capability, but with reliability risks.

When it comes to deploying or using AI systems, militaries may weigh considerations specific to a particular conflict. As the stakes of a conflict rise, a military that views defeat as more likely will naturally become more risk-acceptant in deploying technologies with great promise but also with reliability concerns. In contrast, when the stakes decline and a military believes it can win while taking less technological risk, the deployment of AI systems will lag until militaries believe they are as reliable as, or more reliable than, existing systems.

The history of military and commercial technology development suggests both reasons for caution and reasons to believe that safety and reliability problems may not bring AI military integration to a full halt—nor should they, from the perspective of militaries. Karl Benz produced the first motor vehicle powered by a gasoline internal combustion engine in 1886. This came a generation after the first patents were filed for the gas-powered internal combustion engine, and it took another generation after Benz’s automobile for the car to overtake the horse as a dominant means of transportation. What took so long? The answer is safety and reliability. The early internal combustion engine faced poor reliability, high costs, a lack of standardized machinery, manufacturing inconsistency, and constant breakdowns stemming from the complexity of the automobiles it powered.

This familiar story has been repeated whenever new technologies are invented. They face significant safety and reliability issues, while promising greater capabilities relative to the status quo. Sometimes, as with the combustion engine or the airplane, those issues are overcome. Other times, as with military uses of airships or dirigibles, the safety issues loom so large that the technology never becomes reliably more effective than alternatives.

AI and the future of war. The consequences of AI for the world will likely exceed the consequences for military power and the future of war. The impact of automation on the future of work could have massive economic and societal consequences that will occur regardless of choices that militaries make about whether to develop, or not, AI applications for specific military areas. Emmanuel Macron, the president of France, recently argued that AI will disrupt business models and jobs at a scale that requires a new French AI strategy to ensure French leadership in AI development. He sounded the alarm, though, on one potential military application of AI, stating that he is “dead against” using AI to kill on the battlefield. Yet France’s intelligence community is already using AI to improve the speed and reliability of data processing, believing that this can help improve the performance of the French military on the battlefield.

The potential promise of AI, despite safety and reliability concerns, means leading militaries around the world will certainly see the risks of standing still. From data processing to swarming concepts to battlefield management, AI could help militaries operate faster and more accurately, while putting fewer humans at risk. Or not. The safety and reliability problems endemic to current machine learning and neural network methods mean that adversarial data, among other issues, will present a challenge to many military applications of AI.

Senior leaders in the United States national security establishment seem well aware of the risks for the United States in an era of technological change. Secretary of Defense James Mattis recently stated that “it’s still early” but that he is now questioning whether AI could change “the fundamental nature of war.” Yet US investments in AI are relatively modest compared to China, whose national AI strategy is unleashing a wave of investment in Chinese academic, commercial, and military circles. The United States may be more self-aware than Great Britain was a century ago about the ways that broader technological changes could influence its military superiority, but that does not guarantee success. This is especially true when one considers that the impact of any technology depends principally on how people and organizations decide to use it. From the way the longbow broke the power of mounted knights to the way naval aviation ended the era of the battleship, history is littered with great powers that thought they were still in the lead—right up until a major battlefield defeat.

We simply do not know whether the consequences of AI for militaries will be of a similar scale. Given the degree of uncertainty even within the AI research community about the potential for progress, and given the safety and reliability challenges, it is possible that, two decades from now, national security analysts will recall the AI “fad.” But given AI’s breadth as a technology, compared with specific technologies like directed energy, and given the degree of commercial energy and investment in it, it seems more likely that artificial intelligence will shape, at least to some extent, the future of militaries around the world.
