
12 May 2021

Sun Tzu Versus AI: Why Artificial Intelligence Can Fail in Great Power Conflict

By Captain Sam J. Tangredi, U.S. Navy (Retired)

In today’s Pentagon, the promise of artificial intelligence (AI) has become the equivalent of what logistics was to Admiral Ernest King, Commander in Chief, U.S. Fleet, and Chief of Naval Operations during World War II. Early in the war, King reportedly stated, “I don’t know what the hell this ‘logistics’ is that [Army Chief of Staff General George] Marshall is talking about, but I want some of it.”1 Recent Department of Defense (DoD) officials—following the thinking of political and corporate leaders—appear uniformly to perceive (or at least state rhetorically) that AI is making fundamental and historic changes to warfare. Even if they do not know all it can and cannot do, they “want more of it.”

While this desire to expand military applications of AI as a means of managing information is laudable, the underlying belief that AI is a game-changer is dangerous, because it blinds DoD to the reality that today’s battle between information and deception in war is not fundamentally, naturally, or characteristically different from what it was in the past. It may be faster; it may be conducted in the binary computer language of 1s and 0s; it may involve an exponentially increasing amount of raw data; but what remains most critical to victory is not the means by which information is processed, but the validity of the information.

That is why a rush to invest in military AI capabilities—hastened by the ultrahype of technology pundits, military “transformationists,” or hopeful investors—courts potential failure if DoD loses the perspective that, as a processing tool for military information, AI is but a data spotlight in a world full of mirrors. Despite doctrinal speculation, it changes neither the character nor the nature of war. Assuming it does, primarily because Defense leaders would like it to, puts both U.S. investment and defenses at risk—particularly if DoD relies on adaptation from commercial AI.


The Character of War Vs. Commercial AI

Yes, investments in AI are justified, because the U.S. armed forces are awash in operational and tactical information. Data appears everywhere. A better means of compiling, storing, and categorizing information very rapidly is essential to make sense of the ever-expanding amount of data warfighters are collecting through digital systems. However, not all data is created equal, and, more important, much of it is unrelated, irrelevant, incomplete, or false. Moreover, as the U.S. security environment takes the form of potential great power conflict—a term that would better be phrased as “great systems conflict”—more and more of the data collected will be false.2 That needs to be emphasized: More data will be false—in a similar fashion to the on-the-spot opinions, tweets, and conspiracy theories that increasingly clog social media.

This is a significant problem for both DoD and AI development. Much of the AI used in the corporate world—particularly concerning internet-generated data—has been created with no concern about deliberate deception. However, if one assumes that a potential customer is likely to deliberately deceive a supplier as to the products or services he or she might buy, then the whole model of AI-assisted marketing crumbles. If one assumes that companies within a supply chain might deceive the assembler as to their part specifications, the supply chain cannot function. If the data is false, AI is no longer a commercial asset for marketing or production—it becomes a liability. For commercial AI to function, it must assume that it is not being deceived.

By contrast, it does not take a rereading of the works of Chinese strategist Sun Tzu to recognize that this has never been true in war. “All war is based on deception” is often credited as Sun Tzu’s primary observation.

There have been debates—reaching up to the level of the Secretary of Defense—as to whether AI will change the nature or character of war.3 If the nature of war is violence that bends an opponent to one’s will (on which most debaters agree), that is a definition AI cannot change. If the character of war—as suggested by Sun Tzu—is deception, that, too, AI cannot change. Moreover, commercial AI, the source that a chain of Defense Secretaries from Chuck Hagel to Ashton Carter to James Mattis has relied on as the eventual basis for military AI decision-assistance systems, cannot—in its current state of development—change the character of war (deception) while that character is antithetical to its usage.4

Deception has been woefully underexamined and routinely ignored by DoD in rhetoric concerning the future of AI. Recently, however, the Intelligence Advanced Research Projects Activity has initiated research into the relationship between deception and AI.5 The topic of deception is now entering technical discussions within DoD; however, deception is not a word routinely linked with the debate over AI.6

To avoid misinvestments, over-expectation, and the inevitable public disillusionment when expectations are disappointed, DoD must admit that commercial AI largely has been designed not only without any awareness of the character of war, but also without deception as an element of its environment. To adapt AI to military applications beyond administrative and maintenance functions requires more time, greater research, and more financial resources than many pundits and proponents recognize. DoD must not deceive itself about deception and AI.

Deceiving AI Algorithms is Easy

Why the alarm over deception? Because, without certifiably accurate information, AI is easily deceived. You can try it at home.

Go to a retail website on which you are registered. Sign in and start clicking on sale items in which you have no interest. Or, better yet, pick a sport you never play—tennis, perhaps. Click on racquets, balls, tennis clothes, or related gear. Soon the marketing algorithm for that company—and for the other companies to which your data is sold—will adjust your profile to include tennis. You will have to put up with the annoyance of receiving ads for tennis equipment, but, with some judicious viewing of tennis-related sites on your web browser, the digital world will profile you as a tennis player, even if you have never picked up a racquet. You have just bent the digital data on you. There is no guarantee that you will get an email invitation to the local tennis club, but—as what we call artificial intelligence becomes even more ubiquitous online than it already is—you might. According to the algorithm, you are a tennis player.
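
To make the tennis example concrete, consider a deliberately simplified sketch of the kind of profiler a retailer might run. The code below is purely illustrative; the categories, items, and the build_profile function are hypothetical, and real recommendation systems are far more elaborate. The point it demonstrates is the same: a profiler that ranks interests by raw click counts never asks whether the clicks reflect genuine intent.

```python
# Illustrative sketch (hypothetical data and function names): a naive interest
# profiler that ranks categories purely by click counts and has no way to ask
# whether those clicks were sincere.
from collections import Counter

def build_profile(click_log, top_n=3):
    """Return the top-N interest categories inferred from raw clicks."""
    counts = Counter(category for category, _item in click_log)
    return [category for category, _ in counts.most_common(top_n)]

# Genuine browsing history: mostly running gear.
clicks = [
    ("running", "trail shoes"),
    ("running", "hydration vest"),
    ("running", "GPS watch"),
]

# A few deliberate, disinterested clicks on tennis gear...
clicks += [("tennis", item) for item in ("racquet", "balls", "court shoes", "wristbands")]

# ...and the profile now leads with a sport the user has never played.
print(build_profile(clicks))  # ['tennis', 'running']
```

Four insincere clicks are enough to outvote an entire genuine history, because the statistics have no independent means of validating the data they are fed.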

The effects are similar in the world of national security—and they always have been. One can cite example after example of opponents being deceived by thrusts, feints, spurious maneuvers, false reports, rumors, public demonstration of capabilities, and the like. This is not just an issue of wartime reaction; peacetime planning and analyses often are led astray. Until partial access to Soviet archives was given following the end of the Cold War, the United States was unaware of how differently Soviet leaders perceived nuclear deterrence and the potential for war.7

What does this mean? AI algorithms are statistical analysis programs. From an acquisition point of view, it means that sensors are a more critical investment than the algorithm (AI) itself. Authoritarian regimes such as the Chinese Communist Party (CCP) are well aware of this, which is why—along with the development of the AI-driven social credit system—the CCP is building a network of at least one state surveillance camera for every ten people (in a nation of more than 1.4 billion people).8 Will AI allow the CCP—aided, unfortunately, by the commercial AI developers of democratic states—to make China even more authoritarian?9 Yes, indeed; but not without those cameras. In a situation where there are no sensors to validate information—such as detecting that you have physically stepped onto a tennis court—recommendations by AI are no better than human speculation.
(Photo caption: AI algorithms are statistical analysis programs, so data-gathering sensors are a more critical investment than the algorithms themselves. Authoritarian regimes, such as the Chinese Communist Party, are aware of this, which is why China is building a network of at least one state surveillance camera for every ten people. Credit: Alamy)

AI is Nothing Without Data

What we call artificial intelligence has nothing to do with intelligence as commonly construed. It is higher-level computing that can correlate huge amounts of data quickly and has demonstrated the ability to simulate two human attributes: speech and visual recognition, both of which require enormous quantities of data. These abilities are based on statistical methods in which incoming information is compared with a large quantity of training data until the 1s and 0s of binary electronic calculation identify an approximate fit. A senior vice president of Oracle has defined AI as “the set of statistical techniques that teaches software to make decisions on past data.”10 Some of the algorithms generated by these techniques have roots dating back to the 1890s, when statistical techniques were first being applied to business.
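
That “approximate fit” can be illustrated with an equally simple sketch. The example below—invented feature values and labels, and a bare one-nearest-neighbor rule—is not how any fielded system works, but it shows what a statistical fit amounts to: the output is whichever training example the input most resembles, nothing more.

```python
# Toy illustration (hypothetical data): classification as a statistical
# comparison of an incoming observation against labeled training examples.
import math

training_data = [
    # (feature vector, label) -- invented sensor readings for illustration
    ((0.9, 0.1), "fishing trawler"),
    ((0.2, 0.8), "warship"),
]

def classify(observation):
    """Label an observation by its closest match in the training data."""
    return min(training_data, key=lambda example: math.dist(example[0], observation))[1]

print(classify((0.85, 0.2)))  # "fishing trawler" -- an approximate fit, not understanding
```

If an adversary shapes a warship’s observable signature to sit closer to the trawler examples, the same arithmetic confidently returns the wrong label; the algorithm has no notion that it is being deceived.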

Machine learning and big data are the elements that “create” AI out of the statistical techniques. Machine learning is, in effect, the programming of the process. Big data is the information that is processed. These terms can cloud the fact that, without available and accurate information, machine learning is meaningless and AI does not function. Thus, Russian President-forever Vladimir Putin was inaccurate when he famously stated that “Artificial intelligence is the future. . . . Whoever becomes the leader in this sphere will become ruler of the world.”11 The reality is that it is not the one who has the “best AI” who will dominate politico-military decision-making, but the one—all other elements of power being relatively equal—who has the most accurate, meaningful, and deception-free data.

The best algorithm or AI machine is nothing without accurate data. Thus, DoD should not invest money in any particular AI solution without considering three important questions. First, will the data on which the system depends be available in a contested environment? Second, will the AI system provide reasonable assistance to decision-making if it has only incomplete or partially inaccurate data? Third, can the AI system be designed to anticipate and identify deception in data?

The problem, simply stated, is that existing civilian AI systems have not been developed with any of those questions in mind.

The Struggle for the Data

During the Cold War—the United States’ last period of great systems competition—Admiral of the Soviet Fleet Sergey G. Gorshkov referred to the operations and maneuvering prior to hostilities, such as the gathering and processing of information, as the “struggle for the first salvo.” In this view, the forces that have the most accurate information and can position themselves to strike first achieve victory. The late naval tactics guru Captain Wayne Hughes termed this advantage “attacking effectively first.”12

To attack effectively first requires the ability to process accurate information faster and more correctly than the enemy. However, once again, the accuracy of the information is the underlying requirement. AI could itself be a tool in determining this accuracy, but only if it is designed to recognize that all information (not just select data) it is receiving might be manipulated.

During the Cold War, the possibility of deception as to the characteristics, capabilities, and intentions of the enemy was constantly debated. Some analysts claimed there was an extensive system of Soviet strategic deception that the United States could not perceive. (Others claimed that such analysts were themselves paranoid.) In any event, few would claim that the Soviets did not periodically achieve successful deception on the operational level. It was a contest of hiders and finders with almost equal abilities in both categories.

While the United States paid lip service to this reality, the conflicts in which it has chosen to engage since the end of the Cold War have not been ones that involved equal abilities on the operational level. From the opening moments of Operation Desert Storm, U.S. forces have had such a tremendous advantage over their enemies in terms of available, predominantly accurate information that the possibility of strategic or operational deception was rarely taken seriously. Tactical deception was recognized as a possibility, since it remained an obvious character of war on its own level. Yet, increasing the number of sensors to gather even more information was considered a likely solution for eliminating the chance of deception.

Sensor capabilities and capacity increased to the point that some military leaders suggested they had “lifted the fog of war.” What was not emphasized, however, was that the opponents—Saddam Hussein’s Iraq, the Taliban, Slobodan Milošević’s Serbia, Muammar Qaddafi’s Libya—were not U.S. equals in the contest of hiding and finding on the operational level. They did not have the capability to deceive U.S. sensor networks.

Although the idea that “the fog of war was completely lifted” lost traction in the early 2000s, the hype over AI may be restoring it. The result of this recent experience is that any latent focus on deception faded into the background of defense decision-making—at the same time that the potential for military applications for AI became evident. Thus, DoD’s comfort with its rich access to information met up with Silicon Valley’s assumption that information is inherently accurate. In gathering more and more information and applying deeper, more intricate algorithms, commercial AI assumes it cannot be deceived.

Is that a true assumption in a future security environment of great systems conflict? Absolutely not. Even if one is optimistic about AI development, potential U.S. opponents—the CCP and a Putinized Russia—have deceptive capabilities that dwarf the scale of any the United States has encountered since the Cold War. Moreover, both nations are U.S. peers in AI systems development. As conspiratorial, counterintelligence regimes capable of controlling their commercial AI development, they are well aware of the weaknesses of AI. Their ability to insert malware into U.S. commercial AI—and, by extension, commercially developed military AI applications—is prodigious.

The struggle for accurate information in great systems competition will be far harder than it has been in the conflicts of the past three decades. Military AI must be built with that as an underlying principle.

The Future is about Information—Not ‘About’ AI

To successfully manage the development of military AI, DoD must recognize that deception is a factor it must assume and for which AI must be designed from the start.

To maintain realistic expectations of military AI, DoD must remember that future effects depend on information, not on AI processing itself. AI helps process information faster—but not necessarily more accurately. More important than the computing system are the sensors. Without accurate information, AI systems produce fiction.

The strategic problem is that U.S. joint forces have become both dependent on and addicted to the trove of information they have been receiving in wars and interventions since the end of the Cold War. U.S. forces have had greater access to accurate information in the post–Cold War environment than they will have in future great systems competition. AI algorithms cannot mitigate that trend. To mitigate deception, investments in AI need to be matched or exceeded by investments in sensor capacity.

The mantra “information wants to be free”—which has continually been repeated by tech industry leaders as almost an anthem to the potential of AI—was first used at a 1984 hackers’ convention by the American writer Stewart Brand. What he did not say, however, is that “information wants to be true.”

Assuming that “free” information is true may build a marketing goliath as successful as Amazon, but it also can easily build a military AI goliath as successful as the Maginot Line. That realization must make DoD more thoughtful in planning its investments in military AI (and in sensors), as well as more skeptical of AI’s strategic reliability in great systems conflict.
