
18 May 2018

COMPASS: a new AI-driven situational awareness tool for the Pentagon?

by Heather M. Roff 

In late March, the Defense Advanced Research Projects Agency (DARPA) held a proposers day for one of its new projects: Collection and Monitoring via Planning for Active Situational Scenarios (COMPASS). The new project’s goals are to increase a commander’s “situational awareness and reduce the ambiguity of actors and objectives in gray-zone environments”—where a “gray zone” is characterized as “limited conflict, sitting between ‘normal’ competition between states and what is traditionally thought of as war.”

To achieve these goals, the program is organized around three technical areas (TAs). The first is to create a software package to aid in decision making—a package that discovers “the intent of gray-zone actors, including goals, objectives, and desired strategies.” The second is to create a decision aid that provides estimates about “adversary campaigns, including the actors, relationships, timings, and dependencies of the adversary tactics.” The third technical area seeks to integrate the two decision aids into a common software architecture and operator interface—such that integration between the first and second TAs yields a common picture that recommends “probing actions” and that monitors, in real time, progress made by an adversary. The interface could also suggest that commanders adjust their strategy for countering such progress; that is, it would assist in planning and simulation and provide possible courses of action.
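To make the intended integration concrete, here is a purely illustrative sketch in Python. It is not drawn from the DARPA solicitation; the class names (IntentEstimate, CampaignEstimate, CommonPicture) and the probe-selection rule are invented here solely to suggest how outputs from the first two technical areas might be fused into a common picture that recommends probing actions.

```python
# Purely illustrative, not DARPA's design: invented data structures showing how
# outputs of the first two decision aids might feed a common operating picture.
from dataclasses import dataclass
from typing import List

@dataclass
class IntentEstimate:              # hypothetical output of the first TA
    actor: str
    goal: str
    confidence: float              # 0.0 (unknown) to 1.0 (certain)

@dataclass
class CampaignEstimate:            # hypothetical output of the second TA
    actor: str
    tactics: List[str]
    progress: float                # estimated fraction of the campaign completed

@dataclass
class CommonPicture:               # hypothetical third-TA fusion of the two aids
    intents: List[IntentEstimate]
    campaigns: List[CampaignEstimate]

    def recommend_probes(self) -> List[str]:
        # Toy rule: probe wherever intent remains uncertain but the estimated
        # campaign is already well advanced.
        probes = []
        for intent in self.intents:
            for campaign in self.campaigns:
                if (campaign.actor == intent.actor
                        and intent.confidence < 0.6
                        and campaign.progress > 0.5):
                    probes.append(f"probe {intent.actor} regarding '{intent.goal}'")
        return probes

picture = CommonPicture(
    intents=[IntentEstimate("actor-X", "disrupt local elections", 0.4)],
    campaigns=[CampaignEstimate("actor-X", ["disinformation", "cyber"], 0.7)],
)
print(picture.recommend_probes())  # -> ["probe actor-X regarding 'disrupt local elections'"]
```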

COMPASS is a novel program. Its novelty arises in part from its acceptance of a highly ambiguous, uncertain, and dynamic operating environment. Its novelty also arises from a desire to leverage a mixed strategy—one that combines traditional game-theory models of behavior with machine learning and artificial intelligence—to identify, track, and strategize about adversaries over both short- and long-term time horizons. Additionally, COMPASS acknowledges that its operating environment is likely to be “a complex adaptive system,” comprising “interconnected and interdependent physical and social environments.” Referring to this environment as “urban agglomeration,” DARPA notes that gray-zone conflicts will be waged in densely populated areas, “expressed through complex human-made physical terrain, a population of significant size and varied composition, functional infrastructure, and informational complexity.” In short, the environment will be highly urbanized, demographically heterogeneous, and informationally noisy.

Such a system would indeed be useful to commanders. Robust situational awareness is prized, along with information dominance and speed, as a necessary condition to fight and win conflicts. COMPASS’s promise, then, is to provide situational awareness through the rapid analysis of vast troves of sociological and political data, and to analyze and suggest various courses of action based on its analysis. Yet before one embraces the direction in which COMPASS points, one ought to consider exactly how it works. That is, one needs to understand the technical and theoretical assumptions that orient this tool; there are four.

Areas of concern. First, the project attempts to utilize game theory to “ascertain the intent of the adversary.” Interestingly, the project assumes that all agents are rational—meaning, I believe, that agents will act to maximize their respective utilities—and that normative theories about how agents will act are important. Explicitly excluded, however, are “descriptive theories that focus on … intangible aspects such as human judgment, irrationality, biases, [and] cognitive limitations.” While this may make sense within a streamlined game-theoretic approach, it in no way reflects the real world. Indeed, even game theorists acknowledge the importance of factors that COMPASS excludes, which is why they make ample use of theories of bounded rationality, uncertainty, imperfect information, and the very notion of “adversarial.” In game theory, though, an “adversarial” situation is—more or less—one in which Agent A, in a multi-agent environment, pursues its utility at the expense of the community’s benefit (either globally or locally). Normative constraints such as conventions, protocols, or norms do not inhibit A’s actions. Given that the operating environment for COMPASS is a gray zone, where an unknown agent is coercing, deceiving, disrupting, and manipulating sociological, political, and technical systems, normative constraints are weak at best. Moreover, one person’s rationality is another’s irrationality, and excluding human judgment on the basis of an assumption of pure rationality will not merely limit the project’s usefulness but may blind the system to novel strategies.
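A toy numerical example (mine, not part of COMPASS) shows how a pure-rationality model can misread intent. If the payoffs the system assumes for an adversary differ from the payoffs the adversary actually acts on, because judgment, bias, or domestic pressures have been excluded from the model, then the “rational” prediction and the observed behavior diverge:

```python
# Toy illustration, not COMPASS's method: a "rational" prediction is only as
# good as the payoff model behind it. All numbers are invented.

# Payoffs as the analyst's model assumes them (higher = more attractive).
modeled_payoffs = {"escalate": 2, "infiltrate": 5, "withdraw": 1}

# Payoffs the adversary actually acts on, shaped by factors the model excludes
# (judgment, bias, domestic politics).
actual_payoffs = {"escalate": 6, "infiltrate": 3, "withdraw": 1}

def predicted_action(payoffs):
    # A purely rational agent simply maximizes its utility.
    return max(payoffs, key=payoffs.get)

print(predicted_action(modeled_payoffs))  # 'infiltrate' -- what the model expects
print(predicted_action(actual_payoffs))   # 'escalate'   -- what actually happens
```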

Second, one must closely examine COMPASS’s reliance on artificial intelligence (AI) and machine learning. Since COMPASS is a planning tool, it is quite sensible to utilize AI planning algorithms. However, COMPASS will face a familiar problem: computational intractability. In plain terms, this describes a situation in which the action/state space is so enormous that either a computer cannot solve problems in a useful amount of time, or the inputs and computational space are so over-simplified that the outputs provide little accurate, actionable guidance. One might object that COMPASS can discover the intent of an adversary and plan an appropriate course of action by utilizing a wide variety of machine learning techniques, including reinforcement learning and genetic algorithms. Even so, two problems remain:

On the one hand, the project’s insistence on taking into account only rational agents conflicts with its inclusion of “cognitive space” in the environment. “Cognitive space” includes “perceptions and dispositions” of a target population—but how can one claim that this cognitive space is not “irrational” and full of biases and cognitive limitations? An adversary waging gray-zone war needs to understand the social and cognitive structures of a target population. This seems to suggest that one’s own side must have an equally strong understanding of the adversary’s cognitive construct.

On the other hand, among its outputs, COMPASS will suggest “probing” actions to a commander so that she may continually refine her understanding of adversary intent. But probing actions ought to avoid unwanted outcomes such as escalation. For DARPA, this represents a classic control theory problem—that is, “finding an optimization function that can discover actors, relationships,” and so on in a feedback loop. The hiccup comes when one acknowledges the following: If the operating environment is a “nonlinear complex system … with partial knowledge,” controlling for unforeseen and unpredictable—i.e. emergent—behavior and avoiding undesirable outcomes is difficult at best. For one must contend with all the component parts of the target environment—economic, political, military, cultural-social, informational, physical, and their interactions, as well as an adversary’s potentially incomplete understanding of the same components, an adversary’s goals, one’s own goals, and the potential effects of any probing action. The result appears to be computational complexity of a sort that is hard to estimate. 
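A back-of-the-envelope sketch suggests why that complexity is hard even to estimate. The numbers below are arbitrary assumptions of mine, not figures from the solicitation: even a crude ten-way discretization of each component domain, combined with a modest menu of probing actions and a short planning horizon, already produces astronomically many possibilities.

```python
# Rough sketch with invented numbers: how the joint state space and the planning
# tree blow up even under very coarse assumptions.
domain_states = {
    "economic": 10, "political": 10, "military": 10,
    "cultural-social": 10, "informational": 10, "physical": 10,
}

joint_states = 1
for states in domain_states.values():
    joint_states *= states                  # 10**6 joint states at 10 values per domain

actions_per_step = 20                       # candidate probing actions (assumed)
horizon = 8                                 # planning steps ahead (assumed)
candidate_plans = actions_per_step ** horizon

print(f"joint environment states: {joint_states:,}")     # 1,000,000
print(f"candidate 8-step plans:   {candidate_plans:,}")  # 25,600,000,000
```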

This leads to the third assumption underlying COMPASS—that complexity can be reduced through various mixed software architectures as long as the right kind of data is fed into the systems. According to DARPA, data for COMPASS ought to include “expert knowledge of predefined actions, states, benefits, and observations” as well as events-based data. Moreover, all textual data given to an operator ought to be expressed in natural language, particularly English. Such an approach would be useful for US commanders, but even for data already in English, natural language processing models are built on standard forms of the language that are not followed on social media websites and that fail to capture lexical diversity (a small sketch after Table 1 illustrates the gap). Matters only get worse for languages other than English: processing them could prove quite faulty unless their structure, semantics, and cultural context are given careful consideration—especially because, as shown in Table 1, COMPASS is expected to process data from a very broad range of sources:

[Chart not reproduced: COMPASS data sources]


Table 1. Source: DARPA HR001118S0022 April 4, 2018.
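The sketch below (my illustration, not part of COMPASS) makes the lexical-diversity point concrete. Real natural-language pipelines are far more sophisticated than a vocabulary lookup, but they face an out-of-vocabulary gap of the same kind when standard-English models meet social-media text:

```python
# Minimal illustration with an invented vocabulary: standard English is covered,
# while a social-media-style rewrite of the same content largely is not.
standard_vocab = {"the", "protest", "was", "planned", "for", "friday",
                  "near", "government", "buildings"}

formal = "the protest was planned for friday near government buildings"
social = "fr fr protest finna pop off Fri nxt 2 da govt bldgs lol"

def oov_rate(text, vocab):
    # Share of tokens the model has never seen ("out of vocabulary").
    tokens = text.lower().split()
    return sum(1 for t in tokens if t not in vocab) / len(tokens)

print(f"formal text OOV rate:       {oov_rate(formal, standard_vocab):.0%}")  # 0%
print(f"social-media text OOV rate: {oov_rate(social, standard_vocab):.0%}")  # ~92%
```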

While all the variables shown in the table are useful, they are only useful insofar as one’s model of the environment, and of the relationships among the variables, is correct. If the model itself is faulty, or if it weights variables incorrectly, it will yield biased results. To appreciate this risk, consider another algorithm—known, somewhat confusingly, as COMPAS. This algorithm was used to inform sentencing decisions by making recidivism risk predictions. However, it has been accused of producing biased and racist risk scores, with African-American defendants having a false positive rate double that of white defendants. Incorrect behavior models, as well as models built on faulty assumptions or data, will only yield invalid conclusions.
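To see concretely what a doubled false positive rate means, here is a small illustrative calculation. The group counts are hypothetical, chosen only to mirror the rough size of the disparity reported for COMPAS; they are not the actual figures:

```python
# Illustrative arithmetic only; the group counts are invented.
def false_positive_rate(false_positives, true_negatives):
    # Among people who did NOT reoffend, the share wrongly flagged as high risk.
    return false_positives / (false_positives + true_negatives)

# Hypothetical non-reoffenders per group, split by how the algorithm scored them.
group_a = {"false_positives": 450, "true_negatives": 550}
group_b = {"false_positives": 230, "true_negatives": 770}

fpr_a = false_positive_rate(**group_a)   # 0.45
fpr_b = false_positive_rate(**group_b)   # 0.23
print(f"group A: {fpr_a:.0%}, group B: {fpr_b:.0%}, ratio: {fpr_a / fpr_b:.1f}x")
# -> group A: 45%, group B: 23%, ratio: 2.0x
```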

The fourth assumption underlying COMPASS—and a fourth area of concern—involves the computing power needed to run calculations and simulations on all the data that COMPASS will gather. The stated requirements for COMPASS include the provision of a facility for processing both classified and unclassified information. Since the program’s third technical area combines the first two, the processing facility will likely need to host all the program’s data. But another interesting twist comes into play: The Defense Department is now preparing to award a $10-billion, 10-year cloud computing contract, known as Joint Enterprise Defense Infrastructure, to a single company. Several of the leading cloud hosting companies—including Amazon, Google, and Microsoft—are competing to host all of the Defense Department’s data. Since COMPASS makes room for “commercial cloud hosting services” as part of its remit, it seems the program will be required to utilize these same services at some point. But marrying a project such as COMPASS to large commercial companies—which themselves host large amounts of social media information and open-source intelligence—raises interesting ethical and legal questions.

Asking too much? A final point—acute attention must be paid to the human-machine interface envisioned in the project’s third technical area. As DARPA notes, COMPASS cannot “rely on solely automated methods” and humans will be needed to “resolve ambiguity” in some scenarios. Achieving such a balance may be a very difficult challenge for the program. Given the vast scope of information processing and data that will be involved in COMPASS, as well as the project’s reliance on machine learning technology that is currently opaque to users, a human operator or commander’s ability to resolve ambiguity may be quite limited. Though the project’s architecture will rely on a simplified game-theory approach and a mixture of AI techniques, human operators will still approach any situation with their own mix of heuristics, biases, and bounded rationality. This may in fact complicate, rather than resolve, ambiguity.

Ultimately, while COMPASS as a project idea is in lockstep with the needs of the US military, DARPA may be asking for more than can possibly be delivered. If COMPASS does deliver, it risks doing so via oversimplification of the real world or reification of existing biases. And since the entire goal of the project is to identify who adversaries are, what their intent is, and who is in their network, the amount of prior knowledge needed to construct such a system seems to exceed what the project’s structure provides for. Additionally, the lurking possibility that commercial services will become involved carries with it pressing questions about privacy, surveillance, and the simultaneous hosting and computing of user data alongside classified and unclassified military intelligence.

COMPASS seeks to point the US military in a promising direction. Whether it can do so is by no means clear.

Heather M. Roff is an associate research fellow at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge. Her research interests include the ethics of emerging military technologies, such as artificial intelligence, machine learning, and autonomous systems, as well as international humanitarian law and humanitarian intervention and the responsibility to protect.

