
9 April 2021

USSOCOM: Information-Enabled Command versus AI-Enabled Command

By Dr. Mark Grzegorzewski

As the excitement around artificial intelligence applications grows, United States Special Operations Command (USSOCOM) remains at the forefront of adopting this emergent technology. Special Operations Forces (SOF) have always been the tip of the spear in fighting our nation’s wars and serve as the preeminent asymmetric force. Thus, it comes as no surprise that USSOCOM would want to incorporate the potentially game-changing technology of AI into every new program. However, SOF should be careful not to become too enamored with AI tools. Rather, it should continue to focus on executing its core missions and see where AI applications may fit, instead of being captivated by a still-brittle technology that may or may not have the impact needed within SOF’s core missions. For the missions of the future, especially downrange in the future operating environment, highly advanced technology may not always be the weapon of choice. Therefore, we must both prepare the force for the potentialities of AI and stay focused on operating within the human domain without support from AI technologies.

Before delving further, it is critical to understand what is meant by both the Information Environment (IE) and Artificial Intelligence (AI). The Information Environment as defined by Joint Publication (JP) 3-12 is “the aggregate of individuals, organizations, and systems that collect, process, disseminate, or act on information.” JP 3-12 further states the IE “consists of three interrelated dimensions, which continuously interact with individuals, organizations, and systems. These dimensions are known as physical, informational, and cognitive. The physical dimension is composed of command and control systems, key decision makers, and supporting infrastructure that enable individuals and organizations to create effects. The informational dimension specifies where and how information is collected, processed, stored, disseminated, and protected. The cognitive dimension encompasses the minds of those who transmit, receive, and respond to or act on information.” The information environment is not a distinct domain but rather crosses through each of the five warfighting domains. Used as a tool, AI could hypothetically analyze and/or impact each of the dimensions of the information environment: physical, informational, and cognitive.

Artificial Intelligence (AI) is a specific academic field within computer science that explores how automated computing functions can resemble the capabilities of humans. Subfields and applications of AI include machine learning (ML), machine vision (MV), and natural language processing (NLP). The field of AI is currently producing Artificial Narrow Intelligence (also known as Weak AI), or ANI, which is AI that can execute one particular decision type well in a closed system. Artificial General Intelligence (also known as Strong AI), or AGI, is still in development and, if realized, would be able to execute multiple decision types, though it would likely have limited application in an open system. AI can be a paradigm-shifting technology, but the field is still comparatively young. If AGI can be achieved, it surely will be revolutionary. However, in the near to medium term, this is almost certainly not a realistic outcome. Major AI thinkers and practitioners alike believe we are at least 25 years away from AGI, while others claim AGI will never be realized. That said, USSOCOM, and the DOD, should continue to track these developments due to the outsized influence they may one day have on the world.

Currently, almost all AI companies do not disclose how their technologies work. This “black box” approach is standard in the industry, with the inner workings regarded as proprietary information. This means that although an AI technology has come to a conclusion on a given problem, the user does not know how the technology reached that conclusion. The user simply has to trust the model. These black box solutions could lead USSOCOM astray by perpetuating and amplifying biases. For example, black box solutions come as pre-trained software applications that cannot be interrogated about how they arrived at a result, nor can some be updated with new data. As such, adopters and users need to ask hard questions on the front end before employing this technology: Where did the training data come from? Is the training data accurate? Is there missing data? How was the training data labeled? Depending on how and where the AI technology is employed, the answers to these questions could spell both tactical and operational disaster. The biases incorporated into AI models come from the data used to train them and from the assumptions built into them. Using a less determined and less specific axiom like information-enabled command would allow USSOCOM to avoid some of the criticisms currently being laid at the feet of AI technologies. Further, recognizing that USSOCOM is an information-enabled command acknowledges that the information environment is incredibly dynamic and that USSOCOM has resolved to use all capabilities to get the best possible information to our commanders. In short, not leading with the tool and focusing instead on the mission will allow USSOCOM to choose the right tool for the job.
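The bias-amplification concern above can be made concrete with a minimal sketch. The toy "model" below simply memorizes label frequencies from an invented, deliberately skewed training set and then reproduces that skew at prediction time; the feature names, labels, and data are all hypothetical and chosen only to illustrate how a skew in training data becomes the model's behavior.

```python
# Hypothetical sketch: a toy "black box" that memorizes majority labels
# from skewed training data. All feature values and labels are invented.
from collections import Counter, defaultdict

def train(examples):
    """Build a majority-label lookup keyed on a single feature value."""
    counts = defaultdict(Counter)
    for feature, label in examples:
        counts[feature][label] += 1
    # The "model" is just the most common label seen per feature value.
    return {f: c.most_common(1)[0][0] for f, c in counts.items()}

# Skewed training set: inputs from "region_A" were over-labeled "threat".
training_data = [
    ("region_A", "threat"), ("region_A", "threat"),
    ("region_A", "benign"),
    ("region_B", "benign"), ("region_B", "benign"),
]

model = train(training_data)

# Every future input from region_A is now flagged "threat", regardless
# of its actual characteristics -- the labeling bias in the training
# data has become the model's behavior, and an opaque model offers the
# user no way to see why.
print(model["region_A"])  # threat
print(model["region_B"])  # benign
```

A user handed only the trained lookup cannot tell whether "region_A → threat" reflects reality or a labeling artifact, which is exactly why the front-end questions about training data provenance matter.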

The AI field, while rapidly progressing in recent years due to increases in computing power, data availability, and comparatively cheap power supplies, has entered a relative slowdown, as noted in McKinsey’s “State of AI 2020 Report,” which states that businesses were less bullish about the applications of AI in 2020. This slowdown can be modeled in the Hype Cycle of Gartner (a technology research and advisory firm), wherein specific technologies mature, are adopted, and then realize a social application. Where current AI technologies sit in the Gartner cycle can be debated, but it can be stated with confidence that, broadly speaking, AI’s initial expectations have not been met.

The excitement behind developments in the field of Artificial Intelligence (AI) causes many organizations to adopt AI technologies without a plan for what they want to solve or how to implement the technology. Accordingly, there has been a corresponding increase in companies purchasing AI technologies and those same companies being disappointed in the results of that investment. This is not the fault of the AI technologies. Rather, fault lies with the AI adopter for not scoping the problem (what do you want to achieve?), not determining whether AI technology is the correct solution, and not understanding what the AI can deliver (will I be able to do some things but not others?). AI is the right tool for some problems, but currently those problems are very specific.

Further, even for those problems in which AI is the correct tool for the job, many AI solutions fail because of (1) a lack of skilled specialists, (2) not enough data, and (3) an inability to measure results:

(1) The Joint MISO WebOps Center (JMWC) is just one example of where USSOCOM could increase its use of AI specialists. The JMWC’s medium, the information environment to include social media, is a space where AI can actually have some effect. As one specific example, predictive AI offers a plethora of threat/risk prediction potential. Coupled with a solid assessment suite and a team of scientists, it could improve any MISO program. Conversely, another tool for the same job may be a less technologically driven approach: employing communication specialists with experience in the linguistic, social, cultural, psychological, and physical elements needed to tailor a specific message.

(2) Big Data problems are relative. USSOCOM does not have a Big Data problem when compared to private sector organizations. Gartner defines Big Data as “data that contains greater variety arriving in increasing volumes and with ever-higher velocity.” Therefore, to have a Big Data problem, you must meet each of the three V’s: variety, volume, and velocity.

(3) Finally, SOF works in the human domain. What does right look like in the human domain, and how can it be measured? Due to the social construction of reality, the answer will always be unsatisfying to those looking to operationalize and model human perceptions in an open system. Where cultural meaning is needed to properly understand information, AI will not be the right tool; a human with proper education and training will be the better tool for the job. Nevertheless, that is not to say that AI cannot be the correct tool of choice for properly identified issue areas. AI systems have shown success operating within a closed system, such as with predictive maintenance on aircraft of the 160th Special Operations Aviation Regiment, the Night Stalkers.
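The closed-system case mentioned above lends itself to a short illustration. The sketch below flags a component for inspection when a rolling average of sensor readings drifts above a fixed threshold; the sensor values, window, and threshold are invented, and real predictive maintenance programs use far richer sensor fusion and learned models. The point is only that inputs, outputs, and "success" are all well defined here, which is where AI-style tooling works best.

```python
# Minimal closed-system sketch: flag indices where a rolling mean of
# (invented) vibration readings exceeds a threshold. A real program
# would fuse many sensors and use a learned model; this only shows
# why a closed system is tractable -- success is measurable.
from collections import deque

def maintenance_alerts(readings, window=3, threshold=5.0):
    """Yield indices where the rolling mean exceeds the threshold."""
    recent = deque(maxlen=window)
    for i, value in enumerate(readings):
        recent.append(value)
        if len(recent) == window and sum(recent) / window > threshold:
            yield i

# Simulated vibration amplitudes from one rotor assembly.
vibration = [4.1, 4.3, 4.2, 4.8, 5.6, 6.2, 6.5]
print(list(maintenance_alerts(vibration)))  # [5, 6]
```

Contrast this with the human domain: there is no equivalent threshold for "competence" or "perception," so the measurement step that makes this loop work has no clean analog in an open system.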

Providing cutting-edge solutions to ordinary, well-understood problems is a conventional approach. SOF has many of the same ordinary problems as others but is more agile and better able to exploit cutting-edge solutions like AI. Accordingly, USSOCOM should and will get more out of AI sooner than the conventional force. However, AI is best at conventional problems, so the conventional force will eventually get all the AI benefits SOF pioneers (e.g., who benefits most from better helicopter maintenance: hundreds of SOF aircraft or thousands of conventional forces’ aircraft?). Moreover, this means that USSOCOM’s Big Data problems lie with its unstructured data, a product of SOF’s unique work, meaning AI solutions will not be easily applicable.

What makes SOF important is that it handles the different, unique, complex, wicked, and “special” problems. SOF specializes in problems that are not well-understood and for which data is relatively sparse or incomplete and, most importantly, where success is not clearly defined (e.g. building “competence” in the armed forces of Niger).

In conventional forces, the human mans the equipment. In SOF, the force equips the human. This is stated another way in the first SOF truth: “People are more important than hardware [and software].” All this is to say that we should not define ourselves by our tools, which would surely lead to misapplication and disappointment. While AI may be a great technology for a particular mission, especially one involving a closed system, it may not be the best fit for many of USSOCOM’s problems. In fact, framing USSOCOM as an AI-enabled command limits the imagination when considering tools that can be applied to a particular problem. Instead, USSOCOM should focus on its mission problem set and find areas in which AI technology is the best tool to accomplish the mission, rather than having a tool in search of a mission. In some cases, AI might not be the best tool for the job; the best tool may be traditional business process automation software, scripting in a language like Python, or information technology tools like cyberspace applications. In other cases of completely unstructured, qualitative data, a computer science approach might not be applicable at all, instead calling for a social scientist to creatively analyze the problem and articulate viable solutions.

While AI technology can mimic meaning in data, only a human can derive the original intent from the information. As such, it is a better fit for USSOCOM’s human domain mission space to think of itself as an information-enabled command. To borrow from retired General Stanley McChrystal, to be information-enabled is to use information to understand what is happening, why it is happening, and what is going to happen, and then to act on that understanding. This information-enabled label opens up the toolkit and allows a broader range of options for our analysts, operators, and planners. These options allow SOF to gather historical reporting (Big Data and multi-domain intelligence), sense and respond (AI could be applied here), and then anticipate and shape the future (plan). Of course, in this last step, AI is no substitute for wisdom.

Finally, there is currently no deliberate attempt to apply data and social science to information problem sets, nor to insert AI into existing knowledge management and problem-solving workflows. This is a function of the paradigms behind both AI and Information Operations/Information Warfare adoption within wider DoD circles. As an example, little attention is given to repurposing existing unstructured data and merging it with structured data as part of the solution. This is one more reason why efforts fail: the task is viewed as a bridge too far. It need not be, however, and could be accomplished by hiring the right data science and methodological professionals and removing contractual restraints on merging research and data programs.
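To show that merging unstructured and structured data is not a bridge too far, here is a deliberately simple sketch: free-text reports (unstructured) are linked to a structured table of known sites by exact name matching. The site names, attributes, and report text are invented for illustration, and a real pipeline would use NLP entity extraction rather than exact keyword matching, but the basic join is straightforward.

```python
# Hedged sketch of merging unstructured and structured data.
# Site names, attributes, and report text are invented; real pipelines
# would use NLP entity extraction instead of exact substring matching.
sites = {  # structured data: site name -> attributes
    "al-Qaim": {"country": "Iraq"},
    "Agadez": {"country": "Niger"},
}

reports = [  # unstructured free text
    "Patrol observed increased activity near Agadez airfield.",
    "No change reported at the al-Qaim crossing this week.",
]

def link_reports(reports, sites):
    """Attach structured site attributes to each report mentioning a known site."""
    linked = []
    for text in reports:
        matches = [name for name in sites if name in text]
        linked.append({"text": text,
                       "sites": {m: sites[m] for m in matches}})
    return linked

for row in link_reports(reports, sites):
    print(sorted(row["sites"]))  # ['Agadez'] then ['al-Qaim']
```

Even this naive join turns a pile of free text into rows that can sit beside structured holdings, which is the kind of modest repurposing the paragraph above argues is being overlooked.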

Ultimately, USSOCOM, which operates in an open system known as the human domain, will have trouble making sense of information if it brings the wrong tool to the fight. Moreover, applying an AI tool based on a “black box” algorithm could plant a false premise at the core of initial planning. Accordingly, USSOCOM should consider thinking of itself as an information-enabled command as opposed to an AI-enabled command. This second framing opens up the toolbox and allows other tools, to still include AI, to be more appropriately applied to its mission set.
