
11 May 2021

INTELLIGENCE AND THE TYRANNY OF PROCESS

Addison McLamb 

The 76ers’ erstwhile GM Sam Hinkie would make an impressive Army doctrine writer. Hinkie’s leadership mantra—“trust the process”—headlined the Philadelphia NBA team’s operational overhaul in 2013, but he left ignominiously soon after the team’s 1-21 start to the 2015 season. Like Hinkie, the Army’s military intelligence (MI) branch canonizes its four-step analytical process of “intelligence preparation of the battlefield.” Unfortunately, this often comes at the expense of developing creative, inductive frameworks for more abstract or asymmetric situations.

The Army’s tactical intelligence analysis manual contains neither the word “creative” nor the phrase “critical thinking.” “Think” itself (including conjugations) gets just nine mentions across the 228 pages. “Product” is strong at 128 hits, although “process” (203 hits) and “step” (330) pull ahead. And if deliverables are unclear at any point, Appendix A’s fifty-nine checkboxes help users hand-rail the sequence to completion. The manual as a whole reads like a black box focused on practitioners’ efficiency in iterating inputs and outputs rather than their efficacy in solving problems. On the whole, intelligence preparation of the battlefield, or IPB, is a very basic framework for entry-level analysts. It’s a chrysalis—something to be grown out of—not an end-all liturgy to be perfected for its own sake.

Rote processes help objectify complexity and scale quickly. Templates can be made once, then easily shared. It is psychologically comforting to check boxes. But in intelligence analysis, overemphasizing structured frameworks may be hardening our soldiers’ mental models just as modern war’s evolution toward complex, multi-domain operations becomes more salient. Creative thinking and inductive reasoning—especially as they relate to pattern recognition—need far greater emphasis in MI training.

A deductive thinking process is sequential—each step builds upon the last, ultimately arriving at some specific conclusion. The clearest example of deduction is a syllogism:

All tanks are enemies. (major premise)
B Company sees tanks. (minor premise)
Therefore, B Company sees the enemy. (conclusion)

Syllogisms are crisply satisfying. Conclusions are unsound if premises are false (perhaps not all tanks in the area are enemy, or maybe B Company misidentified tracked troop carriers as tanks), but if the premises are true and the conclusion follows, the logic stands. The IPB framework, generally speaking, is a process of using observation and investigation (in proper parlance, a “collection plan”) to test premises against reality and validate conclusions. The steps of IPB are prescriptive (“define, describe, evaluate, determine”), and ideal use results in deductive, testable conclusions about a combat situation.
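The deductive chain above can be sketched in a few lines of code, purely as an illustration (the function name and premises here are hypothetical, not doctrine): a conclusion reached this way is only as sound as its premises.

```python
# A minimal sketch of the syllogism above. Deduction is mechanical:
# if both premises hold, the conclusion follows by modus ponens.

def deduce_enemy_contact(all_tanks_are_enemy: bool, b_co_sees_tanks: bool) -> bool:
    """Return the conclusion 'B Company sees the enemy' from the two premises."""
    return all_tanks_are_enemy and b_co_sees_tanks

# True premises yield a sound conclusion.
print(deduce_enemy_contact(True, True))   # True

# A false premise (say, misidentified troop carriers) voids the conclusion,
# even though the argument's form remains valid.
print(deduce_enemy_contact(True, False))  # False
```

The mechanical character of the check is the point: testing premises against reality, as a collection plan does, is what carries all the analytical weight.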

Inductive thinking, by contrast, works from the particular to the general. Induction identifies commonalities across specific situations to reach conclusions. Those conclusions then carry varying degrees of confidence depending on the strength of the available evidence—ultimately dealing more in probability than certainty. Consider this example:

The agency was just hacked.
The hack targeted sanctions information.
Countries X and Y have technology to hack the agency.
Country X is angry about upcoming sanctions.
Therefore, it is highly likely country X hacked the agency.

In the earlier deductive example, if we know with absolute certainty that all tanks are enemies, and B Company absolutely sees tanks, then the conclusion is absolutely true. But in the inductive example above, even if every statement before the conclusion is absolutely true, it does not necessarily follow that country X hacked the agency. For all we know, country Y could have hacked the agency in retaliation for tariff increases a year prior, or perhaps X is negotiating infrastructure investment with third party Z, who would renege on the offer if further sanctions were levied against X (which may be in Y’s interest). But because intelligence analysts must provide their best recommendations, a good assessment may still be country X with high confidence—not absolute certainty.
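One way to make “high confidence, not certainty” concrete is a simple Bayesian update. The sketch below is illustrative only—the priors and likelihoods are invented numbers, not sourced estimates of any real actor’s behavior.

```python
# Inductive assessment as a Bayesian update. All probabilities here are
# invented for illustration; a real analysis would have to source them.

priors = {"X": 0.5, "Y": 0.5}      # both countries have the capability
likelihoods = {                     # P(hack targets sanctions data | actor)
    "X": 0.8,                       # X is angry about upcoming sanctions
    "Y": 0.2,                       # Y has no obvious sanctions motive
}

# Bayes' rule: posterior is proportional to prior times likelihood.
unnormalized = {c: priors[c] * likelihoods[c] for c in priors}
total = sum(unnormalized.values())
posterior = {c: p / total for c, p in unnormalized.items()}

print(posterior)  # X comes out around 0.8: high confidence, not certainty
```

Note that the output is a graded confidence, not a yes/no answer—exactly the shape of conclusion the deductive IPB drill does not produce.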

Overall, the weakness of Army intelligence education is that unconventional, asymmetric, “gray zone” threats are neither absolute nor sequential, yet MI analytical training still orbits around a deductive process that incentivizes analysts to think in procedural terms when framing “actionable” (and often overconfident) recommendations. Soldiers ought to learn IPB drills augmented with substantive curricula on creative brainstorming techniques. They should be trained to reason critically about incongruent problems, identify patterns, and draw probabilistic conclusions. Good MI analysts shouldn’t be cognitively hamstrung at step two or three of an arbitrary, sequential process simply for failing to check one of a long list of output boxes or to achieve sufficient certainty at a particular step. That is not the best way to solve dynamic problems.

Another key reason to emphasize creative, inductive thinking is that advances in battlefield computing may soon overmatch the abilities of human intelligence analysts in processing technical data. Common tactical questions MI analysts are trained to answer (e.g., What are the weather impacts on weapons sensors? Can reconnaissance drones fly today? Is the terrain suitable for armored vehicles?) typically generate yes/no answers based exclusively on quantitative inputs. Once that data becomes ingestible and computable by smart systems—maybe even in the form of wearable tech—the value added by human analysts on those questions is greatly reduced.
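Questions of that kind reduce to threshold checks over quantitative inputs, which is exactly what makes them automatable. A toy sketch, with every threshold invented for illustration:

```python
# Toy go/no-go check of the sort smart systems could automate.
# All threshold values are invented for illustration, not doctrine.

def drone_can_fly(wind_knots: float, visibility_m: float, ceiling_ft: float) -> bool:
    """Yes/no answer computed purely from quantitative weather inputs."""
    return wind_knots <= 20 and visibility_m >= 800 and ceiling_ft >= 500

print(drone_can_fly(wind_knots=12, visibility_m=1500, ceiling_ft=900))  # True
print(drone_can_fly(wind_knots=28, visibility_m=1500, ceiling_ft=900))  # False
```

The point is not the code but how little of it there is: once the inputs are machine-readable, a human analyst adds no value to this class of question.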

In the realm of artificial intelligence, new-age battlefield machines aren’t silver bullets (playing off Ludwig Wittgenstein, the limits of their algorithms are the limits of their world). But the accelerating sophistication of these algorithms underscores a need for MI analysts to be trained in solving problems of human-centered and subtle complexity—often in the realm of pattern recognition—where the barriers to digitizing good heuristics are high. Pattern recognition on the battlefield has long been cited as a known factor of success or failure for experienced commanders. Carl von Clausewitz identified war’s four elements as “danger, exertion, uncertainty, and chance,” with the only solution lying in outdoing the other “in simplicity.” Formulating reliable patterns to achieve Occam’s razor in combat may be perfected by advanced algorithms. But a likelier (and nearer) future is one where MI soldiers are still asked to provide human analyses for subjective questions and collate tactical patterns into strategic hypotheses. This synthesis is best achieved with both a deductive process and inductive training.

Overall, maintaining competitive advantage often means anticipating the skill sets in which our analyst teams, by the tide of innovation, risk becoming impotent. Training soldiers to draw critical takeaways from myriad quantitative and qualitative inputs—in a word, to think more inductively—means aligning our curricula to be meaningful in the age of information. It means less focus on rote processes and more education on creatively interpreting unstructured data into cogent ways forward.

Many options exist for increasing creative and inductive components of Army analyst training. For instance, building training curricula around combat case studies from past US battles (similar to the Harvard Business School model) could helpfully frame doctrine within real-world, hard-hitting experiences. In the Army, we like to cite the frustration apocryphally expressed by one of our adversaries who complained that it was impossible to plan against American doctrine because Americans don’t read it. But we then send soldiers to professional development courses where instructors spend four to six months grading students on their verbatim dictation of doctrinal definitions. It doesn’t fit. Additionally, including more nonmilitary books on reading lists—everything from political and social theory to fiction to critical thinking primers (I recommend Richards Heuer and Barbara Minto)—would help break analysts out of their own mental models. Such multidisciplinary academic approaches grounded in military case studies seem promising.

In Greek mythology, Poseidon’s son Procrustes was oddly cruel: he invited guests to spend the night in a bed, then stretched their bodies (if too short) or cut off their legs (if too tall) to fit more perfectly. It is an infamously brutish illustration of the asinine tendency to force all things—irrespective of variety—to conform to one standard. In intelligence, solving different problems requires different methodologies, and trying to benchmark tactical problem solving to one deductive process hardens the analytical ceiling of MI soldiers. We have a deep bench of talented analysts—officer and enlisted alike—who are ready and willing to think hard in tackling the next wave of national security issues. The process to help them win might not be another process at all.

Addison McLamb is an Army intelligence captain in the special operations community at Fort Bragg. He is a leadership fellow at the Army’s Center for Junior Officers and a Schwarzman Scholar.

The views expressed are those of the author and do not reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.
