7 August 2021

The Challenges of Mapping Taliban Control in Afghanistan

Jonathan Schroden

On April 14, President Biden announced that the United States would withdraw its military forces from Afghanistan no later than Sept. 11 of this year. Since that time, the Taliban have engaged in an intense offensive to capture rural territory, district centers, roads and border posts, all in an attempt to pressure the country’s cities and, by extension, its government.

The speed of the Taliban’s offensive caught many observers by surprise. This, combined with the reduced international presence in Afghanistan that has accompanied the U.S. military’s withdrawal, resulted in significant confusion as to exactly where the Taliban have made gains and what the balance of strength is between the insurgent group and the government. Many media outlets turned to a single source to help them understand what has been happening and illustrate it to their audiences: the Long War Journal map of “Taliban Contested and Controlled Districts in Afghanistan” (Figure 1). Since 2014, the journal has been tracking and updating who “controls” each of Afghanistan’s roughly 400 districts, using a combination of news reports and statements from the Taliban and the government of Afghanistan. The journal’s maps are color-coded for three categories of districts: those under Taliban control, those under government control and those that are contested. As of July 31, 2021, Long War Journal listed 223 districts under Taliban control, 73 under government control and 113 as contested.
Figure 1. Long War Journal’s map of Taliban contested and controlled districts in Afghanistan. Image credit: FDD’s Long War Journal.
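
To put the journal’s July 31 tallies in rough perspective, the short calculation below converts them into shares of districts. This is an illustrative sketch, not part of the journal’s methodology, and the figures are counts of districts only; they say nothing about each district’s area, population or strategic weight.

```python
# Rough, illustrative calculation using Long War Journal's July 31, 2021
# district tallies (not part of the journal's own methodology).
counts = {
    "Taliban-controlled": 223,
    "contested": 113,
    "government-controlled": 73,
}
total = sum(counts.values())  # 409 districts in the journal's tally

for category, n in counts.items():
    print(f"{category}: {n}/{total} = {n / total:.1%}")

# Output:
# Taliban-controlled: 223/409 = 54.5%
# contested: 113/409 = 27.6%
# government-controlled: 73/409 = 17.8%
```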

At first glance, the map shown in Figure 1 suggests that the Taliban have nearly conquered the country. Media headlines such as “America Leaves Afghanistan on the Brink of Collapse” have been common as the Taliban’s offensive has progressed (much to the frustration of some Afghan officials and reporters). Analysts have also asked questions about this map, such as: What does “control” really mean, and how should it be characterized? What are the appropriate indicators and data sources for assessing it? And are districts the best (or even a meaningful) unit of measure for it?

These are important questions, but they are not new. In “Embracing the Fog of War,” Ben Connable critiques the Hamlet Evaluation System (HES) used by the United States in the Vietnam War. The HES was used to measure, in the words of then-Secretary of Defense Robert McNamara, “the extent to which the population has been brought under government control and protection.” It did this by using a standard questionnaire to solicit inputs from advisers at provincial and district levels as to the conditions in more than 10,000 Vietnamese hamlets. The responses to these questionnaires were then fed to a computer, which aggregated the results and converted them to a variety of quantitative outputs. The rating scale used for the HES ranged from the letter A (“guerillas driven out [of the village]”) to the letter E (“village guerillas combat effective”).

Even in its day, the HES was a subject of some controversy, and Connable provides a number of critiques of it beyond those registered at the time. Five are worth highlighting here. First, he notes that while the HES ostensibly focused on “hamlets,” its A to E ratings were defined at the village level (a collection of hamlets formed a Vietnamese village; a collection of villages formed a district). Second, of the five rating levels, the first three (A through C) were optimistic in nature, which resulted in an unbalanced and positively biased rating scale. Third, the definitions for each rating level contained multiple criteria that were not necessarily covariant. This put advisers filling out the questionnaires in the difficult position of having to choose among multiple ratings that were simultaneously applicable. Fourth, the United States did not have advisers in every Vietnamese hamlet, which meant advisers were frequently asked to assess areas on which they had little or no direct visibility. And fifth, in order to generate aggregate results for policymakers, advisers’ hamlet ratings were converted into numbers and averaged together. This had the effect of negating the nuance associated with any individual rating and creating artificial and inaccurate equivalencies. For example, one district given a rating of five Cs (and therefore an average of C) would, in the aggregate, be deemed in the same situation as one with a rating of two As, one C, and two Es. All of these issues led to results at the district, provincial, and national levels that lacked the nuance necessary to fully comprehend what was happening on the ground. And yet, as Connable illustrates, senior policymakers largely trusted the results of the HES and used it as a primary input to their decisions about the war.
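
To make that final critique concrete, here is a minimal sketch of how averaging erases the difference between the two example districts. The letter-to-number scoring is hypothetical, chosen only to mirror the example above, and is not the HES’s actual software or formulas.

```python
# Illustrative sketch of the HES-style aggregation problem: mapping letter
# ratings to numbers and averaging them hides the spread among hamlets.
# The scoring below is hypothetical, used only for illustration.
RATING_TO_SCORE = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5}

def district_average(hamlet_ratings):
    """Average a district's hamlet ratings after converting letters to numbers."""
    scores = [RATING_TO_SCORE[r] for r in hamlet_ratings]
    return sum(scores) / len(scores)

uniform_district = ["C", "C", "C", "C", "C"]    # five middling hamlets
polarized_district = ["A", "A", "C", "E", "E"]  # two secure, two enemy-held

print(district_average(uniform_district))    # 3.0, i.e., an average of "C"
print(district_average(polarized_district))  # also 3.0, i.e., an average of "C"
```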

In 2010, I was working at U.S. Central Command (CENTCOM) as a strategic adviser, supporting the command’s efforts to develop effective assessment processes at the theater level, as well as processes specific to Afghanistan. By that time, the U.S. surge in Afghanistan was underway and the main operational command there, the International Security Assistance Force (ISAF) Joint Command, known as IJC, had created a district assessment model that it used to assess about 80 districts that it considered “key terrain” for the coalition’s counterinsurgency campaign. To generate this assessment, IJC relied on both quantitative data (e.g., counts of violent incidents) and qualitative data (e.g., assessments from advisers). The IJC staff used these data, in conjunction with a set of definitions comprising three to six criteria for each level of a five-point scale, to rate each key district on the quality of its governance, security and development. These ratings were then aggregated subjectively, taking into account guidance from the IJC commander, to generate an overall rating (and associated color code) for each district. The resulting overall ratings were then plotted on a map of Afghanistan, as shown in Figure 2.

Figure 2. IJC’s district assessment map.

Not to be outdone by a subordinate command, CENTCOM’s Afghanistan-Pakistan Center of Excellence (COE) decided to produce its own district assessment map. In contrast to IJC’s approach, the COE relied mostly on intelligence reporting, used different criteria, and insisted on coloring the entire map of Afghanistan, even when it had little reporting on which to base a particular district’s rating.

Then, not to be outdone by another part of the intelligence community, the Defense Intelligence Agency also produced its own district assessment map, which ostensibly relied on the same lines of intelligence reporting as the COE but typically did not generate the same results. Other elements of the U.S. government were also producing their own versions, as were international bodies such as the United Nations, which produced maps of “accessibility” for its personnel, as shown in Figure 3.

Figure 3. The United Nations’ security accessibility map.

In early 2012, I deployed to Kabul as a strategic adviser to ISAF. At one point, I was asked to write a memo critiquing the various district assessments then in production. The issues I identified were numerous, and they included some of those Connable describes for the HES: The assessments were often conducted by people without visibility on the areas they were assessing; the rating scales were multivariate, inconsistent, and in some cases biased by definition toward positive results; and the aggregation of assessments across elements such as security, governance, and development sometimes generated nonsensical results. Further, and perhaps most important, it was unclear whether Afghanistan’s districts were even the most pertinent unit of measurement (as opposed to, say, population).

Fast forward to today, and we see a similar situation. As it shifted from counterinsurgency to a military pressure campaign against the Taliban in 2019, the U.S. military stopped compiling district assessments, claiming they were no longer of value for strategic decision-making. This left Long War Journal’s map as the only public assessment of its kind. Its visual clarity, combined with the conceptual simplicity of “district control” for lay audiences, made it a natural choice for media outlets seeking to convey the speed of the Taliban offensive to their audiences. And while the journal deserves credit for continuing to produce its map even as public attention on Afghanistan has waxed and waned since 2014, the map nonetheless suffers from many of the same issues as its predecessors: It relies on questionable definitions of “control,” its primary data source (news reports) is inherently more likely to register Taliban takeovers of districts than government recaptures, its assessors have no direct visibility on the areas they’re assessing, and it relies on the district as the fundamental unit of assessment.

To be fair, Bill Roggio, a senior fellow at the Foundation for Defense of Democracies and the lead analyst on the map, has publicly acknowledged some of these issues. “When I first created this assessment in 2014, I was acutely aware of the inherent challenges and complexities, and I have been very open about this,” he said in a statement to Lawfare. “The assessment, when viewed over time, shows that the U.S. military’s population centric strategy failed to account for the Taliban’s very successful rural insurgency strategy.” But given the challenges involved in this type of assessment, it is not surprising that some organizations are now creating competing maps, just as they did back in 2010. The one shown in Figure 4, for example, was recently published by Radio Azadi.

Figure 4. Radio Azadi’s district control map.

Despite these critiques, I am not suggesting that we abandon Long War Journal’s map, or that we argue over whether it is better or worse than the other maps being produced. Instead, we should seek to understand the strengths and weaknesses of each and use them as imperfect, but complementary, sources of potential understanding of what’s happening in Afghanistan.

None of these maps is conceptually pure. None of these maps is precise. All of them rely on some degree of subjectivity and aggregation that distorts what they present relative to the reality on the ground. But if they are reasonably well defined, consistently compiled over time, and viewed as imperfect abstractions subject to a host of caveats, they can still help paint an important picture of what’s happening in Afghanistan. For example, have the Taliban really seized 85 percent of the country, as they recently claimed? None of the available maps lends credence to this assertion. Is the country on the brink of collapse? Probably not, as there are sharp differences in the Taliban’s ability to take rural versus urban or mountainous versus desert terrain—features not depicted in any of the maps of control. Have the Taliban seized enough of Afghanistan to warrant concern about what’s happening there? Absolutely. If Long War Journal’s map serves as a wake-up call to the world as to what is happening in Afghanistan as the United States withdraws, then despite its flaws—and like many other forms of assessment—it will have served a useful purpose.
