18 May 2020

Don't Make the Pandemic Worse with Poor Data Analysis

by Matthew D. Baird, David G. Groves, Osonde A. Osoba, Andrew M. Parker, Ricardo Sanchez, Claude Messan Setodji

The COVID-19 pandemic thrust world leaders into a situation where they must make public health decisions based on incomplete information. At the same time, technological developments—which have increased real-time data and the ability to share it—are creating an overabundance of information, making it easier to draw spurious conclusions.

The six of us lead research centers at the nonprofit, nonpartisan RAND Corporation that develop statistical methods and models for using large-scale data and for incorporating uncertainty into the decision processes that rely on those data. What we know from this work is that situations like the present one are rife with statistical pitfalls. Those analyzing COVID-19 data to make policy recommendations, and the journalists who report research findings to the public, must discern when analyses have fallen into these traps.

The need for immediate answers in the face of severe public health and economic distress may create a temptation to relax statistical standards. But urgency should not preclude expert analysis and honest assessments of uncertainty. Mistaken assumptions could lead to counterproductive actions.

Two recent news stories demonstrate the potential pitfalls of incomplete analysis or insufficient data.

Novel Observational Data

The New York Times recently featured data from the website U.S. Health Weather Map, claiming “new data offer evidence, in real time, that tight social-distancing restrictions may be working.”

The website's data come from smart thermometers that record and anonymously transmit the body temperatures of individuals across the United States. The map shows when average body temperatures, by county, are higher or lower than normal and claims that “social distancing is slowing the spread of feverish illnesses across the country,” referencing correlational time-series data.


There may be good theoretical reasons to expect a causal relationship, but the website presents no statistical evidence of causality. For example, lower recorded temperatures since social distancing began could simply reflect that a large number of healthy people bought smart thermometers and are checking their temperatures frequently out of concern about COVID-19.

Such observational data also can introduce analytical biases and weaknesses. For example, those who buy smart thermometers may be wealthier and better able to self-isolate—but they are the only people included in the map's data.
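
To see how easily this selection effect can mimic a real decline, consider a minimal simulation in Python; every number in it is invented for illustration, and none comes from the map's data.

```python
import random

random.seed(0)

def mean_reading(n_sick, n_healthy):
    """Average recorded temperature (F) across a mix of users."""
    sick = [random.gauss(101.0, 0.8) for _ in range(n_sick)]       # feverish readings
    healthy = [random.gauss(98.6, 0.4) for _ in range(n_healthy)]  # normal readings
    readings = sick + healthy
    return sum(readings) / len(readings)

# Before: mostly people who feel ill take their temperature.
before = mean_reading(n_sick=500, n_healthy=500)

# After: the same number of sick users, but many healthy "worried well"
# have bought thermometers and check frequently. Illness is unchanged.
after = mean_reading(n_sick=500, n_healthy=5000)

print(f"average reading before: {before:.2f} F, after: {after:.2f} F")
# The county average falls even though the number of sick users is
# identical, so a dropping average need not mean less illness.
```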

Novel data like this must be evaluated carefully. If the data are not public and transparent, the public and research communities have little ability to review, understand, and build upon such findings.

Raw Data Trends

Weeks before the Centers for Disease Control and Prevention encouraged Americans to wear masks, the website Masks Save Lives was doing so and gaining traction on social media. The site features a chart from the Financial Times plotting raw national trends in total confirmed cases. It claims that “Western countries are experiencing higher rates of COVID-19 … because of the West's aversion to wearing masks.”

Good statistical analysis may well support wearing masks, but the website's reasoning, which relied only on raw data trends, was overly simplistic.

We used the same data but altered the graphic to show confirmed cases per 10,000 people. Simply by accounting for the population of each country, the pattern changes and the differences between countries are less pronounced, particularly early on in the trajectory of the disease.
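
The adjustment itself is simple. Here is a sketch in Python, using placeholder countries and figures rather than the Financial Times data:

```python
# Convert raw cumulative case counts to cases per 10,000 residents.
# Countries, counts, and populations are illustrative placeholders.
data = {
    "Country A": {"cases": 80_000, "population": 330_000_000},
    "Country B": {"cases": 9_000, "population": 10_000_000},
}

for country, d in data.items():
    per_10k = d["cases"] / d["population"] * 10_000
    print(f"{country}: {d['cases']:,} raw cases, {per_10k:.1f} per 10,000")

# Country B reports far fewer raw cases but has the higher per-capita
# rate, which is why raw trend lines can invert the real comparison.
```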

Reminders for Analysts, Journalists, and Policymakers

Comparisons of raw data can be misleading. Analyses should adjust for important differences such as demographics, population density, health care systems, and availability and level of testing.
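
One standard adjustment is direct standardization: re-weight each region's group-specific rates to a shared reference population so that the comparison is not driven by composition. A sketch with invented numbers:

```python
# Sketch: direct age standardization. Re-weight each region's
# age-specific death rates to a shared reference age mix so the
# comparison is not driven by one region simply being older.
# All numbers here are invented for illustration.
standard_mix = {"young": 0.6, "old": 0.4}    # shared reference age weights
age_rates = {                                # deaths per 10,000, by age group
    "Region A": {"young": 1.0, "old": 20.0},
    "Region B": {"young": 1.5, "old": 25.0},
}

for region, rates in age_rates.items():
    adjusted = sum(standard_mix[g] * rates[g] for g in standard_mix)
    print(f"{region}: age-adjusted rate {adjusted:.1f} per 10,000")
```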

Measurement error can swamp observed differences. Given differing levels of testing, accuracy, and transparency of reporting, case numbers usually are not comparable across countries or data sources, and accurate counts are almost certainly not available in real time. The quality of the data also depends on how individuals are selected for testing. Random or complete samples within a geographical area provide the most reliable information.
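
A toy example of the testing problem, with invented detection rates, shows how two identical epidemics can look very different on paper:

```python
# Two regions with the same true number of infections report very
# different confirmed counts once testing coverage differs.
# All figures here are invented for illustration.
true_infections = 10_000
ascertainment = {"Region X": 0.40, "Region Y": 0.08}  # share of infections detected

for region, rate in ascertainment.items():
    confirmed = int(true_infections * rate)
    print(f"{region}: {confirmed:,} confirmed of {true_infections:,} true infections")

# Comparing 4,000 confirmed cases against 800 says more about testing
# than about the underlying epidemics.
```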

Compliance is a factor. Analysts sizing up the effectiveness of a policy such as face masks may bias their conclusions if they ignore variation in the degree of public compliance. Even if confidence levels in the findings cannot be calculated precisely, they can and should be described qualitatively.
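
One way to picture the bias: suppose, purely as an assumption for illustration, that a mandate's effect scales linearly with compliance. Two regions with the same policy on the books then receive very different "doses":

```python
# Hypothetical: a mask mandate that would cut transmission by 25% at
# full compliance, applied in two regions with different compliance.
# Both the 25% figure and the compliance rates are assumptions.
effect_at_full_compliance = 0.25
compliance = {"Region X": 0.85, "Region Y": 0.30}

for region, c in compliance.items():
    realized = effect_at_full_compliance * c
    print(f"{region}: compliance {c:.0%}, realized reduction ~{realized:.0%}")

# Coding both regions as identically "treated" would average these very
# different doses together and bias the estimated policy effect.
```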

Overlapping interventions complicate analysis. It is not a simple task to tease out the effectiveness of any single policy (for example, requiring masks, quarantining, or closing businesses) when multiple interventions are deployed simultaneously. There may also be differences in how aggressively various policies are enforced and in how closely people comply with them. It is even harder to determine causality when the data are collected through observation rather than through experiments.
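
The identification problem can be made concrete. When two policies begin on the same day, their indicator variables are perfectly collinear, and no regression on those data can separate their effects; a sketch using NumPy, with hypothetical dates:

```python
import numpy as np

# Two interventions adopted on the same day produce identical indicator
# columns in a regression design matrix (dates here are hypothetical).
days = np.arange(30)
masks = (days >= 15).astype(float)     # mask order begins day 15
closures = (days >= 15).astype(float)  # closures begin the same day

X = np.column_stack([np.ones(30), masks, closures])
print("columns:", X.shape[1], "rank:", np.linalg.matrix_rank(X))

# Rank 2 with 3 columns: the two policy indicators are perfectly
# collinear, so only their combined effect is estimable from these data.
```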

Public reaction to the pandemic needs to be accounted for. Precautions taken by people in areas where COVID-19 is not spiking may make it appear that those actions were unnecessary, but that isn't necessarily the case. Further, there may be lags in the impact of a policy; the absence of an immediate change upon implementation does not mean the policy was ineffective.
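
A small simulation of the lag effect, assuming (purely for illustration) a 10-day delay between infection and a confirmed report:

```python
# Illustrative only: a policy cuts transmission at once on day 20, but
# confirmed cases are reported with an assumed 10-day delay.
policy_day, lag = 20, 10

infections = []
level = 100.0
for day in range(40):
    level *= 1.15 if day < policy_day else 0.85  # growth flips to decline
    infections.append(level)

reported = [0.0] * lag + infections[:-lag]       # shift the curve by the lag
peak_inf = max(range(40), key=lambda d: infections[d])
peak_rep = max(range(40), key=lambda d: reported[d])
print(f"infections peak on day {peak_inf}, reported cases peak on day {peak_rep}")

# Reported cases keep rising for ~10 days after the policy takes effect,
# so a lack of immediate change does not mean the policy failed.
```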

Neglecting to report uncertainty undermines the policy debate. Explaining uncertainty can be complicated, leading to the temptation to omit it. This is always a mistake. Providing plausible ranges of estimates and trends, along with explanatory language of what that uncertainty means, will best position audiences to properly frame the findings.
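
Even a simple point estimate can be paired with a plausible range. A sketch using a normal-approximation binomial interval on a hypothetical test-positivity sample:

```python
import math

# Hypothetical sample: 120 positives out of 1,000 tests in one county.
positives, tests = 120, 1_000

p = positives / tests
se = math.sqrt(p * (1 - p) / tests)    # standard error of a proportion
lo, hi = p - 1.96 * se, p + 1.96 * se  # ~95% normal-approximation interval

print(f"positivity: {p:.1%} (95% CI roughly {lo:.1%} to {hi:.1%})")
# Reporting "about 10% to 14%" frames the finding far better for
# readers than the bare point estimate "12%".
```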

Start from biological and epidemiological theory. Even the most careful statistical analysis may lead to incorrect conclusions if based on spurious relationships. Policy analysis should focus on the interventions that have strong underlying scientific merit.

Given the large number of novel data sources and the free sharing of information, governments are better positioned than ever to make informed decisions. But to take advantage of these sources, researchers and journalists alike need to be sure they apply the best methods possible and are always clear about the limitations of their findings.
