Adam Carolla once commented that mis-estimation of an activity's duration can be blamed on the Mike Tyson Principle: 3 minutes in a ring with Iron Mike can seem a lot longer than it really is. (Carolla's original version used a much funnier, sexual joke, of course.) Crassly segueing to humanitarian emergencies, the severity of a crisis is frequently mis-estimated in the middle of one. Yet the estimate of how many people are suffering is a crucial criterion for intervention. Heudtlass et al. push the principles underlying Bayesian statistics to solve this problem - but they get the answer only half right.
To start with, Bayesian statistics uses a 'pre-test' probability to get to a 'post-test' probability. The implication is that just knowing the result of a test isn't sufficient; it has to be read in context. Many Bayesian proponents use the breast cancer analogy, but I'll use a slightly different one to help explain the concept. Imagine 3 patients with similar chest pain who come to your office:
- a 22 year old male who just got a terrible score on his medical school entrance exam (MCAT)
- a 50 year old male with hypertension and smoking
- an 80 year old male with 1 prior heart attack and subsequent triple-bypass surgery
For which patient(s) is heart disease the problem? After first ensuring that the patient isn't currently having a heart attack and stabilizing the patient, the next 'test' that should be done is a 'stress' test. The classical and intuitive way to interpret stress test results would be to say that if the test is positive, then it's the heart that's causing the pain, and if it's negative, it's not the heart. But Bayesian stats would argue that you have to take the prior probability of heart disease into account. In this way, if the 22 year old had a 'positive' stress test - it's most likely an error (false positive) and he probably doesn't have heart disease. Similarly, if the 80 year old had a 'negative' test, that shouldn't be believed either as it's probably a 'false negative'. Realistically then, in clinical terms, you shouldn't do a stress test for either of these patients because your pre-test probability is already so extreme - very low for the 22 year old, very high for the 80 year old - that no matter what the test result, it won't change what you would do for this patient: you'd still likely send the 22 year old home and hospitalize the 80 year old. Where the stress test really helps is in the 50 year old when you're not really sure.
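The three-patient intuition above can be made concrete with Bayes' theorem. Here's a minimal sketch - the sensitivity, specificity, and pre-test probabilities are made-up illustrative numbers, not real clinical values:

```python
# Bayes update for a diagnostic test: post-test probability from a
# pre-test probability plus the test's sensitivity and specificity.
# All numbers below are hypothetical, for illustration only.

def post_test_probability(prior, sensitivity, specificity, positive):
    """Return the probability of disease after a test result."""
    if positive:
        true_pos = prior * sensitivity
        false_pos = (1 - prior) * (1 - specificity)
        return true_pos / (true_pos + false_pos)
    else:
        false_neg = prior * (1 - sensitivity)
        true_neg = (1 - prior) * specificity
        return false_neg / (false_neg + true_neg)

sens, spec = 0.85, 0.80  # assumed stress-test performance

patients = [("22 year old", 0.01), ("50 year old", 0.40), ("80 year old", 0.95)]
for label, prior in patients:
    pos = post_test_probability(prior, sens, spec, positive=True)
    neg = post_test_probability(prior, sens, spec, positive=False)
    print(f"{label}: prior {prior:.0%} -> if positive {pos:.0%}, if negative {neg:.0%}")
```

With these toy numbers, a positive test still leaves the 22 year old at only about a 4% chance of heart disease, and a negative test still leaves the 80 year old near 80% - exactly why the test only changes your decision for the 50 year old.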
Heudtlass shows how these concepts can improve emergency evaluations. He first uses the Doctors Without Borders (MSF) survey showing extraordinarily high mortality in Yida, South Sudan in 2012. He argues that they should have applied a prior probability to come up with a more reasonable 'post-test' probability and, in this way, would have spoken with more credibility. I agree with him. Just giving the result of a test isn't always informative, as in the stress test examples above. Test results need to be put in statistical perspective and compared to the pre-test probability. But he then strays from Bayesian principles, making two crucial mistakes.
The first mistake he makes is to use the general sub-Saharan African mortality rates as his prior probability. He neglects to follow the evidence suggesting that mortality increases during conflicts, from both direct war causes and non-war causes (e.g. epidemics, malnutrition, etc.). And second, Bayesian statistics are iterative. Once you have a new post-test probability, that becomes 'testable' again until it either converges to fit a hypothesis...or you'll have to restart your thinking. In this case, it's hard to say whether the MSF estimates were accurate without further testing the hypothesis. Were there post-emergency surveys? Did malnutrition rates concord with the mortality survey? What about vaccination rates or vaccine-preventable disease outbreaks - were they consistent with such a high mortality rate? Heudtlass doesn't know because he doesn't follow up on the question.
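That iterative point - today's posterior is tomorrow's prior - can be sketched with a simple Beta-Binomial model. The numbers here are entirely hypothetical (this is not MSF's actual Yida data), just a conceptual illustration of successive surveys tightening an estimate:

```python
# Iterative Bayesian updating with a Beta-Binomial model of a mortality
# proportion. All counts are invented for illustration, not real survey data.

def beta_update(alpha, beta, deaths, survivors):
    """Conjugate update: fold observed deaths/survivors into Beta parameters."""
    return alpha + deaths, beta + survivors

# Prior belief about mortality, encoded as Beta(2, 98): mean ~2%, weakly held.
alpha, beta = 2.0, 98.0

# Each survey's counts update the belief; the posterior from one round
# becomes the prior for the next - the iteration the post argues for.
hypothetical_surveys = [
    (12, 188),  # initial emergency survey: 12 deaths among 200 observed
    (9, 191),   # follow-up survey: 9 deaths among 200 observed
]
for deaths, survivors in hypothetical_surveys:
    alpha, beta = beta_update(alpha, beta, deaths, survivors)
    mean = alpha / (alpha + beta)
    print(f"updated mortality estimate: {mean:.1%}")
```

If follow-up surveys, malnutrition data, and outbreak reports all pushed the estimate in the same direction, the posterior would converge; if they disagreed, that would be the signal to restart your thinking.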
He's right that we should move past frequentist stats and promote humanitarian thinking in a probabilistic sense. Especially in emergencies, where situational knowledge is frequently opaque, probabilistic and iterative thinking should be prioritized. Unfortunately, in his examples, he chooses an inappropriate pre-test probability and fails to follow the evidence to see whether it converges onto a narrower confidence interval.
Full disclosures: a. I've worked with MSF in the past and b. I don't have the technical chops to hang with his math - I'm writing more about his concepts.