A recent study on health care in Ghana has been making the rounds – first on Megan McArdle’s blog and then on Marginal Revolution and Overcoming Bias. McArdle says the study shows “little relationship between consumption and health outcomes”; the other two title their posts “The marginal value of health care in Ghana: is it zero?” and “Free Docs Not Help Poor Kids.” In other words, the blogosphere take is that this is a “scary study” showing that making primary care free doesn’t work (or even perhaps that primary care doesn’t work).
But wait a minute. Here’s what the study found:
- It followed 2,592 Ghanaian children (age 6-59 months). Half were randomly selected to receive free medical care, via enrollment in a prepayment plan. The medical care included diagnosis, antimalarials and other drugs, but not deworming.
- Children with free treatment got medical care for 12% more of their episodes (2.8 vs. 2.5 episodes per year per person).
- Health outcomes were assessed after 6 months:
  - Moderate anemia (the main measure) afflicted 36 of the children who got free care, vs. 37 of the children who didn’t.
  - Severe anemia afflicted 2 of the children who got free care, vs. 3 of the children who didn’t.
  - There were 5 deaths among children who got free care, vs. 4 among children who didn’t.
- Parasite prevalence and nutrition status were also measured but not considered to be good measures of the program’s effects (since it did not include deworming or nutrition-centered care).
Would you conclude from this that the free medical care was “ineffective”? I wouldn’t – I’d conclude that the study ended up with very low sample size and low “power” because the children it studied were much healthier than expected. The researchers predicted an anemia prevalence of 10%, but the actual prevalence was just under 3%. Severe anemia and death were even rarer, making any comparison of those numbers (2 vs. 3 and 5 vs. 4) pretty meaningless. So in the end, we’re looking at a control group of 37 kids with moderate anemia and looking for a significant difference in the treatment group, from a 6-month program – one that didn’t even address all possible causes of anemia (again, there was no deworming, and it doesn’t appear that there was iron supplementation; the only relevant treatment was antimalarials).
Bottom line, free medical care didn’t appear to lead to improvement, but there also didn’t appear to be much room for improvement in this particular group. A similar critique appears in the same journal, which also points out that we don’t even know how much of the anemia in this population can be attributed to malaria vs. parasites or other factors.
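To see why the drop in prevalence matters so much, here’s a rough back-of-the-envelope power sketch (Python, standard normal approximation for a two-proportion test). The per-arm sample size of 1,296 is just half of the 2,592 children, and the 25% relative reduction in anemia is my own illustrative effect size, not a figure from the study:

```python
import math

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def two_proportion_power(p_control: float, p_treat: float,
                         n_per_arm: int, alpha: float = 0.05) -> float:
    """Approximate power of a two-sided two-proportion z-test
    with equal arms (normal approximation)."""
    z_alpha = 1.96  # critical value for alpha = 0.05, two-sided
    se = math.sqrt(p_control * (1 - p_control) / n_per_arm
                   + p_treat * (1 - p_treat) / n_per_arm)
    return norm_cdf(abs(p_control - p_treat) / se - z_alpha)

n = 1296  # roughly half of the 2,592 children in each arm

# Hypothetical effect size: free care cuts moderate anemia by 25% (relative).
power_expected = two_proportion_power(0.10, 0.075, n)    # 10% anticipated prevalence
power_actual = two_proportion_power(0.029, 0.0217, n)    # ~3% observed prevalence

print(f"power at 10% prevalence: {power_expected:.2f}")
print(f"power at  3% prevalence: {power_actual:.2f}")
```

Under these (assumed) numbers, the same trial that would have had a reasonable chance of detecting the effect at 10% prevalence has well under a 50% chance at 3% – the absolute difference to detect shrinks along with the prevalence, while the noise doesn’t shrink nearly as fast.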
Some possible explanations for the relatively low levels of anemia include:
- The presence of observers led everyone to make more use of primary care (the “Hawthorne effect,” a possibility raised by a Marginal Revolution commenter).
- Less healthy people (and/or people who used primary care less) were less likely to stay enrolled in the study (7-8% dropped out), so that the people who stayed in had better health.
- Or, for some other reason (selection of villages?), the researchers studied an unexpectedly healthy group – perhaps one that already uses primary care when it’s most important to do so, such that the “extra” visits paid for by the intervention were lower-stakes ones, or simply weren’t enough (again, only a 12% difference) to affect major health outcomes among the small number of afflicted children.
All of these seem like real possibilities to me, and the numerical results found don’t seem to strongly suggest much of anything because of the low power (as the critique observes).
I saw a similar dynamic play out a month ago: Marginal Revolution linked a new study claiming that vaccination progress has been overstated, but a Center for Global Development scholar raised serious methodological concerns about the study. I haven’t examined this debate closely enough to have a strong opinion on it, and overestimation seems like a real concern; but we want to see how the discussion and reaction play out before jumping to conclusions from the new study.
We’re all for healthy skepticism of aid programs, and we like reading new studies. But in drawing conclusions, we try to stick to studies that are a little older and have had some chance to be reviewed and discussed (and we generally look for responses and conflicting reactions). Doing so still leaves plenty of opportunities to be skeptical, as with the thoroughly discussed New York City Voucher Experiment and other ineffective social programs.