When we started GiveWell, we were very interested in cost-effectiveness estimates: calculations aiming to determine, for example, the “cost per life saved” or “cost per DALY saved” of a charity or program. Over time, we’ve found ourselves putting less weight on these calculations, because we’ve been finding that these estimates tend to be extremely rough (and in some cases badly flawed).
One can react to what we’ve been finding in different ways: one can take it as a sign that we need to invest more in cost-effectiveness estimation (in order to make it more accurate and robust), or one can take it as a sign that we need to invest less in cost-effectiveness estimation (if one believes that estimates are unlikely to become robust enough to take literally and that their limited usefulness can be achieved with less investment). At this point we are tentatively leaning more toward the latter view; this post lays out our thinking on why.
This post does not argue against the conceptual goal of maximizing cost-effectiveness, i.e., achieving the maximal amount of good per dollar donated. We strongly support this conceptual goal; rather, we are arguing that focusing on directly estimating cost-effectiveness is not the best way to maximize cost-effectiveness. We believe there are alternative ways of maximizing cost-effectiveness – in particular, making limited use of cost-effectiveness estimates while focusing on finding high-quality evidence (an approach we have argued for previously and will likely flesh out further in a future post).
In a nutshell, we argue that the best currently available cost-effectiveness estimates – despite having extremely strong teams and funding behind them – have the problematic combination of being extremely simplified (ignoring important but difficult-to-quantify factors), extremely sensitive (small changes in assumptions can lead to huge changes in the figures), and not reality-checked (large flaws can persist unchecked – and unnoticed – for years). We believe it is conceptually difficult to improve on all three of these at once: improving on the first two is likely to require substantially greater complexity, which in turn will worsen the ability of outsiders to understand and reality-check estimates. Given the level of resources that have been invested in creating the problematic estimates we see now, we’re not sure that really reliable estimates can be created using reasonable resources – or, perhaps, at all.
The estimate is extremely simplified. It consists of:
- Costs: two possible figures for “cost per child treated,” one for generic drugs and one for name-brand drugs. These figures are drawn from a single paper (a literature review published 3 years prior to the publication of the estimate); costs are assumed to scale linearly with the number of children treated, and to be constant regardless of the region.
- Drug effectiveness: for each infection, a single “effectiveness” figure is used, i.e., treatment is assumed to reduce disease burden by a set percentage for a given disease. For each infection, a single paper is used as the source of this “effectiveness” figure.
- Symptoms averted: the prevalence of different symptoms is assumed to be different by region, but the regions are broad (there are 6 total regions). Prevalence figures are taken from a single paper. The severity of each symptom is assumed to be constant regardless of context, using standard disability weights. Effective treatment is presumed to prevent symptoms for exactly one year, with no accounting for externalities, side effects, or long-term effects (in fact, in the original calculation even deaths are assumed to be averted for only one year).
- Putting it all together: the estimate calculates benefits of deworming by estimating the number of children cured of each symptom for a single year (based on the six regional figures re: how common symptoms are), converting to DALYs using its single set of figures on how severe each symptom is, and multiplying by the single drug effectiveness figure. It divides these DALY-denominated benefits into the costs, which are again done using a single per-child figure.
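The structure described above can be sketched in code. Every figure below is a made-up placeholder for illustration, not an actual DCP2 input; only the shape of the calculation (single cost figure, single effectiveness figure, per-symptom prevalence and disability weight, one year of symptoms averted) mirrors the estimate.

```python
# Illustrative sketch of a DCP2-style deworming cost-effectiveness
# calculation. All numbers are placeholders, NOT the DCP2's actual
# inputs; only the structure mirrors the estimate described above.

cost_per_child = 0.50      # single "cost per child treated" figure (USD)
drug_effectiveness = 0.90  # single effectiveness figure for the infection

# Per-symptom inputs for one region: prevalence among treated children
# and a standard disability weight; symptoms are assumed to last one year.
symptoms = {
    "symptom_a": {"prevalence": 0.10, "disability_weight": 0.006},
    "symptom_b": {"prevalence": 0.02, "disability_weight": 0.024},
}

def cost_per_daly(children_treated):
    """DALYs averted = sum over symptoms of
    prevalence * disability_weight * effectiveness * (1 year)."""
    dalys_averted = children_treated * sum(
        s["prevalence"] * s["disability_weight"] * drug_effectiveness
        for s in symptoms.values()
    )
    total_cost = cost_per_child * children_treated  # costs scale linearly
    return total_cost / dalys_averted

print(f"${cost_per_daly(1_000_000):.2f} per DALY")
```

Note that because both costs and benefits are assumed to scale linearly with the number of children treated, the result is the same at any scale — one symptom of the simplification.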
No sensitivity analysis is included to examine how cost-effectiveness would vary if certain figures or assumptions turned out to be off. No adjustments are made to address issues such as (a) the high uncertainty of many of the figures (which has implications for overall cost-effectiveness); (b) the fact that figures are taken from a relatively small number of studies, and are thus likely to be based on unusually well-observed programs.
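For concreteness, a minimal one-at-a-time sensitivity check of the sort that is missing could look like the following. The function and figures are hypothetical placeholders, not the DCP2's; the point is only that halving or doubling any single input moves the bottom line by a factor of two, because the inputs enter multiplicatively.

```python
# Hypothetical one-at-a-time sensitivity check for a simplified
# cost-per-DALY estimate. All figures are illustrative placeholders.

def cost_per_daly(cost_per_child, prevalence, disability_weight, effectiveness):
    # One symptom, one year of symptoms averted, mirroring the
    # simplified multiplicative structure described above.
    return cost_per_child / (prevalence * disability_weight * effectiveness)

baseline = dict(cost_per_child=0.50, prevalence=0.10,
                disability_weight=0.006, effectiveness=0.90)

for name in baseline:
    for factor in (0.5, 2.0):  # halve and double each input in turn
        varied = dict(baseline, **{name: baseline[name] * factor})
        print(f"{name} x{factor}: ${cost_per_daly(**varied):.2f} per DALY")
```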
In our view, any estimate this simple and broad has very limited application when examining a specific charity operating in a specific context.
The estimate is extremely sensitive to changes in inputs. In the course of examining it and trying different approaches to estimating the cost-effectiveness of deworming, we arrived at each of the following figures at one point or another:
| Cost per DALY for STH treatment | Key assumptions behind this cost |
|---|---|
| $3.41 | original DCP2 calculation |
| $23.92 | +corrected disability weight of ascariasis symptoms |
| $256 | -corrected disability weight of ascariasis symptoms; +corrected prevalence interpretation for all STHs and symptoms and disability weight of trichuriasis symptoms |
| $529 | +corrected disability weight of ascariasis symptoms |
| $385 | +incorrectly accounting for long-term effects |
| $326 | -incorrectly accounting for long-term effects; +corrected duration of trichuriasis symptoms |
| $138 | +correctly accounting for long-term effects |
| $82.54 | Jonah’s independent estimate, implicitly accounting for long-term effects and using lower drug costs |
Our final corrected version of the DCP2’s estimate varies heavily across regions as well:
| Cost per DALY for STH treatment | Region |
|---|---|
| $77.39 | East Asia & Pacific |
| $83.16 | Latin America & Caribbean |
| $412.22 | Middle East & North Africa |
| $202.69 | South Asia |
As we wrote previously, we believe that a helminth expert reviewing this calculation would have noticed the errors that we pointed to. This is because when one examines the details of the (uncorrected) estimate, it becomes clear that nearly all of the benefits of deworming are projected to come from a single symptom of a single disease – a symptom which is, in fact, only believed to be about 1/20 as severe as the calculation implies, and only about 1/100 as common.
So why wasn’t the error caught between its 2006 publication (and numerous citations) and our 2011 investigation? We can’t be sure, but we can speculate that:
- The DALY metric – while it has the advantage of putting all health benefits in the same units – is unintuitive. We don’t believe it is generally possible to look at a cost-per-DALY figure and compare it with one’s informal knowledge of an intervention’s costs and benefits (though it is more doable when the benefits are concentrated in preventing mortality, which eliminates one of the major issues with interpreting DALYs).
- That means that in order to reality-check an estimate, one needs to look at the details of how it was calculated.
- But looking at the details of how an estimate is calculated is generally a significant undertaking – even for an estimate as simple as this one. It requires a familiarity with the DALY framework and with the computational tools being used (in this case Excel) that a subject matter expert – the sort of person who would be best positioned to catch major problems – wouldn’t necessarily have. And it may require more time than such a subject matter expert will realistically have available.
In most domains, a badly flawed calculation – when used – will eventually produce strange results and be noticed. In aid, by contrast, one can use a completely wrong figure indefinitely without ever finding out. The only mechanism for catching problems is to have a figure that is sufficiently easy to understand that outsiders (i.e., those who didn’t create the calculation) can independently notice what’s off. It appears that the DCP2 estimates do not pass this test.
Our point here isn’t about the apparent lack of a formal double-check in the DCP2’s process (though this does affect our view of the DCP2) but about the lack of any reality-check in the 5 years since publication – the fact that at no point did anyone notice that the figure seemed off, and investigate its origin.
And the problem pertains to more than “catching errors”; it also pertains to being able to notice when the calculation becomes out of line with (for example) new technologies, new information about the diseases and interventions in question, or local conditions in a specific case. An estimate that can’t be – or simply isn’t – continually re-examined for its overall and local relevance may be “correct,” but its real-world usefulness seems severely limited.
It currently appears to us that the general structure of these estimates is too simplified and sensitive to be reliable without relatively constant reality-checks from outsiders (particularly subject matter experts), but so complex and esoteric that these reality-checks haven’t been taking place.
Improving the robustness and precision of the estimates would likely have to mean making them far more complex, which in turn could make it far more difficult for outsiders (including subject matter experts) to make sense of them, adapt them to new information and local conditions, and give helpful feedback.
The resources that have already been invested in these cost-effectiveness estimates are significant. Yet in our view, the estimates are still far too simplified, sensitive, and esoteric to be relied upon. If such a high level of financial and (especially) human-capital investment leaves us this far from having reliable estimates, it may be time to rethink the goal.
All that said – if this sort of analysis were the only way to figure out how to allocate resources for maximal impact, we’d be advocating for more investment in cost-effectiveness analysis and we’d be determined to “get it right.” But in our view, there are other ways of maximizing cost-effectiveness that can work better in this domain – in particular, making limited use of cost-effectiveness estimates while focusing on finding high-quality evidence (an approach we have argued for previously and will likely flesh out further in a future post).