How GiveWell uses cost-effectiveness analyses

Our cost-effectiveness analysis plays a critical role in the recommendations we make to donors. For example, as a direct result of our cost-effectiveness calculations, we place a higher priority on filling funding gaps at the charities we recommend that implement deworming programs and distribute malaria nets than on directing funding to GiveDirectly, a GiveWell top charity that provides direct cash transfers. We believe that GiveDirectly is the strongest organization we’ve ever seen, but according to our analysis, cash transfers are less cost-effective in terms of impact per dollar donated than deworming treatments and malaria nets.

Accordingly, cost-effectiveness analysis is a major part of GiveWell’s research process. We dedicate a large portion of one full-time staff member’s (Christian Smith’s) capacity to this work, and others involved with GiveWell research spend a considerable amount of time engaging with our cost-effectiveness model throughout the year. We consider this analysis a key part of our output and publish our model online so that anyone can check our calculations, enter their own inputs, and see whether they agree with our approach and outputs.

This post will provide some basic information about how our cost-effectiveness analyses inform our charity recommendations.

Summary

  • We don’t believe our cost-effectiveness estimates should be taken literally, because they involve (1) subjective judgment calls; (2) educated guesses; and (3) simplifications that keep them understandable and possible to vet, both internally and externally.
  • When comparing charities’ relative cost-effectiveness, we look for differences of 2-3x or more. If we find a difference of less than 2-3x, we feel unsure whether such a difference truly exists, given the uncertainty described above. Donors’ intuitions about what size of difference is meaningful may vary.
  • Beyond prioritizing funding gaps, our cost-effectiveness analyses help us think through major questions related to charities’ work.

We don’t view our cost-effectiveness estimates as literally true.

Cost-effectiveness is arguably the single most important input into GiveWell’s charity recommendations. GiveWell looks for charities that have the greatest impact per dollar donated, and this is the metric on which we base our funding recommendations. Within GiveWell’s list of top charities, we further prioritize funding gaps according to their cost-effectiveness and what filling them would enable a charity to do.

However, we think it would be a mistake to treat our cost-effectiveness estimates as precise, high-confidence measures of the actual value a charity accomplishes:

  1. We may miss factors or make errors in our model. For example, in the past, we did not adjust the cost-effectiveness of malaria nets to account for the possibility that in some cases where the Against Malaria Foundation does not distribute nets, other funders would take its place. We have since added this adjustment to our cost-effectiveness model (see cell A58).
  2. We rely on a number of subjective inputs:
    • GiveWell’s top charities implement a number of different interventions, with different expected benefits. We recommend organizations that distribute malaria nets and implement seasonal malaria chemoprevention because of strong evidence that these interventions reduce child deaths from malaria. We recommend deworming because we think there is a possibility that children who receive deworming treatments have higher incomes later in life. We recommend cash transfers because of their impact on consumption. To compare the relative cost-effectiveness of these organizations’ work, we use a highly subjective conversion factor that enables us to compare years of healthy life with increases in income (a simplified, hypothetical sketch of such a conversion appears after this list).

      Making this comparison requires subjective value judgments about which people may reasonably disagree. There are large disagreements even among the individuals involved with GiveWell research who fill out the cost-effectiveness model; you can see this in the “Moral Weights” sheet here.

      We plan to write more about the subjective moral value judgments in our cost-effectiveness analyses in future blog posts.

    • We make other adjustments based on educated guesses. For example, we make a “replicability adjustment” for deworming to account for the possibility that the consumption increase found in a major study we rely upon would not hold up if the study were replicated (see cell A8). If you are skeptical that such a large income increase would occur, given the limited evidence for short-term health benefits and the generally unexpected nature of the findings, you may think that the effect the study measured wasn’t real, wasn’t driven by deworming, or relied on an atypical characteristic shared by the study population but not likely to be found among recipients of the intervention today, as one staff member pointed out in a comment (see cell E8). This adjustment is not well-grounded in data; the sketch after this list shows where such a discount might enter a simplified model.
  3. We often rely on poor-quality data that may change significantly from year to year. For example, a key input into our cost-effectiveness analysis for anti-malaria interventions is malaria mortality data: how many people are dying of malaria each year? Two of the most well-respected global health groups disagree. The World Health Organization (WHO) estimates that 394,000 people died of malaria in Africa in 2015 (see page 43); the Institute for Health Metrics and Evaluation (IHME) puts the figure at 629,945, approximately 1.6 times as many. (Differences in their counting methodologies were discussed, with slightly older figures, in a 2012 blog post.)
  4. We aim to balance accuracy with developing a model that can be vetted, both internally and externally. We may, in our quest for simplicity, leave out some relevant factors, even though we’re trying to model the most significant ones. We would guess that the benefit of others being able to check our work outweighs the benefit of including a large number of additional but small factors in our model.
  5. We don’t model everything. For example, potential upside (which we discuss in our charity reviews, and in the past included in a table listing our recommended charities) isn’t incorporated into our cost-effectiveness model. We also don’t model organizational strength; for example, we don’t explicitly model the effect that GiveDirectly’s organizational strength (one of the best we’ve ever seen) has on its program implementation, nor the effect that the Schistosomiasis Control Initiative’s (in our opinion, weaker) organizational strength has on its own. In general, we exclude flow-through effects from our model due to uncertainty over how best to account for them.
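To make the role of these subjective inputs more concrete, here is a minimal, hypothetical sketch of how moral weights and a replicability adjustment might enter a model of this kind. Every number, name, and function below is a placeholder invented for illustration; none of them are GiveWell’s actual figures or code.

    # A deliberately simplified, hypothetical sketch of how the subjective
    # inputs described above might enter a cost-effectiveness model. All
    # numbers are illustrative placeholders, not GiveWell's model values.

    # "Moral weights": subjective conversion factors that put different
    # outcomes (deaths averted, income increases) in a common unit of value.
    VALUE_PER_DEATH_AVERTED = 35.0            # hypothetical units of value
    VALUE_PER_DOUBLED_CONSUMPTION_YEAR = 1.0  # hypothetical units of value

    def value_per_dollar_nets(cost_per_net, deaths_averted_per_net):
        """Value per dollar for a mortality-reducing intervention."""
        return deaths_averted_per_net * VALUE_PER_DEATH_AVERTED / cost_per_net

    def value_per_dollar_deworming(cost_per_treatment, consumption_effect,
                                   years_of_benefit, replicability_adjustment):
        """Value per dollar for an income-increasing intervention, discounted
        by a subjective guess at how likely the key study's effect is to
        hold up (the "replicability adjustment" discussed above)."""
        raw_value = (consumption_effect * years_of_benefit
                     * VALUE_PER_DOUBLED_CONSUMPTION_YEAR)
        return replicability_adjustment * raw_value / cost_per_treatment

    # Hypothetical inputs, chosen only to show the mechanics:
    nets = value_per_dollar_nets(cost_per_net=5.0,
                                 deaths_averted_per_net=0.002)
    deworming = value_per_dollar_deworming(cost_per_treatment=1.0,
                                           consumption_effect=0.1,
                                           years_of_benefit=10,
                                           replicability_adjustment=0.1)
    print(f"nets:      {nets:.3f} value per dollar")
    print(f"deworming: {deworming:.3f} value per dollar")

Note how sensitive the comparison is to the subjective inputs: doubling VALUE_PER_DEATH_AVERTED doubles the modeled value of nets while leaving deworming unchanged, which is why different moral weights can produce different rankings of the same charities.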

In practice, we look for significant differences in cost-effectiveness to guide our decisions.

Due to the uncertainties and imprecision described above, we look for very large differences in modeled cost-effectiveness when making decisions about which charities to investigate or recommend.

Historically, GiveWell has treated differences of 2-3x or more as significant, although this has varied from person to person working on our model. We typically won’t move forward with a charity in our process if it appears unlikely to meet the threshold of being at least 2-3x as cost-effective as cash transfers. We think cash transfers are a reasonable baseline because of the intuitive argument that if you’re going to help someone with Program X, Program X should be more cost-effective than simply giving that person cash to buy what they need most.
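As a rough illustration of how a screening rule like this might be applied, consider the sketch below. The baseline value, the threshold, and the function are hypothetical stand-ins, not GiveWell’s actual numbers or process.

    # A minimal sketch of the screening rule described above: compare a
    # program's modeled value per dollar to a cash-transfer baseline and
    # require a multiple (2-3x here) large enough to exceed model noise.
    # Both constants below are illustrative assumptions.

    CASH_BASELINE = 0.01         # hypothetical value per dollar for cash
    SIGNIFICANCE_MULTIPLE = 2.5  # middle of the 2-3x range discussed above

    def clears_bar(program_value_per_dollar):
        """True if the program looks enough better than cash that the
        difference plausibly isn't an artifact of model uncertainty."""
        return program_value_per_dollar >= SIGNIFICANCE_MULTIPLE * CASH_BASELINE

    print(clears_bar(0.04))   # True: ~4x cash, worth further investigation
    print(clears_bar(0.015))  # False: ~1.5x cash, within the model's noise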

Another benefit of doing cost-effectiveness analyses

We also believe that intensive cost-effectiveness modeling helps us by prompting us to ask questions that could affect our view of a charity, and to quantify their importance. We believe that time spent on cost-effectiveness analyses sharpens our thinking on our recommendations and our review process, and encourages internal debate and reflection.