The GiveWell Blog

How GiveWell and mainstream policymakers compare the “good” achieved by different programs

In a previous blog post, we described how we use cost-effectiveness analyses when deciding which charities to recommend to donors.

Today, we published a report that discusses how GiveWell and other actors, such as governments and global health organizations, approach one of the most subjective and uncertain inputs into cost-effectiveness analyses: how to morally value different good outcomes.

How GiveWell uses cost-effectiveness analyses

Our cost-effectiveness analysis plays a critical role in the recommendations we make to donors. For example, as a direct result of our cost-effectiveness calculations, we place a higher priority on filling funding gaps at the charities we recommend that run deworming programs and distribute malaria nets than we do on directing funding to GiveDirectly, a GiveWell top charity that distributes unconditional cash transfers. We believe that GiveDirectly is the strongest organization we’ve ever seen, but according to our analysis, cash transfers are less cost-effective, in terms of impact per dollar donated, than deworming treatments and malaria nets.

Accordingly, cost-effectiveness analysis is a major part of GiveWell’s research process. We dedicate a large part of one full-time staff member’s (Christian Smith’s) capacity to this work, and others involved with GiveWell research spend a considerable amount of time engaging with our cost-effectiveness model throughout the year. We consider this analysis a key part of our output and publish our model online so that anyone can check our calculations, enter their own inputs, and see whether they agree with our approach and outputs.

This post will provide some basic information about how our cost-effectiveness analyses inform our charity recommendations.

Maximizing cost-effectiveness via critical inquiry

We’ve recently been writing about the shortcomings of formal cost-effectiveness estimation (i.e., trying to estimate how much good, as measured in lives saved, DALYs, or other units, is accomplished per dollar spent). After conceptually arguing that cost-effectiveness estimates can’t be taken literally when they are not robust, we found major problems in one of the… Read More

Some considerations against more investment in cost-effectiveness estimates

When we started GiveWell, we were very interested in cost-effectiveness estimates: calculations aiming to determine, for example, the “cost per life saved” or “cost per DALY saved” of a charity or program. Over time, we’ve found ourselves putting less weight on these calculations, because we’ve been finding that these estimates tend to be extremely rough… Read More
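As a rough sketch of what such a calculation looks like, the figures below are entirely hypothetical and are not GiveWell estimates; the program names and numbers are invented for illustration. The core ratio is just dollars spent divided by units of good achieved:

```python
# Illustrative "cost per DALY averted" comparison.
# All program names and figures are hypothetical, not real GiveWell estimates.

def cost_per_daly(total_cost_usd: float, dalys_averted: float) -> float:
    """Cost-effectiveness as dollars spent per DALY averted (lower is better)."""
    return total_cost_usd / dalys_averted

# Two made-up programs spending the same budget with different impact:
programs = {
    "Hypothetical Program A": cost_per_daly(100_000, 2_000),  # $50 per DALY
    "Hypothetical Program B": cost_per_daly(100_000, 500),    # $200 per DALY
}

# Rank from most to least cost-effective (lowest cost per DALY first).
for name, ratio in sorted(programs.items(), key=lambda kv: kv[1]):
    print(f"{name}: ${ratio:,.0f} per DALY averted")
```

The point of the posts excerpted here is that ratios like these look precise but rest on rough, uncertain inputs (both the cost figures and the DALY estimates), which is why we put limited weight on them.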

Errors in DCP2 cost-effectiveness estimate for deworming

Two notes on this post: This post discusses flaws in a particular published cost-effectiveness estimate for deworming. It should not be taken as a general argument against deworming as a promising intervention, and it does not address various other publications on deworming including the 2003 paper by Edward Miguel and Michael Kremer. Prior to publication,… Read More

Why we can’t take expected value estimates literally (even when they’re unbiased)

While some people feel that GiveWell puts too much emphasis on the measurable and quantifiable, there are others who go further than we do in quantification, and justify their giving (or other) decisions based on fully explicit expected-value formulas. The latter group tends to critique us – or at least disagree with us – based… Read More