The GiveWell Blog

Updated thoughts on our key criteria

For years, the three key things we’ve looked for in a charity have been (a) evidence of effectiveness; (b) cost-effectiveness; (c) room for more funding. Over time, however, our attitude toward all three of these things – and the weight that we should put on our analysis of each – has changed. This post discusses why.

  • On the evidence of effectiveness front, we used to look for charities that collected their own data to make a compelling case for impact. We no longer expect to find this. We believe that the best evidence of effectiveness is likely to come from independent literature (such as academic studies), and that if a program lacks a strong independent case, a charity is unlikely to be able to demonstrate impact with that program.
  • We have continually lowered our expectations for how much role cost-effectiveness analysis will play in our decisions. We still believe that doing such analysis is worthwhile when possible – partly because of the questions it raises – but we believe the cases where it can meaningfully distinguish between two interventions are limited.
  • We have continually raised our expectations for how much role room for more funding analysis will play in our decisions. Questions around “room for more funding” are now frequently the first – and most core – questions we ask about a giving opportunity.

Evidence for effectiveness
In our 2007-2008 search for outstanding charities, we took applications and asked charities to make their own case for impact. In 2009, we identified evidence-backed “priority programs” using independent literature, but still actively looked for charities (even outside these programs) with their own evidence of effectiveness. In 2011, we continued this hybrid approach.

In all of these searches, we’ve found very little in the way of “charities demonstrating effectiveness using their own data.”

We believe the underlying dynamic is that

  • Evidence on these sorts of interventions is very difficult and expensive to collect.
  • It’s particularly difficult to collect such evidence in a way that addresses the concerns we believe to be most common and important in the context of evaluating charitable programs.
  • Studies that can adequately address these issues are generally “gold-standard” studies, and are therefore of general interest (and can be found by searching independent/academic literature).

Accordingly, our interest in “program evaluation” – the work that charities do to systematically and empirically evaluate their own programs – has greatly diminished. We are skeptical of the value of studies that fall below the “gold standard” bar that usually accompanies high-reputation independent literature.

This shift in our thinking has greatly influenced how our process works and what we expect it to find. Rather than putting a lot of time into scanning charities’ websites for empirical evidence, as we did previously, we now are focused on identifying the evidence-backed interventions, then finding the vehicles by which donors can fund these interventions.

Cost-effectiveness
The ultimate goal of a GiveWell recommendation is to help a donor accomplish as much good as possible, per dollar spent. Accordingly, we have long been interested in trying to estimate how much good is accomplished per dollar spent, in terms such as lives saved per dollar or DALYs averted per dollar.

Over the years, we’ve put a lot of effort into this sort of analysis, and learned a lot about it. In particular:

  • In sectors outside of global health and nutrition, it is generally impractical to connect measurable outcomes to meaningful outcomes (for example, we may observe that an education program raises test scores, but it is very difficult to connect this to something directly related to improvements in quality of life). Not surprisingly, the vast majority of attempts to do cost-effectiveness analysis (including both GiveWell’s attempts and others’ attempts) have been in the field of global health and nutrition.
  • Within global health and nutrition, even the most prominent, best-resourced attempts at cost-effectiveness analysis have had questionable quality and usefulness.
  • Our own attempts to do cost-effectiveness analysis have turned out to be very sensitive to small variations in basic assumptions. Such sensitivity is directly relevant to how much weight we should put on such estimates in decision-making.
  • That said, we continue to find cost-effectiveness analysis to be very useful when feasible, partly because it is a way of disciplining ourselves to make sure we’ve addressed every input and question that matters on the causal chain between interventions (e.g., nets) and morally relevant outcomes (e.g., lives saved). In addition, cost-effectiveness analysis can be useful for extreme comparisons, identifying interventions that are extremely unlikely to have competitive cost-effectiveness (for example, see our comparison of U.S. and international aid).
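To make the sensitivity point above concrete, here is a toy calculation. The simple causal model and every number in it are invented for illustration – they are not GiveWell estimates – but they show how modest shifts in a few inputs can move a bottom-line cost-per-life-saved figure by a factor of three.

```python
# Hypothetical illustration of how sensitive a cost-per-life-saved
# estimate can be to small changes in basic assumptions.
# All numbers are invented; they are not GiveWell figures.

def cost_per_life_saved(cost_per_net, nets_per_person,
                        mortality_reduction, baseline_deaths_per_1000):
    """Dollars to avert one death, via a simple causal chain:
    dollars -> nets -> people covered -> deaths averted."""
    people_covered_per_dollar = 1 / (cost_per_net * nets_per_person)
    deaths_averted_per_person = (baseline_deaths_per_1000 / 1000) * mortality_reduction
    return 1 / (people_covered_per_dollar * deaths_averted_per_person)

# Base-case assumptions
base = cost_per_life_saved(5.0, 0.5, 0.20, 5.0)

# Modestly less favorable assumptions for each input
pessimistic = cost_per_life_saved(6.0, 0.6, 0.12, 4.0)

print(f"base estimate: ${base:,.0f} per life saved")
print(f"pessimistic estimate: ${pessimistic:,.0f} per life saved")
print(f"ratio: {pessimistic / base:.1f}x")
```

No single input changed drastically, yet the estimate roughly triples – which is why such estimates are more useful for extreme comparisons than for fine distinctions between similar interventions.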

While we still intend to work hard on cost-effectiveness analysis, and we still see value in it, we do not see it as holding out much promise for helping to resolve difficult decisions between one giving opportunity and another. We find other criteria to be easier to make distinctions on – criteria such as strength of evidence (discussed above) and room for more funding (discussed below).

Room for more funding
For the first few years of our history, we knew that the issue of room for more funding was important, but we made little headway on figuring out how to assess it. We tried asking charities directly how additional dollars would be used, but didn’t receive very helpful answers (see applications received for our 2007-2008 process).

In 2010, as a result of substantial conversations with VillageReach, we developed the basic approach of scenario analysis, and since then we’ve used this approach to reach some surprising conclusions, such as the lack of short-term room for more funding for the Nurse-Family Partnership and recommending KIPP Houston rather than the KIPP Foundation due to “room for more funding” issues.
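A minimal sketch of how scenario analysis along these lines might work: project an organization’s likely revenue from other funders under several scenarios, stack its planned activities in the order it would fund them, and ask what a marginal donation would actually pay for in each scenario. The activities and dollar figures below are invented for illustration.

```python
# Hypothetical sketch of "room for more funding" scenario analysis.
# Activities and figures are invented, not drawn from any real charity.

activities = [  # (name, cost), in the order the organization would fund them
    ("core program, region A", 400_000),
    ("core program, region B", 300_000),
    ("expansion to region C", 250_000),
    ("new pilot program", 150_000),
]

def marginal_use(expected_other_revenue):
    """Return the first activity NOT covered by other funders' dollars."""
    remaining = expected_other_revenue
    for name, cost in activities:
        if remaining >= cost:
            remaining -= cost
        else:
            return name
    return "no identified funding gap"

for scenario, revenue in [("low", 500_000), ("medium", 750_000), ("high", 1_200_000)]:
    print(f"{scenario} revenue scenario: marginal dollars fund '{marginal_use(revenue)}'")
```

Note how the conclusion flips across scenarios: in the high-revenue scenario the marginal donation funds nothing the organization has identified, which is the kind of result that can lead to a “no short-term room for more funding” finding.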

By now, room for more funding is in some ways the “primary” criterion we look at, in the sense that it’s often the first thing we ask for and sits at the core of our view on an organization. This is because

  • Asking “what activities additional dollars would allow” determines what activities we focus on evaluating.
  • Many of the charities and programs that may seem to have the most “slam-dunk” case for impact also seem – not surprisingly – to have their funding needs already met by others. We’ve found it relatively challenging to find activities that are both highly appealing and truly underfunded.
  • In the absence of reliable explicit cost-effectiveness analysis, an alternative way of maximizing impact is to look for the most appealing activities that have funding gaps. The analytical, “sector-agnostic” approach we bring to giving seems well-suited to doing so in a way that other funders can’t or won’t.

Many people – including us early in our history – may be inclined to think that maximizing impact consists of laying out all the options, estimating their quantified impact-per-dollar, and ranking them. We’ve seen major limitations to this approach (though we still utilize it). We’ve also, however, come across another way of thinking about maximizing impact: finding where one can fit into the philanthropic ecosystem such that one is funding the best work that others won’t.


  • Ben Gilbert on September 22, 2012 at 2:41 pm said:

    I am a little confused by this post. As I understand it, you are saying that, given the difficulties and weaknesses of cost-effectiveness analysis, it is often better to look for the most appealing activities that have funding gaps. What confuses me is trying to understand the difference between these two things.

    One assumes that the question of cost-effectiveness is the question, what concrete outcomes can I expect from giving an additional dollar (or larger sum) to a certain organisation. It would then include the question of which activities will be funded by that additional dollar. I understand the point that this sort of analysis is often impossible to do in a robust way. But then the question is, if it isn’t possible, how do you decide which activities are ‘appealing’?

    I expect that you have some criteria for this. My question would then be, are these criteria proxies for the impossible cost-effectiveness analysis, ie guides for making the best guess as to the most cost-effective place to give money to? In which case, the tension between cost-effectiveness and room for more funding seems a false one; it’s just a matter of choosing the best tools, amongst the more or less rigorous or concrete possibilities, for answering the same question (ie where will my dollar be most effective). Or is it that you think that there are other grounds for deciding an activity is appealing outside of the question of cost-effectiveness? In which case, it would be interesting if you could explain more about what they are and why you think they are important.

    For someone like me starting out on thinking about these questions, the idea of trying to make a cost-effectiveness analysis of the choices is, as you wrote, an obvious place to start. You have been wrestling with these questions for years so it would be instructive to know why you feel it can be a misleading way of going about things.

  • Ben, what you say is consistent with how we’re thinking about the question: “most appealing” is still defined (conceptually) through cost-effectiveness, and so in a conceptual sense there is no distinction here. The distinction is one of emphasis: it’s a matter of (a) putting most of one’s effort into comparing cost-effectiveness between many interventions that appear to have room for more funding, vs. (b) putting most of one’s effort into identifying the gaps left by other funders, while using rough cost-effectiveness analysis or proxies for it to select the most appealing of these gaps.

  • Philip J McQuillan on January 15, 2014 at 12:28 pm said:

    That is almost, but not quite, a hairsplitting analysis that very, very few people will GET.

    In most people’s minds determining and reporting the Percent of the charity’s budget spent on the programs and services it delivers = bang for the buck.

    I have to agree with Samuel Lee’s comment posted at

    Unless you can come up with something simple that will resonate as much as a “bang for the buck” approach does, yours will be the tougher row to hoe.

    I do think your deeper analysis is important. I just don’t think you’ll gain wide acceptance. Thank you for the deeper analysis of motherstomothers. It is a fine example of the value of your work.

Comments are closed.