The GiveWell Blog

Robin Hood, Smile Train and the “0% overhead” donor illusion

For an organization focused on financial metrics, the American Institute of Philanthropy can be very interesting. I can’t do justice to this excellent article on Smile Train with an excerpt, and I urge you to read it all.

It thoroughly debunks an alleged claim by Smile Train that “100% of your donation goes toward programs — 0% goes toward overhead.” Smile Train currently seems to have backed off this claim at least somewhat, although Steven Levitt of Freakonomics appears to have been sold this story (and to have bought it) in 2006.

If identifying effectiveness with “low overhead” is silly, the idea of “0% overhead” is simply absurd. As the article shows, it doesn’t (and can’t) mean that the program has no operating costs. Rather, it’s another case of zooming in on “your” money rather than discussing the true total costs of the program whose existence you’re supporting. It makes no sense in an analytical framework; it’s a feel-good gimmick.

That’s why we were surprised when we first saw this gimmick prominently displayed by a group that many consider to be the epitome of hard-nosed, analytical giving: Robin Hood.

Robin Hood’s financials make the situation look similar to Smile Train’s (minus the questionable reclassification of funds that AIP attributes to the latter). About 11% of Robin Hood’s total expenses are “Administration salaries and overhead” or “Fundraising and Public Information,” but because Board member donations are earmarked to those expenses, everyone else can be told their donations are “overhead-free.”
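
To make the accounting concrete, here is a minimal sketch in Python. Only the 11% overhead share comes from Robin Hood’s financials above; every dollar figure is invented purely for illustration.

```python
# Hypothetical illustration of the "0% overhead" earmarking gimmick.
# Only the 11% overhead share is taken from Robin Hood's financials;
# the dollar amounts below are invented.

total_expenses = 100_000_000        # hypothetical total annual expenses
overhead = total_expenses * 0.11    # admin salaries, fundraising, etc.

board_donations = 15_000_000        # hypothetical Board giving, earmarked to overhead
assert board_donations >= overhead  # the Board covers all overhead ...

# ... so every other donor can be told "0% of your donation goes to
# overhead," even though the organization's cost structure is unchanged:
print(f"Blended overhead rate: {overhead / total_expenses:.0%}")  # -> 11%
print("Overhead charged to 'your' donation: 0%")
```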

If your goal were to minimize overhead, the fact that Robin Hood tags funds this way shouldn’t be very relevant to you. Robin Hood could allocate more of those Board donations to programs if it spent less on overhead. If you gave to another organization, you could be scaling up an overall lower-overhead operation.

Bottom line: The “0% overhead” claim is promoting the wrong metric (low overhead) and offering a false way to accomplish it.

Comments on this blog

Lately we have seen a surge in thoughtful and interesting comments on this blog. To those participating in the discussions, thank you and please keep them coming.

We try to respond to any comment that is substantive and critical, though when there are as many as there have been lately, we may go a few days at a time without responding. I’ve just posted responses to the latest batch, excluding the comments on our latest post (“A conflict of Bayesian priors?”), which I will get to later.

A conflict of Bayesian priors?

This question might be at the core of our disagreements with many:

When you have no information one way or the other about a charity’s effectiveness, what should you assume by default?

Our default assumption, or prior, is that a charity – at least in one of the areas we’ve studied most, U.S. equality of opportunity or international aid – is falling far short of what it promises donors, and very likely failing to accomplish much of anything (or even doing harm). This doesn’t mean we think all charities are failing – just that, in the absence of strong evidence of impact, this is the appropriate starting-point assumption.

Many others seem to have the opposite prior: they assume that a charity is doing great things unless it is proven not to be. These people are shocked that we hand out “0 out of 3 stars” for charities just because so little information is available about them; they feel the burden of proof is on us to show that a charity is not accomplishing good. When someone asks them to give to a charity, they usually give.

This puts us in an odd situation: we have very little interest in bad charities, yet others are far more interested in us when we talk about bad charities. To us, credible positive stories are surprising and interesting; to others, credible negative stories are surprising and interesting.

A good example is Smile Train. Nothing in our recent post is really recent at all – we did all of that investigation in 2006. We’d known about the problem with Smile Train’s pitch for over three years, and never wrote it up until now because we just don’t care that much.

Since day one of our project, our priority has been to identify top charities. When we see a promising charity, we get surprised and excited and pour time into it; when a charity looks bad, we move on and forget about it. Yet others find a report on a great charity to be mind-numbing, and are shocked and intrigued when they hear that a charity might be failing.

So which prior is more reasonable? Before you have evidence about a charity’s impact, should you assume that it’s living up to its (generally extravagant) promises? Or should you assume that it’s well-intentioned but failing?
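
For the formally inclined, here is a toy Bayes’-rule sketch of why the prior does all the work when evidence is thin. Every probability below is invented for illustration.

```python
# Toy Bayes'-rule illustration of the two priors.
# All probabilities are hypothetical; they only show the mechanics.

def posterior(prior, p_evidence_if_effective, p_evidence_if_not):
    """P(charity is effective | evidence), via Bayes' rule."""
    numerator = prior * p_evidence_if_effective
    return numerator / (numerator + (1 - prior) * p_evidence_if_not)

# With no real evidence either way, the likelihoods are equal and the
# posterior simply equals the prior -- the default assumption decides:
for prior in (0.1, 0.9):                     # skeptical vs. trusting prior
    print(posterior(prior, 0.5, 0.5))        # -> 0.1 and 0.9

# A glossy annual report is weak evidence if failing charities produce
# glossy reports almost as often as effective ones do:
print(posterior(0.1, 0.9, 0.8))              # the skeptic barely moves: ~0.11
```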

We’ll offer up a few points in defense of our position in a future post. If you have the opposite position, please share your thoughts in the comments.

When is a charity’s logo a donor illusion?

When the charity is Nothing But Nets.

Peter Singer has explained the problem with the “net = life” equation embodied in the charity’s name and logo, and every other serious analysis of insecticide-treated nets we’ve seen agrees.

Why does this matter? Because Nothing But Nets also prominently states that the total cost of each net is $10. For donors looking to maximize “bang for the buck,” $10 per life saved would probably be the best option available – if it were a real option.

As it happens, distributing bednets is one of the most cost-effective programs we know of (at least when it works) – it’s just nowhere near the $10 figure.
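
To see how far apart the two numbers can be, here is the back-of-the-envelope arithmetic. The $10 cost per net is Nothing But Nets’ own figure; the deaths-averted rate below is purely hypothetical.

```python
# Why "$10 per net" is not "$10 per life saved."
# The $10 figure is Nothing But Nets' stated total cost per net;
# the deaths-averted rate is hypothetical.

cost_per_net = 10.0
deaths_averted_per_net = 1 / 200   # hypothetical: one death averted per 200 nets

cost_per_life_saved = cost_per_net / deaths_averted_per_net
print(f"${cost_per_life_saved:,.0f} per life saved")  # -> $2,000 with these inputs
```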

There are probably many reasons that impact-focused giving is so rare, but we think that at least one factor is that when a donor does go looking for information, so many of the “numbers” and “facts” they run into are exaggerations and illusions. It’s a frustrating experience that leaves some jaded about the whole endeavor, and thus missing out on real chances to have an enormous impact.

By default, assume aid projects aren’t reaching the poorest

If you don’t have evidence one way or the other, should you assume an aid project’s benefits are reaching the poorest?

We think it’s fair to assume the people with the most need are the people with the least power. We’d also guess that, in general, the people with the most power are best positioned to get anything valuable (training, materials, loans, or whatever else) a charity is subsidizing.

There are ways a charity can get around this dynamic, such as:

  • Working in an area where everyone is in need. We believe that such areas exist, but just because an area has low average incomes, high disease rates, etc. doesn’t mean it has no relatively privileged and powerful members.
  • Running programs that are only appealing to those in need. Some health programs work this way (it’s hard to be treated for tuberculosis unless you have tuberculosis). Microfinance may work this way when interest rates are competitive with (not highly subsidized relative to) alternative sources of credit.
  • Carefully targeting those in need. We should expect this to be very difficult. Charity is inherently about coming into a community from the outside; targeting those in need will generally mean trying to outmaneuver at least some locals. The more locals genuinely share the charity’s mission, the better, but how is the charity to know which it’s dealing with?

Unfortunately (as usual), there isn’t much information out there about how often charities actually succeed in targeting those in need. In conducting our grant application process, we’ve found that systematic assessment of this question is relatively rare. But there is certainly room for concern, as shown by a World Bank review that we’ve quoted before:

The frequent tendency for participatory projects to be dominated if not captured by local elites is highlighted by several case studies. Katz and Sara (1997), in a global review of water projects, find numerous cases of project benefits being appropriated by community leaders and little attempt to include households at any stage … even well trained staff are not always effective in overcoming entrenched norms of exclusion. In a study of community forestry projects in India and Nepal that worked reasonably well, Agarwal (2001) reports that women were systematically excluded from the participatory process because of their weak bargaining power. Rao and Ibanez (2003) find that in the participatory projects in their Jamaican case study, wealthier and better networked individuals dominated decision making. In a similar case-based evaluation of social funds in Jamaica, Malawi, Nicaragua, and Zambia, the World Bank (2002) Operations Evaluation Department concludes that the process was dominated by “prime movers.”

Abraham and Platteau (2004) present evidence on community participation processes in Sub-Saharan Africa based largely on anecdotal evidence from their work in community-based development and on secondary sources. They argue that rural African communities are often dominated by dictatorial leaders who can shape the participation process to benefit themselves because of the poor flow of information. (40-41)

Why are we always criticizing charities?

Recently, we’ve criticized (in one way or another) many well-known, presumably well-intentioned charities (Smile Train, Acumen Fund, UNICEF, Kiva), which might lead some to ask: should GiveWell focus on the bad (which may discourage donors from giving) as opposed to the good (which would encourage them to give more)? Why so much negativity and not more optimism?

The fact is, we are very optimistic about what a donor can accomplish with their giving. Donors can have a huge impact: save a life, improve equality of opportunity, improve education. Our research process, and our main website, are (and always have been) built around identifying outstanding charities.

GiveWell hasn’t set a bar that no charity can meet: six international charities have cleared it. Where most charities fall short, these six succeed.

The problem is: because the nonprofit sector is saturated with unsubstantiated claims of impact and cost-effectiveness, it’s easy to ignore me when I tell you (for example), “Give $1,000 to the Stop Tuberculosis Partnership, and you’ll likely save someone’s life (perhaps 2 or 3 lives).” It’s easy to respond, “You’re just a cheerleader” or “Why give there when Charity X makes an [illusory] promise of even better impact?”

We don’t report on Smile Train, Kiva, Acumen Fund, UNICEF, or any others for the sake of the criticism; we write about them to show you how much more you can accomplish with your gift if you’re willing to reconsider where you’re giving this year.

Unless you have strong reason to believe otherwise, I’d recommend you choose a great charity as opposed to one that’s merely better than average. If you have only a fixed amount to give, why not support the very best?