The GiveWell Blog

Small, unproven charities

Imagine that someone came to you with an idea for a startup business and offered you a chance to invest in it. Which of the following would you require before taking the plunge?

  • Familiarity with (or at least a lot of information about) the people behind the project
  • Very strong knowledge of the project’s “space” (understanding of any relevant technologies, who the potential customers might be, etc.)
  • As much information as possible about similar projects, both past and present

Unless you’re an unusually adventurous investor, you probably answered with “All of the above.” After all, there’s a risk of losing your investment – and unlike with established businesses (which have demonstrated at least some track record of outdoing the competition), here your default assumption should be that that’s exactly what will happen.

Now what is the difference between this situation and giving to a startup charity?

One difference is that with a charity, you know from the beginning that you won’t be getting your donation back. But this doesn’t mean there isn’t risk – the risk just takes a different form. Presumably, your goal in donating to a charity is to improve the world as much as possible. If the startup charity you help get off the ground ends up being much less impactful (on a per-dollar basis) than established charities, then your support was a mistake. If it ends up having no meaningful impact, you’ve lost your shirt.

And in my opinion, the worst possible case is that it succeeds financially but not programmatically – that with your help, it builds a community of donors who connect with it emotionally but don’t hold it accountable for impact. It then goes on to exist for years, even decades, without either making a difference or truly investigating whether it’s making a difference. It eats up money and human capital that could have saved lives in another organization’s hands.

As a donor, you have to consider this a disaster that has no true analogue in the for-profit world. I believe that such a disaster is a very common outcome, judging simply by the large number of charities that go for years without ever even appearing to investigate their impact. I believe you should consider such a disaster to be the default outcome for a new, untested charity, unless you have very strong reasons to believe that this one will be exceptional.

So when would I consider it appropriate for a donor to invest in a small, unproven charity? I would argue that all of the following should be the case:

  1. The donor has significant experience with, and/or knowledge regarding, the nonprofit’s client base and the area within which it’s working. For example, a funder of a new education charity should be familiar with the publicly available literature on education, as well as with the specific school system (and regulations) within which the project is working. A funder of a project in Africa should be familiar with past successes and failures in international aid in general, and should spend time in the area where the project will be taking place.
  2. The donor has reviewed whatever information is available about past similar projects and about the assumptions underlying this project. If similar, past projects have failed, the donor has a clear sense of why they failed and what about the current project may overcome those obstacles.
  3. The donor has a good deal of confidence in the people running the nonprofit, either because s/he knows them personally or because s/he has an excellent sense of their previous work and accomplishments. (Enough confidence in the people can lower the need for the above two points, to some extent.)
  4. The donor feels that the organization is doing whatever it reasonably can to measure its own impact over time. The donor is confident that – within a reasonable time frame – if the project succeeds, it will be able to prove its success; if it fails, it will recognize this and fold. Until impact is demonstrated, there is no need for the kind of scale that comes with taking many donations from casual donors. As stated above, I believe that the overwhelming majority of charities do not meet this criterion.

If you know a lot about cars, you might try to build your own car. But if you don’t, you’re much better off with a name brand. Likewise, casual donors are better off funding charities that have track records; experimental charities should start small and accumulate track records. This is why we are comfortable with our bias toward larger charities.

Road safety

From the abstract of a new study from the Center for Global Development:

In the experiment, messages designed to lower the costs of speaking up were placed in a random sample of over 1,000 minibuses in Kenya. Analysis of comprehensive insurance data covering a two year period that spanned the intervention shows that insurance claims for treated vehicles decreased by one-half to two-thirds, compared with the control group. In addition, claims involving an injury or death decreased by at least 50 percent. Passenger and driver surveys indicate that passenger heckling contributed to this reduction in accidents.

I haven’t read this paper (just the abstract), largely because we haven’t seen any major charities focusing on interventions like this one. Note that the Disease Control Priorities Project sees “increased speeding penalties, enforcement, media campaigns, and speed bumps” as having high potential cost-effectiveness (see this table).

Qualitative evidence vs. stories

Our reviews have a tendency to discount stories of individuals, in favor of quantitative evidence about measurable outcomes. There is a reason for this, and it’s not that we only value quantitative evidence – it’s that (in our experience) qualitative evidence is almost never provided in a systematic and transparent way.

If a charity selected 100 of its clients in a reasonable and transparent way, asked them all the same set of open-ended questions, and published their unedited answers in a single booklet, I would find this booklet to be extremely valuable information about their impact. The problem is that from what we’ve seen, what charities call “qualitative evidence” almost never takes this form – instead, charities share a small number of stories without being clear about how these stories were selected, which implies to me that charities select the best and most favorable stories from among the many stories they could be telling. (Examples: Heifer International, Grameen Foundation, nearly any major charity’s annual report.)

A semi-exception is the Interplast Blog, which, while selective rather than systematic in what it includes, has such a constant flow of stories that I feel it has assisted my understanding of Interplast’s activities. (Our review of Interplast is here.)

I don’t see many blogs like this one, and I can’t think of a particularly good reason why that should be the case. A charity that was clear, systematic and transparent before-the-fact about which videos, pictures and stories it intended to capture (or that simply posted so many of them as to partly alleviate concerns about selection) would likely be providing meaningful evidence. If I could (virtually) look at five random clients and see their lives following the same pattern as the carefully selected “success stories” I hear, I’d be quite impressed.

But this sort of evidence seems to be even more rare than quantitative studies, which are at least clear about how data was collected and selected.

Philanthropy Action points to more evidence on education interventions

Board member Tim Ogden writes,

Mathematica Policy Research has conducted a multi-year randomized controlled trial of sixteen educational software programs (covering both reading and math) aimed at elementary and middle school students. The products selected were generally those that had at least some evidence of positive impact … the educational software didn’t make much difference.

The second-year study included 3280 students in 77 schools across 23 districts (page xvi – details on sample sizes on pages 4 and 9) in first, fourth and sixth grade (page 70), and randomly assigned classrooms (page 65) to incorporate or not incorporate one of ten software programs (see page 70). Effects on test scores (details of tests on page xviii) had not been statistically significant for any grade in year 1 (pages xviii-xx); second-year effects were not statistically significant for first- and fourth-graders, and were mixed (better in one case; worse in another) for sixth-graders (page xx).

The results are consistent with a fairly substantial body of evidence that developed-world education is an extremely difficult area in which to get significant results (including research discussed in recent blog posts here and here, as well as more examples of failed programs discussed on GiveWell.net).

Note that the second-year study was released a couple of months ago, though we learned of it via Mr. Ogden’s recent blog post. Also note that we haven’t thoroughly examined it, as it does not point to a new promising approach, but rather adds more evidence to a theme we’ve noted many times.

Mr. Ogden also discusses research on education in the developing world, about which we’ll have more to say later.

The most important problem may not be the best charitable cause

I recently ran across a charity called Project AK-47 that declares:

Over 100,000 kids are carrying machine guns in the armies of Southeast Asia. Instead of walking to school, they march to war. Instead of playing, they train to kill. If we don’t intervene, most of these children will be soldiers for at least 7 more years…assuming they survive.

We have been rescuing as many of these child soldiers as possible. But right now, without more help, we have to turn many child soldiers away. Your $7 can make the difference between life and death for a child soldier.

A kid or a killer…you decide.

It’s a powerful emotional appeal, and if I could make the purchase they advertise, I would (many times over). There’s just one problem: after carefully examining the entire website, I cannot determine what this organization does.

It mentions paying for “7 days of food,” “7 days of quality education,” “play clothes to replace a child’s army uniform,” and “supplies for a child’s initial urgent medical care and hygiene” … but what is the plan to prevent them from becoming soldiers? Is this nonprofit hiring mercenaries to conduct armed rescues? Coming into peaceful communities and hoping that its help will discourage children from turning to the military? Or something else? And whatever it is, is it doable and does it work? I couldn’t find the answer.

It’s an extreme example of a style of argument common to nonprofits: point to a problem so large and severe (and the world has many such problems) that donors immediately focus on that problem – feeling compelled to give to the organization working on addressing it – without giving equal attention to the proposed solution, how much it costs, and how likely it is to work. Another example is the massive support for organizations such as the Save Darfur movement, despite serious questions about what exactly Save Darfur is trying to do (questions that I doubt most of its supporters have looked into).

Many of the donors we hear from are passionately committed to fighting global warming because it’s the “most pressing problem,” or to a particular disease because it affected them personally – even while freely admitting that they know nothing about the most promising potential solutions. I ask these donors to consider the experience related by William Easterly:

I am among the many who have tried hard to find the answer to the question of what the end of poverty requires of foreign aid. I realized only belatedly that I was asking the question backward … the right way around [is]: What can foreign aid do for poor people? (White Man’s Burden pg 11)

As a single human being, your powers are limited. As a donor, you’re even more limited – you’re not giving your talent or your creativity, just your money. This creates a fundamentally different challenge from identifying the problem you care most about, and can lead to a completely different answer.

In my case: I would rather close the achievement gap than fight developing-world disease, but my giving goes to the latter because it’s a problem that I can do much more to address.

The truth is that you may not be able to do anything to help address the root causes of poverty or cure cancer or solve the global energy crisis.* But you probably can save a life, and insisting on giving to the “biggest problem” could mean passing up that chance.


*I haven’t looked into the latter two, and it’s possible that they are more tractable. If you know something about their tractability, I encourage you to share it.