# Evaluating organizations vs. practices

Sean Stannard-Stockton wants to see more research focused on particular nonprofits, rather than on “techniques” for helping people; his reasoning is that this would be more useful to donors.

I don’t believe it’s possible to evaluate a nonprofit as an organization in complete isolation from what it does and whether it works, especially when I’m trying to make a case to individual donors who don’t know me or the people running the nonprofit. (I’ve argued this more fully in the past.)

Phil Steinmeyer is more interested in techniques than in nonprofits; his reasoning is that differences in the effectiveness of different techniques are large enough to overwhelm organizational differences. (One example of this that I’d give is the question of fighting diarrhea by building wells/latrines or focusing on promotion of oral rehydration therapy; there is little obvious synergy between the two, and little reason to believe that they’d be similar in terms of effectiveness.)

I believe there is some value in evaluating “techniques” in the abstract, but doing so is not sufficient if you’re trying to figure out where to donate. The devil is in the details: it’s essential to know whether a nonprofit is carrying out a “technique” in a manner and context that match up with the “technique” you’ve read about. I don’t know of any “techniques” that are so simple, and so clearly effective, that I would bet on a charity simply because of formal adherence to such “techniques,” regardless of where, when, and with whom (and how faithfully) it’s adhering to them.

That’s why it’s crucial that we look at specific charities, judging them on what they do and what the evidence is that it works. It’s not the only analysis we do (we also look at independent research), and it has been the most intensive and expensive part of our process, but we see it as necessary for anyone trying to produce truly valuable and actionable information for individual donors.

• Dario Amodei on April 15, 2008 at 4:52 am said:

I think you need to get both techniques and organizations right. One out of two often gives very little payoff: an organization using an ineffective method won’t accomplish much good regardless of how well run it is, and, as you point out, even the simplest techniques are hard to execute, and a poorly run program is likely to get them wrong.

It’s been my experience, however, that there is much more useful information available on techniques than organizations. There are a number of techniques out there (e.g. bednets, microfinance, condom distribution) that one can fairly characterize as having a decent chance of being really effective, even though it may be difficult to confirm their effectiveness in a rigorous way. But on organizations there’s much less information, and I’ve had a hard time doing much better than random guessing. So a focus on organizations makes sense to me, not because they are intrinsically more important than techniques, but because as far as I can tell, they are currently the weaker link in the informational chain.

• Dario Amodei on April 15, 2008 at 4:54 am said:

Oops. Only meant to italicize those first four words.

• Phil Steinmeyer on April 15, 2008 at 1:33 pm said:

Dario – If it is easier to evaluate techniques than organizations, perhaps it would be better for GiveWell to increase its emphasis on techniques, given its limited resources to devote to research and limited bandwidth to convey a message to the outside world.

i.e. Pick the low-hanging fruit first…

• Dario Amodei on April 16, 2008 at 3:46 am said:

Phil – if you have two areas that need work, and progress on either of the two would produce a lot of benefit, then it makes sense to go for the easier task, i.e., the low-hanging fruit. But if you need to succeed at both areas to really get anywhere, and one of the two is a difficult bottleneck, then it may be better to focus attention on the bottleneck despite the difficulty.

Of course, my claim that “you need to succeed at both areas” is an arguable one. It’s possible that even mediocre nonprofits are able to muddle through and produce a fair amount of good, so long as they pick a method with high efficacy. But I worry that disorganization, lack of attention to detail, and a failure to properly verify results can easily drive an organization down to zero or near-zero effectiveness. Also, the absence of rigorous evaluation of individual organizations can lead to a lack of accountability, which in itself can cause low effectiveness.

• Holden on April 16, 2008 at 7:52 am said:

I’m largely in agreement with Dario here. It’s also worth noting that in addition to the issues he raises re: competence and execution, the context of the intervention is important as well. An intervention that works in one time and place may not work in another; you need the details of when, where and (especially) for whom a charity is operating.

• I think two examples will help illustrate the point Holden’s making. Both involve cost-effectiveness divergences within the developing world that are similar in magnitude to the cost divergences we’ve seen between the developing and developed worlds:

• Insecticide-treated bednets. They’ve been proven to significantly reduce the incidence of malaria, but the circumstances in which the “technique” is implemented lead to significant divergences in cost per impact. We’ve detailed some of these differences in our review of the cost-effectiveness of Population Services International’s malaria prevention program. Given the specifics of PSI’s program (which they supported with materials in our application process), we think the cost-effectiveness of their program could easily range from ~$600 to ~$2,400 per malaria death averted based on variations in:
1. the % of nets sold to high-risk (rural) areas
2. the % of nets wasted
3. the % of net owners who actually use their nets
4. the number of at-risk people who sleep under each net

For an organization about which we know very little (as we would if we focused solely on the technique), those numbers could vary even further, easily reaching as high as $10,000 per malaria death averted. At the same time, an organization that implements this technique while trying to maximize each factor could likely avert a death from malaria for as little as $200.
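To make the arithmetic behind these ranges concrete, here is a minimal sketch of how the factors above multiply into a cost-per-death figure. All of the numbers (cost per net, malaria mortality among protected people, and the factor values themselves) are hypothetical placeholders for illustration, not PSI’s actual data:

```python
def cost_per_death_averted(
    cost_per_net,            # total program cost per net distributed ($)
    pct_high_risk,           # fraction of nets reaching high-risk (rural) areas
    pct_not_wasted,          # fraction of nets not wasted
    pct_used,                # fraction of net owners who actually use them
    people_per_net,          # at-risk people sleeping under each net
    deaths_per_1000_protected,  # annual malaria deaths averted per 1,000 people protected
):
    """Illustrative model: each factor multiplies down the number of
    people effectively protected per net distributed."""
    protected_per_net = pct_high_risk * pct_not_wasted * pct_used * people_per_net
    deaths_averted_per_net = protected_per_net * deaths_per_1000_protected / 1000
    return cost_per_net / deaths_averted_per_net

# A well-run program vs. a poorly targeted one (all inputs hypothetical):
good = cost_per_death_averted(5, 0.9, 0.95, 0.8, 2, 5)
poor = cost_per_death_averted(5, 0.4, 0.8, 0.5, 1, 5)
print(f"well-run:  ${good:,.0f} per death averted")
print(f"poorly run: ${poor:,.0f} per death averted")
```

Because the factors multiply, modest shortfalls on each one compound: in this toy example the poorly targeted program costs several times as much per death averted, which is the same dynamic driving the wide ranges discussed above.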

• Hygiene promotion programs to prevent diarrhea. Although we haven’t studied water supply and sanitation interventions in depth (because none of our finalists focused heavily on those programs), a paper evaluating the cost-effectiveness of different interventions aiming to prevent deaths from diarrhea found that implementing hygiene promotion programs in areas where adequate water supply and sanitation facilities already exist could cost anywhere from $50 to $5,000 per death from diarrhea averted, depending on the intervention’s effectiveness and the cost of labor for the hygiene promotion program. (p. 624)

None of this is meant to imply that “cost per life saved” is the only number that matters, but I’ve focused on this in order to give a relatively simple illustration. Looking more deeply at impacts only adds to the complexity and the importance of understanding the context, not just the high-level description, of an intervention.