The GiveWell Blog

Cleft lip/palate charities: What does one surgery really accomplish?

It’s clear why donating to charities that fix cleft palates and other deformities – such as SmileTrain or Interplast – is popular among donors: the donation’s impact seems extremely tangible. A donor can see “before” and “after” pictures of children, and feel that the donation helps a child with serious problems become a “normal” child. But in our view, those “after” pictures don’t fully represent what’s going on.

To see why, consider these profiles of cleft repair patients in the U.S. Going through the profiles starting with “A” (33 of them), we see 11 mentions of multiple surgeries (including 9 in one case and 7 in another) and 6 other profiles that mention the prolonged use of equipment such as a NAM device. An additional 3 mention other major birth defects, and one states that a single surgery “has not helped [the child’s] speech.” One child’s treatment is chronicled in a 27-page journal.

By contrast, it appears that cleft palate charities (both those that conduct surgical missions and those that pay local doctors to perform surgeries) often provide only one surgery for each child, with no follow-up. (See, for example, question 26 of our interview with a surgeon.)

How much good does performing one cleft surgery actually accomplish?

I think it probably accomplishes some good, but it’s fair to say that it probably doesn’t accomplish what donors expect: transforming a child who would have lived a very difficult life as something of an outsider into a fully “normal” child.

“A” for effort?

Sean at Tactical Philanthropy has continued his discussion of “high-performing” vs. “high-impact” organizations, which we previously commented on. The message he is sending (see posts here and here) is partly that we need to take the emphasis off “funding organizations that have shown results” and put it on “funding organizations that seem ‘on the way’ to proving results.”

I believe there is a place for funders who invest as Sean advocates. However, I think that when taken too far, the idea of rewarding charities for being “on the way” is damaging – and the idea is currently being taken too far.

As we’ve written before, our experience is that there are far more nonprofits with impressive evaluation processes and evaluation plans than there are nonprofits with impressive evaluation results. The ratio is so out of whack that it actually appears to be systematic, not an accident of timing.

When you see – as Sean does – that “very, very few nonprofits have ever gone through extensive analysis that has proven that their programs have impact,” you can react in one of two ways. You can hold up those few as the best targets for more funds (especially from casual donors), or you can decide that the “high-impact” bar is too high altogether. The problem with the latter approach – at least when too many funders take it – is that there are no financial incentives for charities to show actual results, as opposed to showing impressive processes and plans.

We believe that what gets rewarded is what gets done. We hope to reward proven impact, leading to more proven impact. We believe that rewarding promises will lead to more promises.

There is also a place for funders who reward the nonprofits that are “on the way” – as Sean observes, without such funding no nonprofits could even get off the ground and become high-impact. But someone has to save their donations for the charities that have actually gotten results – and for reasons we outlined before, we think that someone can and should be individual donors.

A couple of other observations on this discussion:

  • It’s refreshing to see widespread acknowledgement that “high-impact nonprofits” – nonprofits that can truly demonstrate past success – are incredibly rare. That’s worth keeping in mind the next time you are confronted with traditional nonprofit marketing.
  • Sean believes that identifying high-performance nonprofits can be easy. We disagree, but rather than getting into a theoretical debate, we prefer that Sean (or someone else) try to apply the proposed method to actual charities, and make recommendations for giving within certain causes. At that point it should be easier to assess how viable this approach is.

CARE evaluations

How transparent is CARE?

On the one hand, CARE maintains a site at www.careevaluations.org that currently lists 448 project evaluation documents (352 of which are in English). We haven’t found anything comparable from any of the other charities we call “household name” charities – enormous, well-known, aggressively fundraising international aid charities (usually members of the InterAction network) that conduct a huge array of different programs in different places.

On the other hand, CARE does not appear to link to this site anywhere on its main website – in fact, there appear to be only four external links to the site anywhere on the Web.

Looking through the evaluations provides an interesting example of what one of these “household name” charities’ operations and impact evaluations look like. The variety of the projects, and of the evaluations, is huge. Some evaluations examine measures quite relevant to “impact,” such as reported behavior change and children vaccinated (example); others are looser, mentioning regional trends in disease burden but focusing on qualitative generalizations (example); still others do not examine life outcomes at all, but simply make qualitative observations about the strengths and weaknesses of the program evaluated (example). The quality and tone of the studies vary considerably as well. The use of “control groups” to assess impact is rare, though not unheard of; none of the evaluations we examined have what we consider a high level of rigor, but many appear encouragingly honest about program weaknesses as well as strengths.

Note that this set of evaluations appears to be far from comprehensive: CARE currently lists 845 active projects, whereas the database (which in some cases includes more than one evaluation per project, and goes back to 1991) contains only 448 evaluations as of this writing.

This isn’t the level of impact evidence that we see from our recommended charities, but some evaluation is better than no evaluation, and non-publicized disclosure is better than no disclosure.

As a side note, CARE appears to be the only “household name” charity that turned down government funds during the debate over US-provided food aid. We aren’t sure whether CARE is on the right side of this debate, but turning away substantial money is unusual among charities, and it suggests that CARE’s staff aren’t always putting fundraising first.

Bottom line: we’d recommend these charities over CARE, but we’d recommend CARE over other “household name” charities.

GiveWell grant: Open application

We welcome applications for $250,000 in funding for economic empowerment in sub-Saharan Africa, to be disbursed by 12/31/2009. Interested charities should read the full details of our application process and then submit our first-round application.

Why we are making this grant: in 2008 we received $250,000 earmarked specifically for regranting to a top organization working on “economic empowerment” (i.e., raising incomes directly, as opposed to focusing on education or health outcomes) in sub-Saharan Africa. This grant was associated with an institutional donor that prefers anonymity.

Our recent work on international aid has concluded that economic empowerment is not a particularly promising area for a donor, and we have found no charities in this area that are as promising as our top charities or that have met either of our criteria for further investigation. More at our current writeup on economic empowerment.

We are committed to honoring our donor’s intentions, and with $250,000 to grant, we feel it is possible that we will gain access to information we haven’t been able to get before. (This serves as something of a “reality check” on the approach we used in our recent research report, which used the information available on charities’ websites as a key indicator of how promising they were – details here.) Thus, we are conducting an application process for this area and this funding in particular.

What we hope to accomplish with this grant: we are

  • Looking to expand a proven, cost-effective, scalable program rather than to fund an “experimental” proposal with no empirical supporting evidence to date.
  • Looking to help people in sub-Saharan Africa go from extreme poverty to moderate self-sufficiency. (See our definition of these terms.)
  • Planning to publicly post as much as possible of the materials, and reasoning, behind our decision so that other donors, particularly individual donors, can benefit from the work we do. Because this grant is larger than usual (for us), we recognize that we may be sent more confidential materials than usual, but we have a preference for charities that also want to share as much information as possible. All applicants are strongly encouraged to share their application materials publicly (and must be explicit if they don’t want to).
  • Trying to minimize the application/reporting burden on all but the top contenders. Our first-round application does not ask everything we need to know; it is intended to filter out those charities that cannot or will not provide fairly basic, but in our experience fairly rare, information about (a) the income/standard of living of clients and (b) details (particularly financials) of past successes in creating self-sustaining operations. Our framework for economic empowerment explains why these two pieces of information are so vital.
  • Open to funding new research, rather than a program, if we feel it’s necessary. We will most likely award all of the grant money to one or more charities, and are doing what we can to maximize our odds of finding a strong one. If, however, we feel that we truly cannot have confidence in any applicants, we have permission from our funder to use the funds on a formal study of the effectiveness of an existing program (to be carried out by an external evaluator such as the Poverty Action Lab). Note that if we do go this route, we will still be granting all of the $250,000 to one or more other organizations by 12/31/2009 (i.e., any research project we fund will be carried out by other organizations, not by us).

We make grants and recommendations based on substantive and (whenever possible) shareable information, not based on personal relationships, and so we are casting the net as wide as possible. If you know of any great organizations in this area, please make sure they know about our grant.

High-impact nonprofits are rare, but worth funding

Following up on Thursday’s Alliance for Effective Social Investing meeting, Sean at Tactical Philanthropy writes:

A high performance nonprofit is a very well run organization. It has outstanding leadership, clear goals, an ethic of monitoring performance and making adjustments as needed, and it is financially healthy.

A high impact nonprofit is one whose efforts have been proven to cause sustainable, positive change.

Impact can be seen only in retrospect. Often many years later. Performance can be directly observed.

I think high impact nonprofits are the holy grail of philanthropy. But like any holy grail, it is something to journey towards, not something you demand now.

Sean goes on to argue that funders should put more focus on “high-performing,” as opposed to “high-impact,” nonprofits. At GiveWell, we focus on “high-impact” nonprofits, in that we look for evidence of past impact and not just future promise. Our response to Sean:

1. Assessing “high-performance” is much harder than assessing “high-impact.” This isn’t to say that either is easy. But we feel it’s very doable for charities to take the “form” of a “high-performance” nonprofit – collecting large amounts of data, executing activities competently, and describing those activities in a compelling and money-raising way – without actually being on a path toward impact (which requires that the data be the right data and that the activities be the right activities for the goal).

We see many charities with impressive-looking evaluation systems; far fewer with actual past outcomes to show. If anything, this makes us suspect that other funders are looking for the form and appearance of good evaluation, without holding charities accountable for actual results.

2. Because of this, funding “high-performance nonprofits” is not something that casual donors (as opposed to subject matter experts) are well positioned to do. This point parallels our argument that casual donors aren’t well positioned to fund the unproven and innovative. Like funding a small and unproven charity, funding a “high-performance” but not “high-impact” charity means trying to do something that hasn’t been done before, and introduces a greater need for understanding the full context of a program.

3. “High-impact” nonprofits might be rare – but that doesn’t make them overfunded. We believe that our top-rated charities can productively use more funds than they’re currently getting. As long as that’s the case, why should a casual donor give to a charity without past impact rather than a charity with past impact?

A small charity that meets our criteria

As we’ve written before, we tend – deliberately – not to focus on charities that are small and/or “experimental” in nature. From what we’ve seen, these charities rarely can demonstrate that their program has “worked” (in the sense of changing lives) before, and so the only way to evaluate them is to have a deep understanding of the environment they’re entering, the history of similar projects, etc. Our basic response to such charities is: “Get funding from people who are better positioned to evaluate whether this risk is worth taking; evaluate yourself over time; once you can demonstrate impact, a GiveWell recommendation becomes possible.”

However, there’s nothing about our requirements that’s incompatible with relatively small, innovation-focused charities, and VillageReach is a case in point. VillageReach is not a global conglomerate working toward universal coverage of tuberculosis drugs or insecticide-treated nets; it is aiming to transform health system logistics, and to date has completed one pilot project in one district of Mozambique. But it has emphasized monitoring and evaluation from the start, and its pilot project has impressive (if not fully conclusive) results. That distinguishes it from any other “transformative” or “experimental” charity we’ve seen.

In this case, we feel that enough information is available to show us – and individual donors – that this particular risk is worth taking. We feel that VillageReach’s approach has most likely “worked” (impressively and cost-effectively) in the first area where it was tried, and that future attempts are likely to be thoroughly evaluated as well.

There are important differences between VillageReach and the larger charities we recommend. As with any small/innovation-focused charity, giving to VillageReach is a high-risk, high-upside proposition, and we don’t mean our endorsement to indicate otherwise. To date it has results from only one project in one area, and future settings could differ in any number of ways. (The flip side of this risk is hope – hope that if a still-experimental approach proves to be repeatedly and demonstrably successful, it will impact the way other organizations operate, creating effects far beyond its own expenses.)

The choice between VillageReach and a larger charity like the Stop TB Partnership is not a straightforward one. There are inherent concerns both with small charities (limited track record; no clear “pattern of success”) and with larger ones (so many activities in so many places that it’s hard to be confident in the organization as a whole), and neither of our top-rated charities has a case ironclad enough to dismiss these concerns. But we feel that on balance, both are extremely “good bets” (and better than any other we’re aware of) for a donor.

This post is based on a discussion of these issues on our public email list.