The GiveWell Blog

Microfinance evidence of impact

David Roodman posts a review of recent high-quality studies of microfinance.

Note that prior to these fairly recent studies, most impact studies in this area had serious flaws, as Mr. Roodman notes and as we argue in our year-old review of this area.

The studies Mr. Roodman discusses are randomized controlled trials, and so their conclusions are far more trustworthy. But the conclusions are also generally less encouraging.

  • A consumer loan program in South Africa (i.e., loans that were not specifically for business expansion) had very positive effects: “Six to twelve months after they applied for the four-month loans, unrejected applicants were 10 percentage points more likely to have a job, 7 points less likely to be below the poverty line, and 6 points less likely to report that someone in the household had gone to bed hungry in the last month than those rejected.” (This study is included in our year-old review of this area.)
  • A Philippines microcredit program targeting the middle class found “no changes in household income, spending, or diet 1–2 years later.”
  • A savings program “appeared to help [women] accumulate money for investments such as stock for their stores, leading eventually to greater prosperity.”
  • The one study of a traditional microfinance program targeting the very poor found “no impact on total income, spending, health, or school enrollment rates” overall, though some subgroups appeared to benefit.

It seems to us that rigorous studies have not shown the impact implied by success stories, and that the most encouraging effects have come from programs that are not centered around business expansion loans.

GiveWell grants awarded

Note: this post does not refer to the economic empowerment grant whose process we opened on August 4. That process is still ongoing, and the funds we will grant there are in addition to, and distinct from, the funds discussed below.

Having released our updated recommendations for international aid, we will be making grants to the two charities we have designated as “top-rated.” These grants are being made from restricted funds: donations made to GiveWell and earmarked for regranting to top charities.

We are splitting the available funds between our top two charities, and are therefore granting a total of $85,047.10: $42,523.55 to VillageReach and $42,523.55 to the Stop Tuberculosis Partnership.

We are also recommending that our donor-advised fund grant $12,523.94 to VillageReach and $12,523.94 to the Stop Tuberculosis Partnership. (In general, we make grants from this fund according to donor wishes, as specified at our GiveWell Advance Donation description, but these grants are attributed to the startup funds that GiveWell put into the fund.)

That makes a total of $110,094.98 that we are directing (through grants or grant recommendations) to top charities. This does not include GiveWell Pledges, GiveWell Advance Donations, or any other case where donors give using our research.

We still have $250,000 that is earmarked for regranting specifically within economic empowerment, and are conducting an open application in order to identify the grantee.

Tactical Philanthropy Advisors

Sean Stannard-Stockton of Tactical Philanthropy has launched a philanthropic advisory firm that provides customized advice to major donors.

According to Give and Take, the firm is already advising roughly $35 million worth of future gifts.

This is good news because Sean is, among other things, a longtime advocate for extreme transparency in the nonprofit sector. We hope – and believe – that he will share publicly as much as he can about the giving processes he’s helping with, and that this step represents an increase in the influence he’s able to exert as a transparency advocate.

Cleft lip/palate charities: What does one surgery really accomplish?

It’s clear why donating to charities that fix cleft palates and other deformities – such as SmileTrain or Interplast – is popular among donors: the donation’s impact seems extremely tangible. A donor can see “before” and “after” pictures of children, and feel that the donation helps a child with serious problems become a “normal” child. But in our view, those “after” pictures don’t fully represent what’s going on.

To see why, consider these profiles of cleft repair patients in the U.S. Going through the profiles starting with “A” (33 of them), we see 11 mentions of multiple surgeries (including nine in one case and seven in another) and 6 other profiles that mention the prolonged use of equipment such as a NAM device. An additional 3 mention other major birth defects, and one states that a single surgery “has not helped [the child’s] speech.” One child’s treatment is chronicled in a 27-page journal.

By contrast, it appears that cleft palate charities (both those that conduct surgical missions and those that pay local doctors to perform surgeries) often provide only one surgery for each child, with no follow-up. (See, for example, question 26 of our interview with a surgeon.)

How much good does performing one cleft surgery actually accomplish?

I think it probably accomplishes some good, but I think it’s fair to say that it probably doesn’t accomplish what donors expect: transforming a child who would have lived a very difficult life as something of an outsider into a fully “normal” child.

“A” for effort?

Sean at Tactical Philanthropy has continued his discussion of “high-performing” vs. “high-impact” organizations, which we previously commented on. The message he is sending (see posts here and here) is partly that we need to take the emphasis off of “funding organizations that have shown results” and put it on “funding organizations that seem ‘on the way’ to proving results.”

I believe there is a place for funders who invest as Sean advocates. However, I think that when taken too far, the idea of rewarding charities for being “on the way” is damaging – and the idea is currently being taken too far.

As we’ve written before, our experience is that there are far more nonprofits with impressive evaluation processes and evaluation plans than there are nonprofits with impressive evaluation results. The ratio is so out of whack that it actually appears to be systematic, not an accident of timing.

When you see – as Sean does – that “very, very few nonprofits have ever gone through extensive analysis that has proven that their programs have impact,” you can react in one of two ways. You can hold up those few as the best targets for more funds (especially from casual donors), or you can decide that the “high-impact” bar is too high altogether. The problem with the latter approach – at least when too many funders take it – is that there are no financial incentives for charities to show actual results, as opposed to showing impressive processes and plans.

We believe that what gets rewarded is what gets done. We hope to reward proven impact, leading to more proven impact. We believe that rewarding promises will lead to more promises.

There is also a place for funders who reward the nonprofits that are “on the way” – as Sean observes, without such funding no nonprofits could even get off the ground and become high-impact. But someone has to save their donations for the charities that have actually gotten results – and for reasons we outlined before, we think that someone can and should be individual donors.

A couple of other observations on this discussion:

  • It’s refreshing to see widespread acknowledgement that “high-impact nonprofits” – nonprofits that can truly demonstrate past success – are incredibly rare. That’s worth keeping in mind the next time you’re confronted with traditional nonprofit marketing.
  • Sean believes that identifying high-performance nonprofits can be easy. We disagree, but rather than getting into a theoretical debate, we prefer that Sean (or someone else) try to apply the proposed method to actual charities, and make recommendations for giving within certain causes. At that point it should be easier to assess how viable this approach is.

CARE evaluations

How transparent is CARE?

On one hand, it maintains a site at www.careevaluations.org that currently lists 448 project evaluation documents (352 of which are in English). We haven’t found anything comparable from any of the other charities we call “household name” charities – enormous, well-known, aggressively fundraising international aid charities (usually members of the InterAction network) that conduct a huge array of different programs in different places.

On the other hand, it does not appear to link to this website anywhere from its main website – in fact, there appear to be only four external links to the site anywhere on the Web.

Looking through the evaluations provides an interesting example of what the operations and impact evaluations of one of these “household name” charities look like. The variety of the projects and of the evaluations is huge. Some evaluations examine measures quite relevant to “impact,” such as reported behavior change and children vaccinated (example); others are looser, mentioning regional trends in disease burden but focusing on qualitative generalizations (example); others do not examine life outcomes at all, but simply make qualitative observations about strengths and weaknesses of the program evaluated (example). The quality and tone of the studies vary considerably as well. The use of “control groups” to assess impact is rare but occasional; none that we examined have what we consider to be a high level of rigor, but many appear encouragingly honest about program weaknesses as well as strengths.

Note that this set of evaluations appears to be far from comprehensive: CARE currently lists 845 active projects, whereas the database (which in some cases includes more than one evaluation per project, and goes back to 1991) contains only 448 evaluations as of this writing.

This isn’t the level of impact evidence that we see from our recommended charities, but some evaluation is better than no evaluation, and non-publicized disclosure is better than no disclosure.

As a side note, CARE appears to be the only “household name” charity that turned down government funds during the debate over US-provided food aid. We aren’t sure whether CARE is on the right side of this debate, but turning away substantial money is unusual among charities, and suggests that CARE’s staff aren’t always putting fundraising first.

Bottom line: we’d recommend our recommended charities over CARE, but we’d recommend CARE over other “household name” charities.