The GiveWell Blog

Haiti earthquake relief seems less cost-effective than everyday international aid

The disaster in Haiti – and the media coverage of it – pull at the emotions in a way that everyday suffering in the developing world does not. However, our rough calculations suggest that in fact, a donor can have a bigger impact for less money by funding top charities’ everyday activities to reduce unnecessary death and debilitation.

We estimate a “generous” cost-effectiveness figure for a donation to Haiti by considering (a) the total amount given and (b) the total number of people affected by the disaster.

Total amount given: It’s hard to find definitive figures for the amount of money already donated to the relief efforts, but ReliefWeb provides what appears to be a reasonable lower-bound (i.e., conservative) estimate. As of January 27, ReliefWeb reports that $1.2 billion had been given or pledged to Haiti relief efforts. (See document A on ReliefWeb’s Haiti Earthquake: Appeals and Funding page.) Note that its numbers clearly do not include all charities or all revenue sources: for example, it lists only $2.8 million raised by Partners in Health, whereas the Chronicle of Philanthropy’s roundup – up as of two days ago but seemingly down at this writing – lists $40 million (and also includes many charities that don’t appear at all in ReliefWeb’s stats).

Number of people affected: Haitian authorities estimated that 1 million people were left homeless by the quake (according to ABC World News) and perhaps 3 million were “affected” (according to the Guardian). For the sake of our calculation, we’ll assume a range of 1-3 million people affected.

Funds per person affected: Using the ReliefWeb number (assuming that nothing is excluded from it and that no more money is forthcoming, both assumptions that clearly understate the funds) yields a current level of $395-$1,185 pledged or donated per person affected. (For some interesting context, we’ve seen estimates that the 2004 Indian Ocean tsunami affected 2,000,000 people and donors gave approximately $14 billion — funding per person of $7,000.)
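For readers who want to check the arithmetic, here is a rough Python sketch of the calculation above. The inputs are the figures cited in this post (ReliefWeb’s roughly $1.2 billion, the 1-3 million “affected” estimates, and the tsunami figures); the post’s exact $395-$1,185 range reflects the precise ReliefWeb total, so the rounded inputs below only approximate it.

```python
# Rough sketch of the back-of-the-envelope arithmetic above; inputs are the
# figures cited in the post and are illustrative only.

total_pledged = 1.2e9                    # ~$1.2 billion given or pledged (ReliefWeb, Jan 27)
affected_low, affected_high = 1e6, 3e6   # 1-3 million people affected

per_person_low = total_pledged / affected_high   # more people affected -> less per person
per_person_high = total_pledged / affected_low

print(f"Pledged per person affected: ${per_person_low:,.0f}-${per_person_high:,.0f}")
# -> roughly $400-$1,200 per person (the post's $395-$1,185 uses the exact ReliefWeb total)

# Context: the 2004 Indian Ocean tsunami
tsunami_given, tsunami_affected = 14e9, 2e6
print(f"Tsunami funding per person affected: ${tsunami_given / tsunami_affected:,.0f}")  # ~$7,000
```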

We estimate that our top-rated charities save a life for approximately that amount. While it’s clearly not possible to directly compare the impact of relief efforts to the impact of saving an individual’s life, our feeling is that saving an individual’s life is likely to have a significantly larger effect on that person than the change the relief efforts are likely to make in the life of the average affected Haitian.

Can choosing the right charity double your impact?

Reader Evan writes:

I’ve been thinking about how best to donate to Haiti, and I reviewed some of the materials on your website and found them pretty helpful and persuasive. So thank you! But then my law firm announced that it would match donations to the Red Cross or Doctors Without Borders. Given that, I think I have to donate to one of those orgs: even if my money would probably be better spent elsewhere, it’s hard to imagine that it would be more than twice as well spent. Do you disagree?

My intuition here is different than Evan’s. My guess would be that giving to one of our top-rated charities could easily accomplish more than twice as much good as supporting the efforts of the Red Cross or Doctors Without Borders in Haiti.

This guess is largely based on two factors:

  1. The large divergence in relative cost-effectiveness of different programs (which can approach a factor of 1,000, not just a factor of 2) combined with the reasonable position that disaster relief is not among the most cost-effective avenues for charitable funds.
  2. A back-of-the-envelope calculation for cost-effectiveness of efforts in Haiti which puts it well below the cost-effectiveness of our top charities.

In this post, I’ll look at the first factor. I’ll post more on the second issue in a future post.

Cost-effectiveness for different approaches to helping people varies widely

The most cost-effective programs are so much more impactful per dollar than other programs that a much smaller donation to a top program will likely help significantly more people. We’re careful about our use of cost-effectiveness figures, and the Disease Control Priorities (DCP) report’s figures in particular (which we think constitute “best-case” scenarios rather than what a donor can expect from his donation), but we do think they give a reasonable basic sense of the differences between different kinds of programs.

Figures 2.2 and 2.3 on pages 41-2 of the DCP report provide cost-effectiveness estimates for many common programs charities run. (These are all presented using the $/DALY metric. For more information on what this is, see our overview for interpreting the DALY metric.) Some of the most cost-effective programs are deworming programs ($3/DALY), expanding immunization coverage ($7/DALY), and bednets to prevent malaria ($11/DALY).

Some of the least cost-effective (though common among charities) programs are improved water and sanitation to prevent diarrhea ($4,185/DALY), some types of maternal and neonatal care packages ($1,060/DALY), and antiretroviral therapy to treat HIV/AIDS ($922/DALY).

These examples are not meant to demonstrate that the less cost-effective programs are necessarily less worthy, but they do illustrate that the impact per dollar a donor can expect from his gift can easily vary by 2-3 orders of magnitude, even under assumptions that programs are essentially being carried out as intended. Of course, if some programs are poorly executed or simply ineffective, the difference can be much larger still.
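To make the spread concrete, here is a small, purely illustrative Python sketch using the DCP figures quoted above; the hypothetical $10,000 gift and the shorthand program labels are just for illustration, not a prediction of what any actual donation would accomplish.

```python
# Toy comparison of the DCP $/DALY figures cited above; illustrative only.
cost_per_daly = {
    "deworming": 3,
    "expanded immunization": 7,
    "bednets (malaria)": 11,
    "antiretroviral therapy (HIV/AIDS)": 922,
    "maternal/neonatal care package": 1060,
    "water and sanitation (diarrhea)": 4185,
}

budget = 10_000  # a hypothetical gift
for program, cost in sorted(cost_per_daly.items(), key=lambda kv: kv[1]):
    print(f"{program:35s} ~{budget / cost:7.0f} DALYs averted per ${budget:,}")

spread = max(cost_per_daly.values()) / min(cost_per_daly.values())
print(f"Spread between most and least cost-effective: ~{spread:,.0f}x")  # ~1,400x
```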

With that context, when choosing which charity to support, I wouldn’t trade much confidence-in-an-organization to merely double the size of my donation.

As we’ve discussed before, with limited information we’d tentatively guess that disaster relief funds are closer to the less-cost-effective end of the range than to the most-cost-effective end. With that in mind, I’d guess that a gift to VillageReach or Stop TB could easily accomplish more than twice as much good as a gift supporting the Red Cross or Doctors Without Borders in Haiti.

The above is very general and, though relevant, not at all specific to the situation in Haiti. In a future post, I’ll say more about the specifics of Haiti and why I think they offer further support for the notion that donors can accomplish more good by giving to our top charities, even if they give less.

Two other small notes

There are a couple other factors that contribute (though in a relatively small way) to my conclusion here:

  • It doesn’t seem appropriate to treat causing one’s company to make a matching donation as equivalent to doubling one’s cost-effectiveness. The firm may have taken the matching funds from a pool already allocated to charitable giving, or the partners may have given the funds to charity themselves anyway. Even if the funds wouldn’t otherwise have gone to charity, the firm likely has other motives for giving, which should lead you to consider how this program differs from other embedded giving programs, which we think are of dubious additional value.
  • Giving to a charity because it has demonstrated effectiveness has the additional benefit of signaling to other charities that effectiveness matters to donors. A core belief of ours at GiveWell is that rewarding charities for effectiveness in changing lives will give other charities an incentive to improve their programs to compete for those donor funds. Proactive giving (i.e., trying to choose the best charity available) furthers this dynamic; passive giving (choosing from a predefined list) hampers it. It’s also quite possible that, in a very direct sense, telling your company that you’ve chosen to give this way could influence it to adjust its own giving toward more considered and effective charities.

More on the microfinance “repayment rate”

We are concerned about the way repayment rates are often reported. We’ve written about this issue before, arguing that different delinquency indicators can easily be misleading and pointing to one example we found where a microfinance institution’s reported repayment rate substantially obscures the portion of its borrowers that have repaid loans.

Following the links from David Roodman’s recent post about Richard Rosenberg, we found another paper by Mr. Rosenberg that makes all the same points, much better than we did. The paper is Richard Rosenberg, “Measuring microcredit delinquency: ratios can be harmful to your health,” CGAP Occasional Paper No. 3, 1999. Available online here (PDF).

Relevant quotes from Mr. Rosenberg’s paper

The importance of using the “right” delinquency measure:

MFIs use dozens of ratios to measure delinquency. Depending on which of them is being used, a “98 percent recovery rate” could describe a safe portfolio or one on the brink of meltdown. (Pg 1)

The measure we’ve been asking for seems to be equivalent to what he calls the “collection rate.”

Most of the discussion will be devoted to three broad types of delinquency indicators: (a) Collection rates measure amounts actually paid against amounts that have fallen due. (b) Arrears rates measure overdue amounts against total loan amounts. (c) Portfolio at risk rates measure the outstanding balance of loans that are not being paid on time against the outstanding balance of total loans. (Pg 2)
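To make the distinctions concrete, here is a minimal, hypothetical illustration in Python. The loan records and field layout are invented for this example, and real MFIs define these ratios in varying ways (which is largely Mr. Rosenberg’s point).

```python
# Hypothetical toy portfolio; each loan is (outstanding balance,
# amount fallen due so far, amount actually paid so far). Invented numbers.
loans = [
    (800, 200, 200),   # current
    (900, 100, 100),   # current
    (700, 300, 250),   # late: $50 overdue
    (500, 500, 100),   # seriously late: $400 overdue
]

fallen_due  = sum(due for _, due, _ in loans)
paid        = sum(paid for _, _, paid in loans)
overdue     = sum(due - paid for _, due, paid in loans if paid < due)
outstanding = sum(bal for bal, _, _ in loans)
at_risk     = sum(bal for bal, due, paid in loans if paid < due)

print(f"Collection rate:   {paid / fallen_due:.1%}")      # amounts paid vs. amounts fallen due
print(f"Arrears rate:      {overdue / outstanding:.1%}")  # overdue amounts vs. total portfolio
print(f"Portfolio at risk: {at_risk / outstanding:.1%}")  # full balance of late loans vs. portfolio
```

The same toy portfolio produces roughly a 59% collection rate, a 16% arrears rate, and a 41% PAR, which is why a bare “repayment rate” with no definition attached tells you very little.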

It’s essential to know not only which measure is being used, but precisely how an MFI calculates its version of the measure:

But the reader must be warned that there is no internationally consistent terminology for portfolio quality measures—for instance, what this paper calls a “collection rate” may be called a “recovery rate,” a “repayment rate,” or “loan recuperation” in other settings. No matter what name is used, the important point is that we can’t interpret what a measure is telling us unless we understand precisely the numerator and the denominator of the fraction. (Pg 2)

Mr. Rosenberg describes different tests to which MFIs should subject various delinquency measures to determine which is most appropriate. For GiveWell’s purposes, one of the key tests is the “smoke and mirrors” test:

Can the delinquency measure be made to look better through inappropriate rescheduling or refinancing of loans, or manipulation of accounting policies? This is our smoke and mirrors test. (Pg 3)

The practice of rescheduling and renegotiating loans:

When a borrower runs into repayment problems, an MFI will often renegotiate the loan, either rescheduling it (that is, stretching out its original payment terms) or refinancing it (that is, replacing it—even though the client hasn’t really repaid it—with a new loan to the same client). These practices complicate the process of using a collection rate to estimate an annual loan loss rate. Before exploring those complications and suggesting alternative solutions for dealing with them, the author needs to issue a warning: any reader looking for a perfect solution will be disappointed. The suggested approaches all have drawbacks. It is important to recognize that heavy use of rescheduling or refinancing can cloud the MFI’s ability to judge its loan loss rate. This is one of many reasons why renegotiation of problem loans should be kept to a minimum—some MFIs simply prohibit the practice. (Pg 10)

The strengths of PAR (“portfolio at risk”) as a measure:

The international standard for measuring bank loan delinquency is portfolio at risk (PAR). This measure compares apples with apples. Both the numerator and the denominator of the ratio are outstanding balances. The numerator is the unpaid balance of loans with late payments, while the denominator is the unpaid balance on all loans. The PAR uses the same kind of denominator as an arrears rate, but its numerator captures all the amounts that are placed at increased risk by the delinquency. (Pg 13)

And its weaknesses:

Like many other delinquency measures, the PAR can be distorted by improper handling of renegotiated loans. MFIs sometimes reschedule—that is, amend the terms of—a problem loan, capitalizing unpaid interest and setting a new, longer repayment schedule. Or they may refinance a problem loan, issuing the client a new loan whose proceeds are used to pay off the old one. In both cases the delinquency is eliminated as a legal matter, but the resulting loan is clearly at higher risk than a normal loan. Thus a PAR report must age renegotiated loans separately, and provision such loans more aggressively. If this is not done, the PAR is subject to smoke and mirrors distortion: management can be tempted to give its portfolio an artificial facelift by inappropriate renegotiation. (Pg 16)

PAR can also be misleading in a situation where an MFI is growing rapidly (a key argument of our past posts):

Another potential distortion in PAR measures is worth mentioning. Arguably the PAR denominator should include only loans on which at least one payment has fallen due, so that late loans in the numerator are compared only to loans that have had a chance to be late. Nevertheless, it is customary to use the total outstanding loan balance for the denominator. The distortion involved is usually not large for MFIs, because the period before the first payment is a small fraction of the life of their loans. For instance, for a stable portfolio of loans paid in 16 weekly installments with no grace period, a PAR of 5.0 percent measured with the customary denominator (total outstanding portfolio) would rise only to 5.3 percent using the more precise denominator (excluding loans on which no payment has yet come due). However, if a portfolio is growing very fast, or if there is a grace period or other long interval before the first payment is due, then the customary PAR denominator can seriously understate risk. (Pg 17)
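As a purely hypothetical illustration of that denominator effect (invented balances, not data from any MFI):

```python
# Invented balances; the only thing that changes is the choice of denominator.
late_balance        = 50     # outstanding balance of loans with late payments
total_outstanding   = 1000   # customary denominator: the whole portfolio
not_yet_due_balance = 60     # loans with no payment due yet (small share in a stable book)

par_customary = late_balance / total_outstanding
par_adjusted  = late_balance / (total_outstanding - not_yet_due_balance)
print(f"Stable portfolio:       customary {par_customary:.1%} vs. adjusted {par_adjusted:.1%}")  # 5.0% vs. ~5.3%

# Fast growth: a large share of the book has not yet had a chance to go late.
not_yet_due_balance = 400
par_adjusted = late_balance / (total_outstanding - not_yet_due_balance)
print(f"Fast-growing portfolio: customary {par_customary:.1%} vs. adjusted {par_adjusted:.1%}")  # 5.0% vs. ~8.3%
```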

Table 6 on Pg 19 summarizes the strengths and weaknesses of the different measures.

Why is this important?

Given how complicated this all is, we think that MFIs need to be clear and transparent about (a) which measures they use and (b) precisely how they calculate them.

However, this isn’t the case. For example, we aren’t confident that most MFIs normally report rescheduled and renegotiated loans as at-risk in PAR measures.

On the one hand, commenter Ben writes, “Best practice is to treat all loans that have been rescheduled as PAR.” (This is consistent with MixMarket’s glossary, which indicates that “[a PAR measure] also includes loans that have been restructured or rescheduled.”)

Nevertheless, “best practice” may not correlate with “in practice.”

  • This Kiva document (its “Partnership Application”) is explicit in the definition of PAR 30: “The value of loans outstanding that have one or more repayments past due more than 30 days. This includes the entire unpaid balance of the loan, including both past due and future installments, but not accrued interest or renegotiated loans.” (emphasis mine) Note that, to Kiva’s credit, it explicitly asks for renegotiated loans separately in the application.
  • As Holden recently commented, “At least one MFI has indicated to us that it does not report [renegotiated loans in its PAR measures].”

The definition you read today isn’t necessarily the one that MFIs are using.
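To see how much this definitional choice can matter, here is a hypothetical sketch with invented balances, contrasting a PAR 30 that excludes renegotiated loans (as in the Kiva definition quoted above) with one that includes them (as MixMarket’s glossary describes):

```python
# Invented balances; only the treatment of renegotiated loans changes.
outstanding_total   = 1_000_000
late_over_30_days   = 40_000   # balance of loans more than 30 days past due
rescheduled_balance = 60_000   # renegotiated loans, no longer "late" on paper

par30_excluding = late_over_30_days / outstanding_total
par30_including = (late_over_30_days + rescheduled_balance) / outstanding_total

print(f"PAR 30, rescheduled loans excluded: {par30_excluding:.1%}")  # 4.0%
print(f"PAR 30, rescheduled loans included: {par30_including:.1%}")  # 10.0%
```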

What measure do we use and why?

We’ve written before that our preferred measure is what the paper discussed above calls the collection rate. While the collection rate does little to warn an MFI that its portfolio is in danger, it is the strongest on Mr. Rosenberg’s “bottom-line” test because it simply and clearly measures failed repayments. It’s therefore less susceptible to obfuscation and manipulation.

For GiveWell’s purposes, we need a delinquency measure that most clearly reports borrowers’ situations. PAR measures provide useful information, but they are more suited to evaluating the risk of an MFI’s portfolio, which, while relevant, is not our key concern.

Haiti earthquake donations

Update: see our official page on Haiti earthquake relief, which consolidates advice from us and a few other sources we have high opinions of.

Reader Brigid writes:

    “I would love to hear any thoughts you have on contributions in light of the crisis in Haiti. My sense is that now of all times is when people give significantly without due diligence into a charity’s impact and that donors have more illusions than generally (i.e., My gift is going directly to a hurt Haitian). It seems inevitable the news piece several months after the event when donors are surprised/angry to learn their gifts were not used as they believed they were.
    Is there any way for an average donor to help the crisis in Haiti right now? Is there a way to “capture” the generosity that these events inspire while still focusing on impact? Specifically to GiveWell: would your team consider focusing quick efforts on analyzing charities that are addressing the crisis in Haiti (i.e. would you shift your mission at this moment)? Or, would you say: despite the current crisis in Haiti, any contribution an individual donor wants to make will still impact more people if, for example, given to fighting tuberculosis through Stop TB Partnership.”

A few notes:

Some stats on GiveWell’s web traffic and influence on donations

Before we start giving our answers to the questions of this post, I wanted to share some raw data that we look at to gauge how things are going.

The charts/tables below cover the following:

  • “Money moved,” i.e., donations made to GiveWell-recommended charities due to GiveWell’s research
  • Website traffic.

This is just a subset of the information we have. We’ll be releasing a more complete set of charts/tables/data shortly.

The table below shows the support each of our recommended charities received in 2009. Update, 1/8/2010: VillageReach sent us an updated file that includes donors through the end of 2009. The updated table is below. Note: VillageReach’s total fell because we discovered that we had erroneously double-counted some funds.

You can view the original table we posted here.

  • “Pledgers” refers to people who made GiveWell Pledges (advance commitments to give based on our research) in 2008, before our recent report was completed, and followed through on these commitments in 2009.
  • “Large gifts” refers to donors who made large gifts, and directly told us (and the charities they were giving to) that GiveWell’s research had been the key factor in where they gave.
  • “Economic empowerment grant” refers to a grant made directly by GiveWell, with funds from a single donor.
  • “Through website” refers to gifts made through the “Donate Now” buttons on GiveWell.net (some through Google and some through Network for Good (NFG)).
  • “Grants” refers to grants made directly by GiveWell, mostly with funds that were restricted by donors for regranting (we also granted just under $15,000 in unrestricted funds).
  • “Direct to charity” refers to donations that VillageReach received, not through GiveWell’s website, and believes it can confidently attribute to GiveWell (because VillageReach is a relatively small organization that does not get many donations from unfamiliar individuals). We are still awaiting data from 12/22/09-12/31/09, so we expect the final version of this number to be higher.

The following two charts show the amount donated and number of donors through the GiveWell site, comparing 2007-2009 (and the beginning of 2010).


Finally, we show monthly web traffic to the GiveWell site and blog. Two notes: (1) we unfortunately lost tracking for much of 2008 — that explains the lack of data during that period. (2) We had an immense spike on 12/20/2007 due to media coverage; we’ve purposefully set the left-axis as it is to make it easier to view the rest of the chart.

GiveWell’s self-evaluation and plan

Our current top priority is assessing the state of GiveWell: what we’ve accomplished, where we stand, and where we should focus our limited resources next. Over the coming weeks, we’ll be trying to examine ourselves as dispassionately and critically as possible, and sharing our self-review in something close to real time via this blog.

GiveWell’s mission is to find outstanding charities and publish the full details of our analysis to help donors decide where to give. The ultimate goal is to have significant impact on the flow of donations, moving toward a world in which donors reward charities for success in improving lives. The major questions about GiveWell, as I see them, are as follows.

Questions for “customers,” i.e., people considering using GiveWell’s research to decide which charities to support

  • Does GiveWell provide quality research that highlights truly outstanding charities in the areas it has covered?
  • Is it practical for donors to evaluate and use GiveWell’s research in the areas it has covered?
  • Has GiveWell covered enough areas to be useful?

Additional questions for stakeholders, i.e., people considering giving their time, money and other support directly to GiveWell (these include the GiveWell Board and staff)

  • Is GiveWell’s research process “robust,” i.e., can it be continued & maintained without relying on the co-founders?
  • Does GiveWell present its research in a way that is likely to be persuasive and impactful (i.e., is GiveWell succeeding at “packaging” its research)?
  • Does GiveWell reach a lot of potential customers (i.e., is GiveWell succeeding at “marketing” its research)?
  • Is GiveWell a healthy organization with an active Board, staff in appropriate roles, appropriate policies and procedures, etc.?
  • What is GiveWell’s overall impact, particularly in terms of donations influenced? Does it justify the expense of running GiveWell?

For all of these questions, we intend to discuss:

  • The progress we’ve made since November 2008 (when we last laid out a business plan)
  • Where we stand today, relative to where we need/hope to be to consider GiveWell a success
  • What we can do to improve

Our self-review won’t be entirely comparable to the reviews we perform of other organizations. The latter tend to focus on the “end product,” since we stay agnostic about the progress other organizations have made and about how they can improve. When evaluating ourselves, it is essential that we examine “intermediate indicators” as well as our ultimate impact, and think critically about the different paths we can take to improve.