The GiveWell Blog

Stats on GiveWell’s money moved and web traffic

This post provides summary metrics we look at to gauge GiveWell’s influence and growth. Our full annual review and plan will follow over the next few days.

Summary

  • We tracked over $1.5 million in donations to top charities in 2010, compared to just over $1 million in 2009.
  • Our website traffic nearly doubled from 2009 to 2010, and donations through the website nearly tripled. Our overall increase in money moved appears to be driven mostly by (a) a net gain of $200,000 in six-figure donations and (b) new donors, largely acquired via search engine traffic and the outreach of Peter Singer.
  • Our growth in online donations to recommended charities was significantly faster than that of the more established online donor resources (Charity Navigator and GuideStar); our total online donations remain lower than these resources’, but are now in the same ballpark.

Total money moved

One of our primary metrics is “money moved”: donations to our top charities that we can confidently identify as being made on the strength of our recommendation. We are generally very conservative in identifying “money moved”; full details of how we track it are at the end of this post.

The tables below show GiveWell’s 2010 money moved by (a) organization and (b) size of donation. They show a total of just over $1.5 million, the vast majority of which went to our top-rated charity, VillageReach.

Note: these figures do not exactly match the total money moved figures above, because this table excludes donations made directly to our recommended charities – donations the charities informed us about but for which we do not know the individual donation sizes.

Donations through the website

While the aggregate money moved figure (which includes all money donated to charities due to GiveWell’s recommendation) is ultimately the more meaningful measure of GiveWell’s impact, we believe that donations by website visitors whom GiveWell staff don’t know personally are a more meaningful measure of GiveWell’s progress. These donations (a) represent use of the GiveWell tool as we ultimately envision it (i.e., retail donors coming to the site and using it to give) and (b) are less susceptible to the large, one-off circumstances that cause big swings in the aggregate money moved figure.

The following charts show monthly donations through the website for 2008-2010 (we also include January 2011 data). We’ve attempted to strip out donations from donors we have had personal contact with; we aren’t able to see the source of every donation, so some could remain, but we were able to see the source of – and thus strip out – the bulk of the donations this applies to.

The following two charts show data just for December, the month in which we see significantly more donation activity.

Comparison to Charity Navigator and GuideStar

As shown above, donations through our website were substantially higher in 2010 than in 2009. We used the public Network for Good tickers for Charity Navigator and GuideStar to compare our growth (and level) to theirs, to see how much of our growth can be attributed to GiveWell’s improvement in particular vs. growing interest in online charity evaluators / online giving in general. (We have confirmed with Charity Navigator that we are looking at the right data; we are still waiting for confirmation from GuideStar, but the numbers we got from its ticker match the independent numbers it sent us for 2010 almost exactly.)

It’s also worth noting the levels of (not just changes in) these figures. Though Charity Navigator had about 30x as many donations as GiveWell in 2010, and GuideStar about 9x as many, the average donation for GiveWell was more than 4x as large (~$450 for GiveWell; closer to $100 for each of the other two). Net result: total online donations for GiveWell were about 13% of Charity Navigator’s and 42% of GuideStar’s.
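The arithmetic behind this comparison can be sketched briefly. The donation counts below are hypothetical (chosen only to match the stated ~30x and ~9x ratios); the average gift sizes are the approximate figures quoted above, so the resulting shares are ballpark rather than exact.

```python
# Rough sketch of the donation-total comparison. Counts are hypothetical
# (scaled to match the ~30x and ~9x ratios stated in the post); average
# gift sizes are the post's approximate figures.

GIVEWELL_DONATIONS = 1_000  # hypothetical base count, for illustration only
sites = {
    "GiveWell":          {"donations": GIVEWELL_DONATIONS,      "avg_gift": 450},
    "Charity Navigator": {"donations": 30 * GIVEWELL_DONATIONS, "avg_gift": 100},
    "GuideStar":         {"donations": 9 * GIVEWELL_DONATIONS,  "avg_gift": 100},
}

def total(site):
    """Total online dollars = number of donations x average gift."""
    return site["donations"] * site["avg_gift"]

for name in ("Charity Navigator", "GuideStar"):
    ratio = total(sites["GiveWell"]) / total(sites[name])
    print(f"GiveWell online donations as a share of {name}'s: {ratio:.0%}")
# With the exact ticker figures the post reports 13% and 42%; the rounded
# inputs here land in the same ballpark (~15% and ~50%).
```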

There are many possible interpretations of these numbers. One could argue that these figures understate GuideStar’s and Charity Navigator’s actual impact more than they understate GiveWell’s, because people are more likely to use those sites as just one step in the process and end up giving via the charity’s own website. One could also argue the reverse: available information implies that more people are using GiveWell’s research to make large donations, and people often prefer to give such donations via donor-advised funds and checks rather than online.

One thing we do feel is the case, however: a donation given through GiveWell likely represents a much bigger impact – in terms of changing a donor’s actions from what they would have done otherwise – than a donation given through GuideStar or Charity Navigator. GiveWell recommends only 10 charities, while Charity Navigator lists over 1,600 charities with its highest four-star rating and GuideStar allows donors to give to just about any charity they’d like. GiveWell is designed to make a strong recommendation of where to give; the others are designed to help people make a final check on a charity they had already planned to support. So while donations through the website remain lower for GiveWell than for Charity Navigator and GuideStar, they are now in the same ballpark, and there is an argument that the overall impact of GiveWell is at least as high.

What changed between 2009 and 2010?

  • Six-figure gifts: two individual donors, both of whom have been following GiveWell for a long time, each made six-figure gifts in 2010. The two donations totaled around $460,000. These donors had both refrained from large gifts to recommended charities in the past; in 2009, their giving to our top charities totaled $10,000. On the other hand, we did not repeat the 2009 economic empowerment grant (funded by an anonymous donor). So we gained a net of about $200,000 on six-figure gifts.
  • New donors: $387,585 came from donors who hadn’t given before, while we lost $118,793 from donors who had given in 2009 and did not repeat in 2010. So the net from donor turnover was about +$270,000. This figure is about the same size as, though conceptually different from, the increase we saw in donations through the website.

We break down the 2009-2010 changes in more detail at the bottom of this post.

Web traffic

Our web traffic roughly doubled in 2010 vs. 2009. The following shows web traffic by source.

Two notes:

  • The largest driver of growth in web traffic in 2010 was increased organic (i.e., non-AdWords) search traffic.
  • The charts include data from January 2011; the recent dips are a function of normal seasonality (i.e., we have more traffic during December’s giving season).

Money moved vs. operating expenses over time

The following chart shows GiveWell’s total money moved each year (2007-2010) relative to our operating expenses. A major question for GiveWell is whether the value of our research is worth the cost of producing it. Money moved has continued to grow significantly relative to operating expenses.

Note that the above chart reports lower 2009 and higher 2008 money moved figures than our 2010-vs-2009 comparison above and our previous report. This is due to the way we reported data in 2008 and 2009. We spent a significant portion of 2008 soliciting funds for organizations we intended to recommend in the future. We ultimately made these recommendations in mid-2009. The funds were actually donated in 2009 but were committed in 2008. We believe that counting the funds when they were committed provides the most accurate picture of changes in GiveWell’s influence relative to operating expenses over time (though counting the funds when they were given makes it easier for us to track what changed between 2009 and 2010).

What we count as “money moved”:

  • Donations made to top charities directly through our website. Though these donations go directly to top charities, we are able to track them and verify that they went through our website. (Example: VillageReach donate page)
  • Donations that our recommended charities report back to us as being attributable to GiveWell (we have a high standard for this – we count only cases where (a) the donor explicitly stated that their donation was on the strength of GiveWell’s recommendation or (b) the donor gave to Nurse-Family Partnership and stated that they heard about it from a Nicholas Kristof column; Mr. Kristof has informed us that he included NFP in the column on our recommendation).
  • Donations that donors report to us (informally or using our donation report form) as donations that they made on the strength of our recommendation. We cross-reference our data with recommended charities’ data, when necessary, to eliminate double-counting.
  • Donations made directly to GiveWell and earmarked for re-granting. We count donations made and restricted in year X, and then granted in year Y, as “money moved” for year X, not year Y.

More details on what changed between 2009 and 2010:

  • Two individual donors, both of whom have been following GiveWell for a long time, each made six-figure gifts in 2010. The two donations totaled around $460,000. These donors had both refrained from large gifts to recommended charities in the past; in 2009, their giving to our top charities totaled $10,000. On the other hand, we did not repeat the 2009 economic empowerment grant (funded by an anonymous donor). So we gained a net of about $200,000 on six-figure gifts.
  • $387,585 came from donors who hadn’t given before, while we lost $118,793 via donors who had given in 2009 and did not repeat in 2010. So the net from donor turnover was about +$270,000. Some additional context on these numbers:
    • 13 new large donors, giving between $8k and $30k each, accounted for $163,449 of the $387,585 in gains. The rest came from nearly 500 smaller donors ($6k and under).
    • Of the $387,585 in new donations, we can (using a combination of web analytics and optional “Where did you hear about us?” surveys) attribute $56,672 to organic search (i.e., people searching for things like “best charities” – not for GiveWell itself – and not including Google AdWords); $30,903 to the outreach of Peter Singer; $19,691 to “word of mouth”; and $18,517 to Google AdWords.
    • We do not know the source of the other ~$195k in new donations. We can guess at what the distribution looks like using our survey data. Of the 2010 users who responded to surveys about where they heard about us, 13% found us via search, 23% found us through media, 25% found us through word of mouth or links from other sites, and 38% found us through the outreach of Peter Singer.
    • Of the ~$118k in lost donations, $73k can be attributed to donors who simply gave early in 2011 instead of late in 2010. The remaining ~$45k comes from donations of $5,000 and under, and the fact that we lost this many small donors is a source of some concern.
  • Donors who gave in both 2009 and 2010 gave less in 2010: $411,787 vs. $354,900 for a net loss of about $57,000. Three very large donors lowered their donations by a total of over $100k, for reasons that we believe to be related to financial circumstances, while another donor increased his by $30k; the rest of the changes netted out to a slight gain.
  • Donations to GiveWell, earmarked for regranting, fell by a net of $38,326. About $46,000 of this loss represented donors who switched over to giving to our top charities, or to GiveWell unrestricted, instead of giving to GiveWell restricted. There was another ~$29,000 in losses, $9,000 of which came from one donor and the rest of which came largely from people who had donated immediately after our 2007 launch and not returned, offset by ~$36,000 in gains, ~$34,000 of which came from 3 donors.
  • We had $238,988 in donations that we weren’t able to attribute to specific donors (donations that went through Network for Good plus donations reported to us by charities as coming on the strength of GiveWell’s recommendation), up from $56,200 in 2009, for a net gain of about $180,000 in this “mystery” category.
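As a sanity check, the component changes listed above can be summed back into the headline change in money moved. A minimal sketch, using only the figures quoted in the bullets above (the residual difference from the headline change is rounding in the headline figures):

```python
# Summing the 2009-to-2010 component changes quoted in the bullets above.
# All figures come from the post; the total should land near the headline
# increase from just over $1 million to over $1.5 million.

components = {
    "six-figure gifts (net)":           200_000,             # rounded net figure from above
    "donor turnover (net)":             387_585 - 118_793,   # new donors minus lapsed donors
    "repeat donors giving less":        354_900 - 411_787,   # 2010 total minus 2009 total
    "earmarked regranting (net)":       -38_326,
    "unattributed 'mystery' donations": 238_988 - 56_200,
}

total_change = sum(components.values())
for name, value in components.items():
    print(f"{name:>34}: {value:+13,}")
print(f"{'total change':>34}: {total_change:+13,}")
# Sums to roughly +$556k, consistent with money moved rising from just over
# $1 million in 2009 to over $1.5 million in 2010.
```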

GiveWell’s annual self-evaluation and plan: A big picture change in priorities

As we did last year, we’re going to be posting our annual self-evaluation and plan as a series of blog posts.

Our self-evaluation starts with a set of critical questions about GiveWell, intended to examine how well we’re accomplishing our high-level goals. For the most part, our high-level goals are the same as they were last year, and thus our questions for GiveWell are the same as the questions we laid out last year. However, there is one major change in how we see the goals of GiveWell: we no longer assign high importance to the number of causes covered / number of options we provide to donors. Instead, we just want to focus on finding outstanding charities that donors can be confident in, and closing those charities’ funding gaps to the extent that we can.


We see charity evaluation groups as falling into the following possible categories:

  1. Groups that rate as many charities as possible. Donors come to them already having a particular charity in mind to give to, and search for that charity.
  2. Groups that suggest charities for as many causes as possible. Donors come to them knowing what sort of cause they want to support (U.S. education, global health, etc.) but not which charity, and get a recommendation.
  3. Groups that simply focus on finding outstanding charities. Donors come to them looking for outstanding giving opportunities (they are often issue-agnostic).

We started GiveWell as issue-agnostic donors looking for the best giving opportunities we could find, and we have always primarily been interested in #3. We’ve never had a serious interest in #1 above – distinguishing between “worst,” “bad,” “good,” and “better” is too much of a distraction when what we care about is “best.” But for most of our history, we’ve seen ourselves as possibly being on a trajectory toward becoming #2 in addition to #3. We’ve talked about covering a broad array of causes to interest as many donors as possible, thus increasing our influence and visibility. Many of those who have criticized us have focused on the small number of causes we’ve covered, and expressed the hope that we will eventually cover many more.

A couple of things have made us rethink this goal:

1. We see it as urgent, and difficult, to find more “gold medal” charities than we have so far.

When the 2010 holiday season was approaching, and we started thinking about our strategy for outreach, we realized that we only had one charity we could personally feel really good about aggressively raising money for. While we think all our recommended charities stand well above other donors’ options, VillageReach was the only one for which I felt I could sit down with someone face-to-face and say, “Give as much as you can to this one.”

Why? Because not only does VillageReach have outstanding evidence of effectiveness; it has outstanding bang-for-the-buck (in absolute terms, not just relative terms) and most importantly, it has a concrete plan for additional funding.

We personally don’t like raising money for groups that we can’t say all these things about. We’re happy to provide a recommendation for someone looking for the best microfinance charity, but we can’t honestly say that we think dollars given to that charity are accomplishing as much good as possible.

VillageReach had a “stretch” fundraising target for the year of $1.5 million. That gap has now been more than closed. VillageReach needs a total of $4.4 million for its multi-year project, so we are happy to leave it as our top-rated charity for now, but if our growth continues, the entire expansion could be funded by the end of the year. Thus, we are starting to get dangerously close to the point where the number of dollars we influence exceeds the number of dollars we know how to allocate very effectively.

We see this as an urgent situation. It would be a major problem for GiveWell if we essentially had more demand for our research (i.e., donors interested in following our recommendations) than supply (i.e., charities able to absorb this funding effectively). Thus, one of our top priorities for 2011 is finding more “gold medal” charities that we can give a wholehearted recommendation. We feel it would be a major distraction – and mistake – to try to find the “best of the bunch” within causes where there are no groups that really shine by our criteria.

2. The benefits of covering extra causes don’t seem very large.

The argument we’d always used for covering more causes was that covering more causes means appealing to more donors. In theory, this is true, but for the specific audience we seem to be attracting, the value of breadth seems surprisingly low.

The following charts show the percentage of our 2010 “money moved” (donations given to our recommended charities as a result of our research, which we track in a variety of ways) to different categories of charity. A future post will go into more detail on how we track “money moved” and what the figures were for 2010.


Note that VillageReach is our top-rated charity; Against Malaria Foundation is our only recommended charity that allows donors to get tax deductions in a variety of countries outside the U.S. (Canada, Australia and more); Nurse-Family Partnership (NFP) is our top-rated U.S. charity and was featured in a Nicholas Kristof column (we count donors coming from that column as “money moved” because Mr. Kristof has informed us that he included NFP in the column on our recommendation).

Bottom line – it looks like the overwhelming majority of our donors follow our recommendations regarding both the most promising cause and the most promising organization.

Even our top-rated U.S. charity didn’t attract nearly as much funding from our donors as our global health charities did. Meanwhile, the work we’ve put into accommodating donors interested in narrower causes – education and microfinance – has barely registered in terms of increased “money moved.”

And education and microfinance are still pretty broad, popular causes. I’d expect a “cancer research” or “global warming” recommendation to be pretty comparable to education and microfinance in terms of new money moved; I’d expect a narrower cause like “homelessness in New York City” to bring much less.

This doesn’t mean that all donors are issue-agnostic or global health fans. It means that our audience is. And we need to focus on serving our audience as well as possible.

Implications

We aren’t dropping the idea of “offering recommendations within lots of causes” entirely. We aren’t completely sure why our audience is the way it is, and recognize that over time the situation may change. Thus,

  • We plan to maintain/update our existing research on all causes, including the causes above plus disaster relief (the newest addition).
  • We plan to investigate more causes for the purpose of finding more outstanding charities. We will be writing up as much as we can of what we learn in these investigations, which may (along with our do-it-yourself evaluation guide) be helpful to donors interested in particular causes.
  • We may provide donors a way to “lobby us” to cover more causes. I’ve thought about the idea of offering donors the chance to commit to give $X to our top-rated charity in a given cause, conditional on our covering that cause; if we got enough commitments for a given cause we would cover it.

However, we no longer consider “number of causes covered” a relevant metric for GiveWell and will be replacing it with “room for money moved,” i.e., the total amount of room for more funding of our very top-rated charities.

Some may be disappointed by this decision, seeing it as a retreat from the opportunity to be a resource for as many people as possible. But we’re not sure how much sense it really makes for an operation like ours to maximize its breadth. Giving decisions are deeply personal, and the kind of work we do is as well. We’ve mostly followed our personal values in making recommendations, and we’re attracting an audience that seems comfortable with these values.

This doesn’t mean that there’s no place for a GiveWell-type operation that focuses on (for example) U.S. causes; but perhaps we aren’t the people to run that operation. Perhaps it takes someone who truly believes that U.S. charities represent the best chance to help people (we do not) to do compelling research and attract the right audience for that cause. We’d be more than happy to see such a group spring up. For our part, we’re happy to remain a niche operation for a niche audience, as long as the niche is big enough. And it appears to be growing.

Global Fund: Best failure disclosure we’ve seen yet?

Over the last year or so, The Global Fund has disclosed instances of apparent fraud in Zambia, Mali, and Mauritania, among other countries.

This week, The Associated Press reported on these disclosures with the hostile headline: “Fraud plagues global health fund.” The Global Fund has been defended with the valid observation that the total fraud found accounts for a tiny percentage of its overall grants. But we’d like to do more than defend the Global Fund; we’d like to praise it for these disclosures.

The idea that disclosing failures should be rewarded has gained steam in our corner of the sector, particularly with the new AdmittingFailure.com site (see coverage by Good Intentions Are Not Enough and Tactical Philanthropy). We agree with the idea (previous posts on the topic here and here). But scanning the compilation at AdmittingFailure.com, we don’t see any admissions of failure that are as concrete, specific, and risky as the fraud disclosures the Global Fund has been making with some regularity.

To us, the key points regarding the Global Fund (aside from the fact that the percentage of funding reported as misused is small) are that

  • Disclosing these issues is a choice made by the Global Fund. In our investigations of large charities, we’ve seen no evidence that donors are auditing them in a way that would force disclosure of lost funds. We’ve also seen no evidence that these organizations have any way of even knowing when funds have been misused. Take UNICEF as an example: if UNICEF lost millions of dollars to fraud, it isn’t clear to us how (or whether) anyone would find out about it.
  • Other large charities could easily be seeing as much fraud, or more. We believe it is difficult to get anything done overseas without significant local help. Though it can be hard to tell, we believe that large organizations usually work with governments and with smaller community-based organizations, giving both opportunities to misappropriate funds. Even when they are officially executing their own programs, there are many degrees of separation between donors in the U.S. and the local/locally connected people assigned to execute on the ground. We don’t see any reason to expect a systematically higher level of honesty from these people than from the people the Global Fund has worked with.

Bottom line – we feel that any large bureaucratic organization, particularly one that does a lot of grantmaking, could be losing a lot of money to fraud; what’s unusual about The Global Fund is that it is actively searching for these cases, disclosing them publicly, and then discussing (also publicly) how they can be addressed (PDF).

The Global Fund is not one of our top-rated charities; we think there are groups with even stronger reporting, and/or less complexity and bureaucracy to their activities, that we prefer for maximum impact. However, this incident reinforces our belief that The Global Fund has outstanding transparency compared to similar organizations. We think its disclosures of fraud deserve a place on AdmittingFailure.com and praise, not hostile headlines.

Free legal services – helpful or harmful?

The Harvard Legal Aid Bureau provides free legal services, via volunteer Harvard Law students, to “low-income people in civil (non-criminal) matters in order to ensure equal access to justice and to remove legal barriers to economic opportunity.”

It’s about as intuitive an intervention as any. Good legal services are expensive; the services provided by Harvard Law students are probably good; so it seems they would be beneficial to the low-income people receiving them. But a recent (unpublished) study calls this into question, implying that the benefits of the program (at least the service studied) are small and that there are possible harms as well.

Before getting into the details of the study, it’s important to credit the Harvard Legal Aid Bureau for participating in it. Many organizations would likely shy away from research that could find negative results; the Bureau deserves praise for its willingness to be studied, and our opinion of it is now higher as a result.

The study uses a randomized controlled trial methodology to evaluate the impact of offering legal services to clients seeking unemployment benefits. The paper finds that:

  • An offer of services had no statistically significant effect on the likelihood of an individual’s receiving government benefits. 76% of those offered services by the Harvard group (the treatment group) received benefits; 72% of those who were not offered services (the control group) received benefits.
  • An offer of services had a statistically significant effect on the delay until an individual received benefits. The treatment group received benefits more than 13 days later than the control group did. The paper argues that those applying for unemployment benefits often have an immediate need for cash, and this delay is a significant negative impact of the program.

What explains the results? The authors offer a number of suggestions.

  • The authors note that they used an intent-to-treat methodology in this paper. That is, they assessed the impact of the offer of services, not of the services themselves. Ultimately, 90%+ of the treatment group received services, while only 49% of the control group did. The fact that nearly half the control group received services may explain the lack of a statistically significant effect of offering them. We believe that intent-to-treat was the correct approach here, especially given the question most important to us at GiveWell: what is the marginal impact of a particular organization? In the absence of the Harvard Legal Aid Bureau, it seems that approximately half of the clients it serves would be served elsewhere.
  • The Harvard Legal Aid Bureau is staffed by law students (under the supervision of a practicing attorney). The authors consider the possibility that, because of their limited experience, the students are less effective than practicing attorneys would be. Nevertheless, the authors conclude – largely, it seems, based on their observations of the students and experience with similar organizations – that the students are likely competent and that inexperience is unlikely to be the cause of the lack of impact.
  • The authors suggest that the additional delay experienced by students’ clients may be due to the students’ schedules and the fact that they only commit part-time to the legal aid bureau.
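To see why the high rate of service receipt in the control group matters, a standard back-of-the-envelope adjustment – not from the paper itself – divides the intent-to-treat effect by the difference in service uptake between the groups, giving a rough implied effect of actually receiving services under the usual instrumental-variables assumptions:

```python
# Illustrative Wald/IV adjustment of the intent-to-treat figures above.
# This calculation is NOT from the paper; it is a common back-of-the-envelope
# way to translate an "offer" effect into an implied effect of actually
# receiving services, under the usual instrumental-variables assumptions.

itt_effect = 0.76 - 0.72   # offer effect on probability of receiving benefits
uptake_gap = 0.90 - 0.49   # share receiving services: offer group minus control

effect_of_services = itt_effect / uptake_gap
print(f"Implied effect of receiving services: {effect_of_services:.1%}")
# Roughly a 10-percentage-point gain among those induced to take up services:
# still modest, and the raw 4-point ITT difference was not statistically
# significant to begin with.
```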

It’s only one study and it shouldn’t be taken too far – it addresses the impact of one service offered by one particular legal aid organization, and doesn’t show that free legal services for low-income people are unnecessary. But it does hint that

  • From a donor perspective, legal aid charities may not have room for more funding. (Here, the existing supply of services appears to have been large enough that the presence of one particular group – Harvard’s – couldn’t be connected to strong marginal impact.)
  • A sensible-seeming program can have unintended consequences. In this case, matching low-income people with law students at a prestigious institution might also mean matching them with people who don’t have enough time to address their issues promptly.

Disaster relief report published; Doctors Without Borders, Partners In Health, and Direct Relief International stand out

Our new report grades major disaster relief organizations on their transparency and accountability to donors. It provides detailed reports on each charity, a summary table with our conclusions, and the full details of our process.

We conclude that Doctors Without Borders, Partners In Health and Direct Relief International stand out for the clarity with which they discuss their activities and expenditures.

Over the coming months, we’ll be adding to this report by examining information from the response to the 2004 Asian tsunami. We’ll also be providing some more discussion of non-transparency-related factors behind the donation decision. For example, Direct Relief International and Doctors Without Borders are very different in their focus – the former focuses on distributing supplies while the latter largely facilitates and supplies medical treatment. Personally, I find the latter to be a more appealing vehicle for addressing pressing and challenging needs in a relief situation. Other donors may prefer to support organizations with broader mandates, which are more likely to play direct roles in (for example) providing shelter and assisting with longer-term reconstruction.

That said, we feel that what we have now is a substantial improvement over the information and analysis previously available to donors. It consolidates the clearest and most detailed information provided by each organization, and it can help donors take a first step toward creating incentives for organizations to raise money by being more transparent (and not just more pushy in fundraising) than their competitors.

While we haven’t assessed the quality of individual organizations’ relief efforts, we have accompanied our report with a general overview of what the overall relief effort has and hasn’t accomplished in the year since the disaster, and for how much.

GiveWell’s report on major disaster relief organizations

What is the situation in Haiti a year after the earthquake? What have and haven’t charities accomplished so far?

Yesterday we discussed how much has been raised and spent for Haiti relief. Today we’ll summarize what we know about how the relief effort has progressed over the last year.

Our detailed and sourced account of the relief effort as a whole will be available by the end of today, and linked here when it is. Update: this page is now available. (Our take on individual organizations will be published tomorrow.) For now, the big picture as we see it is that the relief effort has reached a lot of people with some basic necessities, but that conditions in the camps are still extremely poor, and that there is a dire need to halt the ongoing outbreak of cholera and clear more of the rubble.

  • The relief effort provided immediate shelter assistance, mostly in the form of tarps, within three months of the earthquake, although there has been some criticism that this was slower than it needed to be due to coordination issues. There has also been criticism of the emphasis on tarps as opposed to tents.
  • Conditions in settlement camps, while varied, have generally been extremely poor. One study involving visits to over 100 camps concluded that

    seven months following the earthquake, 40 percent of … camps do not have access to water, and 30 percent do not have toilets of any kind. An estimated 10 percent of families have a tent; the rest sleep under tarps or even bed sheets. In the midst of the hurricane season with torrential rains and heavy winds a regular occurrence, many tents are ripped beyond repair. Only a fifth of camps have education, health care, or psycho-social facilities on site.

  • Construction of transitional shelters (higher-quality living spaces compared to camps) has been far slower than hoped. 9 months after the earthquake, only about 60,000 people were living in such shelters (out of likely over 1 million people left homeless by the earthquake). The number of transitional shelters has reportedly tripled since then, so things may be improving on this front. Rubble and confusion over land rights have been major obstacles to transitional shelter construction.
  • Water and sanitation efforts have been hampered by the difficulty of operating in a crowded urban area, and have generally been poor, especially in terms of sensitivity to privacy. A massive outbreak of cholera began in October and has led to over 3000 deaths and 171,000 infections nationwide, and is ongoing.
  • A large number of people have been reached with medical assistance and food aid, and we have not seen major criticisms of the relief effort on these fronts. We have also seen no assessments of the quality of medical care or of medical outcomes after the earthquake (i.e., deaths/complications not directly related to the earthquake itself or the cholera outbreak).
  • Rubble removal has been a major problem, and at least 80% (possibly much more) of rubble remains un-managed. Property rights and coordination issues have been obstacles on this front; the difficulty of navigating narrow roads has been an issue as well.

Overall, we’d say that the progress of relief has been disappointing.

One of the questions we’ve been thinking about is whether relief in a situation like this is over- or under-funded relative to everyday aid. I see a few possible interpretations of the disappointing relief effort in Haiti:

  1. Relief organizations aren’t spending money fast enough – they are selfishly/irrationally holding money for later projects that they should be spending now. If they would spend more, the above problems would be alleviated.
  2. Relief organizations are wisely conserving their funds for necessary later rebuilding efforts. If donors gave more generously, relief organizations would be spending more now, and still have enough left over for later rebuilding.
  3. Relief organizations are wisely withholding funds because money isn’t the bottleneck to better outcomes. The logistical and political problems can’t be solved simply by spending more money, and any spending above current levels could be wasteful and even harmful.

Although #1 seems to be the most common narrative in the media, I find it the hardest to believe. All of the public pressure seems to be on nonprofits to spend faster and get quicker, more tangible results. Spending money now seems to be the best move from a public relations standpoint; if it were also the best move from an outcomes standpoint, I don’t see what motivation relief organizations would have for doing otherwise.

#2 seems possible. We have acknowledged that rebuilding Haiti could take all the money that’s been given and more.

However, given the direness and urgency of the current needs – particularly the cholera outbreak and the rubble situation – it seems to me that any effective investment in getting better outcomes now ought to more than pay off later. (Haiti can’t be rebuilt without clearing the rubble or stopping the cholera outbreak; the sooner these are done, the better.) Because of this, and in view of the large amounts given/spent in the context of Haiti’s economy, I lean toward explanation #3: Haiti earthquake relief doesn’t have immediate room for more funding (though this would not preclude having significant needs for more long-term rebuilding funds).