The GiveWell Blog

Charity isn’t about helping?

One person who is more critical of charity than we are, or than David Hunter is, is the economist Robin Hanson. He has stated that “charity isn’t about helping” and spelled out this view somewhat in a post about the founder of Rite Aid:

    when folks like Alex spend their later years trying to “do good” with the millions they were paid for actually doing good, they usually end up pissing it away. We already have too much medicine and academia, because such things are mainly wasteful signals. We didn’t need and shouldn’t be thankful for more hospital wings or lecture halls. Imagine how much more good could have been done instead via millions spent trying to make more innovative products or organizations.
    Of course most innovation attempts fail, and that wouldn’t have looked so good for Mr. Grass. I’m sure those hospital wings and lecture halls came with grand ceremonies attended by folks in his social circle, saying what a great guy he was. And I expect people in his social circle are more likely than most to actually use those hospital wings and lecture halls; he was showing loyalty to his clan by buying such things.
    But when I think of all the good that could be done by philanthropists who actually wanted more to do good than to look good, it makes me sad. And it doesn’t make me sympathetic toward the tax deductions and other social support our society offers for these wasteful signals.

Prof. Hanson tends to imply that charitable giving should be essentially ignored in favor of pro-poor policies such as allowing more immigration.

What response can the nonprofit sector marshal to arguments like this? I must say that, in fact, much of the nonprofit sector fits far better into Prof. Hanson’s view of charity as “wasteful signaling” than into the traditional view of charity as helping.

Perhaps ironically, if you want a good response to Prof. Hanson’s view, I can’t think of a better place to turn than GiveWell’s top-rated charities. We have done the legwork to identify charities that can convincingly demonstrate positive impact. No matter what one thinks of the sector as a whole, no one can argue that there are no good charitable options – charities that really will use your money to help people – except by engaging with the specifics of these charities’ strong evidence.

Valid observations that the sector is broken – or not designed around helping people – are no longer an excuse not to give.

Because our Bayesian prior is so skeptical, we end up with charities that you can be confident in, almost no matter where you’re coming from.

Comment on Barron’s “25 Best Givers” list

In concept, I like the idea of showering praise on people based on their philanthropic impact, not merely dollars given (or dollars made).

But I am skeptical as to whether Barron’s did the research necessary to base its piece on facts as opposed to guesses.

Taking a look at the list, what jumps out at me is that #8 has been recognized for founding the Robin Hood Foundation – and I simply cannot imagine what information this ranking could be based on.

Estimating the cost-effectiveness of microfinance charity

Note: I’ve responded to the most recent batch of comments.

A lot of work has been put into estimating the “bang for your buck” in health initiatives. In the area of microfinance, though, things appear very murky.

Microfinance advocates say things like “As our clients repay the loans, the money is loaned again and again to help many more entrepreneurs. It’s giving that keeps going.” Skeptics reply that much of the cost of lending is in operating institutions, not simply loan capital. We should be able to agree that the cost-effectiveness of microlending is not literally infinite, but what’s the right ballpark? Does the impact per dollar dwarf that of health?

We can take a very rough – and very generous to microfinance – cut by looking at some global estimates by CGAP. Notes before we get to the numbers:

  • We are trying to get a number that we can put alongside existing estimates of health cost-effectiveness, just to see whether the microfinance sector as a whole has a clear and large advantage in cost-effectiveness. The estimate will be extremely rough and will not apply to any given microfinance charity, but rather to the area of microfinance as a whole.
  • Our estimate is essentially a “best-case scenario” for what microfinance cost-effectiveness would be if (a) there were a direct link between donations and people served (b) microfinance could reach an enormous “target population” at the same level of donation funding that’s being provided now.

CGAP looks at both dollars invested in microfinance (PDF) and people served. According to these links,

  • $11.7 billion of funding went to microfinance in 2008, of which 19% – or ~$2.2 billion – was grants (not loans, not investments, not guarantees).
  • There are currently between 130 and 190 million microfinance borrowers worldwide.
  • CGAP implies a “target number” of borrowers: “Given that almost 3bn people live on less than two dollars a day, clearly the battle to bring financial access to as many people as possible is a very long way from being won.” I have major issues with this target – for one thing, I’m not sure that people living under $2/day should all be targets, or are the only targets, of MFIs.

A couple of ways to look at the “costs per MFI client”:

  • A lot of money is spent on microfinance. $2 billion in grants is about 10% as much as the total official development aid of the U.S. government (according to the 2008 Index of Global Philanthropy (PDF)).
  • We’re currently spending $12-$17 in grants alone for every MFI borrower. Of course, the grants could be paying for a lot more than borrowing (including savings), and could be made with the aim of expanding future services rather than maintaining existing ones.
  • If you believe that microfinance will eventually reach the entire CGAP “target population” (or a population that size, which would be around half the population of the world) and that the current level of grants will be maintained (say, growing only at the rate that the size of the target population grows), then at the point where microfinance is reaching its entire “target population,” the grants per person reached will be about $0.75. While this figure could be overstating the costs per person served if grants eventually create self-sustaining institutions and become unnecessary, I think it is far more likely that it understates the cost because (a) those who can most practically be reached in a profitable/sustainable way are likely to be those already reached, and the hardest people to reach are more likely to require continued subsidies; (b) there is a huge amount of other investment in microfinance, and we have very little sense of the role that grants play in enabling the expansion of services; (c) 3 billion clients is an extremely ambitious goal – around 20x the number of people actually being reached today, and around half the world’s current population.
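The back-of-the-envelope figures above can be checked in a few lines of Python, using only the CGAP numbers already quoted in this post (total funding, grant share, current borrowers, target population) – nothing else is assumed:

```python
# Rough reconstruction of the cost-per-client arithmetic above,
# using only the CGAP figures quoted in the text.

total_funding = 11.7e9   # dollars to microfinance in 2008
grant_share = 0.19       # share of that funding that was grants

grants = total_funding * grant_share   # ~$2.2 billion in grants

borrowers_low, borrowers_high = 130e6, 190e6   # current MFI borrowers
target_population = 3e9                        # CGAP's implied "target"

# Grants per current borrower: the $12-$17 range in the text
per_borrower_high = grants / borrowers_low   # ~$17
per_borrower_low = grants / borrowers_high   # ~$12

# Best-case grants per person if the full target were reached
# (the post rounds this to about $0.75)
per_target = grants / target_population

print(f"Grants: ${grants / 1e9:.1f}B")
print(f"Per current borrower: ${per_borrower_low:.0f}-${per_borrower_high:.0f}")
print(f"Per person at full target: ${per_target:.2f}")
```

Note how sensitive the “best case” figure is to the target-population assumption: dividing the same grant total by 3 billion rather than ~150 million is what moves the estimate from the $12–$17 range down to under a dollar.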

How does this compare with health? I would answer mostly with a shrug. Certainly, under this extremely generous estimate of what microfinance could cost, it is “competitive” with the best health programs.

But this is assuming that all of that money going to microfinance is going to eventually succeed in reaching half the world, and also making the even bigger assumption that grants are the key factor. We think it’s very possible that much of microfinance’s reach has very little, or even literally nothing, to do with charitable support. (The less generous cost-effectiveness estimate of $12-$17 is fairly clearly not competitive with the best health programs: compare 12-17 person-years of financial services vs. 1 life saved, or 1 person-year of financial services vs. 3-5 person-years of extra school attendance due to improved health.)

Bottom line: we don’t see cost-effectiveness or “multiplying the impact of your dollar” as a strong argument for funding microfinance over health, on a general sector-level basis. This is the case even under the most generous model of the microfinance figures we’ve come up with.

Global Giving’s spot check and why it should worry you

Aid Watch:

    “Local people may be the experts, but for outsiders deciding where their donations can do the most good, getting access to local knowledge and acting on it appropriately requires real-time feedback loops that most aid projects lack.
    Over a little more than a year, GlobalGiving combined staff visits, formal evaluation, third-party observer reports called visitor postcards, and internet feedback from local community members to create a nuanced, evolving picture of a community-based youth organization in Western Kenya that had received $8,019 from 193 individual donors through the GlobalGiving website.
    Initially, youth in Kisumu were happy with the organization. Among other things, the founder used the money to fund travel and equipment for the local youth soccer team. But the first tip-off that something was going wrong came when a former soccer player complained through GlobalGiving’s online feedback form that “currently the co-ordinator is evil minded and corrupt.” The view that the founder had begun stealing donations and was stifling dissent among his members was expanded upon by other community members, visitors to the project, and a professional evaluator.
    In the end, a splinter group broke off and started a new sports organization, and the community shifted their support to the new group. Reflecting the local consensus, GlobalGiving removed the discredited organization from its website.” (Emphasis mine)

Aid Watch stresses the “new way to evaluate a project” angle on this story, and we think it’s a good angle. But another angle is that most aid projects don’t receive this level of scrutiny, and the project that was put under this particular microscope – more as a way of testing the microscope than because there were concerns about the project – turned out to be discredited.

This is a story that I feel should affect your default assumption about whether an aid project is working.

The comments on Aid Watch’s post are also worth reading. One problem with the “funding a project at a time from many different organizations” approach is that it isn’t clear what one does with evaluation and feedback, when it’s available. Knowing how a project went is certainly better than not knowing, but the ultimate goal is to translate knowledge of what’s working into improved performance.

That’s an argument for focusing on the organization rather than the project level. Organizations can be given incentives to learn from their mistakes and improve their projects. When tiny organizations are funded for one-off projects, it isn’t clear how to impose any kind of accountability.

LAPO (Kiva partner) and financial vs. social success

We recently looked at Kiva’s largest partner MFI, LAPO (Lift Above Poverty Organization), as part of our evaluation process for an economic empowerment grant in sub-Saharan Africa.

In brief, we found two surprising pieces of information:

  • LAPO is very profitable.
  • There’s good reason to be concerned about LAPO’s social impact.

As Natalie recently described on our research list, we’ve contacted a handful of individual microfinance institutions in sub-Saharan Africa to assess whether one might be able to answer the key questions we ask to evaluate a microfinance organization.

One of the steps we took was to look at Kiva’s largest MFI partners. Because Kiva’s partners are (a) relatively well-known (due to their presence on Kiva) and (b) have undergone Kiva’s due diligence process, we guessed that they might be a reasonable place to begin our search.

When we looked closely at LAPO, we found the following, all of which concerned us. (Note: we haven’t yet contacted LAPO, as our aim at this point was to identify the most promising organizations, not to confidently dismiss any particular organization. Because our brief review of LAPO raised several relatively large questions, we chose to move on, as we often do.)

  • In the last 3 years (2006-2008), LAPO had significant profit margins (23-28%).
  • In its Mix Market Social Performance Report (xls), LAPO reported a 49% dropout rate. As Holden wrote in our post on evaluating a microfinance charity, dropping out of a program may indicate participants “voting with their feet” and choosing to leave a program that they don’t find beneficial. It is also possible that “drop outs” instead consist of those who “graduate” from the program, i.e., improve their incomes/credit to the point where they can access credit from elsewhere (or no longer want/need credit). However, my instinct is that it’s unlikely that close to 50% of participants are quickly moving up to access more formal sources of credit.
  • LAPO’s Client Exit Study report (doc) reports that individuals need manager approvals to withdraw savings, and that managers investigate the reason for withdrawal before approving (Pg 3). This seems to undermine many of the benefits of saving, which presumably aims to help people deal with risk and unexpected situations.

Does LAPO sound like an institution that needs (or should receive) Kiva’s interest-free funding?

It appears highly profitable, but its social impact is much less clear given the high drop-out rate and the significant hurdles depositors face in withdrawing savings. These facts paint a worrying picture of LAPO as an organization that may be earning significant profits through relatively restrictive policies for clients while receiving interest-free funding through Kiva. Perhaps there is a special arrangement here, as with Xac Bank, but it certainly raises a concern about charity-minded capital funding profits.

What we know about Robin Hood (almost nothing)

One of the charities we’re often asked about is the Robin Hood Foundation – partly because we used to work at a hedge fund and Robin Hood is big in the hedge fund world, and partly because we emphasize analytical, impact-focused giving and Robin Hood has a reputation for exactly that.

Robin Hood works mostly in the area of U.S. equality of opportunity. We believe this is a very difficult area where well-intentioned and well-executed programs often fail.

Robin Hood’s website does not appear to link to any evidence regarding its impact. Its content seems like that of a typical charity to us, heavy on anecdotes and making use of the “0% overhead” gimmick.

We have asked Robin Hood many times for evidence regarding its impact. We have not only called Robin Hood but have, on more than one occasion, gotten to know Robin Hood donors (giving tens of thousands of dollars) who have asked on our behalf for evidence of impact. We have not been provided with any evidence regarding impact.

We have been provided, on more than one occasion, with a paper on Robin Hood’s conceptual approach to metrics, but this paper discusses how charities would be evaluated given a set of hypotheticals and does not discuss any actual evidence of impact from Robin Hood grantees.

At one time, I was able to have a phone conversation with a high-up staff member at Robin Hood. (I believe it was Michael Weinstein, but I’m not sure – this was before GiveWell was a full-time project, and I didn’t keep good records.) What I am allowed to say about the conversation:

  • I asked if Robin Hood could share any evidence regarding its impact. I was told that all information regarding grantee performance is confidential.
  • I asked if Robin Hood might share its general views about the most promising program types, without discussing specific grantees. I was told that Robin Hood does not intend to publish such views because they would be held to high standards of academic rigor, standards he felt were not appropriate for the practical work that Robin Hood does. (As a side note, I believe organizations like Poverty Action Lab and Innovations for Poverty Action to be both academically rigorous and practically effective.)
  • He did informally share some views on effective programs with me, but asked that I keep the specifics of these views confidential.
  • I asked if he might informally advise us in our research. I was told that time constraints did not permit it.

Despite this extreme commitment to confidentiality, we have ended up seeing some evaluation materials from Robin Hood. The reason is that its grantees have sent them to us as part of their applications for our funds. In general, we have found these materials to provide insufficient evidence to determine impact, much less quantify it in the way Robin Hood’s “metrics” paper describes.

  • Groundwork Inc. submitted two reports from Philliber Research Associates that its main application stated were funded by Robin Hood. (All three documents are available at that link.) These reports describe student improvement on various metrics without addressing “counterfactual” questions such as “What would be expected of nonparticipant students by these same measures?”
  • SEIU-League Grant Corporation submitted a Robin Hood progress report (available at that link) discussing training program completion and job placement and retention (though data on the latter was not yet available).
  • Other documents were sent for our eyes only, including one that appeared to directly compare job placement/retention figures across organizations serving seemingly quite different populations.
  • More generally, we’ve independently evaluated many Robin Hood grantees and found insufficient evidence of impact (see, for example, our report on employment assistance, which includes many Robin Hood grantees).

Bottom line

If I were to guess what I think of Robin Hood’s methodology, I would guess that it is much more focused on quantifying good accomplished – and less focused on being skeptical about whether there was a real impact – than I am. I would defend my view by pointing to past programs in similar areas shown to have no or negligible effects (despite apparent effects using simpler, more selection-bias-prone methodologies). I would argue that rewarding proven impact provides incentives to make sure lives are really being changed for the better, while focusing on quantification provides incentives for charities to embrace selection bias and other subtle factors that can skew non-rigorous studies in their favor. I would also argue that Robin Hood is not helping the populations that could benefit most from its funds.

But I don’t have much to go on. What I am much more confident of is that Robin Hood has essentially no transparency, and essentially no accountability, to the public and to its donors (at least the smaller donors, i.e., those giving tens of thousands of dollars).