The GiveWell Blog

Global Giving’s spot check and why it should worry you

Aid Watch:

    “Local people may be the experts, but for outsiders deciding where their donations can do the most good, getting access to local knowledge and acting on it appropriately requires real-time feedback loops that most aid projects lack.
    Over a little more than a year, GlobalGiving combined staff visits, formal evaluation, third-party observer reports called visitor postcards, and internet feedback from local community members to create a nuanced, evolving picture of a community-based youth organization in Western Kenya that had received $8,019 from 193 individual donors through the GlobalGiving website.
    Initially, youth in Kisumu were happy with the organization. Among other things, the founder used the money to fund travel and equipment for the local youth soccer team. But the first tip-off that something was going wrong came when a former soccer player complained through GlobalGiving’s online feedback form that “currently the co-ordinator is evil minded and corrupt.” The view that the founder had begun stealing donations and was stifling dissent among his members was expanded upon by other community members, visitors to the project, and a professional evaluator.
    In the end, a splinter group broke off and started a new sports organization, and the community shifted their support to the new group. Reflecting the local consensus, GlobalGiving removed the discredited organization from its website.” (Emphasis mine)

Aid Watch stresses the “new way to evaluate a project” angle on this story, and we think it’s a good angle. But another angle is that most aid projects don’t receive this level of scrutiny, and the project that was put under this particular microscope – more as a way of testing the microscope than because there were concerns about the project – turned out to be discredited.

I feel this story should affect your default assumption about whether any given aid project is working.

The comments on Aid Watch’s post are also worth reading. One problem with the “funding one project at a time from many different organizations” approach is that it isn’t clear what one does with evaluation and feedback, when it’s available. Knowing how a project went is certainly better than not knowing, but the ultimate goal is to translate knowledge of what’s working into improved performance.

That’s an argument for focusing on the organization rather than the project level. Organizations can be given incentives to learn from their mistakes and improve their projects. When funding tiny organizations for one-off projects, it isn’t clear how to impose any kind of accountability.

LAPO (Kiva partner) and financial vs. social success

We recently looked at Kiva’s largest partner MFI, LAPO (Lift Above Poverty Organization), as part of our evaluation process for an economic empowerment grant in sub-Saharan Africa.

In brief, we found two surprising pieces of information:

  • LAPO is very profitable.
  • There’s good reason to be concerned about LAPO’s social impact.

As Natalie recently described on our research list, we’ve contacted a handful of individual microfinance institutions in Sub-Saharan Africa to assess whether one might be able to answer the key questions we ask to evaluate a microfinance organization.

One of the steps we took was to look at Kiva’s largest MFI partners. Because Kiva partners are both (a) relatively well-known (due to their presence on Kiva) and (b) have undergone Kiva’s due diligence process, we guessed that they might be a reasonable place to begin our search.

When we looked closely at LAPO, we found the following, all of which concerned us. (Note: we haven’t yet contacted LAPO, as our aim at this point was to identify the most promising organizations, not to confidently dismiss any particular one. Because our brief review of LAPO raised several relatively large questions, we chose to move on, as we often do.)

  • In the last 3 years (2006-2008), LAPO had significant profit margins (23-28%).
  • In its Mix Market Social Performance Report (xls), LAPO reported a 49% dropout rate. As Holden wrote in our post on evaluating a microfinance charity, dropping out of a program may indicate participants “voting with their feet” and choosing to leave a program that they don’t find beneficial. It is also possible that “drop outs” instead consist of those who “graduate” from the program, i.e., improve their incomes/credit to the point where they can access credit from elsewhere (or no longer want/need credit). However, my instinct is that it’s unlikely that close to 50% of participants are quickly moving up to access more formal sources of credit.
  • LAPO’s Client Exit Study report (doc) reports that individuals need manager approvals to withdraw savings, and that managers investigate the reason for withdrawal before approving (Pg 3). This seems to undermine many of the benefits of saving, which presumably aims to help people deal with risk and unexpected situations.

Does LAPO sound like an institution that needs (or should receive) Kiva’s interest-free funding?

It appears highly profitable, but its social impact is much less clear given the high drop-out rate and the significant hurdles depositors face in withdrawing savings. These facts paint a worrying picture of LAPO as an organization that may be earning significant profits through relatively restrictive rules for clients while receiving interest-free funding through Kiva. Perhaps there is a special arrangement here, as with Xac Bank, but it certainly raises a concern about charity-minded capital funding profits.

What we know about Robin Hood (almost nothing)

One of the charities we’re often asked about is the Robin Hood foundation – partly because we used to work at a hedge fund and Robin Hood is big in the hedge fund world, and partly because we emphasize analytical, impact-focused giving and Robin Hood has a reputation for analytical, impact-focused giving.

Robin Hood works mostly in the area of U.S. equality of opportunity. We believe this is a very difficult area where well-intentioned and well-executed programs often fail.

Robin Hood’s website does not appear to link to any evidence regarding its impact. Its content seems like that of a typical charity to us, heavy on anecdotes and making use of the “0% overhead” gimmick.

We have asked Robin Hood many times for evidence regarding its impact. We have not only called Robin Hood but have, on more than one occasion, gotten to know Robin Hood donors (giving tens of thousands of dollars) who have asked on our behalf for evidence of impact. We have not been provided with any evidence regarding impact.

We have been provided, on more than one occasion, with a paper on Robin Hood’s conceptual approach to metrics, but this paper discusses how charities would be evaluated given a set of hypotheticals and does not discuss any actual evidence of impact from Robin Hood grantees.

At one time, I was able to have a phone conversation with a high-up staff member at Robin Hood. (I believe it was Michael Weinstein, but I’m not sure – this was before GiveWell was a full-time project, and I didn’t keep good records.) What I am allowed to say about the conversation:

  • I asked if Robin Hood could share any evidence regarding its impact. I was told that all information regarding grantee performance is confidential.
  • I asked if Robin Hood might share its general views about the most promising program types, without discussing specific grantees. I was told that Robin Hood does not intend to publish such views because they would be held to high standards of academic rigor, standards he felt were not appropriate for the practical work that Robin Hood does. (As a side note, I believe organizations like Poverty Action Lab and Innovations for Poverty Action to be both academically rigorous and practically effective.)
  • He did informally share some views on effective programs with me, but asked that I keep the specifics of these views confidential.
  • I asked if he might informally advise us in our research. I was told that time constraints did not permit it.

Despite this extreme commitment to confidentiality, we have ended up seeing some evaluation materials from Robin Hood. The reason is that its grantees have sent them to us as part of their applications for our funds. In general, we have found these materials to provide insufficient evidence to determine impact, much less quantify it in the way Robin Hood’s “metrics” paper describes.

  • Groundwork Inc. submitted two reports from Philliber Research Associates that its main application stated were funded by Robin Hood. (All three documents are available at that link.) These reports describe student improvement on various metrics without addressing “counterfactual”-relevant questions such as “What would be expected of nonparticipant students by these same measures?”
  • SEIU-League Grant Corporation submitted a Robin Hood progress report (available at that link) discussing training program completion and job placement and retention (though data on the latter was not yet available).
  • Other documents were sent for our eyes only, including one that appeared to directly compare job placement/retention figures across organizations serving seemingly significantly different populations.
  • More generally, we’ve independently evaluated many Robin Hood grantees and found insufficient evidence of impact (see, for example, our report on employment assistance, which includes many Robin Hood grantees).

Bottom line

If I were to guess what I think of Robin Hood’s methodology, I would guess that it is much more focused on quantifying good accomplished – and less focused on being skeptical about whether there was a real impact – than I am. I would defend my view by pointing to past programs in similar areas shown to have no or negligible effects (despite apparent effects using simpler, more selection-bias-prone methodologies). I would argue that rewarding proven impact provides incentives to make sure lives are really being changed for the better, while focusing on quantification provides incentives for charities to embrace selection bias and other subtle factors that can skew non-rigorous studies in their favor. I would also argue that Robin Hood is not helping the populations that could benefit most from its funds.

But I don’t have much to go on. What I am much more confident of is that Robin Hood has essentially no transparency, and essentially no accountability, to the public and to its donors (at least the smaller donors, i.e., those giving tens of thousands of dollars).

Robin Hood, Smile Train and the “0% overhead” donor illusion

For an organization focused on financial metrics, the American Institute of Philanthropy can be very interesting. I can’t do justice to this excellent article on Smile Train with an excerpt, and I urge you to read it all.

It thoroughly debunks an alleged claim by Smile Train that “100% of your donation goes toward programs — 0% goes toward overhead.” Smile Train currently seems to have backed off this claim at least somewhat, although Steven Levitt of Freakonomics appears to have been sold this story (and to have bought it) in 2006.

If identifying effectiveness with “low overhead” is silly, the idea of “0% overhead” simply seems absurd. As the article shows, it doesn’t (and can’t) mean that there are no operating costs affecting the total costs of the program. Rather, it’s another case of zooming in on “your” money, rather than discussing the true total costs of the program you’re supporting the existence of. It makes no sense in an analytical framework; it’s a feel-good gimmick.

That’s why we were surprised when we first saw this gimmick prominently displayed by a group that many consider to be the epitome of hard-nosed, analytical giving: Robin Hood.

Robin Hood’s financials make the situation look similar to Smile Train’s (minus the questionable reclassification of funds that AIP attributes to the latter). About 11% of Robin Hood’s total expenses are “Administration salaries and overhead” or “Fundraising and Public Information,” but because Board member donations are earmarked to those expenses, everyone else can be told their donations are “overhead-free.”

If your goal were to minimize overhead, the fact that Robin Hood tags funds this way shouldn’t be very relevant to you. Robin Hood could allocate more of those Board donations to programs if it spent less on overhead. If you gave to another organization, you could be scaling up an overall lower-overhead operation.
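The earmarking arithmetic above can be made concrete with a toy calculation. This is a minimal sketch with hypothetical dollar amounts; the only figure taken from this post is the roughly 11% overhead share of total expenses.

```python
# Illustrative sketch of the "0% overhead" earmarking gimmick.
# All dollar amounts are hypothetical; the ~11% share is from the post above.

total_expenses = 100_000_000          # hypothetical annual expenses
overhead_share = 0.11                 # ~11% of total expenses is overhead
overhead = total_expenses * overhead_share
programs = total_expenses - overhead

board_donations = overhead            # Board gifts earmarked to cover overhead
other_donations = programs            # everyone else's gifts "go to programs"

# Each non-Board donor is told 100% of *their* dollars fund programs...
per_donor_overhead_rate = 0.0

# ...but the organization's true overhead ratio is unchanged by the labeling.
true_overhead_rate = overhead / total_expenses

print(f"advertised overhead on your gift: {per_donor_overhead_rate:.0%}")
print(f"actual organizational overhead:   {true_overhead_rate:.0%}")
```

However the donations are labeled, the organization still spends 11 cents of every expense dollar on overhead; the earmarking only changes whose gift the accounting assigns those cents to.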

Bottom line: The “0% overhead” claim is promoting the wrong metric (low overhead) and offering a false way to accomplish it.

Comments on this blog

Lately we have seen a surge in thoughtful and interesting comments on this blog. To those participating in the discussions, thank you and please keep them coming.

We try to respond to any comment that is substantive and critical, though when there are as many as there have been lately, we may go a few days at a time without responding. I’ve just posted responses to the latest batch, excluding the comments on our latest post (“a conflict of Bayesian priors”) which I will get to later.

A conflict of Bayesian priors?

This question might be at the core of our disagreements with many:

When you have no information one way or the other about a charity’s effectiveness, what should you assume by default?

Our default assumption, or prior, is that a charity – at least in one of the areas we’ve studied most, U.S. equality of opportunity or international aid – is falling far short of what it promises donors, and very likely failing to accomplish much of anything (or even doing harm). This doesn’t mean we think all charities are failing – just that, in the absence of strong evidence of impact, this is the appropriate starting-point assumption.

Many others seem to have the opposite prior: they assume that a charity is doing great things unless it is proven not to be. These people are shocked that we hand out “0 out of 3 stars” for charities just because so little information is available about them; they feel the burden of proof is on us to show that a charity is not accomplishing good. When someone asks them to give to a charity, they usually give.
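The effect of these two starting points can be sketched with a toy Bayesian update. Every number here is hypothetical and purely illustrative; the point is only that, given weak evidence (a website full of anecdotes and no impact data), the prior dominates the conclusion.

```python
# Toy illustration (all numbers hypothetical) of how a skeptical vs. a
# trusting prior leads to different conclusions from the same weak evidence.

def posterior_effective(prior, p_evidence_if_effective, p_evidence_if_not):
    """Bayes' rule: P(effective | evidence observed)."""
    numerator = prior * p_evidence_if_effective
    denominator = numerator + (1 - prior) * p_evidence_if_not
    return numerator / denominator

# Evidence: a charity's website shows anecdotes but no impact data.
# Assume that's somewhat more likely if the charity is *not* effective.
p_if_effective, p_if_not = 0.5, 0.8

skeptical = posterior_effective(0.2, p_if_effective, p_if_not)  # our prior
trusting = posterior_effective(0.8, p_if_effective, p_if_not)   # opposite prior

print(f"skeptical prior -> posterior {skeptical:.2f}")  # stays low
print(f"trusting prior  -> posterior {trusting:.2f}")   # stays high
```

With these made-up numbers, the skeptic ends up even more doubtful and the truster remains confident: absent strong evidence of impact, each side's default assumption carries the day.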

This puts us in an odd situation: we have very little interest in bad charities, yet others are far more interested in us when we talk about bad charities. To us, credible positive stories are surprising and interesting; to others, credible negative stories are surprising and interesting.

A good example is Smile Train. Nothing in our recent post is really recent at all – we did all of that investigation in 2006. We’ve known about the problem with Smile Train’s pitch for over three years, and have never written it up because we just don’t care that much.

Since day one of our project, our priority has been to identify top charities. When we see a promising charity, we get surprised and excited and pour time into it; when a charity looks bad, we move on and forget about it. Yet others find a report on a great charity to be mind-numbing, and are shocked and intrigued when they hear that a charity might be failing.

So which prior is more reasonable? Before you have evidence about a charity’s impact, should you assume that it’s living up to its (generally extravagant) promises? Or should you assume that it’s well-intentioned but failing?

We’ll offer up a few points in defense of our position in a future post. If you have the opposite position, please share your thoughts in the comments.