The GiveWell Blog

What we know about Robin Hood (almost nothing)


One of the charities we’re often asked about is the Robin Hood Foundation. This is partly because we used to work at a hedge fund and Robin Hood is big in the hedge fund world, and partly because we emphasize analytical, impact-focused giving and Robin Hood has a reputation for the same.

Robin Hood works mostly in the area of U.S. equality of opportunity. We believe this is a very difficult area where well-intentioned and well-executed programs often fail.

Robin Hood’s website does not appear to link to any evidence regarding its impact. Its content seems to us like that of a typical charity: heavy on anecdotes and leaning on the “0% overhead” gimmick.

We have asked Robin Hood many times for evidence regarding its impact. We have not only called Robin Hood ourselves but have, on more than one occasion, gotten to know Robin Hood donors (giving tens of thousands of dollars) who asked on our behalf. We have never been provided with any such evidence.

We have been provided, on more than one occasion, with a paper on Robin Hood’s conceptual approach to metrics, but this paper discusses how charities would be evaluated given a set of hypotheticals and does not discuss any actual evidence of impact from Robin Hood grantees.

At one time, I was able to have a phone conversation with a high-up staff member at Robin Hood. (I believe it was Michael Weinstein, but I’m not sure – this was before GiveWell was a full-time project, and I didn’t keep good records.) What I am allowed to say about the conversation:

  • I asked if Robin Hood could share any evidence regarding its impact. I was told that all information regarding grantee performance is confidential.
  • I asked if Robin Hood might share its general views about the most promising program types, without discussing specific grantees. I was told that Robin Hood does not intend to publish such views because they would be held to high standards of academic rigor, standards he felt were not appropriate for the practical work that Robin Hood does. (As a side note, I believe organizations like Poverty Action Lab and Innovations for Poverty Action to be both academically rigorous and practically effective.)
  • He did informally share some views on effective programs with me, but asked that I keep the specifics of these views confidential.
  • I asked if he might informally advise us in our research. I was told that time constraints did not permit it.

Despite this extreme commitment to confidentiality, we have ended up seeing some evaluation materials from Robin Hood. The reason is that its grantees have sent them to us as part of their applications for our funds. In general, we have found these materials to provide insufficient evidence to determine impact, much less quantify it in the way Robin Hood’s “metrics” paper describes.

  • Groundwork Inc. submitted two reports from Philliber Research Associates that its main application stated were funded by Robin Hood. (All three documents are available at that link.) These reports document student improvement on various metrics without addressing counterfactual questions such as “What would be expected of nonparticipant students by these same measures?” (The sketch after this list illustrates why that question matters.)
  • SEIU-League Grant Corporation submitted a Robin Hood progress report (available at that link) discussing training program completion and job placement and retention (though data on the latter was not yet available).
  • Other documents were sent for our eyes only, including one that appeared to directly compare job placement/retention figures across organizations serving what appear to be significantly different populations.
  • More generally, we’ve independently evaluated many Robin Hood grantees and found insufficient evidence of impact (see, for example, our report on employment assistance, which includes many Robin Hood grantees).
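
To make the counterfactual question concrete, here is a minimal sketch, in Python, of how a program with zero true effect can still produce impressive-looking participant outcomes. This is our illustration, not Robin Hood’s methodology; the program, the outcome measure, and every number in it are invented:

    import random

    random.seed(0)

    TRUE_EFFECT = 0.0  # in this simulation, the program accomplishes nothing

    def outcome(motivation, enrolled):
        # Outcome (say, months of job retention) is driven by unobserved
        # motivation plus noise; enrollment adds only TRUE_EFFECT = 0.
        return 6 + 2 * motivation + (TRUE_EFFECT if enrolled else 0) + random.gauss(0, 1)

    def enrolls(motivation):
        # Self-selection: more motivated people are more likely to sign up.
        return random.random() < min(0.9, max(0.1, 0.5 + 0.25 * motivation))

    participants, nonparticipants = [], []
    for _ in range(50_000):
        m = random.gauss(0, 1)  # unobserved motivation
        if enrolls(m):
            participants.append(outcome(m, enrolled=True))
        else:
            nonparticipants.append(outcome(m, enrolled=False))

    mean = lambda xs: sum(xs) / len(xs)
    print(f"participants:    {mean(participants):.2f}")   # noticeably higher
    print(f"nonparticipants: {mean(nonparticipants):.2f}")  # despite zero true effect

The participant/nonparticipant gap here is entirely an artifact of who signs up. An evaluation that reports only participants’ outcomes, or compares them to people who never enrolled, cannot distinguish this scenario from a genuinely effective program; only a comparison that holds motivation fixed (for example, randomizing enrollment among applicants) can.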

Bottom line

If I had to guess at Robin Hood’s methodology, I would guess that it is much more focused on quantifying good accomplished – and less focused on being skeptical about whether there was a real impact – than I am. I would defend my view by pointing to past programs in similar areas that were shown to have no or negligible effects (despite apparent effects under simpler, more selection-bias-prone methodologies). I would argue that rewarding proven impact gives charities an incentive to make sure lives are really being changed for the better, while focusing on quantification gives them an incentive to embrace selection bias and the other subtle factors that can skew non-rigorous studies in their favor (as in the sketch above). I would also argue that Robin Hood is not helping the populations that could benefit most from its funds.

But I don’t have much to go on. What I am much more confident of is that Robin Hood has essentially no transparency, and essentially no accountability, to the public and to its donors (at least the smaller donors, i.e., those giving tens of thousands of dollars).

Comments

  • brooklynchick on December 8, 2009 at 2:22 pm said:

    1. Your argument that Robin Hood is not helping the populations that could benefit most is not relevant, as Robin Hood’s mission is to end poverty in *New York*.

    2. Regarding selection bias: you endorse KIPP, which surely must have selection bias given what it takes to be a KIPP student (and family!).

    3. Nurse Family Partnership has terrific data on impact – but that data has been collected over more than 20 years, and at what cost? Most local organizations cannot do that kind of rigorous evaluation, and frankly, as a donor, I don’t want them spending the money and time.

    Bottom Line
    For a local organization serving a few hundred people, I’ll settle for an *estimate* of impact.

    Robin Hood donor (and Groundwork donor)

  • brooklynchick on December 8, 2009 at 2:25 pm said:

    Probably better said:

    “The endgame is creating better stakeholders. Let’s learn from the field of socially responsible investing. A lot has gone into ratings of public companies, and the track record is decidedly mixed: attempts to make ratings more sophisticated have generally resulted in less transparency and objectivity.

    The real power of these ratings comes not from the ratings themselves (with retail shareholders and consumers somehow knowing if companies are good or bad), it comes from shareholder activism – as information becomes more available, passionate followers create a dialogue with companies that can turn the dial on accountability. If ratings can create real dialogue (hopefully with much less acrimony than exists with public companies) we’ll make more real progress (and for this to happen, the raters have to be held accountable too…)”

    http://sashadichter.wordpress.com/2009/12/08/burying-a-blunt-instrument/

  • Jonah Sinick on December 8, 2009 at 11:30 pm said:

    Brooklynchick,

    Concerning your first post, some of your points (1-3) seem like good points to me on the face of things. Whether or not (3) is compelling depends on how effective nonprofit organizations that have not been rigorously studied are at accomplishing their goals. If they are mostly effective, then I agree that it’s not so important for them to be evaluated rigorously; if they are mostly ineffective, then it’s probably reasonable for donors to demand more rigorous evidence. In this connection, I would be interested in your thoughts on the December 5th post titled “A conflict of Bayesian priors?” Implicit in your remark “as a donor, I don’t want them spending the money and time” seems to be the idea that it’s reasonable to assume that a charity that isn’t being evaluated rigorously is getting good results. Is this your position? If so, why? If not, what sorts of nonrigorous evaluation mechanisms do you believe to be good ones?

    To be clear about where I’m coming from, I don’t have a very strong opinion on this matter. Though I find GiveWell’s position plausible, I am open to a change in attitude if faced with a compelling argument.

    Also, one aspect of Holden’s post that you don’t address is Robin Hood’s alleged lack of transparency. I know nothing about Robin Hood (having just learned about it from this blog), but do you think that Holden’s claim that Robin Hood “has essentially no transparency, and essentially no accountability, to the public and to its donors (at least the smaller donors, i.e., those giving tens of thousands of dollars)” is fair? If not, why not? If so, do you see this issue as unimportant?

  • Holden (replying to Brooklynchick):

    • I don’t think our observation about target populations is “not relevant” to a donor deciding whether to give to Robin Hood; the criticism is of the mission itself.
    • We have no problem with running programs that attract particularly capable/motivated people. What we have a problem with is evaluations that do not acknowledge or address this fact, taking outcomes differences at face value as evidence for impact. Our analysis of KIPP explicitly acknowledges the issue of selection bias and discusses at length why we think there is a reasonable (though far from airtight) case for impact even with this issue in mind.
    • I agree with much of the Sasha Dichter passage you cite, but I don’t see how it supports the idea that evaluation is a waste of money. To me it seems the opposite: with no information about how things are going, there’s nothing to analyze, critique, discuss, or learn from. All an evaluation-less nonprofit can do is blindly continue its programs, essentially as they are, or as gut instinct suggests, in perpetuity. The price of evaluation seems low if it improves the capacity to learn as one goes.
