
Self-evaluation: GiveWell as a donor resource


This is the first of four posts we plan to publish over the next two weeks, focused on our self-evaluation and future plans.

This post answers the first section of questions we posed to ourselves in January about the state of GiveWell as a donor resource. For each question, we discuss

  • Our progress over the last year (specifically, since our last business plan in 11/2008);
  • Where we stand (compared to where we eventually hope to be);
  • What we can do to improve from here.

Does GiveWell provide quality research that highlights truly outstanding charities in the areas it has covered?

This is in some ways the most difficult question for us to answer meaningfully. “Quality” and “outstanding” are concepts that depend a lot on your worldview, so when we review the research that we ourselves have created, you can expect our opinion to be skewed toward the positive. That said, we can discuss how our research compares to what we feel it would ideally be.

At this point we are satisfied with the quality of our international aid report, but see a need to cover more causes and to be more systematic about getting meaningful critical feedback.

Progress since 11/08

As of 11/08,

  • We had published research on U.S. equality of opportunity and international aid, but felt our research process had substantial room for improvement (among other things, we felt it relied too much on the arbitrariness of how charities responded to our grant application, and that it didn’t make enough use of publicly available independent analysis). We planned to “re-do” the cause of international aid, with substantial changes to the process (see our June 2008 plan).
  • We had essentially nothing in the way of critical feedback on our research, aside from very high-level conversations about our conclusions at Board meetings.

Since then,

  • We have published our 2008-2009 international aid report using our revised methodology. We feel that we have found an appropriate way of investigating charities without relying on grant applications and that we have substantially improved on our prior recommendations (see our top charities – the two three-star charities were found this year, while the top charities from last year’s report have two-star ratings). We feel that our revised methodology is the best we can do with the resources and information available, and do not feel a need to “re-do” our coverage of international aid (although we will need to keep it up to date).
  • We also completed a grant application process for economic empowerment, whose results we will be publishing shortly.
  • A small number of people have scrutinized our research in detail and had in-depth conversations with us, partly in private and partly in public. These include (a) Board member Jon Behar, with whom we publicly discussed our reasoning for our two top-rated charities; (b) Phil Steinmeyer and Ian Turner, both donors with whom we had no pre-GiveWell connection and who put substantial time of their own into this year’s giving decisions (in each case, a substantial amount was ultimately allocated to our top-rated charity, VillageReach); and (c) 5-10 other individual donors who have gone over much, though not all, of our research and discussed it with us at length.
  • We have also engaged in public conversation at a more general level via our blog. There have been generally favorable reactions – though these are not the same as endorsements – from people with substantial experience and/or publications on foreign aid (including David Roodman, Saundra Schimmelpfennig, Alanna Shaikh and Chris Blattman).

Where we stand

Internally we are satisfied with the quality of our research, but:

  • We need to add direct observation of charities’ operations, to the extent that we can.
    • For international aid, it isn’t practical to get both deep and representative field experience with each, or really any, of the charities we review. Gaining truly representative up-close experience with the operations of even a single small charity (let alone one like PSI) could take years; gaining representative up-close experience with several charities, for the sake of comparative analysis, is in our view entirely impractical (and so it is not surprising that no such comparative analysis appears to exist).

    • That said, at this point we have not visited a single developing-world charity, and could potentially gain a lot from a site visit.
  • Our research needs more substantial external checks than it has gotten to date. The only people who have scrutinized it in detail (as opposed to discussing/linking to it at a general level) are, like ourselves, outsiders in the field of international aid. Though people aware of our research (including ourselves) generally seem to consider it high-quality, it is important that we eventually subject it to strong, critical scrutiny from people with substantial relevant experience and credentials.

What we can do to improve

  • We can conduct site visits to one or more of our reviewed charities. I am currently planning a trip to Africa from 2/10-2/23 during which I will visit both VillageReach and the Small Enterprise Foundation.
  • We can make a concerted effort to subject our research to strong, critical scrutiny from people with substantial relevant experience and credentials. The first basic step – which we haven’t yet taken – is simply to systematically identify the people who are most likely to be both qualified and willing to review our research, then make persistent contact with them (both personally and through contacts of ours) asking them specifically for in-depth reviews of our research. If they decline, we’ll hopefully learn more about why they are declining and what we can do to make reviewing our research more worth their while.

Is it practical for donors to evaluate and use GiveWell’s research in the areas it has covered?

We believe that our work has become substantially more usable and practical to engage with. However, the limited external checks on our research (discussed above) remain a major potential obstacle for donors seeking to evaluate it.

Progress since 11/08

A year ago, the poor design of our website was a major potential obstacle for donors seeking to understand our work. Phil Steinmeyer summarized the problems in an email earlier this year. A donor who had the time and inclination could follow every step of our process, but otherwise had very little to go on besides a list of top charities and stories of our project from us and the media. Since then, we have:

  • Substantially revamped our website (not just its design but its organization).
  • Made a concerted effort to “repackage” much of our research in an extended series of blog posts. While our website focuses on presenting our recommended charities and the process we followed to identify them, these blog posts present (a subset of) our research such that particular general-interest points can be followed without having to read through a lot of other context. Comments on the blog posts can also give context on how others view our research.

We have also made some progress in giving donors ways to evaluate our credibility.

Where we stand

A donor with the time and inclination can follow our full process, reasoning, and sources, and (as discussed above) a few that we know of have actually done so. A more casual donor has some, though limited, ways of assessing our research.

We feel these options are insufficient on the whole, given our goal of influencing as many donors as possible. We feel that we should have a single, easy-to-find roundup of available information on the reliability and credibility of our research.

What we can do to improve

  • We can make a concerted effort to subject our research to strong, critical scrutiny from people with substantial relevant experience and credentials, and to create public records of such scrutiny. (This is also a key area for improvement in the previous section, “quality of our research.” We feel that strong critical scrutiny would improve both the actual quality of our research and donors’ abilities to gauge its credibility.)
  • We can put together the best possible consolidated case for the credibility of our research.

Has GiveWell covered enough areas to be useful?

Our original business plan states:

We believe that pitting charities against each other using concrete, consistent criteria is the best way to evaluate them in a way that is thorough and understandable to others. At the same time, we do not want to compare all charities in the same terms, because this would involve making philosophical decisions that should be made explicit rather than buried in conversion factors. We’ve therefore taken the approach of dividing charities into causes by broad philosophical goal … Charities are pitted directly against each other within causes, but not between them.

In other words, while we sometimes argue for one broad cause over another, we ultimately seek to serve donors with many different values. We are a long way from accomplishing this goal.

Progress since 11/08

The focus of the last year has been a report on international aid. The intent was to improve both our depth (i.e., quality of the research and top-rated charities) and our breadth (i.e., the number of “causes,” or different value sets, we can make recommendations for). Our breadth did not improve as much as we had hoped, because we concluded that health has by far the strongest options for donors (and the charities we had previously recommended were in the area of health). We have been able to make only weak recommendations for education and economic empowerment (although better recommendations for the latter are forthcoming as a result of our recently completed grant application process).

Where we stand

Currently, the only charities we recommend with two-star ratings or higher are in the areas of U.S. education (K-12), U.S. early childhood care, and global health. This is enough to serve some donors, but overall gives quite a narrow set of options.

What we can do to improve

We are considering research on areas including:

  • More sub-causes within international aid – including disaster relief/recovery, charities aiming to help orphans and vulnerable children, and more attempts to find a strong microfinance option.
  • U.S. equality of opportunity – our current research on this cause comes from our original (2007-2008) research process. It should be re-done with the revised research process we used to create our current international aid report.
  • Disease research funding (e.g., American Cancer Society).
  • Environmental issues, particularly global warming mitigation.

There are many more charitable causes, but these are the ones we feel it would be most productive to research in the relatively short term (more in a future post).

Bottom line on GiveWell as a donor resource

  • Our international aid report meets our internal quality standards, but should be subjected to more systematic, in-depth, public scrutiny to improve both its quality and the ability of casual donors to judge its credibility.
  • We feel that both our international aid report and our report on U.S. equality of opportunity (which does not meet our internal quality standards) are valuable resources for donors interested in those causes, and more directly useful for impact-focused donors than any other available resources.
  • We offer a very narrow set of causes for donors and should research many more areas.