Our process: Narrowing the field

One of the aspects of our research process that has generated some objections is our use of “heuristics,” i.e., shortcuts to winnow the field of charities under consideration from 300+ to a manageable number for closer investigation. The heuristics we use are described here. A good statement of the objections comes from this comment at the Hatrack forums:

I don’t care if a charity’s evaluation and monitoring reports are on their website, as long as they are publicly available in some way. And while I agree with many of their priorities, 75% of funding or more matching a list of specific programs is not vetting cost-effectiveness, it’s vetting whether or not the organization has the same priorities as GiveWell does.

This post briefly explains and defends our approach. It does not discuss our criteria (proven, cost-effective, scalable, transparent), but rather the shortcuts we use to identify the charities most likely to meet those criteria.

The most important thing to know is that we are always ready to look at charities that don’t pass these heuristics, if they meet our broader criteria. If you know of such charities, please alert us using our submission form. If it appears that the information we require is available – whether or not it’s available on the charity’s website – we will change a charity’s status to “Pending” until we have reviewed it more thoroughly.

Why do we look at what information is available on a charity’s website, instead of searching more comprehensively and contacting them directly?

We have found that going back and forth with charities to see what they have internally is extremely difficult and time-consuming for both us and them. We are generally first connected to fundraising staff, and it takes a lot of communication and waiting just to reach someone who knows what information is available. Repeating this process for all 300+ charities we have examined would not be practical, so we use a heuristic to identify the most promising candidates for further investigation.

We are explicit that our research is constrained by practical considerations. Our goal is not to be “perfect” in our assessments but rather to provide better information than donors can find anywhere else.

We do contact all rated charities to let them know about their status and how they can change it if they feel we are in error.

Is there independent evidence that “what information is on the website?” is a reasonable proxy for “what information is available at all?”

Yes. We have also used alternative research methods that involve much more back-and-forth with charities, and we feel that the results support the “website scanning” heuristic as an imperfect but reasonably good predictor of which charities actually have the information we require (particularly evidence of impact).

  • Our first-year research process involved applications for grants of $25,000-$40,000. All non-confidential application materials have been publicly posted. For all five charities that earned a 2-star or better rating through this process, the primary evidence of impact we used is available on or via their websites.
  • We’re currently conducting a grant application process for $250,000 and will be publishing the full details of what it turns up in early 2010.

Also note that our “website scanning” heuristic is similar to the method used by William Easterly and Tobias Pfutze to rate aid agencies (PDF). Like them, we aim to reward organizations that have both good practices and the transparency to share those practices publicly.

Do we require that charities be running “priority programs” in order to receive further investigation and/or high ratings?

No. The two heuristics we use are joined by “or,” not “and”: we don’t require charities to share our program priorities. Rather, we investigate charities that do share these priorities even if they don’t pass the other heuristic. We do this because we have enough capacity to investigate some “extra” charities in depth, but not all 300+.

Why do we issue ratings to charities that don’t pass our heuristics, rather than simply marking them as “Not examined”?

We feel it would be misleading to simply say “not examined” for the charities that didn’t pass the heuristics. Given the constraints of what information is available and what’s practical, we feel strongly that there is a better case for the highly-rated charities than for the examined-but-not-rated charities. By contrast, a charity that doesn’t appear at all is one we simply haven’t looked at.

We feel it is accurate and important to call our top-rated charities the standouts (by our criteria) from a field of 300+.

Comments

  • Ian Turner on November 6, 2009 at 9:22 am said:

    Can you elaborate on the difference between “no stars” and “not eligible for review”? Perhaps it is worthwhile to draw a distinction between “not rated” and “not examined”.

  • Holden on November 6, 2009 at 5:48 pm said:

    Not eligible for review (listed here): we have looked at this charity, but their activities don’t fit into the framework we used for international aid and/or we haven’t looked into the issues they work on. So we choose not to issue any judgment.

    0 stars: we have looked at this charity and it does fit into the framework we used. Given available information, we recommend that a donor interested in this charity’s kind of work give somewhere else.

    Not listed: we haven’t looked at this charity at all, and if it works in international aid, we should. Let us know if you know of charities like this.
