The GiveWell Blog

More on charity ratings and GiveWell’s mission

Last week we wrote that we are likely going to stop giving zero-to-three-star ratings to charities. Some praised the decision, while others offered suggestions for how to resolve the problems we listed with ratings (we responded to the suggestions in the thread).

While we went into a lot of detail on the problems with ratings, I’ve realized that we didn’t talk enough about the benefits of ratings – and how those benefits don’t fit with GiveWell’s core audience and mission as we’re thinking about it now.

As we mentioned last week, we aren’t getting rid of our charity evaluations, or recommendations, or rankings of recommendations. We’re still going to disclose which charities we like best, and in what order. We’re still going to say what we think of each charity we’ve looked at, whether extensively or briefly.

What we’re getting rid of is the ability to get a quick, quantified rating of any charity we’ve looked at. In some ways that’s a big thing to lose; it’s arguably Charity Navigator’s greatest strength in attracting broad interest and attention. A lot of the people I’ve talked to about GiveWell aren’t particularly interested in a charity recommendation; they want to know “how good” the charity their friend is running a marathon for is, or the charity that sends them mail. As the Money for Good study puts it, they’re seeking “to validate their donation, not to choose between organizations” (main study, page 40). And sometimes, they also seem to be excited by the possibility of a scandal, i.e., revealing that a particular charity is “bad.”

GiveWell’s mission has never been about serving these people. We seek to drive money to outstanding charities, and in so doing, to change incentives and allow some charities to raise money by doing demonstrably great work (instead of just by telling a great story).

We feel that if all you want from a charity evaluator is to check whether the charity that contacted you is “bad or OK,” you’ve already thrown away most of the opportunity to do as much good as possible with your donation.

For these reasons, our work has always been a poor fit for what a lot of donors want, and there has always been a tension between fulfilling our mission directly (which means focusing obsessively on charities we find promising and ignoring other charities) and doing things that might attract more attention and broaden our audience (such as publishing the star ratings that many want, or “digging up dirt” on big-name charities).

We originally introduced star ratings because we thought they would broaden our reach. The reactions we’ve gotten, particularly regarding how we rate the vast majority of charities that share no substantive information (the toughest aspect of designing an appropriate ratings system), have suggested to us that there is actually very little to gain in this way. People who come to us for validation of their existing choice of charity are usually going to see that we don’t find the charity promising. As a result, rather than become interested in the work we do, they’re generally going to be disappointed and even upset.

In many ways, we’re better off making it clear from the beginning that we don’t have what these people are looking for, and focusing exclusively on the donors who do fit what we’re about: those who approach their donation decisions without pre-commitments and with the intent of giving to the best charity possible.


  • Brian Slesinsky on October 9, 2010 at 2:53 pm said:

    This seems like a good move if combined with more promotion of your “top-rated charities” list. As many magazines have proven, people do like ranked lists.

    Such a list invites the question “how was this list chosen?” and that’s a natural segue into your research.
