You can see the criteria we have used so far on our website. Furthermore, all of our reviews should be explicit about the reasoning behind every claim they make and the source of every rating they give. If you don’t believe they are, we want to hear about it.
There is an understandable desire for universal, “objective” criteria in the nonprofit sector. On one hand, we believe strongly in the value of measuring impact, and in the value of statistical and other analytical tools. Perhaps because of our financial backgrounds, we generally find ourselves much more interested in measurement, metrics, and statistics than the charities we talk to are.
On the other hand, the decision of where to donate inherently must be based on both highly incomplete information and on philosophical considerations. A giving decision can always be reasonably criticized and questioned, yet it must always be made. And the few attempts we’ve seen to eliminate all judgment calls, and put all charities in the same terms, have resulted – in our opinion – in absurdities.
The full story of what information we’re going to collect and what metrics we’re aiming to construct is a long one. We are currently writing it up as part of our business plan, and when we’ve expressed ourselves well, we will make it available on our website. For now, our general guiding principles are as follows:
Separate charities into causes. Some goals are too hard to put in the same terms – for example, choosing between improving a New York City public school and providing a clean water source in Africa is something that different donors will feel differently about. It is not realistic, or really even theoretically possible, to collect and process enough information to make everyone agree. Because the choice between these goals has so much philosophy in it, we put the two into different buckets. A “cause” is a set of charities that we feel can reasonably be compared in the same terms – you can see our preliminary list of causes to address here.
Designate the best charities within causes, not between them. Again, the principle is to compare what we can, and let the donor choose where we can’t. It’s not an easy distinction to draw, and we have to balance separating what can’t be compared against doing as much meaningful and systematic comparison as possible.
Within causes, design our metrics around the idea of making a significant impact on people’s lives. We will consistently aim to measure success in terms of people, not in terms of income or life-years or anything else. For disease-fighting charities, this means lives saved; for job-creation charities, this means people who go from indigence to self-sufficiency. We are more interested in how many people a charity serves fully and meaningfully than in the theoretical mathematical product of number of people affected times size of effect. Taking this approach also makes our decisions easier for others to understand and form their own opinions on.
Recognize that the metrics we want will not always be attainable. We have already seen how often the information we want is simply not available, as well as how often the interpretation of it is extremely problematic. We are going to have to use judgment and common sense, like any other donor; the difference is that we will promote
TOTAL, EXTREME TRANSPARENCY. We simply can’t stress the importance of this enough. This is what is already unique about GiveWell, and this is the single quality that makes our evaluations more valuable than anyone else’s. Our reviews don’t just say what we concluded; they don’t just laundry-list our resources; they explain every reason that we think what we think. Anyone and everyone has the power to go through our chain of thought, disagree where they disagree, and use our information and hard work to draw their own conclusions. Furthermore, our reviews are open for all the world to see, comment on, criticize, and improve.
There is no easy way to make giving decisions, and there is no giving decision that is ironclad or close to it. This is probably why grantmakers tend not to share what goes into their decisions; it is also precisely why it is so important to do so. With problems involving this much difficulty and judgment, the single most valuable quality of an evaluation is that it be clear. That is our explicit focus, and if we are ever failing in it we want to hear about it.
Note: I am out of time right now, but will answer the third of the current set of questions within the next couple of days.