This question might be at the core of our disagreements with many:
When you have no information one way or the other about a charity’s effectiveness, what should you assume by default?
Our default assumption, or prior, is that a charity – at least in one of the areas we’ve studied most, U.S. equality of opportunity or international aid – is falling far short of what it promises donors, and very likely failing to accomplish much of anything (or even doing harm). This doesn’t mean we think all charities are failing – just that, in the absence of strong evidence of impact, this is the appropriate starting-point assumption.
Many others seem to have the opposite prior: they assume that a charity is doing great things unless it is proven not to be. These people are shocked that we hand out “0 out of 3 stars” for charities just because so little information is available about them; they feel the burden of proof is on us to show that a charity is not accomplishing good. When someone asks them to give to a charity, they usually give.
This puts us in an odd situation: we have very little interest in bad charities, yet others are far more interested in us when we talk about bad charities. To us, credible positive stories are surprising and interesting; to others, credible negative stories are surprising and interesting.
A good example is Smile Train. Nothing in our recent post is really recent at all – we did all of that investigation in 2006. We’ve known about the problem with Smile Train’s pitch for over three years, and have never written it up because we just don’t care that much.
Since day one of our project, our priority has been to identify top charities. When we see a promising charity, we get surprised and excited and pour time into it; when a charity looks bad, we move on and forget about it. Yet others find a report on a great charity to be mind-numbing, and are shocked and intrigued when they hear that a charity might be failing.
So which prior is more reasonable? Before you have evidence about a charity’s impact, should you assume that it’s living up to its (generally extravagant) promises? Or should you assume that it’s well-intentioned but failing?
We’ll offer up a few points in defense of our position in a future post. If you have the opposite position, please share your thoughts in the comments.
There is a middle ground. There’s a difference between the statement “this charity has not demonstrated that it is doing effective work” and the statement “this charity has not demonstrated that it is doing effective work, therefore I believe it isn’t.” The former is a statement of fact, the latter of faith. (As is, of course, the opposite: “this charity has not demonstrated that it is doing effective work, but I believe it is anyway.”) With that said, I think the skepticism you adopt is healthy for the kind of investigation you’re conducting. Where it would be dangerous is in an institutional funding context, where setting the bar for proven impact too high could stifle innovation and deny new/experimental approaches access to the capital they need to get to the point where demonstrating effectiveness is even possible.
There is a difference between “this charity is new and unproven” and “this decades-old charity has still not demonstrated its effectiveness”. Contributing to the second seems to require more faith than the first (although I would like even the first charity to have a plan for demonstrating its degree of effectiveness).
My assumption is that the charity is doing some good work but perhaps very inefficiently. When you give someone a gift, they may not want it, but actively harming them seems rare, and it’s likely they would trade or sell what they don’t want.
You may be running up against some strong social conventions to avoid criticizing people who give gifts.
Incidentally, I believe your reports on successful charities could be made much more interesting for the general reader if illustrated with suitable anecdotes. While the plural of anecdotes isn’t data, stories are much easier to remember than statistics.
But perhaps that’s more of a job for a reporter or a writer, rather than an analyst.
I’ve been wondering about this for a while, and wondering what it’s like to navigate the conflict between being part of a “non-profit press” – which, like the business press, provides consumers and investors with valuable decision-making information – and being a blogger/project that gets a reputation for going after non-profits that are working really hard against a lot of obstacles.
A couple of times I’ve felt like chastising you for going too far one way or the other, but have tried to hold back, because I do feel like having watchdog groups is really important. What I don’t know is how you give groups the feeling that you’re not just pointing the spotlight at them and taking aim without giving them the opportunity to do better.
One thing I would love to see more of: criticism of institutional funders (primarily foundations). As someone who works with a non-profit, nothing bothers me more. To be honest, I think they’re a bigger problem than individual donors in terms of giving non-profits perverse incentives. And on top of that, they have a really lazy approach to their job.
One example: most Venture Capitalists want to hear about new businesses they should check out. They’ve got money and they’re looking for something to do with it. By contrast, practically every foundation I’ve ever seen has funds that are by invitation only, is rarely clear about the metrics it wants to see, and is usually best worked with when you know someone within prior to applying. Honestly, I think they’re worse than the non-profits you take to task for donor illusion. At least that is a means serving an end.
100% agreed regarding problematic foundations. The secret of receiving institutional gifts is certainly to develop personal connections with key decision-makers in foundations. This inefficiency, however, is really just a symptom of the same donor apathy seen elsewhere: Foundation donors are as notorious for failing to hold their foundations accountable as individual donors are for failing to hold their charities accountable.
The reality is that foundation donors are individuals too with the same biases, motivations, and objectives as other individuals, which usually do not lead them toward impactful giving.
That said, however, you have to pick your battles, and GiveWell has made the decision to provide information to individual donors. I think that’s a reasonable one, because individuals looking for impact are left out in the cold to a much greater extent than foundation donors looking for impact. GiveWell is (for the moment) trying harder to provide information to donors looking for impact than to convince donors to search for impact in the first place.
You are right on in your focus on provable results, but some areas are much easier to evaluate than others. Much easier to count bed nets distributed than evaluate interventions to increase agricultural production. Thus, most of your recommended charities are in the international health area.
I donate to Transparency International. It is very difficult to assess their effectiveness simply by the nature of their work. A bad charity to me is one that is misleading, not transparent, or ineffective in comparison to its peers.
Completely agree that Foundation nonsense and donor apathy are related and respect GiveWell’s focus.
My thought is that being “rougher” on Foundations might be a means to an end. As institutions, they are easier to find information on and have an interest in protecting their name (so will be more responsive to criticisms). Furthermore, because of their size, foundations have the ability to demand impact measurements, where they are feasible (thus making them available for all donors).
Thus by conducting an audit of 15-25 foundations that share one industry (say education) as a focus area, my thought is that GiveWell would be able to unearth a ton of information for impact minded donors (perhaps even ones who might want to contribute to a well-run foundation) in addition to a number of other benefits.
Does that make sense?
Economist Robin Hanson, in an aside comment in a blog post on microlending:
I agree with the skepticism and certainly think that the burden of proof is on the nonprofit. Data on impact is not just a fundraising tool, but a commitment to knowing that you are making a difference. Nonprofits that don’t have a plan for measuring impact, improving, and learning are hands down not as effective or efficient as they could be.
I think perhaps this is a framing issue – you give nonprofits with not enough data 0 out of 3 stars, which implies they’re doing a terrible job (failing). A more accurate rating would be a separate category for “not enough data” – which would more truthfully reflect your own opinion that not all charities with no data are failing.
0 out of 3 stars should be reserved for nonprofits that are proven to have no impact or do harm (are failing). I recognize that such data is likely to go unpublished (or more likely, ungathered), but it’s more truthful. As to the “not enough data” category, it is a nuance – and you’re depending on your audience to be sophisticated enough to understand what that means (it’s bad, very bad). But if GiveWell is serious about transforming the culture of how nonprofits are viewed, then seek to live in the nuance and don’t blend “no impact” with “no data”.
I doubt it’ll stop people criticizing you, but hey, one step at a time.
Ian David Moss and James, I agree that the GiveWell approach is not appropriate for smaller/still-developing charities (more discussion here). To me the core differences are that
Those, to me, are the reasons that investing in smaller/developing charities makes sense. However, they don’t relate to the question of what your “prior” should be for a charity about which you have no information. (As a side point, I agree with James that “large and unproven” implies different things from “small and unproven,” though I’d put the “default assumption” as a low probability of success for either.)
Brian, one thing I don’t think you’re accounting for is the opportunity cost of human resources imposed by giving. We’ll discuss in a future post. It’s also good to keep in mind that many of the “gifts” provided by charities are in forms such that they cannot be sold or transferred (and when they can be sold or transferred, additional concerns enter the picture).
James Edward Dillard and Ian Turner, it’s an interesting point and a tough call regarding foundations. Our thinking to date has been that we’re better positioned to change incentives for charities. Charities are asking for your money and foundations aren’t, and so encouraging people to give to only the most transparent charities creates a very clear link between rewards and good behavior. We have very little sense of what foundation incentives are. And arguably foundations do not have the same responsibility to share information, since they’re not soliciting.
That said, there are good reasons to argue that the current level of opacity at foundations is unacceptable. And it’s possible that they would be quite sensitive to public criticism, since their public image is one of the few concrete ways many seem to measure success. It’s something to think about.
Bill Biggs, charities doing more measurable things are more likely to be able to learn from their mistakes, more likely to be held accountable by donors, and thus – in our view – more likely to succeed. If the goal is to maximize good accomplished rather than to be “fair” to charities on their own terms, then we don’t think it makes sense to “sector-adjust” how strong the case for a charity is. That said, some sectors with less measurable short-term impact could have higher upside, and thus might be worth more risk. We’ll be discussing this point in the future.
Doug S, we prefer not to comment on charged political issues like this one when there isn’t a clear benefit to doing so, but assume for the sake of argument that Prof. Hanson is correct. There is still a good reason to focus on charity, which is that any individual can apply rationality to their giving and accomplish more good. By contrast, agreeing with Prof. Hanson on immigration would still leave you a long way from doing anything to improve the situation. This argument would not apply to a specific person with significant political power, but it applies to the layperson. Some semi-related thoughts at this very old post on my attitude toward politics.
Yi-An, your comment seems to imply that you agree that “no data” = “bad, very bad” (at least as a candidate for an individual/casual donor’s gift). I think that giving 0/3 stars is the best way to accurately communicate this. Our ratings are always accompanied by a “What do these ratings mean?” link.
You say others like to hear you dump on bad charities, but you mainly want to celebrate good ones, to direct folks to donate there. But ridicule and exposure of bad charities might actually do just as much good. People don’t like to be associated with things that can be effectively ridiculed, so the more ridicule bad charities get, the more folks will seek out good ones instead.
I’m joining this discussion a little late (just stumbled across this blog the other day) but I wanted to add to Robin’s comment.
There seems to be value in reporting on both the best AND the worst, especially since different folks have different Bayesian priors. Some will be encouraged to donate to “the best”. Others may wish to mostly rely on their own counsel, but at least they may be convinced to avoid “the worst”.
Even in the unpleasant scenario that this just means that charitable dollars are going to a charity that is doing less harm than “the worst”, you’re still getting a better return on giving than if the info on “the worst” hadn’t been around.
So isn’t there some value in a “stinkers” list?
I think there is value in discussing the bad, as both Robin and Harry say. We have been doing so recently on this blog.
The point of this post was not that we see no value in discussing the bad (we are gradually realizing how much value there is), but that for us personally the good is more interesting/important and this fact points to an interesting difference in priors.
I agree with Harry.
If you don’t discuss bad charities, people may not realize the importance of seeking out good charities.
Let’s assume that we could rank-order charities based on social utility created per dollar donated.
If we identify one set of donors who, like Holden, are somewhat pessimistic about charities on the whole (i.e. believe many of them are ineffective or worse), then the spread in social utility between an elite organization (at the 90th percentile or better) and a very poor one (at the 10th percentile or worse) is quite wide. There is considerable value in an impact-oriented donor really studying things.
On the other hand, there may be a hypothetical optimistic donor, who thinks that most charities are well intentioned, and able to translate those good intentions into useful social impact, such that the spread between a top charity and a much weaker charity is not that great. To this donor, expending a lot of time researching which charities to donate to is not particularly important.
i.e. Spending at least some time discussing charities that appear to be ineffective helps donors understand the spread from a strong charity to a weak one, and thus the importance of avoiding weak charities and hopefully finding strong ones.
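The spread argument above can be illustrated with a minimal numeric sketch. All numbers here are made up purely for illustration: we model the pessimistic prior as a highly skewed (lognormal, high variance) distribution of utility-per-dollar, and the optimistic prior as a tight distribution, then compare the 90th-to-10th-percentile spread under each.

```python
import random

random.seed(0)

def spread(utilities):
    """Gap in utility-per-dollar between the 90th- and 10th-percentile charity."""
    s = sorted(utilities)
    n = len(s)
    return s[int(0.9 * n)] - s[int(0.1 * n)]

# Pessimistic prior: most charities accomplish little, a few are excellent
# (skewed, high-variance lognormal).
pessimistic = [random.lognormvariate(0, 1.5) for _ in range(1000)]

# Optimistic prior: nearly all charities cluster around similar effectiveness
# (tight lognormal).
optimistic = [random.lognormvariate(1, 0.3) for _ in range(1000)]

print(spread(pessimistic))  # large spread: research into which charity pays off
print(spread(optimistic))   # small spread: research matters much less
```

Under the pessimistic prior the gap between a top and a weak charity is several times larger than under the optimistic one, which is exactly why a pessimistic donor should invest in research while an optimistic donor rationally might not.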
It’s possible that, as Phil suggests, talking about bad charities will make people realize the importance of being selective about where you give. But ultimately seeking out bad charities is a pointless exercise: The vast majority of charities (of anything, really) are mediocre, so if you avoid bad charities then you are still likely to end up giving to a mediocre organization. Only by seeking out the spectacular can you obtain stunning results.
Comments are closed.