I started work at GiveWell six months ago, just a few weeks after graduating from college. I had been following GiveWell pretty intensely for more than a year, since I had gotten back from my own trip to India. During that time, I had become a little obsessed: I had read the entire history of the blog and had gotten really excited each time GiveWell finally posted the audio from the most recent board meeting.
Even as a serious GiveWell fan, though, there were a number of things about the organization that I didn't know but should have. These aren't secrets or titillating stories about office politics, just some things I've learned since starting that I didn't know before.
The biggest challenge remains “find outstanding giving opportunities” – not “get more eyeballs.” I wasn’t totally ignorant about the difficulty of finding outstanding giving opportunities, but I thought that GiveWell was clearly doing so better than anyone else working publicly, and that accordingly it should focus more on outreach than on improving research. As an outsider, I didn’t have a good sense of how much went into the recommendations, or of all the work that goes into charities that don’t end up receiving recommendations. I didn’t think it was easy, but it seemed like Holden and Elie pretty much had it under control, and that there was lots of low-hanging fruit on the outreach side.
As far as I can tell now, neither of those things is really true.
On the outreach front, GiveWell had already tried or looked into many different strategies, even if they hadn’t blogged about it. And because our users generally aren’t typical donors, a lot of the things that charities normally do to cultivate donors might be actively harmful for us. (But we definitely haven’t thought of everything, so please do let us know or comment if you have ideas for how we could “sell” our research better.)
On the research front, although it isn’t very hard to come up with better recommendations than other charity evaluators, we face two problems I hadn’t fully considered:
- Room for more funding. Because a number of large funders are scooping up excellent funding opportunities in global health, many good chances to help people are already taken. We need to find charities that are good bets, but not so obviously good that they have all the funding they can productively use.
- Our competition isn’t other charity rating organizations. This is about the baseline that GiveWell’s recommendations are compared to, rather than the competition for funding opportunities. For a long time, it has seemed natural for people to compare GiveWell to Charity Navigator or Philanthropedia, but as we continue to grow, I think that comparison becomes less and less salient.
As we raise our ambitions with projects like GiveWell Labs, we will be “competing” not with other charity evaluators but with foundations. Because some foundations are extremely strategic, well-resourced, and focused on the same goal of doing as much good as possible, finding better giving opportunities than they do is a much higher bar to clear.
Both of these problems become harder as GiveWell grows, because we’ll need to create more “room for money moved” and will more naturally be compared to foundations rather than other charity evaluators.
Our standards for what we expect from charities are a moving target. This is true in two senses:
- Our standards for giving opportunities and evidence are generally higher than they used to be. Although we thought in December 2011 that our top charities for 2011 might not be as good opportunities as VillageReach was in December 2010, they had both been through a much more thorough wringer than VillageReach had been when we first recommended it. On the evidence side, just compare our intervention reports for immunizations (VillageReach), bednets (AMF), and deworming (SCI). The latter two are much more thorough.
- We’re willing to accept less evidence from a specific charity if we have more confidence in an intervention. This dynamic played an important role in our decision between our top two charities. SCI had significant evidence that it had historically actually decreased the prevalence of worm infections, while AMF had very limited evidence that people actually used the bednets that they distributed. The overall evidence that people do generally use bednets that they’re given was stronger, however, than the evidence that deworming has developmental effects, which was crucial to the case for SCI. Even though SCI had more evidence that its own activities were successful, we thought that the evidence for AMF’s intervention made it a better choice.
The majority of the useful evidence we rely on to produce our ratings does not come from charities themselves. The VillageReach review, which focused on evidence from VillageReach’s pilot project, and Holden’s exchange with Giving What We Can, in which he argued for focusing on charities more than interventions, had made me think that GiveWell’s research focused overwhelmingly on charities rather than interventions. That impression was incorrect. I personally have spent far more of my time at GiveWell working on intervention-level research than on research into individual charities, and while I’m not sure of the overall breakdown across GiveWell, it’s definitely not a blowout for charity-level research. Part of the reason for that is that high-quality evidence actually exists for many interventions; generally, that’s not the case for charities, even outstanding ones. This isn’t really much of a criticism of charities, since it wouldn’t be a good allocation of resources for, e.g., every deworming charity to be doing research about the developmental effects of its own specific deworming projects.
GiveWell is really skeptical of academic research, even though we use it all the time. I’m still not sure if this level of skepticism is appropriate, but it seems to be a rational response to all the potential issues with publication bias. My impression is that the typical response to findings like “most published research findings are false” is to bemoan the structural issues and then ignore them when interpreting any particular study. GiveWell doesn’t do that.
I knew before I started that GiveWell really liked randomized controlled trials, but I thought that preference was due to econometric naivete rather than a principled preference. Now I know that one of the reasons why GiveWell prefers RCTs is that they make it harder for researchers to fudge results, intentionally or unintentionally, and thus hopefully make publication bias less of a problem.
Our research focuses more on global health than economic development. This realization struck me in October as I found myself reading articles on schistosomiasis from parasitology journals published in the 1970s, though it was obvious in hindsight given the charities GiveWell had recommended. I think the discussions of randomized controlled trials and the general trappings of cost-effectiveness analysis make it easy to think of our work as economic, and we do share many of the sensibilities and interests of the “randomista” movement in development economics, but in subject matter most of our research is focused on health.
In addition to navel-gazing about GiveWell, I’ve gotten to do some pretty neat stuff while I’ve been here: I went to Malawi, found some big errors in a WHO cost-effectiveness calculation for deworming, and had some really interesting conversations about how to do the most good. So, just in case you’re interested, we’re still hiring.