The GiveWell Blog

Update on GiveWell’s plans for 2013

[Added August 27, 2014: GiveWell Labs is now known as the Open Philanthropy Project.]

Previously, we wrote about the need to trade off time spent on (a) our charities that meet our traditional criteria vs. (b) broadening our research to include new causes (the work we’ve been referring to as GiveWell Labs). This post goes into more detail on the considerations in favor of assigning resources to each, and lays out our working plan for 2013.

Key considerations in allocating resources to traditional criteria vs. GiveWell Labs
We see major advantages to upping our allocation to GiveWell Labs:

  • Most importantly, we would guess that the best giving opportunities are likely to lie outside of our traditional work, and our mission and passion are to find the best giving opportunities we can. Our traditional criteria apply only to a very small subset of possible giving opportunities, and it’s a subset that doesn’t seem uniquely difficult to find funders for. (While there are certainly causes that are easier to raise money for than global health, it’s also the case that governments and large multilateral donors such as GFATM put large amounts of money into the interventions we consider most proven and cost-effective, including vaccines – hence our inability to find un-funded opportunities in this space – as well as bednets, cash transfers and deworming.) While we do believe that being able to measure something is a major plus holding all else equal – and that it’s particularly important for casual donors – we no longer consider ourselves to be “casual,” and we would guess that opening ourselves up to the full set of things a funder can do will eventually lead to substantially better giving opportunities than the ones we’ve considered so far.
  • We believe that we are hitting diminishing returns on our traditional research. We have been fairly thorough in identifying the most evidence-supported interventions and looking for the groups working on them, and we believe it’s unlikely that there are other existing charities that fit our criteria as well as or better than our current top charities. We have previously alluded to such diminishing returns and now feel more strongly about them. We put a great deal of work into our traditional research in 2012, both on finding more charities working on proven cost-effective interventions (nutrition interventions and immunizations) and on more deeply understanding our existing top charities (see Revisiting the case for insecticide-treated nets, Insecticide resistance and malaria control, Revisiting the case for developmental effects of deworming, New Cochrane review of the Effectiveness of Deworming). Yet none of this work changed anything about our bottom-line recommendations; the only change to our top charities came because of the emergence/maturation of a new group (GiveDirectly).

    Putting in so much work without coming to new recommendations (or even finding a promising path to doing so) provides, in our view, a strong sign that we have not been using our resources as efficiently as possible for the goal of finding the best giving opportunities possible. We believe that substantially broadening our scope is the change most likely to improve the situation.

  • GiveWell Labs also has advantages from a marketing perspective – improving our chances of attracting major donors – as discussed previously.

We also see major considerations in favor of maintaining a high level of quality for our more traditional work:

  • GiveWell Labs is still experimental, and we haven’t established that this work can identify outstanding giving opportunities or that there would be broad demand for the recommendations derived from such work. By contrast, we have strong reason to believe that we do our traditional work well and that there is broad and growing demand for it.
  • When giving season arrives this year, many donors (including us) will want concrete recommendations for where to give. At this point, the best giving opportunities we know of are the ones identified by our traditional process, and continuing to follow this process is the best way we know of to find outstanding giving opportunities within a year (though we believe that GiveWell Labs is likely to generate better giving opportunities over a longer time horizon).
  • The survey we recently conducted has lowered the weight that we place on a third potential consideration. We previously believed that “a sizable part of our audience values our traditional-criteria charities and does not value our cause-broadening work.” However, in going through our survey responses, we found that over 95% of respondents were interested in or open to (i.e., marked “1” or “2” for) at least one category of research that falls clearly outside our traditional work (we consider the first two categories on the survey to fall within our traditional “proven cost-effective” framework). Furthermore, over 90% of respondents were interested in or open to at least one category of research that would not be directly connected to evidence-backed interventions at all (i.e., would involve neither funding evidence-backed interventions nor funding the creation of better evidence). Even when looking at the four areas that we believe to be most controversial (the three political-advocacy-related areas and the “global catastrophic risks” area), ~70% of respondents expressed interest in or openness to these categories. These figures were broadly the same whether considering the entire set of survey respondents or subsets such as “donors” and “major donors.”

    More information on the survey is available at the end of this post.
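The “interested in or open to” figures above amount to asking, for each respondent, whether any non-traditional category received a ranking of 1 or 2. A minimal sketch of that tabulation, with invented category names and sample responses (the real data is in GiveWell’s public survey spreadsheet):

```python
# Hypothetical sketch of the survey tabulation described above.
# Category names and sample responses are invented for illustration.

# Assume the first two survey categories fall within the traditional
# "proven cost-effective" framework.
TRADITIONAL = {"proven_net_distribution", "proven_cash_transfers"}

def share_open_to_nontraditional(responses):
    """Fraction of respondents who marked 1 or 2 ("interested in" or
    "open to") for at least one category outside the traditional framework.

    `responses` is a list of dicts mapping category name -> 1-5 ranking.
    """
    def is_open(resp):
        return any(rank <= 2 for cat, rank in resp.items()
                   if cat not in TRADITIONAL)

    if not responses:
        return 0.0
    return sum(is_open(r) for r in responses) / len(responses)

sample = [
    {"proven_net_distribution": 1, "political_advocacy": 2},  # open
    {"proven_net_distribution": 1, "political_advocacy": 5},  # not open
]
print(share_open_to_nontraditional(sample))  # 0.5
```

The same helper, with a different category filter, covers the other cuts reported above (e.g., restricting to the four most controversial areas).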

Our working plan for 2013
The items that we consider essential for our “traditional” work are:

  • Continuing to do charity updates (example) on our existing top charities.
  • Reviewing any charity we come across that looks like it has a substantial chance of meeting our traditional criteria as well as, or better than, our current #1 charity (which would require not only that the charity itself has outstanding transparency, but also that the intervention it works on has an outstanding academic evidence base). We have created an application page for charities that believe they can meet these criteria.
  • Hiring. As mentioned previously, we believe our process has reached a point where we ought to be able to hire, train and manage people to carry it out with substantially reduced involvement from senior staff. We are currently hiring for the Research Associate role, and if we could find strong Research Associates we would be able to be more thorough in our traditional work at little cost to GiveWell Labs.

We plan to execute on all three of these items. We do not plan, in 2013, to prioritize (a) looking more deeply into the academic case for our top charities’ interventions; (b) searching for, and investigating, charities that are likely to be outstanding giving opportunities but less outstanding than our current #1 charity; (c) investigating new ways of delivering proven cost-effective interventions, such as partnering with large organizations via restricted funding; (d) reviewing academic evidence for possibly proven cost-effective interventions that we have not found outstanding charities working on. All four of these items may become priorities again in the future, depending largely on our staff capacity.

Between the above priorities and other aspects of running our organization (administration, processing and tracking donations, outreach, etc.) we have significant work to do that doesn’t fall under the heading of GiveWell Labs research. However, we expect to be able to raise our allocation to GiveWell Labs, to the point where our staff overall puts more total research time into GiveWell Labs than into our traditional work.

For similar reasons to those we laid out last year, we continue to prioritize expanding and maintaining our research above other priorities. We do not expect to put significant time into research vetting (see this recent post on the subject) or exploring new possibilities for outreach (though we will continue practices such as our conference calls and research meetings). We are continuing to see strong growth in money moved without prioritizing these items.

Survey details:
We published our feedback survey publicly and linked to it in a blog post and an email to our email list. We also emailed most people we had on record as having given $10,000 or more in a given year to our top charities, specifically asking them to complete the survey if they hadn’t already done so.

In analyzing the data, we looked at the results for all submissions (minus the ones we had identified as spam or solicitations); for people who had put their name and reported giving to our top charities in the past; for people who reported giving $10k+ to our top charities in the past, regardless of whether they put their name; and for people who we could verify (using their names) as past $10k+ donors. The results were broadly similar with each of these approaches.

We do not have permission to share individual entries, but for questions asking the user to provide a 1-5 ranking (which comprised the bulk of the survey), we provide the number of responses for each ranking, for each of the four categories discussed in the previous paragraph. These are available at our public survey data spreadsheet.
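The per-subset tallies described above can be sketched as follows. The field names (`name`, `gave_10k`) and sample responses are invented for illustration; only the overall shape of the computation is drawn from the text:

```python
from collections import Counter

# Hypothetical sketch of the per-subset ranking tallies described above.
# Field names and sample data are invented for illustration.

def tally_rankings(responses, question):
    """Count how many respondents gave each 1-5 ranking to `question`."""
    return Counter(r["rankings"][question]
                   for r in responses if question in r["rankings"])

responses = [
    {"name": "A",  "gave_10k": True,  "rankings": {"q1": 1}},
    {"name": None, "gave_10k": False, "rankings": {"q1": 3}},
    {"name": "C",  "gave_10k": True,  "rankings": {"q1": 1}},
]

# One tally per respondent subset, e.g. all submissions vs. reported
# $10k+ donors; further subsets are just different filter predicates.
all_tally = tally_rankings(responses, "q1")
major_tally = tally_rankings([r for r in responses if r["gave_10k"]], "q1")
print(all_tally)    # Counter({1: 2, 3: 1})
print(major_tally)  # Counter({1: 2})
```

Checking that the tallies are broadly similar across subsets, as reported above, is then a matter of comparing these Counters.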

GiveWell annual review for 2012: Details on GiveWell’s money moved and web traffic

This is the final post (of five) focused on our self-evaluation and future plans.

This post lays out highlights from our metrics report for 2012. For more detail, see our full metrics report (PDF).

1. In 2012, GiveWell tracked $9.57 million in money moved based on our recommendations, a significant increase over past years.

2. Our #1 charity received about 60% of the money moved and our #2 and #3 charities each received over $1 million as a result of our recommendation. Organizations that we designated “standouts” until November (when we decided not to use this designation anymore) received fairly small amounts. $1.1 million went to “learning grants” (details here and here) that GiveWell recommended to Good Ventures.

3. Growth was robust for every donor size. As in 2011, a majority of growth in overall money moved came from donors giving $10,000 or more.

This table excludes Good Ventures and donations for which we don’t have individual information. More in our full metrics report.

4. Web traffic continued to grow. A major driver of this growth was Google AdWords, which we received for free from Google as part of the Google Grants program. As in prior years, search traffic (both organic and AdWords) provided the majority of the traffic to the website. Traffic tends to peak in December of each year, circled in the chart below.

5. The group of donors giving $10,000 or more has grown from 55 to 96, but the characteristics of this group have changed little. Most of these donors are young (for major donors) and work in finance or technology. Of the $3.2 million donated by major donors who responded to our survey, $1.9 million (60%) came from donors under the age of 40. The most common ways they find GiveWell are through online searches and referral links and through Peter Singer.

6. As in 2011, the most common response when we asked donors giving $10,000 or more ‘how has GiveWell changed your giving?’ was ‘I would have given a similar amount to another international charity.’

What effect has GiveWell had on your giving?

For donors who responded that GiveWell caused them to reallocate their giving, where would you have given in GiveWell’s absence?

7. GiveWell’s website now processes more than twice as much giving as GuideStar’s and about 80% as much as Charity Navigator’s, though it offers far fewer charities as options. This comparison provides evidence that the growth we saw in 2012 is due not to generalized increases in online giving or use of charity evaluators, but rather to GiveWell-specific factors. (Note that the GiveWell figure in this chart includes only what was processed through our website – not all money moved – in order to provide a valid comparison to the others, for which we only have online-giving data.)

Why I didn’t give to the Schistosomiasis Control Initiative last year

Before publishing this post, I sent a draft to Alan Fenwick, Director of the Schistosomiasis Control Initiative, who asked colleagues of his to comment. We asked each person for their permission to post their comments, and we’ve posted those for which we received permission here.


The Schistosomiasis Control Initiative (SCI) is an outstanding giving opportunity compared to nearly every other option out there. A challenge of the work GiveWell does is that we have to communicate the differences between outstanding giving opportunities, because these differences matter both to us (in deciding where we would give) and to our audience.

This post explains why I think the gap between SCI and our top two charities is substantial. I have enough reservations about SCI that (uniquely among GiveWell staff members) I did not allocate any of my personal giving to it last giving season. (I gave 75% of my gift to AMF and 25% to GiveDirectly.)

While my thoughts have been alluded to in already-public content (see our discussion of the relative merits of our top three charities as well as our review of SCI), conversations with donors have given me the continuing sense that the weight I, personally, put on these considerations hasn’t been made fully clear. I think it’s important to do so, and that’s the intent of this post. I think SCI is an outstanding giving opportunity in the scheme of things, but I want to be as clear as I can be about how I think about giving to them, in the spirit of transparency and open dialogue about the best giving opportunities. These views do not represent a change in GiveWell’s ranking or suggested allocation.

My position is not a function of doubts about the strength of the evidence for deworming or SCI’s track record. Deworming is an outstanding intervention, and I am on board with the analysis we’ve published about its relative cost-effectiveness. SCI has an impressive track record. As far as we can tell, it has repeatedly been involved in large-scale, successful deworming programs.

So why did I decide to give to other organizations instead of SCI?

Though I, personally, have spent tens of hours speaking with Professor Fenwick and other SCI staff and reviewing SCI documents (and other GiveWell staff have spent hundreds of hours speaking with Professor Fenwick and analyzing SCI’s documents) over the past four years, I still do not have a concrete, specific understanding of how SCI has allocated funds and its specific value added. My understanding could be summarized in the following way: deworming is an outstanding program; SCI is involved in deworming programs; the programs with which it has been involved in the past have had good results; it requests additional funds.

In general, I feel that I’ve experienced a strong pattern in which uncovering new information about an organization or intervention (which I previously understood only at a superficial level) tends to lower rather than raise my confidence in it. As a result, I’ve started to adjust my confidence downward for organizations that I understand less well, where I have questions about how they work or spend money.

Good examples of this dynamic are organizations GiveWell rated highly earlier in its history but no longer recommends. Although in some cases, the change in ranking was due to a change in GiveWell’s approach, in most cases, continued analysis of a charity led to new information that shifted our view about the likely impact of their program.

The fact that I still have a relatively limited understanding of SCI’s use of funds (a) contrasts with our other two top organizations, and makes me relatively more concerned about SCI’s overall capabilities as an organization (for an example of the sort of thing I’m concerned about, see this exchange); (b) leads me to believe there’s a higher probability that we’ve missed important information about SCI that would lower our confidence if we had it.

The experience of having important unanswered questions also applies to following SCI’s progress since we gave it a top ranking. We have now recommended SCI for over a year (and have been carefully following it since mid-2009), and I feel that we’ve learned relatively little about its progress in that time. (See our updates on SCI.)

This experience is also relevant because I see limited opportunity to learn from SCI in the future, which undermines the argument we’ve given for supporting multiple charities. In my view, learning should be a key goal of giving, especially to organizations that are not #1. While the rest of GiveWell staff are not fully on board with the points I’m raising in this post, they do agree that thinking about our prospects for learning from SCI in the future will play an important role in deciding whether we should aim to direct more funding to it in the future.

While I’m not able to pin down more specific concerns about SCI – i.e., I’m concerned because I haven’t been able to answer important questions and in the past answering previously unanswered questions has led organizations to move off our top-rated list – I have some theories about what we might be missing.

Although SCI has an impressive track record, it’s worth noting that its major achievements have been in partnership with major funders (Gates Foundation, USAID, DFID, Geneva Global) and it is not clear to me how large a role these funders played and how much credit they deserve for SCI’s past successes. (See the relevant section of our review of SCI.) One can easily imagine a model in which countries agree to work with SCI and implement a deworming program largely because a major funder is behind the program. This could simply be because the funder can commit all the funding a program needs (so the country knows the program will move forward) or because a major funder exerts its influence over a country to convince it to implement a program.

SCI is now using unrestricted funding to try to start major programs without the backing of a major funder. Its largest use of unrestricted funding to date is in attempts to start a deworming program in Ethiopia. SCI is attempting this on its own, with fewer financial resources and diminished non-financial major funder support relative to what it had in its past successes.

To me, the best argument for supporting SCI is that (a) deworming is an excellent intervention; (b) SCI is a large, long-standing, credible organization that focuses on deworming and has no red flags. However, I also think this line of argument might apply to many other organizations that we’ve looked into briefly but stopped investigating because we found it challenging to get sufficient information about their track records or had questions about their room for more funding. These organizations include Deworm the World, the Center for Neglected Tropical Diseases, the African Programme for Onchocerciasis Control, the Measles and Rubella Initiative, and UNICEF’s Maternal and Neonatal Tetanus program.

The same was true about SCI when we first approached it in 2009, but because of the nature of our research process at that time, we were more willing to spend significant time trying to convince an organization to share information with us. SCI shared a significant amount of information with us about its past activities, but my intuition is that were we to spend comparable time on these other organizations, we would ultimately reach a similar level of understanding about their activities.

It’s true that we have carefully analyzed deworming as a program and found it to be among the most cost-effective programs we’ve considered. We have yet to assess the programs run by the organizations listed above, but my intuition again is that were we to analyze these programs – measles immunization, maternal and neonatal tetanus immunization, lymphatic filariasis control, onchocerciasis control – we would find programs that are as strong or nearly as strong as deworming.


External evaluation of our research

We’ve long been interested in the idea of subjecting our research to formal external evaluation. We publish the full details of our analysis so that anyone may critique it, but we also recognize that it can take a lot of work to digest and critique our analysis, and we want to be subjecting ourselves to constant critical scrutiny (not just to the theoretical possibility of it).

A couple of years ago, we developed a formal process for external evaluations, and had several such evaluations conducted and published. However, we haven’t had any such evaluations conducted recently. This post discusses why.

In brief,

  • The challenges of external evaluation are significant. Because our work does not fall cleanly into a particular discipline or category, it can be difficult to identify an appropriate reviewer (particularly one free of conflicts of interest) and provide enough structure for their work to be both meaningful and efficient. We put a substantial amount of capacity into structuring and soliciting external evaluations in 2010, and if we wanted more external evaluations now, we’d again have to invest a lot of our capacity in this goal.
  • The level of in-depth scrutiny of our work has increased greatly since 2010. While we would still like to have external evaluations, all else equal, we also feel that we are now getting much more value than previously from the kinds of evaluations that we ultimately would guess are most useful – interested donors and other audience members scrutinizing the parts of our research that matter most to them.

Between these two factors, we aren’t currently planning to conduct more external evaluations in the near future. However, we remain interested in external evaluation and hope eventually to make frequent use of it again. And if someone volunteered to do (or facilitate) formal external evaluation, we’d welcome this and would be happy to prominently post or link to criticism.

The challenges of external evaluation

The challenges of external evaluation are significant:

  • There is a question of who counts as a “qualified” individual for conducting such an evaluation, since we believe that there are no other organizations whose work is highly similar to GiveWell’s. Our work is a blend of evaluating research and evaluating organizations, and it involves both in-depth scrutiny of details and holistic assessments of the often “fuzzy” and heterogeneous evidence around a question.

    On the “evaluating research” front, one plausible candidate for “qualified evaluator” would be an accomplished development economist. However, in practice many accomplished development economists (a) are extremely constrained in terms of the time they have available; (b) have affiliations of their own (the more interested in practical implications for aid, the more likely a scholar is to be directly involved with a particular organization or intervention) which may bias evaluation.

  • Based on past work on external evaluation, we’ve found that it is very important for us to provide a substantial amount of structure for an evaluator to work within. It isn’t practical for someone to go over all of our work with a fine-toothed comb, and the higher-status the person, the more of an issue this becomes. Our current set of evaluations is based on old research, and to have new evaluations conducted, we’d need to create new structures based on current research. This would take trial-and-error in terms of finding an evaluation type that produces meaningful results.
  • There is also the question of how to compensate people for their time: we don’t want to create a pro-GiveWell bias by paying, but not paying further limits how much time we can ask.

I felt that we found a good balance with a 2011 evaluation by Prof. Tobias Pfutze, a development economist. Prof. Pfutze took ten hours to choose a charity to give to – using GiveWell’s research as well as whatever other resources he found useful – and we “paid” him by donating funds to the charity he chose. However, developing this assignment, finding someone who was both qualified and willing to do it, and providing support as the evaluation was conducted involved significant capacity.

Given the time investment these sorts of activities require on our part, we’re hesitant to go forward with one until we feel confident that we are working with the right person in the right way and that the research they’re evaluating will be representative of our work for some time to come.

Improvements in informal evaluation

Over the last year, we feel that we’ve seen substantially more deep engagement with our research than ever before, even as our investments in formal external evaluation have fallen off.

Where we stand

We continue to believe that it is important to ensure that our work is subjected to in-depth scrutiny. However, at this time, the scrutiny we’re naturally receiving – combined with the high costs and limited capacity for formal external evaluation – makes us inclined to postpone major effort on external evaluation for the time being.

That said,

  • If someone volunteered to do (or facilitate) formal external evaluation, we’d welcome this and would be happy to prominently post or link to criticism.
  • We do intend eventually to re-institute formal external evaluation.

GiveWell’s plan for 2013: A top-level decision

[Added August 27, 2014: GiveWell Labs is now known as the Open Philanthropy Project.]

This is the fourth post (of five) we’re planning to make focused on our self-evaluation and future plans. The final post will be our metrics report.

One of the major questions we grappled with in 2012 – and probably the single biggest open question at this moment – is how to prioritize researching charities that meet our traditional criteria vs. broadening our research to include new causes (the work we’ve previously referred to as GiveWell Labs).

We discussed this tradeoff previously, saying that we would put enough work into our traditional research to “meet demand” and would otherwise be prioritizing research-broadening work. We believe this approach did not work well and needs to be changed, because

  • “Meeting demand” for charities that meet our traditional criteria arguably includes not just identifying top charities, but investigating them deeply. Over the course of 2012, we spent significant time deepening our understanding of our top charities and their interventions (see Revisiting the case for insecticide-treated nets, Insecticide resistance and malaria control, Revisiting the case for developmental effects of deworming, New Cochrane review of the Effectiveness of Deworming). This work ended up taking significant co-founder time, and conceptually we believe that there is no limit to how deeply we can investigate our top charities. (This isn’t to say that we know such depth is what our audience requires; this is one of the things we’re hoping to learn more about, as discussed below.)
  • As discussed previously, we made much less progress on cause-broadening work than we had hoped to, largely because of the above point.
  • We believe that continuing to improve our offerings under our traditional criteria is not the most efficient way to find the best possible giving opportunities. However, we also believe (though, again, we could be wrong and are hoping to learn more) that a sizable part of our audience values our traditional-criteria charities and does not value our cause-broadening work.

We now see the situation as potentially involving two different audiences for GiveWell’s work, with different values and priorities; rather than claiming we can fully serve both (we’re too resource-constrained to do so), we need to explicitly define and segment the audiences, determine how to assign resources to each, and do the best we can for each within the resource constraints we set.

The next step for us is to get better data on the extent to which, and way in which, our audience is divided. We need to know which of our followers are most interested in our cause-broadening work as opposed to our traditional criteria, and what aspects of each are most important to them.

Accordingly, we have created a survey for GiveWell followers, seeking to gauge the appeal of different possible paths we could take to different parts of our audience. We are using the survey as only one factor to determine how to move forward, but we would very much appreciate participation from any followers.

Following our collection and analysis of survey results, we will publish further content regarding how we plan to segment our work and what we believe we will be able to deliver. We will not publish further discussion of our 2013 plans until then.

(Note that our most recent Board meeting focused on the issues discussed here, for those who are interested in listening.)

Take the survey for GiveWell followers

GiveWell is hiring

GiveWell continues to seek outstanding people to join our team. In addition to the Research Analyst position for which we’ve been accepting applications, we’ve also recently posted another job opening.

For the Research Associate position, we seek someone with a strong background in causal inference and quantitative research methods either through academic coursework at the graduate level (e.g. a masters degree in economics, statistics or a heavily-quantitative social science) or previous work experience.

We are hiring both for full-time positions and for paid summer internships for students entering their final year of school. Both positions would be located in San Francisco, where GiveWell is based.

More details about both positions on our jobs page.