The GiveWell Blog

Why I didn’t give to the Schistosomiasis Control Initiative last year

Before publishing this post, I sent a draft to Alan Fenwick, Director of the Schistosomiasis Control Initiative, who asked colleagues of his to comment. We asked each person for permission to post their comments, and we’ve posted those for which we received permission here:


The Schistosomiasis Control Initiative (SCI) is an outstanding giving opportunity compared to nearly every other option out there. A challenge of the work GiveWell does is that we have to communicate the differences between outstanding giving opportunities, because these differences matter to us (in deciding where we would give) and to our audience.

This post explains why I think the gap between SCI and our top two charities is substantial. I have enough reservations about SCI that (uniquely among GiveWell staff members) I did not allocate any of my personal giving to it last giving season. (I gave 75% of my gift to AMF and 25% to GiveDirectly.)

While my thoughts have been alluded to in already-public content (see our discussion of the relative merits of our top three charities as well as our review of SCI), conversations with donors have given me the continuing sense that the weight I, personally, put on these considerations hasn’t been made fully clear. I think it’s important to do so, and that’s the intent of this post. I think SCI is an outstanding giving opportunity in the scheme of things, but I want to be as clear as I can be about how I think about giving to them, in the spirit of transparency and open dialogue about the best giving opportunities. These views do not represent a change in GiveWell’s ranking or suggested allocation.

My position is not a function of doubts about the strength of the evidence for deworming or SCI’s track record. Deworming is an outstanding intervention, and I am on board with the analysis we’ve published about its relative cost-effectiveness. SCI has an impressive track record. As far as we can tell, it has repeatedly been involved in large-scale, successful deworming programs.

So why did I decide to give to other organizations instead of SCI?

Though I, personally, have spent tens of hours speaking with Professor Fenwick and other SCI staff and reviewing SCI documents (and other GiveWell staff have spent hundreds of hours speaking with Professor Fenwick and analyzing SCI’s documents) over the past 4 years, I still do not have a concrete, specific understanding of how SCI has allocated funds and its specific value added. My understanding could be summarized in the following way: deworming is an outstanding program; SCI is involved in deworming programs; the programs with which it has been involved in the past have had good results; it requests additional funds.

In general, I feel that I’ve experienced a strong pattern in which uncovering new information about an organization or intervention (which I previously understood only at a superficial level) tends to lower rather than raise my confidence in it. As a result, I’ve started to adjust my confidence downward for organizations that I understand less well, where I have questions about how they work or spend money.

Good examples of this dynamic are organizations GiveWell rated highly earlier in its history but no longer recommends. In some cases the change in ranking was due to a change in GiveWell’s approach, but in most cases, continued analysis of a charity surfaced new information that shifted our view about the likely impact of its program.

The fact that I still have a relatively limited understanding of SCI’s use of funds (a) contrasts with our other two top organizations, and makes me relatively more concerned about SCI’s overall capabilities as an organization (for an example of the sort of thing I’m concerned about, see this exchange); (b) leads me to believe there’s a higher probability that we’ve missed important information about SCI that would lower our confidence if we had it.

The experience of having important unanswered questions also applies to following SCI’s progress since we gave it a top ranking. We have now recommended SCI for over a year (and have been carefully following it since mid-2009), and I feel that we’ve learned relatively little about its progress in that time. (See our updates on SCI.)

This experience is also relevant because I see limited opportunity to learn from SCI in the future, which undermines the argument we’ve given for supporting multiple charities. In my view, learning should be a key goal of giving, especially to organizations that are not #1. While the rest of GiveWell staff are not fully on board with the points I’m raising in this post, they do agree that thinking about our prospects for learning from SCI in the future will play an important role in deciding whether we should aim to direct more funding to it in the future.

While I’m not able to pin down more specific concerns about SCI – i.e., I’m concerned because I haven’t been able to answer important questions and in the past answering previously unanswered questions has led organizations to move off our top-rated list – I have some theories about what we might be missing.

Although SCI has an impressive track record, it’s worth noting that its major achievements have been in partnership with major funders (Gates Foundation, USAID, DFID, Geneva Global) and it is not clear to me how large a role these funders played and how much credit they deserve for SCI’s past successes. (See the relevant section of our review of SCI.) One can easily imagine a model in which countries agree to work with SCI and implement a deworming program largely because a major funder is behind the program. This could simply be because the funder can commit all the funding a program needs (so the country knows the program will move forward) or because a major funder exerts its influence over a country to convince it to implement a program.

SCI is now using unrestricted funding to try to start major programs without the backing of a major funder. Its largest use of unrestricted funding to date is in attempts to start a deworming program in Ethiopia. SCI is attempting this on its own, with fewer financial resources and diminished non-financial major funder support relative to what it had in its past successes.

To me, the best argument for supporting SCI is that (a) deworming is an excellent intervention; (b) SCI is a large, long-standing, credible organization that focuses on deworming and has no red flags. However, I also think this line of argument might apply to many other organizations that we’ve looked into briefly but stopped investigating because we found it challenging to get sufficient information about their track records or had questions about their room for more funding. These organizations include Deworm the World, the Center for Neglected Tropical Diseases, the African Programme for Onchocerciasis Control, the Measles and Rubella Initiative, and UNICEF’s Maternal and Neonatal Tetanus program.

The same was true of SCI when we first approached it in 2009, but because of the nature of our research process at that time, we were more willing to spend significant time trying to convince an organization to share information with us. SCI shared a significant amount of information with us about its past activities, but my intuition is that were we to spend comparable time on these other organizations, we would ultimately reach a similar level of understanding about their activities.

It’s true that we have carefully analyzed deworming as a program and found it to be among the most cost-effective programs we’ve considered. We have yet to assess the programs run by the organizations listed above, but my intuition again is that were we to analyze these programs – measles immunization, maternal and neonatal tetanus immunization, lymphatic filariasis control, onchocerciasis control – we would find programs that are as strong or nearly as strong as deworming.


External evaluation of our research

We’ve long been interested in the idea of subjecting our research to formal external evaluation. We publish the full details of our analysis so that anyone may critique it, but we also recognize that it can take a lot of work to digest and critique our analysis, and we want to be subjecting ourselves to constant critical scrutiny (not just to the theoretical possibility of it).

A couple of years ago, we developed a formal process for external evaluations, and had several such evaluations conducted and published. However, we haven’t had any such evaluations conducted recently. This post discusses why.

In brief,

  • The challenges of external evaluation are significant. Because our work does not fall cleanly into a particular discipline or category, it can be difficult to identify an appropriate reviewer (particularly one free of conflicts of interest) and provide enough structure for their work to be both meaningful and efficient. We put a substantial amount of capacity into structuring and soliciting external evaluations in 2010, and if we wanted more external evaluations now, we’d again have to invest a lot of our capacity in this goal.
  • The level of in-depth scrutiny of our work has increased greatly since 2010. While we would still like to have external evaluations, all else equal, we also feel that we are now getting much more value than previously from the kind of evaluation we would guess is ultimately most useful – interested donors and other audience members scrutinizing the parts of our research that matter most to them.

Between these two factors, we aren’t currently planning to conduct more external evaluations in the near future. However, we remain interested in external evaluation and hope eventually to make frequent use of it again. And if someone volunteered to do (or facilitate) formal external evaluation, we’d welcome this and would be happy to prominently post or link to criticism.

The challenges of external evaluation

The challenges of external evaluation are significant:

  • There is a question about who counts as a “qualified” individual for conducting such an evaluation, since we believe that there are no other organizations whose work is highly similar to GiveWell’s. Our work is a blend of evaluating research and evaluating organizations, and it involves both in-depth scrutiny of details and holistic assessments of the often “fuzzy” and heterogeneous evidence around a question.

    On the “evaluating research” front, one plausible candidate for “qualified evaluator” would be an accomplished development economist. However, in practice many accomplished development economists (a) are extremely constrained in terms of the time they have available; (b) have affiliations of their own (the more interested in practical implications for aid, the more likely a scholar is to be directly involved with a particular organization or intervention) which may bias evaluation.

  • Based on past work on external evaluation, we’ve found that it is very important for us to provide a substantial amount of structure for an evaluator to work within. It isn’t practical for someone to go over all of our work with a fine-toothed comb, and the higher-status the person, the more of an issue this becomes. Our current set of evaluations is based on old research, and to have new evaluations conducted, we’d need to create new structures based on current research. This would take trial and error to find an evaluation format that produces meaningful results.
  • There is also the question of how to compensate people for their time: we don’t want to create a pro-GiveWell bias by paying, but not paying further limits how much time we can ask.

I felt that we found a good balance with a 2011 evaluation by Prof. Tobias Pfutze, a development economist. Prof. Pfutze took ten hours to choose a charity to give to – using GiveWell’s research as well as whatever other resources he found useful – and we “paid” him by donating funds to the charity he chose. However, developing this assignment, finding someone who was both qualified and willing to do it, and providing support as the evaluation was conducted required significant staff capacity.

Given the time investment these sorts of activities require on our part, we’re hesitant to go forward with one until we feel confident that we are working with the right person in the right way and that the research they’re evaluating will be representative of our work for some time to come.

Improvements in informal evaluation

Over the last year, we feel that we’ve seen substantially more deep engagement with our research than ever before, even as our investments in formal external evaluation have fallen off.

Where we stand

We continue to believe that it is important to ensure that our work is subjected to in-depth scrutiny. However, at this time, the scrutiny we’re naturally receiving – combined with the high costs and limited capacity for formal external evaluation – makes us inclined to postpone major effort on external evaluation for the time being.

That said,

  • If someone volunteered to do (or facilitate) formal external evaluation, we’d welcome this and would be happy to prominently post or link to criticism.
  • We do intend eventually to re-institute formal external evaluation.

GiveWell’s plan for 2013: A top-level decision

[Added August 27, 2014: GiveWell Labs is now known as the Open Philanthropy Project.]

This is the fourth post (of five) we’re planning to make focused on our self-evaluation and future plans. The final post will be our metrics report.

One of the major questions we grappled with in 2012 – and probably the single biggest open question at this moment – is how to prioritize researching charities that meet our traditional criteria vs. broadening our research to include new causes (the work we’ve previously referred to as GiveWell Labs).

We discussed this tradeoff previously, saying that we would put enough work into our traditional research to “meet demand” and would otherwise be prioritizing research-broadening work. We believe this approach did not work well and needs to be changed, because

  • “Meeting demand” for charities that meet our traditional criteria arguably includes not just identifying top charities, but investigating them deeply. Over the course of 2012, we spent significant time deepening our understanding of our top charities and their interventions (see Revisiting the case for insecticide-treated nets, Insecticide resistance and malaria control, Revisiting the case for developmental effects of deworming, New Cochrane review of the Effectiveness of Deworming). This work ended up taking significant co-founder time, and conceptually we believe that there is no limit to how deeply we can investigate our top charities. (This isn’t to say that we know such depth is what our audience requires; this is one of the things we’re hoping to learn more about, as discussed below.)
  • As discussed previously, we made much less progress on cause-broadening work than we had hoped to, largely because of the above point.
  • We believe that continuing to improve our offerings under our traditional criteria is not the most efficient way to find the best possible giving opportunities. However, we also believe (though, again, we could be wrong and are hoping to learn more) that a sizable part of our audience values our traditional-criteria charities and does not value our cause-broadening work.

We now see the situation as potentially involving two different audiences for GiveWell’s work, with different values and priorities; rather than claiming we can fully serve both (we’re too resource-constrained to do so), we need to explicitly define and segment the audiences, determine how to assign resources to each, and do the best we can for each within the resource constraints we set.

The next step for us is to get better data on the extent to which, and way in which, our audience is divided. We need to know which of our followers are most interested in our cause-broadening work as opposed to our traditional criteria, and what aspects of each are most important to them.

Accordingly, we have created a survey for GiveWell followers, seeking to gauge the appeal of different possible paths we could take to different parts of our audience. We are using the survey as only one factor in deciding how to move forward, but we would very much appreciate participation from any followers.

Following our collection and analysis of survey results, we will publish further content regarding how we plan to segment our work and what we believe we will be able to deliver. We will not discuss our 2013 plans further until that point.

(Note that our most recent Board meeting focuses on the issues discussed here, for those who are interested in listening.)

Take the survey for GiveWell followers

GiveWell is hiring

GiveWell continues to seek outstanding people to join our team. In addition to the Research Analyst position for which we’ve been accepting applications, we’ve also recently posted another job opening.

For the Research Associate position, we seek someone with a strong background in causal inference and quantitative research methods, either through academic coursework at the graduate level (e.g., a master’s degree in economics, statistics, or a heavily quantitative social science) or through previous work experience.

We are hiring both for full-time positions and for paid summer internships for students entering their final year of school. Both positions would be located in San Francisco, where GiveWell is based.

More details about both positions are available on our jobs page.

Self-evaluation: GiveWell as a project

This is the third post (of five) we’re planning to make focused on our self-evaluation and future plans.

This post answers a set of critical questions for GiveWell stakeholders. The questions are the same as last year’s.

Is GiveWell’s research process “robust,” i.e., can it be continued and maintained without relying on the co-founders?

Where we stood as of Feb 2012

We wrote:

We currently have 3 full-time analysts, and have made an offer to an analyst who will start in July, which would bring GiveWell to 4 full-time analysts. We continue to focus on recruiting and hope to reach 6 full-time analysts (8 total employees) summer 2012.

Analysts take the lead on most charity investigations; co-founders may provide basic guidance and sign off on work before it is published. GiveWell Labs, because of its experimental nature, will be led for the time being by co-founders.

Progress since Feb 2012

Following February 2012, we made two full-time hires and one part-time hire; one of the full-time hires departed GiveWell the same year. We also saw the departure of another analyst who had started in January of 2012 (and was included in the above quote). On net, therefore, the size of our staff rose by one part-timer. We also employed a summer intern and a trial hire, both of whom may become full-time employees this year.

Due to time sensitivity, the review of GiveDirectly – our new recommended charity in 2012 – was led by co-founders, rather than analysts. (See our shortcoming on this matter.) In addition, much of the work we put into deepening our research was led by co-founders. Analysts played valuable roles, and made far greater contributions than in previous years, but the share of work done by co-founders was higher than it would have been if we had not been dealing with this time sensitivity.

Two positive developments on this front in 2012:

  • Our capacity has improved significantly because of the maturation of existing employees. We now have several analysts who are able to add substantial value on a regular basis, improving our capacity. Alexander Berger has been promoted to Senior Research Analyst and represents an expansion in our capacity for top-level investigations. Natalie Crispin has taken over primary management of GiveWell’s financials and donation processing (which was previously handled by co-founders) and is now Research Analyst and Financial Manager.
  • Our research process has become better systemized. 2012 was the first year in which our process for investigating a top charity remained substantially the same as in the previous year, and we feel that this bodes well for our ability to train analysts to take on more of this process in the future.

Our work on GiveWell Labs is still new and exploratory, and thus is led by senior staff.

Where we stand

We currently have three full-time analysts and one part-time analyst, along with the two co-founders. We are re-thinking our hiring process and the roles and qualifications of people we wish to hire.

Although analysts have taken on more responsibility, we remain reliant on GiveWell’s co-founders for significant core research work. Elie Hassenfeld is heavily involved in managing and conducting individual charity/giving-opportunity investigations, and Holden Karnofsky is heavily involved in completing literature reviews on evidence of effectiveness and cost-effectiveness analyses for interventions.

What we can do to improve

We intend to make hiring a priority over the coming year, but are not yet sure of exactly what path this will take. We have some ideas for finding new hires more effectively than previously, including (a) evaluating people via trial work rather than relying on interviews when possible; (b) considering more senior hires with experience that is directly relevant to the work our research analysts do. We don’t believe we have yet found a reliable formula for hiring people, though we believe we are improving on this dimension, both through trial and error in hiring and through getting a better sense over time (via repetition) of what work our employees need to do.

Does GiveWell present its research in a way that is likely to be persuasive and impactful (i.e., is GiveWell succeeding at “packaging” its research)?

Where we stood as of Feb 2012

We wrote:

As traffic to our website has increased over the past 12 months, we would guess that the importance of better packaging our research has risen. In particular, we feel our site is poorly suited to donors who want to spend more than a few minutes but less than an hour on our site. (We have designed the site to make quick action easy and to provide significant depth, but we have no “middle level” of depth for gaining some information relatively quickly.)

Progress since Feb 2012

None. This has continued to be a low priority over the past year.

Where we stand

We continue to believe that the lack of mid-level content is a shortcoming that likely prevents us from reaching some potential donors.

What we can do to improve

We have several ideas that we could execute in order to produce more “mid-level” content regarding our recommendations, but we do not plan to prioritize this work in the coming year.

Self-evaluation: GiveWell as a donor resource

This is the second post (of five) that we’re planning to make focused on our self-evaluation and future plans.

This post answers a set of critical questions about the state of GiveWell as a donor resource. The questions are the same as last year’s.

Does GiveWell provide quality research that highlights truly outstanding charities in the areas it has covered?

Where we stood as of Feb 2012

We felt that current research was high-quality and up-to-date. However:

  • We felt that there were multiple areas that could offer outstanding opportunities that we had not yet researched as thoroughly as we could have (particularly in the areas of nutrition, vaccinations, neglected tropical disease control, tuberculosis control, and research and development).
  • We were not satisfied with the degree to which our research was “vetted.” It still seemed to us that we could make a substantial mistake or error in judgment, with too high a probability that it would remain unnoticed.
  • We worried about our total “room for money moved,” which we estimated at $15-20 million in our top charities; it seemed possible to us that continued rapid growth could lead us to “run out” of great giving opportunities.

Progress since Feb 2012

In 2012, we wrote that we wanted to:

  1. Revisit the goal of having our work subjected to formal, consistent, credible external review.
  2. Continue to look for more outstanding giving opportunities for individual donors, particularly in the areas we have identified as most promising (i.e. global health and nutrition).
  3. Begin to look for more outstanding giving opportunities for individual donors through GiveWell Labs.

In 2012, we made limited progress on #1, strong progress on #2, and less progress than anticipated on #3:

  1. We did not solicit any new external reviews of our work in 2012, and we did not formally revisit the goal of doing so. Rather than focusing on increasing formal expert review over the past year, we subjected our key pages to a higher level of pre-publication internal review, ensuring that pages and spreadsheets that play an important role in our final recommendations are thoroughly checked by at least one person who did not play a role in their production. We do not view this change as eliminating the eventual need for formal outside review, but we see it as adequate for our current needs. We also feel that the increased level of informal critical attention our research has received from the outside has lowered the need for formal external review (more on this in a future post).
  2. We added GiveDirectly to our list of top-rated charities in November 2012, after a thorough review that included a site visit and review of the evidence for unconditional cash transfers. We also conducted further investigations in the area of global health and nutrition:
  3. In the realm of GiveWell Labs,

    However, we have not been able to devote as much time to GiveWell Labs as we would have liked, and progress has accordingly been slower than anticipated. We have not yet identified any giving opportunities that we are ready to recommend (aside from the two grants mentioned above, both funded by Good Ventures).

Where we stand

We continue to feel our research has identified outstanding giving opportunities for individual donors, with adequate capacity (room for more funding in top charities) to absorb the level of funding that we expect in 2013, but we believe that room for improvement remains across the three broad areas we identified in 2012: continuing to find ways to subject our research to scrutiny and quality control, finding more outstanding giving opportunities according to our traditional criteria, and broadening our criteria via GiveWell Labs.

Of these three, we think the most urgent need is to make more progress on GiveWell Labs. Progress on that front in 2012 was much slower than hoped, due to a smaller allocation of staff time than intended. In order to make more progress on GiveWell Labs in the future, we may need to put less time (in the short term) into the other two goals, while hoping eventually to expand our staff capacity so that we can pursue all three effectively.

What we can do to improve

We plan to prioritize work on GiveWell Labs more highly in 2013, devoting more staff time to research on new causes than we did in 2012. We aren’t yet sure how we will be addressing the other areas of improvement discussed above; it depends heavily on how much capacity we are able to devote to GiveWell’s traditional work while making sure that we are moving forward significantly faster on GiveWell Labs. How to allocate capacity between these two arms of GiveWell is a major question for the coming year, to be discussed further in a future post.