Update on Open Philanthropy Project

Our last major public updates on the Open Philanthropy Project were our May and June posts on global catastrophic risks and U.S. policy. This post summarizes our progress since then and where we currently stand on our goal of committing to causes.

Summary of currently ongoing activities

  • Deep dives on priority causes for U.S. policy. In June, we hired Shayna Strom as our Director of U.S. Policy. Shayna comes to GiveWell from the White House Office of Information and Regulatory Affairs, where she was Chief of Staff and Senior Counselor. She is based in Washington, D.C., and she is investigating our two leading contenders for priority causes in U.S. policy – labor mobility and criminal justice reform – at a greater depth than any cause investigation we’ve done so far, aiming to get a full lay of the land and develop a preliminary strategy for where we expect to concentrate our grantmaking. She is also aiming to surface more potentially promising causes.
  • Shallow- and medium-depth investigations for U.S. policy. Alexander Berger, who previously was conducting shallow and medium investigations within both U.S. policy and global catastrophic risks, is now focused on U.S. policy. He has conducted shallow investigations on several causes (not yet written up) and is currently exploring the spaces of health care policy and U.S. poverty & inequality.
  • Global catastrophic risks. Howie Lempel, who previously focused on investigating and writing about the Pew Charitable Trusts, has been transitioning to a role in which his main responsibility is leading our work on global catastrophic risks. Howie’s main responsibility at the moment is conducting relatively deep investigations of biosecurity and geoengineering, including some preliminary grantmaking if warranted. He is also conducting or managing lower-depth investigations of several other causes, including risks associated with artificial intelligence. In addition, Nick Beckstead, currently at the Future of Humanity Institute, is working on a contract basis to conduct shallow investigations of further possible global catastrophic risk areas. Nick will join us as a full-time employee in December.
  • Critical evidence reviews. We are now working with David Roodman as a contractor focused on critical evidence reviews relevant to the Open Philanthropy Project. He recently completed a draft writeup on immigration and current residents’ wages and is now working on a writeup attempting to quantify the risks of geomagnetic storms.
  • Preliminary work on scientific research funding. We have assembled a small team of junior scientific advisors that works with me on preliminary explorations of scientific research funding. Our last update on this front was in 2013, and more updates are forthcoming.
  • Other work. In light of our enlarged team, we have spent substantial time improving our procedures for efficiently coordinating on investigations (and particularly grant decisions). We recently finalized another grant that has come out of our (now de-prioritized) co-funding work. We have continued work on our history of philanthropy project and have two case studies pending publication.

Our current plans
Our current focus is on committing to causes within the broad categories of U.S. policy and global catastrophic risks. We’ve published a spreadsheet summary of our current stances on the causes we’ve investigated, and which ones we consider to be the strongest contenders based on what we know today. In brief:

  • Within U.S. policy, our top contenders for priority causes are labor mobility and criminal justice reform. We have also done substantial work (including some preliminary grants) on macroeconomic policy.
  • Within global catastrophic risks, our top contenders for priority causes are biosecurity and geoengineering, though there are several causes that we see as potential contenders and are still working on investigating, including risks associated with artificial intelligence.

Before committing to causes, we are hoping to:

  • Investigate more potential causes at shallow and/or medium depth.
  • Do relatively deep investigations of the top contenders, and gain a more tangible sense of what our strategy and grantmaking would likely look like if we committed to these causes.
  • Hold convenings in which a small number of people with relevant experience provide feedback on our thinking. We are planning a convening for November in Washington, D.C. on the subject of our priorities within U.S. policy. We are still deciding whether and how to conduct a convening around global catastrophic risks (currently leaning against doing so).
  • Put more thought into how many “commitments to causes” we should make, and at what level, given our current capacity.

We are still hoping to commit to causes in these categories by the end of the calendar year, though we still see this as a stretch goal and see a reasonable probability that we’ll delay by a couple of months. As a lower priority, I am working on investigating possible approaches to scientific research funding, and hope to provide more updates on this front fairly soon.

Grants
Good Ventures has made a number of grants within potential priority causes, largely as a way of exploring what giving opportunities in these causes might look like (more on this idea in a previous post). These grants have been in top contender areas for U.S. policy (labor mobility, criminal justice reform and macroeconomic policy); there are likely to be some grants within some global catastrophic risk areas as well in the near future. A list of grants to date is available here.

Update on SCI’s evidence of impact

Note: Consistent with our usual practices, SCI reviewed a draft of this post prior to publication, but the final product is ours.

We wrote last year about reevaluating studies we relied on in our evaluation of the Schistosomiasis Control Initiative (SCI), which is one of our top charities. We noted at the time that we were planning to continue learning about the studies. We have now revisited these studies and discussed them in depth with SCI. We have two main takeaways from this investigation:

  1. We now feel that some of the panel studies we previously relied on in our evaluation of SCI do not in fact provide evidence that SCI’s national programs have reached a large proportion of children targeted. We wrote last year that in one of the four studies, participants received extra deworming treatment from researchers if they were found to be infected with worms. We learned this year that participants in at least one of the other three studies received treatment separately from and under closer supervision than other students in the country. Where participants received more treatment or more careful treatment than other students in the country, we believe that the results of the studies do not reflect the treatment coverage achieved by the national programs.
  2. We are concerned about SCI’s external communications around these panel studies. We first published the view that all of SCI’s panel studies provided evidence of effective national programs in 2009, along with our interpretation of how the studies were carried out. It is concerning to us that (a) the published papers on the studies, which had SCI staff as coauthors, imply or explicitly say that the studies reflect the performance of the national programs, (b) SCI did not correct this interpretation, which we published and asked them to review, and (c) when we asked SCI about the methodological issue specifically, SCI leadership gave us information that SCI program staff later contradicted. 

SCI has told us that it has standardized and made significant improvements to its procedures for more recent monitoring, including treating children at sentinel schools as part of MDAs. SCI recently shared with us a more recent panel study and two studies of MDA coverage rates. We plan to write about these studies in our upcoming review of SCI.

On the whole, we continue to view SCI as an outstanding giving opportunity, and it will likely maintain its status as a top charity when we refresh our recommendations in December.

Our updated views on four SCI panel studies

In our 2011 and 2012 recommendations of SCI, we emphasized a set of four panel studies from Uganda, Burkina Faso, Niger, and Burundi. In each of the studies, a set of sentinel schools was chosen, and children at each sentinel school were tested for worm infections before the start of an SCI-led control program. The same children were followed up and tested again in subsequent years. The studies show a fairly consistent picture of dramatic drops in disease prevalence among the study participants. We acknowledged multiple limitations to these studies, but felt that they gave relatively strong suggestive evidence that the SCI-led control programs had reached a large proportion of the children they targeted.

Last year, we learned that in the Uganda study (Kabatereine et al. 2007), study participants who tested positive for infection in the course of the study were directly treated by the researchers. Thus, improvements measured over time could be attributable to the activities of the researchers carrying out the study rather than necessarily to the coverage of the national control program.

This year, we have revisited the methodology used in the other three studies through conversations with SCI. We have learned that the Burkina Faso study has similar methodological issues to the Uganda study. We are unsure whether researchers participated in treatment in the studies in the other two countries.

In the Burkina Faso study, which began in 2004, drug administration at the sentinel sites was not part of the national mass drug administration (MDA) program. Researchers working for the national schistosomiasis control program supervised teachers as they administered deworming treatment to all children in the sentinel schools on the day researchers took samples, a few weeks before the MDA. SCI told us that the national control program staff ran the study in this way to meet ethical guidelines laid out by the countries’ ethics boards, not to produce more impressive results than might have been obtained by treating children as part of the national MDA. In addition to their ethical commitment and strong desire to ensure that all children tested were treated, the researchers were more knowledgeable about deworming than the district-level health workers who administered treatment in the MDA. Supervision from the researchers may have caused teachers to do a more thorough job administering deworming treatment. For this reason, and because drug administration in sentinel schools was logistically separate from the MDA, we do not consider this panel study to reflect the coverage achieved by the national MDA.

We are unsure whether participants in the Niger study received treatment in the same manner as the other students throughout the country, or whether they were treated separately under the supervision of the researchers as in Burkina Faso. SCI leadership originally told us that participants were treated in the same way as other students in the country. SCI’s current Program Manager for Niger and Burkina Faso (who was not the Program Manager at the time of the study) later said that participants were treated separately under the supervision of the researchers (note 1). After we discussed the Program Manager’s comments with SCI leadership, SCI contacted Dr. Amadou Garba, National Coordinator of the Schistosomiasis and Soil-transmitted Helminth Control Program in Niger at the beginning of the panel study. According to SCI, he stated that participants were treated as part of the MDA. We believe that participants were most likely treated as part of the MDA, but feel that we cannot be sure given the conflicting positions expressed by SCI staff.

Note that we believe that each SCI staff member we spoke to told us what they believed to be the truth about the methodology of the studies, and we do not think that any of them intentionally misled us.

SCI’s Program Manager in Burundi confirmed SCI leadership’s statement that the children in the sentinel schools there received treatment in the same manner as children throughout the country (note 2).

Note that other substantial concerns about these studies remain even in cases where children in sentinel schools were treated as part of the national MDAs. As we wrote in our 2012 review of SCI, we are concerned because the sentinel sites selected may not be representative of the whole country, and because only about half of the students initially surveyed were followed up. We are also concerned that treatment teams and teachers may have known which schools were sentinel schools and may have administered deworming treatment more carefully in the sentinel schools (note 3). We have had these concerns since first evaluating the studies and they should not be taken as an update, but it is important to note that they are still live concerns.

Additionally, given that it took us years to discover the issues regarding different protocols for treating children at sentinel sites, we feel that it is reasonably likely that the Niger and Burundi studies have weaknesses that we have not yet discovered.

We have seen little monitoring data outside of the panel studies that we believe shows that national programs supported by SCI reach a high proportion of the children they attempt to treat. Thus, we are unsure of the impact of SCI’s programs. Also, considering that the Burkina Faso study does not seem to reflect the coverage achieved by the MDA, we now have somewhat lower confidence that SCI has been able to effectively use its research results to improve its programs.

SCI’s external communications about the four studies

Our conversations with SCI staff about the panel studies have reduced our confidence in SCI’s external communications, particularly its communication with us. There are three reasons for this:

  1. The published papers on the studies, which had SCI staff as coauthors, either imply or explicitly say that the Uganda and Burkina Faso studies reflect the performance of the national programs, even though we now believe that the studies do not necessarily reflect the performance of the national programs (note 4).
  2. We treated the panel studies as central evidence of the impact of SCI’s MDAs in our 2011 and 2012 reviews of SCI. SCI vetted those reviews but did not note to us that in some studies, children in the sentinel schools were treated separately from the national MDAs.
  3. This year, we asked SCI’s leadership about whether the Niger and Burkina Faso studies had methodological issues similar to the Uganda study. They told us, “All students were treated as part of the national treatment program (at the same time and with the same treatment strategy) as the purpose of the sentinel sites was to assess the impact of the national control program.” When we later spoke with program staff who had been involved in the studies, they contradicted this picture (though we believe that all SCI staff told us what they believed to be the truth).

We previously noted difficulties communicating with SCI. We feel that our struggle to communicate effectively with SCI about the panel studies was more serious than previous difficulties. We credit SCI with connecting us with staff who could provide more detailed answers to our questions, even where those answers contradicted SCI leadership. Still, the fact that it took this much time and effort to gain this information reflects poorly on our ability to communicate with SCI.

More recent studies

The Uganda and Burkina Faso studies began in 2003 and 2004 respectively. SCI has continued to collect monitoring data as part of its ongoing programs. SCI has told us that it has standardized and made significant improvements to its procedures for more recent monitoring, including treating children at sentinel schools as part of MDAs. SCI recently shared with us a more recent panel study and two studies of MDA coverage rates. We have begun to analyze these new studies and feel that they likely provide some degree of additional evidence that national programs supported by SCI achieve relatively high coverage. We plan to write about these studies in our upcoming review of SCI.

Bottom line

Our concerns about SCI’s evidence of effectiveness and its external communications discussed in this post cause us to take a less positive view of SCI. However, we continue to believe that the program SCI supports, combination deworming, is among the most cost-effective programs we have considered and that the program has room for more funding globally. We recently wrote about a new study that seems to bolster the evidence for the long-term effects of deworming. In addition, SCI recently sent us the more recent studies discussed above and hosted us for three days of meetings with staff to update and expand our understanding of SCI’s work. When we refresh our top charity rankings later this year, we will likely include SCI.


Note 1: “In 2004 both Niger and Burkina Faso received funds from the Bill & Melinda Gates Foundation through the SCI to establish a national Schistosomiasis and STH control program. In addition to the financing of mass drug administration (MDA), a research lab was built in each country with the purpose of monitoring and evaluating the impact of the national deworming program. Assessment of the MDA was carried out through a six-year sentinel site survey from 2004-2010. Statistical power calculations were performed to ensure that the correct number of schools and children per school were sampled to be representative of all schools receiving the same treatment strategy. At baseline (in 2004), each of the sentinel schools was sampled by a team from the research lab who arrived in the morning to take samples from 150 students selected at random, which were then taken back to the lab for analysis. In the afternoon, one or two lab staff remained at the school to supervise the deworming drug administration and to ensure that all students in the school were treated. Ethical review boards in the countries mandated that the members of the lab team personally ensure treatment of children who had been sampled for the study. The lab staff who administered treatment were aware that the school was a sentinel site. These same children were then followed up each subsequent year. If any children were lost to follow-up, new children entering the first year of school at sentinel schools were recruited into the study. They were then representative of non-treated children. Children in the sentinel schools received treatment under the supervision of the lab staff each year. The lab staff were unaware of which students had been found to have worms, to avoid preferential treatment. Children who were found to have worms were not given any extra treatment beyond the treatment given to the entire sentinel school. The sentinel sites were treated two weeks prior to the national MDA to ensure that the schools were not treated twice. SCI was then able to monitor registers from the national campaign showing which schools had received treatment in the MDA to ensure that the sentinel schools were not treated twice. For the non-sentinel schools, the National Schistosomiasis and STH Control Program was responsible for coordinating the mass treatment for all schools in each district. Cascade training was carried out whereby teachers were trained at the district level on how to administer the deworming drugs. The teachers in turn were responsible for treating the children with supervision from district-level health staff. The district-level health staff were in turn trained and supervised by the central-level Ministry of Health.” Non-verbatim summary of a conversation with Anna Phillips on May 27, 2014.

Note 2: “The Schistosomiasis Control Initiative (SCI) ran a study on Burundi’s national control program from 2007-2011. During the study, researchers tested students at sentinel schools for schistosomiasis in mid-May every year. Researchers did not provide treatment to any students. Students in sentinel schools were supposed to receive schistosomiasis treatment at their schools as part of the Burundian government’s annual mass drug administration (MDA) in mid-June. The ethics review board in Burundi approved the option of treating children from sentinel schools as part of the MDA. The MDA was part of Mother-and-Child Health Week, a national program in Burundi that delivered vaccines and other medical interventions. The treatment delivery system was the same throughout the country, including in regions containing sentinel schools.” Non-verbatim summary of a conversation with Giuseppina Ortu on June 20, 2014.

Note 3: “It is unclear whether the treatment team knew which schools were sentinel schools. Researchers visiting the sentinel schools would have been highly visible, so teachers, students, and people living nearby likely knew if a school was a sentinel school. Conceivably, if the treatment team knew which schools were sentinel schools, it may have been particularly careful to provide treatment to the students in the sentinel schools. The best way to avoid this would have been for the researchers to sample different schools every year so that the treatment team could not predict which schools would be sampled next. However, switching schools every year would have prevented the researchers from following the same students from one year to the next. The treatment team did not know the medical test results of the individuals whom they were treating in sentinel schools, but the team might have been told that there were some students with positive test results in particular schools. The team leading the sentinel school study and the teams administering treatment were part of the Burundian government’s neglected tropical disease (NTD) control program. However, there was little overlap between the team leading the study and treatment teams, because the people leading the study worked for the central government, while the treatment teams consisted of workers from district health centers. On the other hand, it is possible that one of the leaders of the study also supervised the MDA.” Non-verbatim summary of a conversation with Giuseppina Ortu on June 20, 2014.

Note 4: For example:

  • A 2007 paper on the Burkina Faso study, with SCI staff Artemis Koukounari, Elisa Bosqué-Oliva, Yaobi Zhang, Christl Donnelly, Alan Fenwick, and Joanne Webster as coauthors, was titled “Schistosoma haematobium Infection and Morbidity Before and After Large-Scale Administration of Praziquantel in Burkina Faso.”
  • A 2007 paper on the Uganda study, with SCI staff Artemis Koukounari, Fiona Fleming, Yaobi Zhang, Joanne Webster, and Alan Fenwick as coauthors, states in the abstract, “We aimed to assess the health impact of a national control programme targeting schistosomiasis and intestinal nematodes in Uganda, which has provided population-based anthelmintic chemotherapy since 2003.” We previously wrote about other aspects of this paper that we found to present a confusing picture.

Investigating the Ebola response

Should you donate to efforts to contain the Ebola outbreak in west Africa? With hundreds of millions of dollars coming in from other donors, will your donation make a difference? How does this compare to giving to GiveWell’s top charities?

These are difficult questions. It’s always hard to estimate how much good a donation does; it’s much harder in the midst of a rapidly evolving situation like this one. It requires predicting the future path of the epidemic and the effects of response efforts. New information (and new donations) is constantly changing the picture. Further complicating matters, the people who best understand the situation are extremely busy, and we need to be careful with how we request their time. Even coming up with a rough take on Ebola involves major effort. However, at this point – due to some preliminary analysis and estimates – we are in the midst of conducting a small investigation, and hope to publish our take on donating for Ebola response within the next week or two.

In this post, we lay out the steps we’ve taken and the steps we’re planning next for our investigation. We then discuss what goes into our decisions about how to respond to sudden, prominent donation opportunities like this one, and why we’ve decided to do an investigation in this case.

Our investigation
The basic question we’re trying to answer is: what is the cost-effectiveness (in terms of lives saved and similar benefits per dollar spent) of additional donations to the Ebola response (beyond what’s already been raised, and including factors such as the risk that Ebola might spread to more countries in Africa or become endemic if not contained)?

Unfortunately, we don’t know of any published efforts to answer this question. We also don’t know of efforts to answer related questions such as “What is the expected death toll of the Ebola outbreak conditional on the current planned response, and how would this change if the response were better-funded?” The information and analysis we do have that seems most relevant is:

  • A CDC model that projects Ebola deaths under different assumptions about what proportion of cases are effectively isolated. The projection goes only through January 20, and covers only Liberia and Sierra Leone. There are also some other models with broadly similar properties. We initially tried using these models, but now provisionally believe they cannot be used for our purposes (more below).
  • Some basic information on the status of the UN fundraising appeal. As of now, $988 million has been requested; $486 million has been raised and an additional $233 million has been pledged.
  • Some basic information on the World Health Organization (WHO)’s hopes for the containment effort. A recent press briefing with a WHO representative states: “…the numbers we need to get behind are 70:70:60; that number is 70% safe burials, 70% cases being managed and cared for properly; and within 60 days of our start date which for UNMEER we’re taking as 1st October. So, our goal is to have that in place by 60 days which would be 1st December.”

Initially, we tried to focus on using the CDC model to forecast Ebola cases at higher and lower levels of response efforts, which we tried to map to higher or lower levels of funding. However, we ran into several issues here.

  • One fundamental issue is that we know too little about the relationship between “how much money is raised” and “what sort of response is possible”: it might be that the activities most crucial to containing the epidemic can already be funded at current levels, and that additional donations would do relatively little.
  • Another major issue turned out to be that the CDC model already appears to be out of date (and specifically, overly pessimistic). The model incorporates data on cases through late August; reported Ebola cases since then are lower than the model predicted even in the maximal “strong response effort” scenario. It is possible that the recent reports of Ebola cases reflect issues with data collection (for example, perhaps people with Ebola are now avoiding care, or healthcare workers are too overwhelmed to report data); but based purely on the numbers, we don’t feel we can use the CDC model to make good forecasts for cost-effectiveness analysis (see the sketch after this list).
  • Even if we resolved the above two issues, there would be major questions remaining. The CDC model covers only two countries, and only through January 20; it does not address cases in Guinea, the possibility that Ebola becomes endemic, or the possibility that Ebola spreads to other countries. We know little about the organizations involved in the response effort and how well they’re performing, and it’s unlikely that we’ll be able to find out much about this question while the epidemic is ongoing.
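As an aside, here is a minimal sketch of the kind of model-vs.-data check described in the second bullet above. It is far simpler than the CDC model, and every parameter value in it is a hypothetical placeholder rather than an actual CDC input:

```python
# Purely illustrative: a toy projection of cumulative Ebola cases under
# different isolation levels, in the spirit of (but far simpler than) the
# CDC model. Every parameter below is a hypothetical placeholder.

R0 = 1.8                    # assumed reproduction number of an unisolated case
SERIAL_INTERVAL_DAYS = 15   # assumed average time between case generations


def project_cumulative_cases(initial_cases, isolation_fraction, days):
    """Project cumulative cases, assuming isolated cases infect no one."""
    r_eff = R0 * (1.0 - isolation_fraction)  # effective reproduction number
    total = current = float(initial_cases)
    for _ in range(int(days / SERIAL_INTERVAL_DAYS)):
        current *= r_eff  # next generation of cases
        total += current
    return round(total)


# Compare a weak and a strong response over roughly three months.
for label, iso in [("30% of cases isolated", 0.30), ("70% of cases isolated", 0.70)]:
    projected = project_cumulative_cases(initial_cases=3000, isolation_fraction=iso, days=90)
    print(f"{label}: ~{projected:,} cumulative cases")

# The check described above: if reported cases come in well below even the
# strong-response projection, the model's inputs were likely too pessimistic
# (or reporting is incomplete), and it can't be used as-is for forecasting.
```

A toy model like this can show whether reported cases are running below even an optimistic projection, but it cannot distinguish a genuinely over-pessimistic model from deteriorating data collection, which is part of why we don’t consider this kind of analysis usable on its own.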

We have also experimented with using a model published more recently by the Virginia Bioinformatics Institute, but we haven’t yet determined whether this model could be useful. We haven’t been able to compare this model’s predictions to recent reports directly, but it appears to make similar projections to the CDC’s for Ebola cases conditional on strong control as of December 31. We would need a better understanding of the model, and more discussion, in order to determine whether it might be used for a cost-effectiveness estimate (but even if we did use it for such an estimate, the estimate would remain problematic for many of the reasons listed above).

At this point, we’re focusing on trying to set up conversations to gain more information about the following questions:

  • If we were to recommend donations to the response effort, how quickly could donations be utilized on the ground? Would they make a difference to the response effort?
  • What would these donations allow that could not be funded otherwise? Would they expand the most important response activities? Should we think of additional donations as having similar impact to the average dollar in the response effort?
  • How significant is the risk that Ebola spreads to other countries and/or becomes endemic? How should we think about the likely longer-term death toll, factoring in unlikely but extremely bad scenarios?
  • Should we infer from recent data that the CDC model was overly pessimistic, or is there another explanation for reported cases coming in below the model’s projections?
  • If one donates to the response effort, whom specifically should one donate to?

We’re first trying to see whether we can gain information by speaking with people who aren’t directly involved with the effort, and who can therefore take time to speak with us in a low-stakes way. If necessary, we may need to create an estimate of how much money we might be able to raise for the response, in order to give people more information about whether talking to us is worth their time.

How we decide which crises to investigate
When a humanitarian crisis hits the headlines, we usually get a lot of questions along the lines of “How can I help and where should I give?” At the same time, there are several reasons that headline-dominating crises tend not to make for the best giving opportunities, and particularly tend to be a poor fit for our work.

  • The people best positioned to understand, and help with, Ebola response are probably the people who have been working on pandemic containment, developing-world health systems, and other related areas for years before this crisis emerged. The best opportunities to prevent or contain the epidemic were probably before it was widely recognized as a crisis (and perhaps before Ebola had broken out at all – more funding for preventive surveillance could have made a big difference). We’d guess that a similar dynamic holds in general: it takes years to build expertise and context in an area, and the most crucial opportunities to make a difference will often be before the issue is getting widespread attention. In general, we think we’ll find the best giving opportunities by picking good causes to focus on and working on them for years, not by scrambling to catch up on the state of knowledge about an urgent and chaotic situation. As it happens, biosecurity is one of our leading contenders for a focus area, and we have been actively investigating the area for a few months. One of our main focuses is on strengthening routine preventive surveillance. However, we are far from having the network and knowledge needed for a rapid diagnosis of the Ebola outbreak.
  • When an issue is getting a lot of media coverage, it often attracts a lot of funding. All else equal, this makes giving less attractive, since we emphasize room for more funding. In past investigations (2010 Haiti earthquake, Japan tsunami), we found evidence that money was not the limiting factor for the relief effort.
  • Urgent issues also tend to be particularly difficult to investigate. The people who know the most about them tend to be extremely busy, and issues tend to be more newsworthy when they are more unprecedented and chaotic.
  • If we do choose to investigate a crisis, we generally need to make the investigation an urgent top priority in order to keep up with developing news. That means high involvement from senior staff and major disruptions to our workflow. It can be worth it, but the costs are high.

In some past crises, we have made major efforts to put out helpful content – particularly the 2010 Haiti earthquake and 2011 Japan tsunami. Our work attracted a fair amount of media coverage, and helped us formulate general principles for disaster relief giving, but it also took a lot of time and did not result in large amounts of donations (in 2011, when we covered both the Japan tsunami and the Somalia famine and recommended Doctors Without Borders for both, we tracked ~$50,000 in money moved to Doctors Without Borders; note that in these cases, we also stated that we did not feel the giving opportunity was as strong as giving to our top charities). We provided more limited coverage of the 2011 Somalia famine and chose only to provide general tips in response to the 2013 Philippines earthquake.

When a crisis starts getting coverage, we weigh factors such as (a) how many people are asking for our views and (b) how much capacity we have for an investigation, as well as (c) the likely “cost per life saved” (or similar metric) for donating to the relief effort.

In the case of the Ebola outbreak, we initially guessed that the outbreak would remain relatively contained, and that ample funding for the relief effort would come in. (High-profile donations from individuals and significant attention from governments both contributed to this view.) Recently, several things have changed:

  • Over the past week, we’ve heard from more people – particularly people who follow GiveWell closely – than we had in previous weeks.
  • The crisis has now been attracting significant attention, yet funding remains substantially below what has been requested.
  • The crisis appears quite relevant to our ongoing investigation of preventive surveillance. Many of the people we are speaking to about surveillance are heavily involved in the Ebola response.
  • In light of the above factors, we decided to put some time into a very rough estimate of what the “cost per life saved” might look like for the Ebola response. Some initial calculations indicated that the cost-effectiveness could be quite strong, consistent with the idea that containing a small number of cases now could prevent a large number of cases later. However, in light of our questions about the CDC model (among other issues), we don’t think our estimate is usable, and we decided to gather more information along the lines described above. (The sketch below illustrates the basic arithmetic behind that intuition.)
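Here is a minimal, purely illustrative version of that arithmetic. Every number in it is a hypothetical placeholder, not a figure from our estimate (which, as noted above, we don’t consider usable):

```python
# Illustrative arithmetic only: how containing a small number of cases now
# could prevent a large number of cases later, yielding strong
# cost-effectiveness. Every number below is a hypothetical placeholder.

marginal_spending = 10_000_000  # hypothetical additional donations ($)
cases_contained_now = 500       # hypothetical cases averted directly
downstream_multiplier = 20      # hypothetical later cases averted per case contained now
case_fatality_rate = 0.6        # assumed rough case fatality rate

total_cases_averted = cases_contained_now * (1 + downstream_multiplier)  # 10,500
deaths_averted = total_cases_averted * case_fatality_rate                # 6,300
print(f"~${marginal_spending / deaths_averted:,.0f} per death averted")  # ~$1,587
```

The result is dominated by the downstream multiplier, which is precisely the kind of quantity the epidemic models discussed above would need to pin down, and currently cannot.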

Ebola response may be an outstanding use of funds, largely because the right preventive measures could stop the problem from becoming much larger and more costly to contain. The same logic would apply at an even earlier stage – to the strengthening of everyday preventive surveillance, of the kind that could have led to much earlier detection and containment of this epidemic. If that’s right, surveillance could turn out to be an outstanding cause to specialize in, under the heading of the Open Philanthropy Project.

Our ongoing review of Living Goods

Living Goods runs a network of Community Health Promoters (CHPs) who sell health and household goods door-to-door in their communities in Uganda and Kenya. CHPs also provide basic health counseling. In addition, Living Goods provides consulting and funding to other organizations to run similar networks in other locations. We have been considering Living Goods for a 2014 recommendation.

We’ve now spent a considerable amount of time talking to Living Goods and analyzing documents Living Goods shared with us. This post shares what we’ve learned so far and what questions we’re planning to focus on throughout the rest of our investigation. (For more detail, see our detailed interim review.)

Living Goods has successfully completed the first phase of our investigation process and we view it as a contender for a recommendation this year. We now plan (a) to make a $100,000 grant to Living Goods (as part of our “top charity participation grants,” funded by Good Ventures) and (b) to continue our analysis to determine whether or not we should recommend Living Goods to donors at the end of the year.

Reasons we prioritized Living Goods

Living Goods contacted us a few months ago to inform us that the initial results from a randomized controlled trial (RCT) of its program were available. The headline result from the study was a 25% reduction in under-five mortality, a remarkable effect size.

Questions we hope to answer in our ongoing analysis

How robust is the RCT?

The authors of the RCT have not yet completed the full report on the study, so we have not been able to vet the results in detail. RCTs are generally less prone than other types of studies to methodological issues that severely undermine the results, but they are not immune to such problems. We discuss potential issues with the RCT in our interim review.

The authors are seeking publication in an academic journal and the paper will be embargoed until a journal publishes it. This may mean that we are unable to discuss the details of the study before releasing our 2014 recommendations. We are unsure how strong a recommendation of Living Goods we might make if we were unable to give the details of the main evidence for its impact.

In addition, we don’t want to overemphasize the strength of the evidence provided by a single RCT (even if it has no methodological issues). Interventions such as bednets and cash transfers are supported by multiple RCTs and other evidence.

Will future work be as impactful as past work, and how will we know?

There are some reasons to think future results could be worse than RCT results: locations for the RCT were carefully selected, perhaps to maximize impact, and malaria control in Uganda may have improved in recent years. Even if the program is somewhat less effective in the future, it may still be worth supporting.

Our main concern is about both Living Goods’ ability and our own ability to know how well the program is performing in the future. Living Goods asks CHPs to report on activities such as treatments provided and follow-up visits, but because of the incentive structure and the lack of audits of the accuracy of these reports, we put limited weight on these metrics. Living Goods told us that its branch managers conduct randomized follow-ups with clients, but we have not seen documentation from these audits (or other evidence that these checks are happening). We’re not aware of any other monitoring that Living Goods conducts on its program.

Will other funders fill Living Goods’ funding gap?

Living Goods is looking to significantly scale up its program in the next four years. It is in discussions with current funders to see if they will increase their support. It believes it may be able to fund up to two-thirds of its scale-up through these commitments. It is continuing to seek new sources of funding. We may have to make a decision about how much funding to recommend to Living Goods in 2014 before other funders make their decisions known.

If Living Goods raises more than it needs for its scale-up, it would likely use these funds to co-fund partner organizations to start networks of CHP-like agents in other countries. This would be a riskier bet for donors, and it’s not clear how much we can expect to learn about how these programs turn out.

Is the CHP program cost-effective?

Living Goods estimates that its program will have a cost per life saved of $4,773 in 2015, decreasing to $2,773 in 2018. We have made some adjustments to this model to generate our own estimates. We estimate that Living Goods’ cost per life saved will be roughly $11,000 in 2014-2016. Making assumptions that we would guess are particularly optimistic about Living Goods, we estimate the cost per life saved at about $3,300. Pessimistic assumptions lead to an estimate of $28,000 per life saved. (Details in our interim review.) Our work on this model is ongoing.
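To give a sense of how these figures relate, the sketch below bundles the assumptions we vary into single multipliers on our central estimate. The multipliers are placeholders chosen only to reproduce the rough shape of the numbers above; they are not the actual adjustments in our model (see our interim review for those):

```python
# A minimal sketch of the sensitivity analysis described above. The
# multipliers are hypothetical placeholders, not our actual model inputs.

CENTRAL_COST_PER_LIFE_SAVED = 11_000  # our central estimate, per the post

scenarios = {
    # Each factor bundles assumptions (e.g., whether the RCT's mortality
    # effect replicates at scale, program costs, counterfactual coverage)
    # into a single multiplier on the central estimate.
    "optimistic": 0.30,
    "central": 1.00,
    "pessimistic": 2.55,
}

for name, factor in scenarios.items():
    print(f"{name}: ~${CENTRAL_COST_PER_LIFE_SAVED * factor:,.0f} per life saved")
```

The roughly eightfold spread between the optimistic and pessimistic scenarios matters more than any of the point estimates themselves.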

Our guess is that Living Goods’ program is in the same range as (though slightly less cost-effective than) the most cost-effective programs we have considered, such as bednets, deworming, and iodization.

(See our page on cost-effectiveness for more on the role these estimates play in our recommendations.)

Expert philanthropy vs. broad philanthropy

It seems to me that the most common model in philanthropy – seen at nearly every major staffed foundation – is to have staff who specialize in a particular cause (for example, specializing in criminal justice policy). Often, such staff have a very strong background in the cause before they come to the foundation, and they generally seem to focus their time exclusively on one cause – to the point of becoming (if they weren’t already) an expert in it.

I think this model makes a great deal of sense, partly for reasons we’ve discussed previously. Getting to know the people, organizations, literature, challenges, etc. most relevant to a particular cause is a significant investment – a “fixed cost” that can then make one more knowledgeable about all giving opportunities within that cause. Furthermore, evaluating and following a single giving opportunity can be a great deal of work. Now that the Open Philanthropy Project has made some early grants, it is hitting home just how many questions we could – and, it feels, should – ask about each. If we want to follow each grant to the best of our abilities, we’ll need to allocate a lot of staff time to each; having staff specialize in causes is likely the only way to do so efficiently.

Yet I’m not convinced that this model is the right one for us. Depth comes at the price of breadth. With our limited management capacity, following each grant to the best of our abilities shouldn’t be assumed to be the right approach. I’ve been asking myself the question of whether there’s a way to be involved in many more causes at a much lower level of depth, looking for the most outstanding giving opportunities to come along in the whole broad set of causes. I’ve been thinking about this question recently mostly in the context of policy, which will be the focus of this post.

Having a “low-depth” involvement in a given issue could take a number of forms – for example:

  • One might make a concerted effort to identify a small number of “big bets” related to an issue, and focus effort on following these “big bets.”
  • One might make a concerted effort to identify a small number of “gaps” – aspects of an issue that get very little attention and have very few people working on them – and focus grantmaking activity on these “gaps.” This approach could be consistent with making a relatively large number of grants in the hopes that some grantee gains traction.
  • One might focus on identifying a trusted advisor in an issue space, and make a small number of grants as recommended by the advisor (this is largely the approach behind our grants so far on labor mobility).
  • One might co-fund the work of another major funder, join a collaboration of major funders, or support the work of a large and established organization, and gain more familiarity with the issue over time by following this partner’s work.
  • One might aim for a very basic level of understanding of an issue – in particular, which way we would like to see policy change relative to the status quo, and whom we feel aligned enough with to take their advice. With this understanding in hand for multiple issues, one might then be well-positioned to support: (a) “cross-issue” organizations and projects that are likely to have a small impact on many issues; (b) campaigns aiming to take advantage of short-term “windows of opportunity” that arise for various issues.

I can see a few arguments in favor of trying one or more of these, all of which make it possible to take some form of a “breadth”-oriented approach (more causes, with a lower degree of depth and expertise, than the standard cause-specialist approach would involve).

First and most importantly, we will never know as much about grantees’ work as they do, and it arguably makes more sense to think of grantees as the relevant experts. The best funder might be the one who picks qualified grantees in an important cause, supports them and otherwise stays out of their way. With this frame in mind, focusing on in-house expertise is arguably inefficient (in the sense that our expertise would become somewhat redundant with grantees’) and possibly even counterproductive (in the sense that it could lead us to be overly “active” with grantees, pushing them toward our theory of the case).

Of course, picking qualified grantees is a serious challenge, and one that is likely harder without deep context. But the question is how much additional benefit deep context provides. Even without expertise, it is possible to get some signals of grantee quality – general reputation, past accomplishments, etc. – and even with expertise, there will be a great deal of uncertainty. In a high-risk model of the world, where perhaps 10% of one’s grants will account for 90% of one’s impact, it may be better to pick “potentially outstanding” grantees from a relatively broad space of possibilities than to limit oneself to a narrower space within which one has more precise and reliable ways of distinguishing marginally better from marginally worse giving opportunities.
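As a toy illustration, the simulation sketch below compares funding the ten highest-signal grantees from a narrow pool with a precise quality signal against a tenfold broader pool with a much noisier signal, under an assumed heavy-tailed distribution of true impact. All distributions and parameter values are illustrative assumptions, not estimates of real grant outcomes:

```python
# Toy simulation of the "hits-based" intuition: under heavy-tailed impact,
# a broader candidate pool can beat more precise ranking of a narrow pool.
# All distributions and parameters are illustrative assumptions.

import random

random.seed(0)


def expected_impact(pool_size, signal_noise, n_grants=10, trials=2000):
    """Average total true impact when funding the n_grants highest-signal grantees."""
    total = 0.0
    for _ in range(trials):
        # Heavy-tailed "true impact": most grants small, a few very large.
        impacts = [random.paretovariate(1.5) for _ in range(pool_size)]
        # The funder observes impact only through a noisy multiplicative signal.
        signals = [imp * random.lognormvariate(0.0, signal_noise) for imp in impacts]
        chosen = sorted(range(pool_size), key=lambda i: signals[i], reverse=True)[:n_grants]
        total += sum(impacts[i] for i in chosen)
    return total / trials


print("narrow pool, precise signal:", round(expected_impact(pool_size=30, signal_noise=0.3), 1))
print("broad pool, noisy signal:   ", round(expected_impact(pool_size=300, signal_noise=1.5), 1))
```

Under these particular assumptions, breadth tends to win: the tail of a 300-candidate pool contains larger outliers than that of a 30-candidate pool, and even a noisy signal catches some of them. Different assumptions would change the answer; the point is only that precision in ranking is not the only variable that matters.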

Expertise would also be an advantage for following a grant, learning from it and continuing to help grantees as they progress. However, it seems quite possible to me that the best grantees tend to be self-driven and improvisatory, such that following them closely wouldn’t add value to what they’re doing, and would largely serve to assuage our own anxiety without doing much to increase our impact.

Secondly, the best giving opportunities may sometimes cut across multiple causes and be hard to assess if we’ve engaged seriously with only a small number of causes. This issue seems particularly important to me in the area of U.S. policy, where the idea of strengthening the network of people who share our values – or the platform representing those values – could be very important. If we focus exclusively on a small number of policy areas, and give little attention to others, we could end up lacking the knowledge and networks to perform well on this goal, and we could be ill-positioned to evaluate the ramifications of a giving opportunity for the full set of issues we care about. (An argument for pursuing both breadth- and depth-oriented strategies simultaneously is that the depth-oriented work may surface opportunities that are relevant to a large number of issues, and the breadth-oriented work could then be helpful in assessing such opportunities.)

Finally, it seems to us that there are some issue areas where the giving opportunities are quite limited – particularly issues that we think of as green fields, as well as neglected sub-areas of other issues. Devoting a full staff member to such an issue would pose particular risks in terms of inefficiency, and it might be better to fund the few available opportunities while waiting for more to emerge.

I think the cases of Ed Scott and the Sandler Foundation represent interesting examples of what a philanthropist can accomplish despite not specializing exclusively in a particular cause, and despite not building out a staff of domain experts.

  • Ruth Levine of the Hewlett Foundation writes that Ed Scott has “built at least four excellent organizations from the ground up” – including the Center for Global Development, which we have supported and think positively of. She adds that “Far more than many others seem to be able to do, he lets go – and as he does, the organizations he supports go further and faster than if he were holding on tight.”
  • We know less about the Sandler Foundation, but it seems to have played a founding role in several prominent organizations and to be well-respected by many, despite not having staff who specialize in a particular cause over the long run. It does do deep cause investigations in sequence, in order to identify promising grantees, but staff work on new cause investigations even while maintaining their funding of previous causes and organizations; this approach therefore seems distinct from the traditional foundation model and can be thought of as one approach to the kind of “broad” work outlined here. One of its core principles is that of looking for excellence in organizations and in leadership, and entrusting those it supports with long-term, flexible support (rather than continuously revisiting and revising the terms of grants).

In both cases, from what we can tell (and we are considering trying to learn more via case studies), a funder helped create organizations that shared a broad set of values but weren’t focused on a particular policy issue; the funder did not appear to become or hire a domain expert, and may have been more effective by being less hands-on than is the norm among major foundations. My point isn’t that these funders should be emulated in every way (I know relatively little about them), but that the “cause-focused, domain expert” model of grantmaking is not the only viable one.

I’m not yet sure of exactly what it would look like for us to try a breadth-emphasizing model, and I know that we don’t want this to be the only model we try. The depth-emphasizing model has much to recommend it. I can anticipate that, in some ways, a breadth-emphasizing model could be both genuinely risky and psychologically challenging, as we’d have a lower level of knowledge about our grants than many foundations have of theirs. But I think the potential benefits are big, and I think this idea is worth experimenting with.