The GiveWell Blog

Answering some questions about water quality programs

On June 22, we held a virtual event on research into water quality interventions, featuring presentations from Michael Kremer of the University of Chicago’s Development Innovation Lab; Brett Sedgewick from Evidence Action, the parent organization of Dispensers for Safe Water; and Stephan Guyenet, Elie Hassenfeld, and Catherine Hollander of GiveWell. (If you weren’t able to attend, we’ve published a video recording, audio recording, and transcript here.)

We hosted the event to provide some additional background for our recommendation of up to $64.7 million to Dispensers for Safe Water, which installs chlorine dispensers to treat water at rural collection sites in Kenya, Malawi, and Uganda. This grant was the result of a lengthy investigation and a significant update in our views on the cost-effectiveness of water treatment, which we’ve written about here.

Several attendees wrote in with a range of thoughtful questions—about our analysis of the effects of chlorination interventions, about the particulars of Dispensers for Safe Water’s program, or more generally about our work. We covered as many as we could during the event and followed up on others by email. Below, we’re sharing a selection of the questions we responded to in writing, along with other questions we’ve gotten about this work outside of the event, in the hope that they’ll be of interest to a broader audience. Questions and answers have been anonymized; some have been edited slightly for brevity, or to fill in important context that was missing.

We always appreciate getting your questions—beyond giving us a chance to clarify our work, it provides valuable insight into what we’re not communicating as well as we could. Feel free to email us with your own questions, about water quality or anything else, at info@givewell.org. You can also comment directly on this blog post or on our most recent open thread. We aim to respond to all questions, though it may take us a few days to get back to you.

Table of contents

On Dispensers for Safe Water and our grant recommendation

On our meta-analysis of water treatment’s effects

On unintended consequences of charity-run programs

On GiveWell’s research process and sources of potential new funding opportunities

On Dispensers for Safe Water and our grant recommendation

Q: This grant represents a big investment in the Dispensers for Safe Water approach of installing chlorine dispensers next to water collection points. A worry for me is that the dispensers won’t be kept in good working order, making the program less effective—I’ve heard that maintenance of facilities can be a challenge in water provision and sanitation programs. Has anyone checked the effectiveness of the maintenance that Dispensers for Safe Water does?

A: Dispensers for Safe Water performs a number of checks to confirm that the dispensers are still working as they should, and also conducts surveys to verify that its maintenance is effective. A summary of Dispensers for Safe Water’s monitoring and evaluation activities can be found in our 2018 report on the program here, although the January 2022 grant covers additional data collection activities.

Program staff conduct spot checks of dispensers when delivering chlorine and when conducting surveys.[1] Promoters (community volunteers who refill the chlorine dispensers and promote use of the dispensers) are also asked to report any faults with the dispensers and are given a number to call. When promoters in Kenya call to report maintenance issues, the issues are logged and tracked using a cloud-based app; Evidence Action hopes to eventually launch the issue tracker app in Uganda and Malawi as well. Promoters and staff delivering chlorine are trained to address basic functionality problems, and specialized staff engineers are brought in for more serious faults.[2]

To confirm that the dispensers remain functional, Dispensers for Safe Water conducts surveys six times a year to (a) determine how many dispensers remain in use and (b) measure the percentage of households served by dispensers who have chlorine in their water. For the latter, Dispensers for Safe Water randomly selects households to visit, asks them to pour a glass of water, and measures the chlorine content. In 2019, Dispensers for Safe Water’s data showed that approximately 50% of targeted households poured a glass of water that contained chlorine, suggesting that dispensers are in use in those locations. (We’re citing 2019 data here because we think conditions during COVID-19 may not be representative of the future.) See here in Dispensers for Safe Water 2019 monitoring data analysis.

Q: Evidence Action says Dispensers for Safe Water is currently reaching four million people in Kenya, Malawi, and Uganda, and this grant will allow them to expand that to 9.5 million. But Michael Kremer mentioned that more than two billion people are consuming contaminated water each year. Is there a good estimate for how much it would cost to scale up this kind of program so that all of the approximately two billion affected people have safe water?

A: We haven’t come up with an estimate for what it would cost to scale up Dispensers for Safe Water (or a similar program) to all people who could benefit from it, and most likely some locations with unsafe water wouldn’t meet our current cost-effectiveness bar for funding. Our cost-effectiveness analyses are location-specific, and we would expect the cost-effectiveness of scaling up this program globally to vary quite a bit from country to country, or even from region to region within a country, depending on overall mortality rates there and how much of that mortality is due to enteric infections.

But generally, our estimates suggest that the locations where simple chlorination interventions, such as Dispensers for Safe Water, tend to be very cost-effective are mostly African countries with a low socio-demographic index (SDI). African countries with a slightly higher SDI (low-middle as opposed to low) are less well represented among the most cost-effective locations.[3] Locations in this latter group might not meet our cost-effectiveness bar, but they could still be covered by other funders.

Though we can’t confidently say how much it would cost to bring this chlorination intervention to two billion people annually, we have come up with a very rough, speculative estimated cost of $170 million/year to cover all locations where we think the program might be at least eight times as cost-effective as cash transfers. This figure relies on a number of uncertain assumptions about costs, how much of each country’s population Dispensers for Safe Water could reach, etc., but it may help give a sense of the huge (and cost-effective) global funding gap for water treatment.

On our meta-analysis of water treatment’s effects

Q: Michael Kremer and his team found a very large effect of water treatments like chlorination on child mortality, and GiveWell found a smaller but still significant mortality effect. What about effects beyond mortality? For instance, I’m curious about the possibility of increasing the child’s adult income because of improved overall health, or the impact of reducing the family’s expenditure on medical treatment, etc. How confident can we be in these results?

A: The single largest benefit we model from chlorination interventions like Dispensers for Safe Water is a reduction in mortality among children under five, but we do incorporate other benefits as well:

  • Development effects. We believe reducing illness in children probably improves their development, leading to slightly higher income in adulthood. We refer to this as “development effects” and include it in our model. We don’t have direct evidence that water treatment causes this, but we do have direct evidence that malaria prevention and deworming cause it, and we think it probably applies here as well.
  • Medical costs averted. If a child becomes ill less often because their water is chlorinated, we do expect the family will save money on medical treatment, and we factor in those savings. The estimate we arrived at in our cost-effectiveness analysis for water treatment was higher than we’d initially expected. We’re working on refining this and investigating how generalizable it might be to other types of programs.
  • Over-five mortality. We estimate that water treatment reduces mortality in people older than five by 1 to 4%. That’s less than the 6 to 11% reduction in all-cause mortality we estimate in children under five, but not negligible, given that there are more people over five than under five. Much of this benefit is expected to occur in children just over five.[4]

Q: In describing the meta-analysis on which this grant recommendation was based, you mentioned that you took steps to limit publication bias, and Michael Kremer also mentioned doing checks for publication bias in his own analysis. Can you explain what that is and why it mattered to this investigation?

A: “Publication bias” is when a body of scientific literature is biased because certain types of results are more likely to be published than others. There are different types of publication bias, but a common example is that studies that produce statistically significant results are more likely to be published than studies that do not produce such results. This biases the available literature on the subject in favor of research that finds an effect, which causes the literature as a whole to exaggerate effect sizes. The Wikipedia page on publication bias has some good further explanation and examples.

When reviewing the evidence behind an intervention, we consider whether publication bias is a factor and make adjustments in our cost-effectiveness models accordingly. In this case, we decided to do our own meta-analysis to arrive at our own pooled estimate of water chlorination’s impact on deaths. We think our pooled estimate, which guided our decision to recommend the grant to Dispensers for Safe Water, is resistant to publication bias because the vast majority of the weight comes from three large, recent water quality trials. We think this because all of the recent large trials of water quality have reported mortality outcomes, and results from large trials usually don’t go unpublished, regardless of outcome (so it is unlikely that only statistically significant results showing a benefit would be chosen for publication). We also excluded smaller trials with shorter follow-up periods (that is, the period of time in which study participants are observed after the treatment), which tend to be more susceptible to publication bias.[5]

We ultimately arrived at an estimated 14% reduction in all-cause mortality in children under five for chlorination interventions generally, with lower estimates for the effect of Dispensers for Safe Water specifically. This led us to update our cost-effectiveness estimate for Dispensers for Safe Water to between four and nine times as cost-effective as unconditional cash transfers (depending on location). You can read more about Michael Kremer et al.’s meta-analysis and our own meta-analysis here, and more about how we arrived at our cost-effectiveness estimates here.

Q: Michael Kremer mentioned that his team’s analysis included Bayesian estimates. I know just a little about Bayesian analysis in RCTs, but I recall the idea of enthusiastic and skeptical priors. Were those used in this Bayesian analysis? And, are there “enthusiasts” and “skeptics” on this issue?

A: Kremer et al. used a prior centered around zero effect, with a wide standard deviation. As they explain in the supplementary materials for their paper: “For τ, we set a normal distribution with mean 0 and standard deviation of 10. This prior encodes the belief that causal effects should not be thought of as large unless data contains evidence to the contrary” (p. 3). Kremer et al. conducted a number of sensitivity analyses, but we are not aware that they tried pessimistic and optimistic priors in their Bayesian meta-analysis.[6]
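To give a sense of what a “skeptical” versus a “wide” prior does, here is a small sketch of the conjugate normal-normal update, using made-up numbers (the observed effect and standard error below are illustrative only, not figures from the actual analyses):

```python
import math

def posterior_normal(prior_mean, prior_sd, obs, obs_se):
    """Conjugate normal-normal update: combine a prior with one
    observed effect estimate, weighting each by its precision."""
    w_prior = 1.0 / prior_sd**2
    w_obs = 1.0 / obs_se**2
    mean = (w_prior * prior_mean + w_obs * obs) / (w_prior + w_obs)
    sd = math.sqrt(1.0 / (w_prior + w_obs))
    return mean, sd

# Hypothetical observed effect: -0.25 on some effect scale,
# with standard error 0.10 -- illustrative numbers only.
obs, se = -0.25, 0.10

# A wide, weakly informative prior like N(0, 10) barely moves
# the estimate: the data dominate.
wide_mean, _ = posterior_normal(0.0, 10.0, obs, se)

# A much more skeptical prior, N(0, 0.1), pulls the estimate
# substantially toward zero effect.
skeptical_mean, _ = posterior_normal(0.0, 0.1, obs, se)

print(f"posterior mean under wide prior:      {wide_mean:.3f}")
print(f"posterior mean under skeptical prior: {skeptical_mean:.3f}")
```

Under the wide prior the posterior mean stays essentially at the observed -0.25, while the skeptical prior shrinks it halfway to zero, which is why the choice between “enthusiastic” and “skeptical” priors can matter when evidence is limited.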

More broadly, GiveWell did not consider optimistic and pessimistic scenarios. We sometimes consider these for decision-making purposes if a program is below our funding bar and we want to see how plausible it is that further work would raise it above the bar. However, if a program is over our funding bar, we typically make funding decisions based on our best guess, assuming that the best guess of cost-effectiveness represents the expected value of the entire probability distribution of possible outcomes. In other words, we assume that the probability that the program is better than we think balances out the probability that it is worse than we think. We are aware that this assumption may not always be satisfied, and we may do further work on this issue.

Michael Kremer’s meta-analysis is recent and not yet published in a peer-reviewed journal, and we aren’t aware that people have publicly aligned themselves as advocates or skeptics yet. However, the external expert reviewers whom we contracted to review an earlier draft of the Kremer et al. meta-analysis were fairly skeptical of the findings.[7] Our analysis of the data reflects the reviewers’ skepticism, and our estimated effect size is quite a bit smaller than the one in the Kremer et al. meta-analysis (although the confidence intervals overlap),[8] so it would be fair to say we are also on the more skeptical side.

On unintended consequences of charity-run programs

Q: What do you make of the criticism that charities reduce the pressure put on local governments to deliver better services?

A: This is a reasonable critique. We believe that there could be knock-on effects from funding charities to implement water chlorination (or other) programs, such as making governments less likely to provide these services themselves, and right now we are not very confident in our ability to measure the likelihood of such effects. However, we believe there is a role for private funding to play in getting vital services to people who need them. Water treatment is a service that is clearly needed and not currently provided by the governments where Dispensers for Safe Water operates.[9] Based on our conversations with Evidence Action and others, it also appears to be highly neglected by private funders, and therefore well suited for impact-motivated philanthropy to step in.

Additionally, many programs are set up to be partnerships between government and non-governmental organizations, in which the NGO provides training, monitoring and evaluation activities, or other forms of “technical assistance” to government staff, who actually deliver the services. We’ve found that these partnerships can result in more impact than NGOs acting alone.

For example, Deworm the World Initiative supports the governments of India, Kenya, Nigeria, and Pakistan in their implementation of deworming campaigns (more here). Against Malaria Foundation works with countries’ national malaria programs to determine funding needs and allocation decisions for insecticide-treated net campaigns (more here).

Beyond our top charities, we’ve recommended grants to organizations that provide technical assistance to governments to make their programs more effective. For example, we recommended a grant to another program from Evidence Action that is working to support the Liberian government in switching from HIV rapid tests during routine antenatal care to more effective dual HIV/syphilis rapid tests and syphilis treatment (more here).

On GiveWell’s research process and sources of potential new funding opportunities

Q: Could you please share the process GiveWell uses to make funding decisions—how you narrow down potential projects for investigation and then grantmaking?

A: Our process for investigating a grant consists of three stages, though not all of them apply to all grants:

  • Research review. We look at the evidence behind an intervention, talk to experts and possible implementing organizations, and build a rough cost-effectiveness model. This stage applies more to new interventions than to top charities, since we are already very familiar with top-charity programs and have directed funds to them before.
  • Strong interest. We’ve determined that the program likely is cost-effective enough for us to consider funding, and we want to move on to a deeper investigation of a specific grant opportunity.
  • Grant recommendation (called “conditional approval” internally, as the grant approval is conditional on the funder’s confirmation). We’ve done a thorough analysis of cost-effectiveness and spent a lot of time talking to the potential grantee about the funding opportunity. We’ve decided it meets our criteria for funding, and we want to recommend the grant (or make it ourselves).

When evaluating a program for funding, we look for evidence of its effectiveness, its impact per dollar spent, room for more funding (i.e., how much money it can productively use), and whether the organization seems like a strong partner that will share transparently with us and allow us to share our views about it publicly. We also consider what effect our funding in this situation will have on other funders’ decisions—i.e., whether it will cause them to allocate more or less money to this intervention (more here).

Right now we have 350 programs in our pipeline that we’d potentially like to investigate, and we’re actively investigating about 60 of them. But we don’t expect to make grants to all of them—we think probably about 15 of these active investigations will result in grants.

You can read more about our research process and see a partial list of prioritized programs here.