The GiveWell Blog

Millennium Villages Project

Several people have emailed us in the past few days asking about the new evaluation of the Millennium Villages Project (MVP), published in The Lancet last week. It has received significant attention in the development blogosphere (see, e.g., here, here, here, and here).

The evaluation argues that the MVP was responsible for a substantial drop in child mortality. However, we see a number of problems.

Summary

  • Even if the evaluation’s conclusions are taken at face value, insecticide-treated net distribution alone appears to account for 42% of the total effect on child mortality (though there is high uncertainty).
  • The MVP is much more expensive than insecticide-treated net distribution – around 45x on a per-person-per-year basis. Therefore, we believe that in order to make an argument that the MVP is the best available use of dollars, one must demonstrate effects far greater than those attained through distributing bednets. We believe the evaluation falls short on this front, and that the mortality averted by the MVP could have been averted at about 1/35th of the cost by simply distributing bednets. Note that the evaluation does not claim statistically significant impacts beyond health; all five of the reported statistically significant impacts are fairly closely connected to childhood mortality reduction.
  • There are a number of other issues with the evaluation, such that we believe the child mortality effect should not be taken at face value. We have substantial concerns about both selection bias and publication bias. In addition, a mathematical error, discovered by the World Bank’s Gabriel Demombynes and Espen Beer Prydz, overstates the reduction in child mortality, and the corrected effect appears similar to the reduction in child mortality for the countries as a whole that the MVP works in (though still greater than the reduction in mortality for the villages the MVP chose as comparisons for the evaluation). The MVP published a partial retraction with respect to this error (PDF) today.

We would guess that the MVP has some positive effects in the villages it works in – but for a project that costs as much per person as the MVP, that isn’t enough. We don’t believe the MVP has demonstrated cost-effective or sustainable benefits. We also don’t believe it has lived up (so far) to its hopes of being a “proof of concept” that can shed new light on debates over poverty.

Also see coverage of the Millennium Villages Project by David Barry, Michael Clemens, Lee Crawfurd, and Gabriel Demombynes and Espen Beer Prydz, much of which we’ve found helpful in thinking about the MVP and some of which we cite in this post.

Background

The Millennium Villages Project attempts to make significant progress towards achieving the Millennium Development Goals through a package of intensive interventions in 13 clusters of villages in rural Africa. It further aims to serve as a demonstration of the potential of integrated development efforts to cost-effectively improve lives in rural Africa. In its own words, the MVP states, “Millennium Villages are designed to demonstrate how the Millennium Development Goals can be met in rural Africa over 10 years through integrated, community-led development at very low cost.”

The drop in child mortality, and the comparison to insecticide-treated nets

The new evaluation concludes:

“Baseline levels of MDG-related spending averaged $27 per head, increasing to $116 by year 3 of which $25 was spent on health. After 3 years, reductions in poverty, food insecurity, stunting, and malaria parasitaemia were reported across nine Millennium Village sites. Access to improved water and sanitation increased, along with coverage for many maternal-child health interventions. Mortality rates in children younger than 5 years of age decreased by 22% in Millennium Village sites relative to baseline (absolute decrease 25 deaths per 1000 livebirths, p=0.015) and 32% relative to matched comparison sites (30 deaths per 1000 livebirths, p=0.033). The average annual rate of reduction of mortality in children younger than 5 years of age was three-times faster in Millennium Village sites than in the most recent 10-year national rural trends (7.8% vs 2.6%).”

In a later section, we question the size and robustness of this conclusion; here we argue that even taken at face value, it does not imply good cost-effectiveness for the MVP compared to insecticide-treated net distribution alone.

The MVP’s own accounting puts the cost per person served in the third year of treatment, including only field costs, at $116 (see the quote above). Assuming a linear ramp-up of the program, we take the average of baseline ($27/person) and third-year ($116/person) spending and estimate that the MVP spent roughly $72 per person per year during the first three years of the project. Michael Clemens has argued that their spending amounts to “roughly 100% of local income per capita.”

We should expect that amount of spending to make a difference in the short term, especially since some of it is going to cheap, proven interventions, like distributing bednets. In fact, it appears that the biggest and most robust of the 18 reported impacts was increased bednet usage.

The proportion of under-5 children sleeping under bednets in the MVP villages in year 3 was 36.7 percentage points higher than the proportion in the comparison villages. The Cochrane Review on bednet distribution estimates that “5.53 deaths [are] averted per 1000 children protected per year.” (See note.) If we assume that 80% of bednets distributed are used, the additional bednet usage rate (36.7 percentage points) found in MVP’s survey indicates that MVP’s program led to 46 percentage points (36.7 / 80%) more children receiving bednets than in the comparison villages. (Note that using a figure lower than 80% for usage would imply a higher impact of bednets because of the way the estimate works.) Therefore, we’d estimate that for every 1000 children living in an MVP village, the bednet portion of MVP’s program alone would be expected to save 2.54 lives per year ((5.53 lives saved per year / 1000 children who receive a bednet) * 0.46 additional children receiving a bednet per child in an MVP village). Said another way, the bednet effect of the MVP program would be expected to reduce a child’s chances of dying by his or her fifth birthday by roughly 1.27 percentage points (a 0.254 percentage point reduction in mortality per year over 5 years). The total reduction in under-5 mortality observed in the evaluation was 3.05 percentage points (30.5 per 1000 live births). Thus the expected effect of increased bednet usage in the villages accounts for 42% of the observed decrease in under-5 mortality, and is within the 95% confidence interval for the total under-5 mortality reduction. (We can’t say with 95% confidence that the true total effect of the MVP on child mortality is larger than its effect due to increased bednet distribution alone.)
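To make this arithmetic easier to check, here is a minimal sketch in Python that reproduces it. All of the input figures are the ones quoted above; the 80% usage rate is our assumption, and the function and variable names are just for illustration.

```python
# Minimal sketch of the bednet-effect arithmetic above; inputs are the figures
# quoted in this post, and the usage rate is our assumption.

def bednet_share_of_mortality_drop(usage_increase_pp=36.7,   # pp more under-5s using nets (MVP vs. comparison)
                                   assumed_usage_rate=0.80,   # assumed share of distributed nets actually used
                                   deaths_averted_per_1000_protected_per_year=5.53,  # Cochrane Review estimate
                                   total_drop_per_1000_livebirths=30.5,              # observed under-5 mortality drop
                                   years=5):
    # Convert the observed increase in usage into an implied increase in the share of children covered.
    coverage_increase = (usage_increase_pp / 100) / assumed_usage_rate
    # Deaths averted per 1,000 children per year attributable to that extra coverage.
    deaths_averted_per_1000_per_year = deaths_averted_per_1000_protected_per_year * coverage_increase
    # Cumulative effect on the chance of dying before the fifth birthday, per 1,000 live births.
    drop_per_1000_livebirths = deaths_averted_per_1000_per_year * years
    return drop_per_1000_livebirths / total_drop_per_1000_livebirths

print(f"Assuming 80% usage: {bednet_share_of_mortality_drop():.0%}")                         # ~42%
print(f"Assuming 100% usage: {bednet_share_of_mortality_drop(assumed_usage_rate=1.0):.0%}")  # ~33%
```

As the second line shows, raising the assumed usage rate to 100% lowers the estimated bednet share to roughly 33%, which is effectively a floor for this estimate given our methodology (a point that comes up again in the comments below).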

Insecticide-treated nets cost roughly $6.31 each to distribute (including all costs), cover an average of 1.8 people, and last about 2.22 years (according to our best estimates). That works out to about $1.58 per person per year. At $72 per person per year, the MVP costs about 45 times as much (on a per-person-per-year basis) as net distribution. Although we would expect bednets alone to achieve a smaller effect on mortality than the MVP on a per-person-per-year basis, we estimate that the MVP could have attained the same mortality reduction at ~1/35 of the cost by simply distributing bednets (see our spreadsheet for details of the calculation).
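As a rough sketch of the cost comparison (using only the figures quoted in this post and simple division):

```python
# Rough sketch of the per-person-per-year cost comparison above;
# all figures are the ones quoted in this post.

net_cost = 6.31            # total cost to distribute one insecticide-treated net, in dollars
people_per_net = 1.8       # average number of people covered per net
net_lifetime_years = 2.22  # average useful life of a net

net_cost_per_person_year = net_cost / people_per_net / net_lifetime_years
mvp_cost_per_person_year = (27 + 116) / 2  # average of baseline and year-3 spending, roughly $72 per person per year

print(f"Nets: ~${net_cost_per_person_year:.2f} per person per year")                         # ~$1.58
print(f"MVP: ~{mvp_cost_per_person_year / net_cost_per_person_year:.0f}x the cost of nets")  # ~45x
```

The ~1/35 figure for achieving the same mortality reduction depends on additional assumptions and is worked out in our spreadsheet, so it is not reproduced here.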

If the MVP evaluation had shown other impressive impacts, then perhaps the higher costs would be well justified, but 3 of the 5 statistically significant results from the study are on bednet usage, malaria prevalence, and child mortality. (The other two are access to improved sanitation and skilled birth attendance, both of which would also be expected to manifest their benefits largely as reductions in under-5 mortality.) There were no statistically significant benefits in terms of poverty or education.

Other issues with the MVP’s evaluation

Lack of randomization in selecting treatment vs. comparison villages. The evaluation uses a comparison group of villages that were selected non-randomly at the time of follow-up, so many of the main conclusions of the evaluation are drawn simply by comparing the status of the treated and non-treated villages in year 3 of the intervention, without controlling for potential initial differences between the two groups. If the control villages started at a lower baseline level and improved over time at exactly the same rate as the treatment villages, then the treatment would appear to have an impact equal to the initial difference between the treatment and control groups before the intervention began, even though it actually had none. Even in cases in which baseline data are available from the control groups, it is possible that the group of villages selected as controls could improve more slowly than the treatment group for reasons having nothing to do with the treatment. Accordingly, there are strong structural reasons to regard the evaluation’s claims with skepticism.

Michael Clemens has written more about this issue here and here. We agree with his argument that the MVP could and seemingly should have randomized its selection of treatment vs. control villages instead, especially given its goal of serving as a proof of concept.

Publication bias concerns. The authors report 18 outcomes from the evaluation; results on 13 of them are statistically insignificant at the standard 95% confidence level (including all of the measures of poverty and education). Even if results were entirely random, we’d expect roughly one statistically significant result out of 18 comparisons. The authors find five statistically significant results, which implies that the results are unlikely to be just due to chance, but they could have explicitly addressed the fact that they checked a number of hypotheses and performed statistical adjustments for this fact, which would have increased our confidence in their results. The authors did register the study with ClinicalTrials.gov, but the protocol was first submitted in May 2010, long after the data had been collected for this study.
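As a very rough illustration of why five significant results out of 18 are nonetheless unlikely to be chance alone, here is a sketch that treats the 18 outcomes as independent tests, each with a 5% false-positive rate. The independence assumption is ours and is surely too strong (the outcomes are correlated), so the figures are only indicative.

```python
# Rough illustration only: treats the 18 reported outcomes as independent
# tests, each with a 5% chance of a false positive. In reality the outcomes
# are correlated, so this overstates how surprising five "hits" would be.
from math import comb

n, p = 18, 0.05
expected_by_chance = n * p  # ~0.9 statistically significant results expected under the null
prob_five_or_more = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(5, n + 1))

print(f"Expected significant results by chance: {expected_by_chance:.1f}")   # ~0.9
print(f"Probability of 5 or more by chance alone: {prob_five_or_more:.3f}")  # ~0.002
```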

We also note that the registration lists 22 outcomes, but the authors only report results for 18 in the paper. They explain the discrepancy as follows: “The outcome of antimalarial treatment for children younger than 5 years of age was excluded because new WHO guidelines for rapid testing and treatment at the household level invalidate questions used to construct this indicator. Questions on exclusive breast-feeding, the introduction of complementary feeding, and appropriate pneumonia treatment were not captured in our year 3 assessments.” But this only accounts for three of the four missing outcomes. This does not explain why the authors do not report results for mid-upper arm circumference (a measure of malnutrition), which the ClinicalTrials.gov protocol said they would collect.

Mathematical error in estimating the magnitude of the child-mortality drop.

Note: the MVP published a partial retraction with respect to this error (PDF) today.

At the World Bank’s Development Impact Blog, Gabriel Demombynes and Espen Beer Prydz point out a mathematical error in the evaluation’s claim that “The average annual rate of reduction of mortality in children younger than 5 years of age was three-times faster in Millennium Village sites than in the most recent 10-year national rural trends (7.8% vs 2.6%).”

Essentially, they used the wrong time frame in calculating the decline in the Millennium Villages: to estimate the per-year decline in childhood mortality, they divided the difference between average childhood mortality during the 3-year treatment period and average childhood mortality during the preceding 5-year baseline period by three. As Demombynes and Prydz point out, however, this mistakenly assumes that the time difference between the 3-year average and the 5-year average is 3 years, when it is in fact 4 years:

[When we originally published this post in 2012, we included a link here to an image stored on a World Bank web server. In 2020, we learned that this image link was broken and were unable to successfully replace it. We apologize for the omission of this image.]

This shifts the annual decline in child mortality from 7.8% to 5.9% (though see David Barry and Michael Clemens’ comments here for more discussion of the assumptions behind these calculations).
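For readers who want to check the size of this correction, here is a minimal sketch. It assumes (our assumption) that the 7.8% figure is a compound annual rate of decline; the point is simply that spreading the same total decline over 4 years instead of 3 shrinks the annual rate to roughly 5.9%.

```python
# Minimal sketch of the correction, assuming (our assumption) that the paper's
# 7.8% figure is a compound annual rate of decline computed over a 3-year gap.

reported_annual_rate = 0.078                                 # annual rate of decline claimed in the paper
implied_total_decline = 1 - (1 - reported_annual_rate) ** 3  # total relative decline over the period (~22%)

# Re-annualize the same total decline over the correct 4-year gap.
corrected_annual_rate = 1 - (1 - implied_total_decline) ** (1 / 4)

print(f"Implied total decline: {implied_total_decline:.1%}")  # ~21.6%
print(f"Corrected annual rate: {corrected_annual_rate:.1%}")  # ~5.9%
```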

The adjusted figure for child mortality improvement is no better for the MVP villages than for national trends. Demombynes and Prydz go on to argue that using a more appropriate and up-to-date data set for national trends in childhood mortality yields an average decline of 6.4% a year, faster than in the Millennium Villages, and that the average reductions in rural areas are even larger.

Note, however, that this argument is saying that the comparison group in the study is not representative of the broader trend, not that the Millennium Villages did not improve relative to the comparison group.

Conclusion

The Millennium Villages Project is a large, multi-sectoral, long-term set of interventions. The new evaluation suggests, though it does not prove, that the MVP is making progress in reducing childhood mortality, but at great cost. It does not provide any evidence that the MVP is reducing poverty or improving education, its other main goals. These results from the first three years of implementation, if taken seriously, are discouraging. The primary benefits of the intervention so far–reductions in childhood mortality–could have been achieved at much lower costs by simply distributing bednets.

Note: the Cochrane estimate of 5.53 deaths averted per 1,000 children protected per year does not assume perfect usage. Our examination of the studies that went into the Cochrane estimate found that most studies report usage rates in the range of 60-80%, though some report 90%+ usage.

Comments

  • Sarah Chapman on May 21, 2012 at 7:00 pm said:

    Thanks for the interesting post. But I don’t think your analysis quite does the MVP justice. Maybe the magnitude of the so-called ‘bednet effect’ in Millennium Villages is so high precisely due to the simultaneous investments made by the project in complementary programs that make bednet distribution more efficient and reinforce utilization? These include substantial investments in a community health worker program in each Millennium Village, whose workers assist in distribution and actively reinforce utilization at the household level. Simply throwing bednets at communities, without the complementary investments in broader health systems strengthening seen in Millennium Villages, is likely to result in a much smaller ‘bednet effect’.

  • Alexander on May 22, 2012 at 10:15 am said:

    Sarah: Thanks for the comment. Given the way we actually calculated the “bednet effect,” I think it’s pretty unlikely, though not impossible, that the community health workers or other co-interventions are driving the observed bednet effect. In particular, when estimating the proportion of the mortality benefits attributable to bednets, we relied only on the observed increase in bednet usage in the MVP villages and on outside estimates of the effect of bednet distribution on usage and mortality; the effect estimates themselves do not come from the MVP. The only way that the MVP co-interventions could affect our calculation is through the assumption about the proportion of nets distributed that are used; we assumed 80% to get the 42% bednet effect estimate. If, most charitably for the MVP, we assume that 100% of the bednets distributed there were used, the estimated bednet effect falls to 33% (see our spreadsheet (XLS)). But, given our methodology, that is effectively a floor on the estimate of the bednet effect; it can’t go lower than 33%.

    The more general reason that I think it’s tough to argue that the MVP’s co-interventions are driving the observed bednet effects is that the estimate we’re using for the mortality benefits of the bednet distributions doesn’t come from the MVP, it comes from a meta-analysis by the Cochrane Review of randomized controlled trials of bednet distributions. We discuss those studies in some depth here. In short, there’s some reason to believe that the effects they observed relied on fairly intensive distribution of bednets (e.g. multiple lessons about how to use bednets), but little reason to believe that they relied on the more general type of health system strengthening practiced by the MVP.

    Note that it would be nice if we could get an estimate of the “bednet effect” on mortality internal to the MVP, which would be possible using the data they compiled for the Lancet paper. (We would essentially regress the village-level changes in mortality rates on the increase in bednet usage in each village.) However, because they have not released the data that went into the Lancet paper, this internal comparison is not possible.

  • Daniel C on July 10, 2012 at 3:49 am said:

    I think you are missing the point. MVP’s purpose is not merely to reduce child mortality.

    MVP’s goal is to get out of the poverty trap. To do this they work on Food production and nutrition, education, sanitation, medicine, infrastructure… and yes, they also do bed nets.

    Our goal should not be to just have fewer kids dying. I think that’s rather shortsighted. The MVP is trying to take the most extremely poor villages and make them self-sustaining… So of course, if you only look at child mortality rate and ignore 95% of what they work on, they will look expensive.

  • Alexander on July 10, 2012 at 9:54 am said:

    Daniel – thanks for the comment, and sorry for the lack of clarity in the post. I should have emphasized more strongly that the MVP evaluation discussed in the post does not find any statistically significant impacts on wealth or poverty, and points to the decline in child mortality as the main evidence of the MVP’s success.

    So while I agree with you that it is an ambitious project with multiple goals, taking their own evaluation at its word implies that their “success” could have been achieved at far lower cost by just using bednets.

  • Daniel C on July 10, 2012 at 10:52 am said:

    Alex,

    Thanks for the clarification. Though I disagree, it is good to see where you are coming from.

    The MVP did a study on child mortality. This is not evidence that they are measuring the whole success of MVP based purely on child mortality. Thus, I think you are acting on a false and unfair premise.

    Imagine that next month they publish an article on the increase in food production in the MV. Would you write another article condemning the MVP because it costs more than a program that only tries to increase food production?

    It is entirely reasonable to do a study on MVP’s effect on one specific variable. That doesn’t mean that this one variable is its sole and only goal… It is perfectly reasonable to do one study on child mortality, another on sanitation and another on housing quality. And if you take any one of these as the only goal of MVP, you are almost guaranteed to reach the conclusion that MVP isn’t worth it, regardless of the reality of MVP.

    On another note, your post would have been more fair if it had included the replies from the authors to the criticism of the paper. That includes their acceptance of much of the criticism and the changes made to improve data collection and analysis in the future.

    http://www.lancet.com/journals/lancet/article/PIIS0140-6736(12)60787-9/fulltext

    http://www.millenniumvillages.org/field-notes/millennium-villages-project-corrects-lancet-paper

    http://retractionwatch.wordpress.com/2012/05/31/millennium-villages-project-forced-to-correct-lancet-paper-on-foreign-aid-as-leader-leaves-team/

  • Alexander on July 11, 2012 at 2:20 pm said:

    Daniel – thanks for the continued discussion.

    Two notes:

    • First, this wasn’t a study of “a specific variable.” The study measured 22 outcomes, only 18 of which are included in the published paper. Of those 18 outcomes, 13–including all of the measures of wealth and poverty, nutrition, child health (except mortality), and primary education–found no statistically significant effect. So if the study only measured child mortality but still held out hope for impact on other unmeasured outcomes, I agree that it would be unfair to attribute all spending to the reduction in child mortality, because as you point out, another study could find different results. But that is not the case: this study looked for all of the results that you mention, and didn’t find them. Given that, it seems much more appropriate to say, “the benefits documented in this study could have been achieved much more cost-effectively by focusing on the successful elements, namely bednets.”
    • Second, two of the three replies from the study authors that you link to had not been published at the time I wrote the post. The one that had been published was a blog post about the authors’ retraction of the numerical mistake from the study, which I did in fact link to (see the last bullet point in the “Summary” section of the post). Anyway, I do agree that their response to the criticism was admirable, especially with respect to the retraction.

Comments are closed.