Several people have emailed us in the past few days asking about the new evaluation of the Millennium Villages Project (MVP), published in The Lancet last week. It has received significant attention in the development blogosphere (see, e.g., here, here, here, and here).
The evaluation argues that the MVP was responsible for a substantial drop in child mortality. However, we see a number of problems.
- Even if the evaluation’s conclusions are taken at face value, insecticide-treated net distribution alone appears to account for 42% of the total effect on child mortality (though there is high uncertainty).
- The MVP is much more expensive than insecticide-treated net distribution – around 45x on a per-person basis. Therefore, we believe that in order to make an argument that the MVP is the best available use of dollars, one must demonstrate effects far greater than those attained through distributing bednets. We believe the evaluation falls short on this front, and that the mortality averted by the MVP could have been averted at about 1/35th of the cost by simply distributing bednets. Note that the evaluation does not claim statistically significant impacts beyond health; all five of the reported statistically significant impacts are fairly closely connected to childhood mortality reduction.
- There are a number of other issues with the evaluation, such that we believe the child mortality effect should not be taken at face value. We have substantial concerns about both selection bias and publication bias. In addition, a mathematical error, discovered by the World Bank’s Gabriel Demombynes and Espen Beer Prydz, overstates the reduction in child mortality, and the corrected effect appears similar to the reduction in child mortality for the countries as a whole that the MVP works in (though still greater than the reduction in mortality for the villages the MVP chose as comparisons for the evaluation). The MVP published a partial retraction with respect to this error (PDF) today.
We would guess that the MVP has some positive effects in the villages it works in – but for a project that costs as much per person as the MVP, that isn’t enough. We don’t believe the MVP has demonstrated cost-effective or sustainable benefits. We also don’t believe it has lived up (so far) to its hopes of being a “proof of concept” that can shed new light on debates over poverty.
Also see coverage of the Millennium Villages Project by David Barry, Michael Clemens, Lee Crawfurd, and Gabriel Demombynes and Espen Beer Prydz, much of which we’ve found helpful in thinking about the MVP and some of which we cite in this post.
The Millennium Villages Project attempts to make significant progress towards achieving the Millennium Development Goals through a package of intensive interventions in 13 clusters of villages in rural Africa. It further aims to serve as a demonstration of the potential of integrated development efforts to cost-effectively improve lives in rural Africa. In its own words, the MVP states, “Millennium Villages are designed to demonstrate how the Millennium Development Goals can be met in rural Africa over 10 years through integrated, community-led development at very low cost.”
The new evaluation concludes:
“Baseline levels of MDG-related spending averaged $27 per head, increasing to $116 by year 3 of which $25 was spent on health. After 3 years, reductions in poverty, food insecurity, stunting, and malaria parasitaemia were reported across nine Millennium Village sites. Access to improved water and sanitation increased, along with coverage for many maternal-child health interventions. Mortality rates in children younger than 5 years of age decreased by 22% in Millennium Village sites relative to baseline (absolute decrease 25 deaths per 1000 livebirths, p=0.015) and 32% relative to matched comparison sites (30 deaths per 1000 livebirths, p=0.033). The average annual rate of reduction of mortality in children younger than 5 years of age was three-times faster in Millennium Village sites than in the most recent 10-year national rural trends (7.8% vs 2.6%).”
In a later section, we question the size and robustness of this conclusion; here we argue that even taken at face value, it does not imply good cost-effectiveness for the MVP compared to insecticide-treated net distribution alone.
The MVP’s own accounting puts the cost per person served in the third year of treatment, including only field costs, at $116 (see the quote above). Assuming linear ramp-up of the program, we take the average of baseline ($27/person) and third-year ($116/person) spending and estimate that the MVP spent roughly $72 per person per year during the first three years of the project. Michael Clemens has argued that the MVP’s spending amounts to “roughly 100% of local income per capita.”
We should expect that amount of spending to make a difference in the short term, especially since some of it is going to cheap, proven interventions, like distributing bednets. In fact, it appears that the biggest and most robust impact of the 18 reported outcomes was increased bednet usage.
The proportion of under-5 children sleeping under bednets in the MVP villages in year 3 was 36.7 percentage points higher than the proportion in the comparison villages. The Cochrane Review on bednet distribution estimates that “5.53 deaths [are] averted per 1000 children protected per year.” (See note.) If we assume that 80% of distributed bednets are used, the additional usage rate (36.7 percentage points) found in MVP’s survey indicates that MVP’s program led to 46 percentage points (36.7 / 80%) more villagers receiving bednets than in the control villages. (Note that using a figure lower than 80% for usage would imply a higher impact of bednets, because of the way the estimate works.)

Therefore, we’d estimate that for every 1000 children living in an MVP village, the bednet portion of MVP’s program alone would be expected to save 2.54 lives per year ((5.53 lives saved per year / 1000 children who receive a bednet) * 0.46 additional children receiving a bednet per child in an MVP village). Said another way, the bednet component alone would be expected to reduce a child’s chances of dying by his or her fifth birthday by roughly 1.27 percentage points (a 0.254 percentage point reduction in mortality per year over 5 years). The total reduction in under-five mortality observed in the evaluation was 3.05 percentage points (30.5 per 1000 live births). Thus the expected effect of increased bednet usage alone accounts for 42% of the observed decrease in under-5 mortality, and is within the 95% confidence interval for the total under-5 mortality reduction. (In other words, we can’t say with 95% confidence that the true total effect of the MVP on child mortality is larger than the effect of increased bednet distribution alone.)
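The arithmetic above can be checked with a short script; the inputs are the figures reported in the evaluation and the Cochrane Review as quoted, plus our assumed 80% usage rate:

```python
# Back-of-the-envelope check of the bednet share of the mortality effect,
# using the figures quoted above (evaluation + Cochrane Review).
extra_usage = 0.367            # extra under-5 bednet usage in MVP villages (36.7 pp)
assumed_usage_rate = 0.80      # assumed fraction of distributed nets actually used
cochrane_effect = 5.53 / 1000  # deaths averted per child protected per year

# Implied extra coverage (nets received, not just used): 36.7 / 80% ~ 46 pp
extra_coverage = extra_usage / assumed_usage_rate

# Expected lives saved per 1000 children per year from the bednet component alone
lives_per_1000_per_year = cochrane_effect * extra_coverage * 1000

# Cumulative reduction in the chance of dying before age five (5 years of protection)
under5_reduction = cochrane_effect * extra_coverage * 5

observed_reduction = 30.5 / 1000  # total observed drop in under-5 mortality
bednet_share = under5_reduction / observed_reduction

print(f"{extra_coverage:.0%} extra coverage")                          # ~46%
print(f"{lives_per_1000_per_year:.2f} lives/1000 children/year")       # ~2.54
print(f"{bednet_share:.0%} of the observed mortality drop")            # ~42%
```

Note how the assumed usage rate drops out in the direction favorable to the MVP: assuming lower usage would imply more nets distributed per unit of observed usage, and hence a larger expected bednet effect.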
Insecticide-treated nets cost roughly $6.31 (including all costs) to distribute and cover an average of 1.8 people and last 2.22 years (according to our best estimates). That works out to about $1.58 per person per year. At $72 per person per year, the MVP costs about 45 times as much (on a per-person-per-year basis) as net distribution. Although we would expect bednets to achieve a smaller effect on mortality than MVP on a per-person-per-year basis, we estimate that the MVP could have attained the same mortality reduction at ~1/35 of the cost by simply distributing bednets (see our spreadsheet for details of the calculation).
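As a rough sketch, the cost comparison works out as follows; the per-net cost, coverage, and lifespan figures are our estimates from above, and the MVP figure averages the reported baseline and year-3 spending:

```python
# Rough cost comparison: MVP vs. stand-alone bednet distribution,
# using the estimates quoted above.
net_cost = 6.31       # all-in cost to distribute one insecticide-treated net ($)
people_per_net = 1.8  # average people covered per net
net_lifespan = 2.22   # years a net lasts

cost_per_person_year_nets = net_cost / (people_per_net * net_lifespan)  # ~$1.58

# MVP spending: average of baseline ($27/person) and year-3 ($116/person)
mvp_cost_per_person_year = (27 + 116) / 2  # ~$72

ratio = mvp_cost_per_person_year / cost_per_person_year_nets
print(f"nets: ${cost_per_person_year_nets:.2f}/person/year")
print(f"MVP:  ${mvp_cost_per_person_year:.0f}/person/year (~{ratio:.0f}x)")
```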
If the MVP evaluation had shown other impressive impacts, then perhaps the higher costs would be well justified, but 3 of the 5 statistically significant results from the study are on bednet usage, malaria prevalence, and child mortality. (The other two are access to improved sanitation and skilled birth attendance, both of which would also be expected to manifest benefits in terms of reductions in under-5 mortality.) There were no statistically significant benefits in terms of poverty or education.
Lack of randomization in selecting treatment vs. comparison villages. The evaluation uses a comparison group of villages that were selected non-randomly at the time of follow-up, so many of the main conclusions of the evaluation are drawn simply by comparing the status of the treated and non-treated villages in year 3 of the intervention, without controlling for potential initial differences between the two groups. If the control villages started at a lower baseline level and improved over time at exactly the same rate as the treatment villages, the treatment would appear to have an impact equal to the initial difference between the treatment and control groups, before the intervention began, even though it actually had none. Even in cases in which baseline data is available for the control groups, it is possible that the villages selected as controls could improve more slowly than the treatment group for reasons having nothing to do with the treatment. Accordingly, there are strong structural reasons to regard the evaluation’s claims with skepticism.
Michael Clemens has written more about this issue here and here. We agree with his argument that the MVP could and seemingly should have randomized its selection of treatment vs. control villages instead, especially given its goal of serving as a proof of concept.
Publication bias concerns. The authors report 18 outcomes from the evaluation; results on 13 of them are statistically insignificant at the standard 95% confidence level (including all of the measures of poverty and education). Even if results were entirely random, we’d expect roughly one statistically significant result out of 18 comparisons. The authors find five statistically significant results, which implies that the results are unlikely to be just due to chance, but they could have explicitly addressed the fact that they checked a number of hypotheses and performed statistical adjustments for this fact, which would have increased our confidence in their results. The authors did register the study with ClinicalTrials.gov, but the protocol was first submitted in May 2010, long after the data had been collected for this study.
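To make the multiple-comparisons point concrete: with 18 tests at the 95% level, pure chance yields about 0.9 “significant” results on average, and 5 or more is quite unlikely. A quick binomial sketch (assuming independent outcomes, which these surely are not exactly):

```python
from math import comb

n, p = 18, 0.05  # 18 outcomes, 5% false-positive rate per test

# Expected number of spuriously "significant" results under pure chance
expected = n * p  # 0.9

# Probability of seeing 5 or more significant results by chance alone
p_at_least_5 = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(5, n + 1))
print(f"expected by chance: {expected:.1f}")
print(f"P(>=5 significant by chance): {p_at_least_5:.4f}")
```

This is only an illustration of the intuition; a formal treatment would use a standard multiple-comparisons adjustment (e.g., Bonferroni), which is the kind of correction we would have liked to see in the paper.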
We also note that the registration lists 22 outcomes, but the authors only report results for 18 in the paper. They explain the discrepancy as follows: “The outcome of antimalarial treatment for children younger than 5 years of age was excluded because new WHO guidelines for rapid testing and treatment at the household level invalidate questions used to construct this indicator. Questions on exclusive breast-feeding, the introduction of complementary feeding, and appropriate pneumonia treatment were not captured in our year 3 assessments.” But this only accounts for three of the four missing outcomes. This does not explain why the authors do not report results for mid-upper arm circumference (a measure of malnutrition), which the ClinicalTrials.gov protocol said they would collect.
Mathematical error in estimating the magnitude of the child-mortality drop.
Note: the MVP published a partial retraction with respect to this error (PDF) today.
At the World Bank’s Development Impact Blog, Gabriel Demombynes and Espen Beer Prydz point out a mathematical error in the evaluation’s claim that “The average annual rate of reduction of mortality in children younger than 5 years of age was three-times faster in Millennium Village sites than in the most recent 10-year national rural trends (7.8% vs 2.6%).”
Essentially, the authors used the wrong time frame in calculating the decline in the Millennium Villages: to estimate the per-year decline in childhood mortality, they divided the difference between average childhood mortality during the 3-year treatment period and the prior 5-year baseline period by three. As Demombynes and Prydz point out, however, this mistakenly assumes that the time between the two averages is 3 years, when it is in fact 4 years: the midpoint of the 5-year baseline falls 2.5 years before the intervention began, and the midpoint of the 3-year treatment period falls 1.5 years after it began.
This shifts the annual decline in child mortality from 7.8% to 5.9% (though see David Barry and Michael Clemens’ comments here for more discussion of the assumptions behind these calculations).
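The structure of the error can be sketched numerically. We use the reported 22% cumulative decline as a round illustrative input; the published 7.8% and 5.9% figures come from the exact survey values, so this sketch will not reproduce them precisely:

```python
# Annualizing the same cumulative mortality decline over 3 vs. 4 years.
cumulative_decline = 0.22                # reported drop relative to baseline (illustrative)
survival_ratio = 1 - cumulative_decline  # end-period mortality / baseline mortality

# Midpoints: the 5-year baseline average sits 2.5 years before the start of the
# intervention; the 3-year treatment average sits 1.5 years after; gap = 4 years.
annual_over_3 = 1 - survival_ratio ** (1 / 3)  # the evaluation's (mistaken) framing
annual_over_4 = 1 - survival_ratio ** (1 / 4)  # the corrected framing

print(f"annualized over 3 years: {annual_over_3:.1%}")
print(f"annualized over 4 years: {annual_over_4:.1%}")
```

Spreading the same cumulative decline over one extra year necessarily shrinks the annual rate, which is the whole substance of the correction.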
The adjusted figure for child mortality improvement is no better for the MVP villages than for national trends. Demombynes and Prydz go on to argue that using a more appropriate and up-to-date data set for national trends in childhood mortality yields an average decline of 6.4% a year, better than in the Millennium Villages, and that the average reductions in rural areas are even larger.
Note, however, that this argument is saying that the comparison group in the study is not representative of the broader trend, not that the Millennium Villages did not improve relative to the comparison group.
The Millennium Villages Project is a large, multi-sectoral, long-term set of interventions. The new evaluation suggests, though it does not prove, that the MVP is making progress in reducing childhood mortality, but at great cost. It does not provide any evidence that the MVP is reducing poverty or improving education, its other main goals. These results from the first three years of implementation, if taken seriously, are discouraging. The primary benefits of the intervention so far–reductions in childhood mortality–could have been achieved at much lower costs by simply distributing bednets.
Note: the Cochrane estimate of 5.53 deaths averted per 1,000 children protected per year does not assume perfect usage. Our examination of the studies that went into the Cochrane estimate found that most studies report usage rates in the range of 60-80%, though some report 90%+ usage.