The GiveWell Blog

Deciding between two outstanding charities

We’ve recently published our updated charity recommendations, featuring two top charities (Against Malaria Foundation and Schistosomiasis Control Initiative) that score well on all of our criteria. In this post, we discuss how we decided which of these two charities to rank #1 and which to rank #2.

Both charities are executing health programs that deliver significant and very cheap help to people in the developing world. Both have strong track records and transparency, as well as concrete plans for how to use future donations.

Here’s what we see as the major relative pros and cons:

SCI has a more complete and convincing case that its past activities have had the intended outcomes.

  • AMF has consistently gotten nets delivered to communities – and given the strong evidence on the impact of nets, this in itself is stronger evidence of impact than for nearly any other charity we’ve seen – but there are still some gaps in the picture. We aren’t sure whether, or for how long, nets are used properly, and we don’t have data on what has happened to malaria prevalence (though our research on nets in general has led us to believe that neither of these is a huge concern). AMF has made credible commitments to future data collection on both of these fronts (and has collected some data for the former).
  • By contrast, SCI’s evidence shows substantial drops in disease prevalence. This evidence has some issues (which we discuss in the review), but overall we find it convincing.

This consideration is balanced somewhat by the fact that we are more confident in the quality-of-life significance of reducing malaria than of reducing parasitic infections.

AMF has more upside.

  • It’s smaller, and appears to be earlier in its development (having just begun its first larger-scale distribution); the chance that GiveWell-influenced money can be crucial in its development is therefore higher.
  • It’s working in an area – distribution of nets – where (a) an enormous amount of money is spent each year,* and (b) data on long-term usage and malaria prevalence following distributions still looks to us to be pretty thin. Well-executed and well-documented distributions could be valuable as pilots and as information for the hundreds of millions of dollars worth of other distributions going on.

We have more confidence in AMF as an organization. Both AMF and SCI are outstanding on this front: both are transparent and accountable with strong track records, and both have answered all our questions well. However,

  • We’ve consistently (for more than a year now) found AMF noticeably easier to communicate with, and found it to address our concerns noticeably more clearly and directly. With AMF, we are more confident that we have gotten our questions fully answered, that we won’t later hear about something we should have heard about before, and that we will be able to learn about how our funds end up being used and whether things end up going well or poorly.
  • SCI’s evaluation is outstanding, but may have been driven by its major funders (the Gates Foundation; DFID). With AMF, we are more convinced that the organization itself is committed to skeptical self-questioning, evaluation and improvement based on evaluation.
  • Very broadly, all GiveWell staff agree that we have more general confidence in AMF’s operations and management than SCI’s. This is a completely subjective judgment call that isn’t attributable to any particular event – it’s just a general feeling based on the hours of conversations we’ve had with both organizations. This leads us to be more confident that AMF would make decisions we would ultimately agree with or understand in the face of new circumstances.

We are sufficiently confident in the people behind both SCI and AMF to feature them as top charities, but our confidence for AMF is higher, and if we kept this information to ourselves we wouldn’t feel that we’re telling donors the whole story. Ultimately, it’s hard to be 100% sure of how your money will be used before you give it; confidence in the people you’re giving to is an important factor.

We are more confident in malaria-related research than in deworming-related research. This is a topic we’ll be writing about more in the future. In brief,

  • We have done extensive research on both nets and deworming. Studies on the former have consistently raised fewer unanswered questions and red flags than studies on the latter.
  • Despite the work we’ve done, we still have many unanswered questions about both deworming and nets.
  • We would guess that our unanswered questions will result in fewer negative adjustments for the nets, because we find the research – and by extension, the researchers – around nets to be more reliable.

The most important deciding factor for us comes down to a combination of cost-effectiveness and room for more funding.

  • We believe that in general, the vast bulk of SCI’s expenditures go toward deworming children rather than adults (see the example of Yemen), and that this is a good thing because a major part of the case for deworming is the possibility of developmental impacts for people treated in childhood.
  • We believe that deworming children is cost-effective – perhaps not quite as cost-effective (by our estimations) as net distribution, but close enough to make it a non-obvious call between the two.
  • However, the activities that SCI would fund with additional dollars (in the range of what we’re likely to be able to send their way) look a bit different. Note that in Mozambique, the plan is to take children who have already been selected for planned every-other-year deworming and instead deworm them every year; we have little information to shed light on the likely marginal benefit here. Other potential activities include deworming selected and particularly at-risk adults. Overall, we feel that these activities will still accomplish substantial good, but that they’re unlikely to be as cost-effective as standard deworming of children.

Bottom line. SCI is among the best giving opportunities we’ve ever seen, and we recommend it to donors. However, GiveWell staff unanimously find AMF to be an even stronger opportunity.

There are obviously a lot of judgment calls here, and we are hoping to move substantial donations to each organization so that we can follow the progress of each and learn more for the future (we see this opportunity to learn as a major value in and of itself, in terms of making us better able to maximize the impact of future donations).

*See pages 12-13 of the World Malaria Report: in 2009-2010, the Global Fund and PMI alone spent ~$1.5 billion a year on malaria control, of which about 1/3 was for nets specifically.

Conference call discussing our top charities, Dec. 8, 7p Eastern

We put a lot of effort into making our research process and reasoning transparent so that anyone can understand and vet the thinking behind our charity recommendations.

Consistent with this, we will be holding a conference call on December 8, 7p Eastern, open to anyone who registers via our online form. Staff will take questions by email and answer them over the conference line.

If you can’t make this date but would be interested in joining another call at a later date, you can indicate this on the registration form.

If you’re thinking of giving to one of our top charities this year, or you’re just curious about our thinking, we welcome you to join.

Register for the Dec. 8 GiveWell Conference Call

If you’ve already emailed us about your intention to attend, there’s no need to submit the form.

Top charities for holiday season 2011: Against Malaria Foundation and Schistosomiasis Control Initiative

GiveWell has published our annual update on how to accomplish as much good as possible with your donations.

Our top two charities – out of hundreds we’ve examined – are (1) the Against Malaria Foundation, which fights malaria using insecticide-treated bednets, and (2) the Schistosomiasis Control Initiative, which treats children for intestinal worms.

Our update is the result of a full year of intensive research: examining hundreds of charities, contacting the most promising ones, and completing in-depth investigations that include

  • Conversations with representatives
  • Examination of internal documentation including monitoring and evaluation reports, budgets, and plans for using additional funding
  • Review of independent literature and evidence on the effectiveness of the charities’ programs
  • Site visits to charities’ work in the field

We have published the full details of our process, including a list of all charities examined and reviews for those examined in-depth.

Our top two charities are outstanding on all fronts. They execute proven, cost-effective programs for helping people. They have strong track records. They have concrete future plans and room for more funding. They are transparent and accountable to donors.

We also have identified five other standout organizations for donors interested in other causes. These are GiveDirectly (cash grants to poor households in Kenya), Innovations for Poverty Action (research on how to fight poverty and promote development), Nyaya Health (healthcare in rural Nepal), Pratham (primary education in India), and Small Enterprise Foundation (microfinance in South Africa).

Note that last year’s top-rated charity, VillageReach, does not have unmet short-term funding needs (it expects to be able to meet its projected needs with funds not driven by GiveWell), as discussed previously.

The charities above all work in the developing world. Our top recommendation for donors who want to support causes in the United States is KIPP Houston, an outstanding network of charter schools facing budget cuts.

Over the last year, we drove over $1.6 million to our top-rated charities. We hope to drive substantially more over the coming year.

Maximizing cost-effectiveness via critical inquiry

We’ve recently been writing about the shortcomings of formal cost-effectiveness estimation (i.e., trying to estimate how much good, as measured in lives saved, DALYs or other units, is accomplished per dollar spent). After conceptually arguing that cost-effectiveness estimates can’t be taken literally when they are not robust, we found major problems in one of the most prominent sources of cost-effectiveness estimates for aid, and generalized from these problems to discuss major hurdles to usefulness faced by the endeavor of formal cost-effectiveness estimation.

Despite these misgivings, we would be determined to make cost-effectiveness estimates work, if we thought this were the only way to figure out how to allocate resources for maximal impact. But we don’t. This post argues that when information quality is poor, the best way to maximize cost-effectiveness is to examine charities from as many different angles as possible – looking for ways in which their stories can be checked against reality – and support the charities that have a combination of reasonably high estimated cost-effectiveness and maximally robust evidence. This is the approach GiveWell has taken since our inception, and it is more similar to investigative journalism or early-stage research (other domains in which people look for surprising but valid claims in low-information environments) than to formal estimation of numerical quantities.

The rest of this post:

  • Conceptually illustrates (using the mathematical framework laid out previously) the value of examining charities from different angles when seeking to maximize cost-effectiveness.
  • Discusses how this conceptual approach matches the approach GiveWell has taken since inception.

Conceptual illustration
I previously laid out a framework for making a “Bayesian adjustment” to a cost-effectiveness estimate. I stated (and posted the mathematical argument) that when considering a given cost-effectiveness estimate, one must also consider one’s prior distribution (i.e., what is predicted for the value of one’s actions by other life experience and evidence) and the variance of the estimate error around the cost-effectiveness estimate (i.e., how much room for error the estimate has). This section works off of that framework to illustrate the potential importance of examining charities from multiple angles – relative to formally estimating their cost-effectiveness – in low-information environments.

I don’t wish to present this illustration either as official GiveWell analysis or as “the reason” that we believe what we do. This is more of an illustration/explication of my views than a justification; GiveWell has implicitly (and intuitively) operated consistent with the conclusions of this analysis, long before we had a way of formalizing these conclusions or the model behind them. Furthermore, while the conclusions are broadly shared by GiveWell staff, the formal illustration of them should only be attributed to me.

The model

Suppose that:

  • Your prior over the “good accomplished per $1000 given to a charity” is normally distributed with mean 0 and standard deviation 1 (denoted from this point on as N(0,1)). Note that I’m not saying that you believe the average donation has zero effectiveness; I’m just denoting whatever you believe about the impact of your donations in units of standard deviations, such that 0 represents the impact your $1000 has when given to an “average” charity and 1 represents the impact your $1000 has when given to “a charity one standard deviation better than average” (top 16% of charities).
  • You are considering a particular charity, and your back-of-the-envelope initial estimate of the good accomplished by $1000 given to this charity is represented by X. It is a very rough estimate and could easily be completely wrong: specifically, it has a normally distributed “estimate error” with mean 0 (the estimate is as likely to be too optimistic as too pessimistic) and standard deviation X (so 16% of the time, the actual impact of your $1000 will be 0 (“average”) or worse).* Thus, your estimate is denoted as N(X,X).

The implications

I use “initial estimate” to refer to the formal cost-effectiveness estimate you create for a charity – along the lines of the DCP2 estimates or Back of the Envelope Guide estimates. I use “final estimate” to refer to the cost-effectiveness you should expect, after considering your initial estimate and making adjustments for the key other factors: your prior distribution and the “estimate error” variance around the initial estimate. The following chart illustrates the relationship between your initial estimate and final estimate based on the above assumptions.

Note that there is an inflection point (X=1), past which your final estimate falls as your initial estimate rises. With such a rough estimate, the maximum value of your final estimate is 0.5, no matter how high your initial estimate says the value is: once your initial estimate goes “too high,” the final estimated cost-effectiveness falls.
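To make the adjustment concrete, here is a minimal sketch of the calculation in Python. This is my own illustration under the stated assumptions (a N(0,1) prior and a N(X,X) estimate), not code from the original analysis; the function name and the sample values of X are arbitrary.

```python
# A minimal sketch of the normal-normal Bayesian update implied by the
# assumptions above: prior N(0, 1), initial estimate N(X, X), i.e. an
# estimate whose error has standard deviation equal to the estimate itself.

def final_estimate(x, prior_mean=0.0, prior_sd=1.0):
    """Posterior ("final") expected good per $1000, given an initial
    estimate x whose estimate error has standard deviation x (x > 0)."""
    prior_var = prior_sd ** 2
    est_var = x ** 2
    precision = 1 / prior_var + 1 / est_var
    return (prior_mean / prior_var + x / est_var) / precision

for x in [0.5, 1, 2, 5, 10, 100, 1000]:
    print(f"initial estimate {x:>6}: final estimate {final_estimate(x):.3f}")

# With these assumptions the expression simplifies to x / (1 + x**2): it
# peaks at 0.5 when x = 1 and declines toward 0 as the initial estimate grows.
```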

This is in some ways a counterintuitive result. A couple of ways of thinking about it:

  • Informally: estimates that are “too high,” to the point where they go beyond what seems easily plausible, seem – by this very fact – more uncertain and more likely to have something wrong with them. Again, this point applies to very rough back-of-the-envelope style estimates, not to more precise and consistently obtained estimates.
  • Formally: in this model, the higher your estimate of cost-effectiveness goes, the higher the error around that estimate is (both are represented by X), and thus the less information is contained in this estimate in a way that is likely to shift you away from your prior. This will be an unreasonable model for some situations, but I believe it is a reasonable model when discussing very rough (“back-of-the-envelope” style) estimates of good accomplished by disparate charities. The key component of this model is that of holding the “probability that the right cost-effectiveness estimate is actually ‘zero’ [average]” constant. Thus, an estimate of 1 has a 67% confidence interval of 0-2; an estimate of 1000 has a 67% confidence interval of 0-2000; the former is a more concentrated probability distribution.

Now suppose that you make another, independent estimate of the good accomplished by your $1000, for the same charity. Suppose that this estimate is equally rough and comes to the same conclusion: it again has a value of X and a standard deviation of X. So you have two separate, independent “initial estimates” of good accomplished, and both are N(X,X). Properly combining these two estimates into one yields an estimate with the same average (X) but less “estimate error” (standard deviation = X/sqrt(2)). Now the relationship between X and adjusted expected value changes:

Now you have a higher maximum (for the final estimated good accomplished) and a later inflection point – higher estimates can be taken more seriously. But it’s still the case that “too high” initial estimates lead to lower final estimates.

The following charts show what happens if you manage to collect even more independent cost-effectiveness estimates, each one as rough as the others, each one with the same midpoint as the others (i.e., each is N(X,X)).

The pattern here is that when you have many independent estimates, the key figure is X, or “how good” your estimates say the charity is. But when you have very few independent estimates, the key figure is K – how many different independent estimates you have. More broadly – when information quality is good, you should focus on quantifying your different options; when it isn’t, you should focus on raising information quality.
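The same sketch extends naturally to K equally rough, independent estimates, each N(X,X); combined, they behave like a single estimate with standard deviation X/sqrt(K). (Again, this is an illustration of my own under the assumptions above, not the spreadsheet behind the charts.) The maximum of the final estimate rises to sqrt(K)/2 and the inflection point moves out to X = sqrt(K):

```python
# Sketch of the update with k equally rough, independent estimates, each with
# mean x and standard deviation x; combined they act like one estimate with
# mean x and variance x**2 / k.

def final_estimate(x, k, prior_mean=0.0, prior_sd=1.0):
    prior_var = prior_sd ** 2
    combined_var = x ** 2 / k
    precision = 1 / prior_var + 1 / combined_var
    return (prior_mean / prior_var + x / combined_var) / precision

for k in [1, 2, 4, 10]:
    peak_x = k ** 0.5  # the inflection point moves out to sqrt(k)
    print(f"k={k:>2}: final estimate peaks at x={peak_x:.2f}, "
          f"value {final_estimate(peak_x, k):.3f}")  # peak value is sqrt(k)/2
```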

A few other notes:

  • The full calculations behind the above charts are available here (XLS). We also provide another Excel file that is identical except that it assumes a standard deviation for each estimate of X/2, rather than X. This places “0” just inside your 95% confidence interval for the “correct” version of your estimate. While the inflection points are later and higher, the basic picture is the same.
  • It is important to have a cost-effectiveness estimate. If the initial estimate is too low, then regardless of evidence quality, the charity isn’t a good one. In addition, very high initial estimates can imply higher potential gains to further investigation. However, “the higher the initial estimate of cost-effectiveness, the better” is not strictly true.
  • Independence of estimates is key to the above analysis. In my view, different formal estimates of cost-effectiveness are likely to be very far from independent because they will tend to use the same background data and assumptions and will tend to make the same simplifications that are inherent to cost-effectiveness estimation (see previous discussion of these simplifications here and here). Instead, when I think about how to improve the robustness of evidence and thus reduce the variance of “estimate error,” I think about examining a charity from different angles – asking critical questions and looking for places where reality may or may not match the basic narrative being presented. As one collects more data points that support a charity’s basic narrative (and weren’t known to do so prior to investigation), the variance of the estimate falls, which is the same thing that happens when one collects more independent estimates. (Though it doesn’t fall as much with each new data point as it would with one of the idealized “fully independent cost-effectiveness estimates” discussed above.)
  • The specific assumption of a normal distribution isn’t crucial to the above analysis. I believe (based mostly on a conversation with Dario Amodei) that for most commonly occurring distribution types, if you hold the “probability of 0 or less” constant, then as the midpoint of the “estimate/estimate error” distribution approaches infinity the distribution becomes approximately constant (and non-negligible) over the area where the prior probability is non-negligible, resulting in a negligible effect of the estimate on the prior. While other distributions may involve later/higher inflection points than normal distributions, the general point that there is a threshold past which higher initial estimates no longer translate to higher final estimates holds for many distributions.

The GiveWell approach
Since the beginning of our project, GiveWell has focused on maximizing the amount of good accomplished per dollar donated. Our original business plan (written in 2007 before we had raised any funding or gone full-time) lays out “ideal metrics” for charities such as

number of people whose jobs produce the income necessary to give them and their families a relatively comfortable lifestyle (including health, nourishment, relatively clean and comfortable shelter, some leisure time, and some room in the budget for luxuries), but would have been unemployed or working completely non-sustaining jobs without the charity’s activities, per dollar per year. (Systematic differences in family size would complicate this.)

Early on, we weren’t sure whether we would find good enough information to quantify these sorts of things. After some experience, we came to the view that most cost-effectiveness analysis in the world of charity is extraordinarily rough, and we then began using a threshold approach, preferring charities whose cost-effectiveness is above a certain level but not distinguishing past that level. This approach is conceptually in line with the above analysis.

It has been remarked that “GiveWell takes a deliberately critical stance when evaluating any intervention type or charity.” This is true, and in line with how the above analysis implies one should maximize cost-effectiveness. We generally investigate charities whose estimated cost-effectiveness is quite high in the scheme of things, and so for these charities the most important input into their actual cost-effectiveness is the robustness of their case and the number of factors in their favor. We critically examine these charities’ claims and look for places in which they may turn out not to match reality; when we investigate these and find confirmation rather than refutation of charities’ claims, we are finding new data points that support what they’re saying. We’re thus doing something conceptually similar to “increasing K” according to the model above. We’ve recently written about all the different angles we examine when strongly recommending a charity.

We hope that the content we’ve published over the years, including recent content on cost-effectiveness (see the first paragraph of this post), has made it clear why we think we are in fact in a low-information environment, and why, therefore, the best approach is the one we’ve taken, which is more similar to investigative journalism or early-stage research (other domains in which people look for surprising but valid claims in low-information environments) than to formal estimation of numerical quantities.

As long as the impacts of charities remain relatively poorly understood, we feel that focusing on robustness of evidence holds more promise than focusing on quantification of impact.

*This implies that the variance of your estimate error depends on the estimate itself. I think this is a reasonable thing to suppose in the scenario under discussion. Estimating cost-effectiveness for different charities is likely to involve using quite disparate frameworks, and the value of your estimate does contain information about the possible size of the estimate error. In our model, what stays constant across back-of-the-envelope estimates is the probability that the “right estimate” would be 0; this seems reasonable to me.

Some considerations against more investment in cost-effectiveness estimates

When we started GiveWell, we were very interested in cost-effectiveness estimates: calculations aiming to determine, for example, the “cost per life saved” or “cost per DALY saved” of a charity or program. Over time, we’ve found ourselves putting less weight on these calculations, because we’ve been finding that these estimates tend to be extremely rough (and in some cases badly flawed).

One can react to what we’ve been finding in different ways: one can take it as a sign that we need to invest more in cost-effectiveness estimation (in order to make it more accurate and robust), or one can take it as a sign that we need to invest less in cost-effectiveness estimation (if one believes that estimates are unlikely to become robust enough to take literally and that their limited usefulness can be achieved with less investment). At this point we are tentatively leaning more toward the latter view; this post lays out our thinking on why.

This post does not argue against the conceptual goal of maximizing cost-effectiveness, i.e., achieving the maximal amount of good per dollar donated. We strongly support this conceptual goal; rather, we are arguing that focusing on directly estimating cost-effectiveness is not the best way to maximize cost-effectiveness. We believe there are alternative ways of maximizing cost-effectiveness – in particular, making limited use of cost-effectiveness estimates while focusing on finding high-quality evidence (an approach we have argued for previously and will likely flesh out further in a future post).

In a nutshell, we argue that the best currently available cost-effectiveness estimates – despite having extremely strong teams and funding behind them – have the problematic combination of being extremely simplified (ignoring important but difficult-to-quantify factors), extremely sensitive (small changes in assumptions can lead to huge changes in the figures), and not reality-checked (large flaws can persist unchecked – and unnoticed – for years). We believe it is conceptually difficult to improve on all three of these at once: improving on the first two is likely to require substantially greater complexity, which in turn will worsen the ability of outsiders to understand and reality-check estimates. Given the level of resources that have been invested in creating the problematic estimates we see now, we’re not sure that really reliable estimates can be created using reasonable resources – or, perhaps, at all.

We expand on these points using the case study of deworming, the only DCP2 estimate that we have enough detail on to be able to fully understand and reconstruct.

Simplicity of the estimate
The estimate is extremely simplified. It consists of

  • Costs: two possible figures for “cost per child treated,” one for generic drugs and one for name-brand drugs. These figures are drawn from a single paper (a literature review published 3 years prior to the publication of the estimate); costs are assumed to scale linearly with the number of children treated, and to be constant regardless of the region.
  • Drug effectiveness: for each infection, a single “effectiveness” figure is used, i.e., treatment is assumed to reduce disease burden by a set percentage for a given disease. For each infection, a single paper is used as the source of this “effectiveness” figure.
  • Symptoms averted: the prevalence of different symptoms is assumed to be different by region, but the regions are broad (there are 6 total regions). Prevalence figures are taken from a single paper. The severity of each symptom is assumed to be constant regardless of context, using standard disability weights. Effective treatment is presumed to prevent symptoms for exactly one year, with no accounting for externalities, side effects, or long-term effects (in fact, in the original calculation even deaths are assumed to be averted for only one year).
  • Putting it all together: the estimate calculates benefits of deworming by estimating the number of children cured of each symptom for a single year (based on the six regional figures re: how common symptoms are), converting to DALYs using its single set of figures on how severe each symptom is, and multiplying by the single drug effectiveness figure. It divides these DALY-denominated benefits into the costs, which are again done using a single per-child figure.

No sensitivity analysis is included to examine how cost-effectiveness would vary if certain figures or assumptions turned out to be off. No adjustments are made to address issues such as (a) the high uncertainty of many of the figures (which has implications for overall cost-effectiveness); (b) the fact that figures are taken from a relatively small number of studies, and are thus likely to be based on unusually well-observed programs.

In our view, any estimate this simple and broad has very limited application when examining a specific charity operating in a specific context.
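To make the degree of simplification concrete, here is a rough sketch of the calculation’s overall structure. The numbers below are made-up placeholders, not the DCP2’s actual inputs:

```python
# Illustrative sketch of the structure described above, with placeholder
# numbers (NOT the DCP2's actual figures).

cost_per_child = 0.50        # single "cost per child treated" figure
drug_effectiveness = 0.9     # single effectiveness figure per infection
duration_years = 1.0         # benefits assumed to last exactly one year

# One prevalence figure per symptom (for one broad region) and one
# context-independent disability weight per symptom.
symptoms = {
    # symptom: (prevalence among treated children, disability weight)
    "symptom_a": (0.05, 0.02),
    "symptom_b": (0.01, 0.10),
}

dalys_averted_per_child = sum(
    prevalence * weight * drug_effectiveness * duration_years
    for prevalence, weight in symptoms.values()
)
cost_per_daly = cost_per_child / dalys_averted_per_child
print(f"cost per DALY: ${cost_per_daly:,.0f}")

# Every input is a single point value: no uncertainty ranges, no sensitivity
# analysis, and no adjustment for context beyond the choice of broad region.
```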

Sensitivity of the estimate
The estimate is extremely sensitive to changes in inputs. In the course of examining it and trying different approaches to estimating the cost-effectiveness of deworming, we arrived at each of the following figures at one point or another:

Cost per DALY for STH treatment | Key assumptions behind this cost
$3.41 | original DCP2 calculation
$23.92 | + corrected disability weight of ascariasis symptoms
$256 | - corrected disability weight of ascariasis symptoms; + corrected prevalence interpretation for all STHs and symptoms and disability weight of trichuriasis symptoms
$529 | + corrected disability weight of ascariasis symptoms
$385 | + incorrectly accounting for long-term effects
$326 | - incorrectly accounting for long-term effects; + corrected duration of trichuriasis symptoms
$138 | + correctly accounting for long-term effects
$82.54 | Jonah’s independent estimate, implicitly accounting for long-term effects and using lower drug costs

Our final corrected version of the DCP2’s estimate varies heavily across regions as well:

Cost per DALY for STH treatment | Region
$77.39 | East Asia & Pacific
$83.16 | Latin America & Caribbean
$412.22 | Middle East & North Africa
$202.69 | South Asia
$259.57 | Sub-Saharan Africa

Lack of reality-checks
As we wrote previously, we believe that a helminth expert reviewing this calculation would have noticed the errors that we pointed to. This is because when one examines the details of the (uncorrected) estimate, it becomes clear that nearly all of the benefits of deworming are projected to come from a single symptom of a single disease – a symptom which is, in fact, only believed to be about 1/20 as severe as the calculation implies, and only about 1/100 as common.
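As a stylized illustration of how such an error can hide in plain sight (again with placeholder numbers, not the DCP2’s actual inputs), here is what happens to a calculation of this form when one symptom is entered as roughly 100x too common and 20x too severe:

```python
# Placeholder illustration: one mis-entered symptom dominating the estimate.

def cost_per_daly(symptoms, cost_per_child=0.50, effectiveness=0.9):
    dalys = sum(prev * weight * effectiveness for prev, weight in symptoms.values())
    return cost_per_child / dalys

# symptom: (prevalence, disability weight)
uncorrected = {"dominant_symptom": (0.50, 0.20),   # ~100x too common, ~20x too severe
               "other_symptoms":   (0.01, 0.05)}
corrected   = {"dominant_symptom": (0.005, 0.01),
               "other_symptoms":   (0.01, 0.05)}

print(f"uncorrected: ${cost_per_daly(uncorrected):,.0f} per DALY")
print(f"corrected:   ${cost_per_daly(corrected):,.0f} per DALY")

# The headline figure moves by roughly two orders of magnitude, yet neither
# output looks obviously wrong unless someone digs into the inputs.
```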

So why wasn’t the error caught between its 2006 publication (and numerous citations) and our 2011 investigation? We can’t be sure, but we can speculate that

  • The DALY metric – while it has the advantage of putting all health benefits in the same units – is unintuitive. We don’t believe it is generally possible to look at a cost-per-DALY figure and compare it with one’s informal knowledge of an intervention’s costs and benefits (though it is more doable when the benefits are concentrated in preventing mortality, which eliminates one of the major issues with interpreting DALYs).
  • That means that in order to reality-check an estimate, one needs to look at the details of how it was calculated.
  • But looking at the details of how an estimate is calculated is generally a significant undertaking – even for an estimate as simple as this one. It requires a familiarity with the DALY framework and with the computational tools being used (in this case Excel) that a subject matter expert – the sort of person who would be best positioned to catch major problems – wouldn’t necessarily have. And it may require more time than such a subject matter expert will realistically have available.

In most domains, a badly flawed calculation – when used – will eventually produce strange results and be noticed. In aid, by contrast, one can use a completely wrong figure indefinitely without ever finding out. The only mechanism for catching problems is to have a figure that is sufficiently easy to understand that outsiders (i.e., those who didn’t create the calculation) can independently notice what’s off. It appears that the DCP2 estimates do not pass this test.

Our point here isn’t about the apparent lack of formal double-check in the DCP2’s process (though this does affect our view of the DCP2) but about the lack of reality-check in the 5 years since publication – the fact that at no point did anyone notice that the figure seemed off, and investigate its origin.

And the problem pertains to more than “catching errors”; it also pertains to being able to notice when the calculation becomes out of line with (for example) new technologies, new information about the diseases and interventions in question, or local conditions in a specific case. An estimate that can’t be – or simply isn’t – continually re-examined for its overall and local relevance may be “correct,” but its real-world usefulness seems severely limited.

The dilemma: the less simplified and sensitive, the more esoteric
It currently appears to us that the general structure of these estimates is too simplified and sensitive to be reliable without relatively constant reality-checks from outsiders (particularly subject matter experts), but so complex and esoteric that these reality-checks haven’t been taking place.

Improving the robustness and precision of the estimates would likely have to mean making them far more complex, which in turn could make it far more difficult for outsiders (including subject matter experts) to make sense of them, adapt them to new information and local conditions, and give helpful feedback.

The resources that have already been invested in these cost-effectiveness estimates are significant. Yet in our view, the estimates are still far too simplified, sensitive, and esoteric to be relied upon. If such a high level of financial and (especially) human-capital investment leaves us this far from having reliable estimates, it may be time to rethink the goal.

All that said – if this sort of analysis were the only way to figure out how to allocate resources for maximal impact, we’d be advocating for more investment in cost-effectiveness analysis and we’d be determined to “get it right.” But in our view, there are other ways of maximizing cost-effectiveness that can work better in this domain – in particular, making limited use of cost-effectiveness estimates while focusing on finding high-quality evidence (an approach we have argued for previously and will likely flesh out further in a future post).