How do GiveWell’s funding decisions influence the actions of governments, funders, and other organizations? Answering this question is an important part of figuring out which global health programs are most cost-effective and thus which we should support. We’ve already written about two key factors in our cost-effectiveness estimates: the cost per person reached and the overall burden of the problem. But those are only part of the equation.
We also consider what others are likely to do in response to our choices. For example, does our funding displace money the local government had planned to allocate to the program? Or would our funding make other funders more excited to join us in making sure the program is implemented?
Wedding registries provide a loose analogy for how one person's decision might influence another's: If great-aunt Sally already bought the toaster on the list, you're probably not going to buy the lucky couple another one. The money she spent on the toaster has displaced the funding you had planned to allocate to the toaster: this is what we call "fungibility."
In contrast, if the spouses-to-be have signed up for flatware service for 12 and other guests have purchased only 6 settings, you might prioritize filling out the remainder of the set, to be sure that the couple doesn’t run out of spoons at their upcoming dinner parties. In that case, the guests who purchased the first 6 settings are “crowding in” funding from you: this is what we call “leverage.”
Let’s think about how this might apply to health programs. Suppose we’re considering a $10 million grant for a program to increase childhood vaccination in (fictional) Beleriand. GiveWell’s initial cost-effectiveness estimate showed that the program was almost 20x as cost-effective as unconditional cash transfers. (We use cash transfers as a benchmark for comparing different programs.) This estimate makes the program initially seem like a good candidate for funding, as it surpasses our current cost-effectiveness threshold of 10x.
But what if there were a possibility that the government would have funded the program if GiveWell hadn't? Because money is fungible, our $10 million grant would displace funds that the government might otherwise have allocated, freeing up the government to spend its $10 million in some other way. The arrival of the funding has the practical effect of allowing the government to spend the funds on a lower-priority (and lower-impact) program that it otherwise would not have had enough money to pay for. The ultimate effect of GiveWell's funding is not the high-impact vaccine program, which would have happened anyway, but the lower-impact alternative, which wouldn't.
Of course, that’s not the only way GiveWell’s grantmaking might affect others. Imagine another scenario: there’s a program in (equally fictional) Adumar to treat parasitic infections. Providing the treatment to all of the country’s young children would cost $20 million. If GiveWell provides a grant, we think it’s likely that one of the major pharmaceutical companies there would donate medication worth $3 million. The arrival of the funding has the practical effect of inducing the company to contribute medication, which it would not have done if the program did not have implementation funding.
Here’s a real example showing how we calculate these effects. Malaria Consortium’s seasonal malaria chemoprevention (SMC) program is one of GiveWell’s top charities. Through conversations with government officials, other funders, and experts in the field, our researchers estimate that in Sokoto State, Nigeria, there’s almost no chance[1] that the domestic government would fund the program if we didn’t, but there’s a 20% chance that the Global Fund or the President’s Malaria Initiative (PMI), the two largest global malaria funders, would fund it in our place. In contrast, in Togo, we think there’s a 65% chance that one of these funders would step in. Those probabilities, along with our estimates of the cost-effectiveness of other uses of government and Global Fund/PMI funding,[2] affect a program’s cost-effectiveness. The higher likelihood that another funder would step in is one important reason that our overall estimate of SMC’s cost-effectiveness is much lower in Togo (around 11x cash) than in Sokoto (around 30x cash).
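The funging logic in this example can be sketched as a simple expected-value calculation. This is an illustrative simplification using only the numbers quoted above, not GiveWell's actual model, which includes many other adjustments (which is why the resulting multipliers don't exactly match the 11x/30x gap):

```python
# Simplified sketch of a "funging" adjustment (illustrative, not GiveWell's
# full model). If another funder would fund the program with probability p,
# our grant's marginal impact in that branch is only the value of that
# funder's next-best use of its money, since the program happens either way.

def funging_adjustment(p_other_funder, counterfactual_value_fraction):
    """Multiplier on the program's unadjusted cost-effectiveness.

    p_other_funder: chance another funder steps in if we don't fund.
    counterfactual_value_fraction: value of that funder's alternative
        spending, as a fraction of the program's value (GiveWell quotes
        ~0.14 for counterfactual Global Fund spending vs. SMC).
    """
    return (1 - p_other_funder) + p_other_funder * counterfactual_value_fraction

sokoto = funging_adjustment(0.20, 0.14)  # ~0.83: most of the impact retained
togo = funging_adjustment(0.65, 0.14)    # ~0.44: impact roughly halved
```

With a 65% chance of displacement, more than half of the grant's expected impact in Togo comes from the lower-value branch, which is one reason the bottom-line estimate there is so much lower.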
***
Of course, the possibility that we are displacing or encouraging other funding is only one of many factors we consider as we look for the most cost-effective global health and poverty alleviation programs. Our researchers spend more than 50,000 hours each year poring over evidence and making judgment calls, with the goal of directing funding where it can do the most good.
Notes
[1] While we recognize that there is some chance of the government stepping in to fund the program, we believe that chance is negligible. We use 0% in our model for simplicity, as our bottom-line estimate is not sensitive to small changes in this parameter.
[2] We currently estimate that counterfactual government spending is about 5% as valuable as seasonal malaria chemoprevention, and counterfactual Global Fund funding is about 14% as valuable. For more information about our estimates, see this section of our report on SMC.
Comments
I understand the importance of fungibility and the probability that another funder might step in, but where do you get estimates of probability like “a 20% chance” that the Global Fund or the President’s Malaria Initiative (PMI) would fund SMC, or “a 65% chance” in Togo? Why not 1%, or 5%, or 80%? Is there actual evidence behind your estimated probabilities, or are you just pulling those numbers “out of a hat”? Why not use ranges, like 10-30% or 50-70% chance?
Hi Angelo,
Thanks for your questions! It’s true that these estimates are necessarily subjective guesses. You can read more about how we come up with these estimates here.
In the past, we’ve occasionally done backward-looking analyses to understand how frequently programs we decided not to fund got funded without us (e.g., this page). We hope to do more of this in the future to help get more quantitative benchmarks.
We specify a best guess (e.g., 20%) because we have to make a decision on these grants and rely, in part, on the bottom-line cost-effectiveness estimate. We agree these estimates have a lot of uncertainty.
In our intervention reports and grant pages, we include 25th/75th percentile ranges to show how much uncertainty we have (e.g., see the “Adjustment for diverting other actors’ spending away from SMC (“funging”)” line in the simple CEA). We think showing these ranges highlights how much uncertainty we have on these parameters.
Our intervention reports also highlight funging, especially for SMC, as a major area of uncertainty (e.g., see “How could we be wrong?” in our SMC intervention report).
When a program has a 20% chance of being funded by another org instead of GiveWell, and the counterfactual impact of GiveWell in that scenario is instead to fund that other org’s marginal project, do you make an attempt to assess the value of that marginal project, or do you treat the money as completely wasted? (or something else?)
I suppose in the case of fungibility, assuming the marginal project has zero value is a conservative assumption, but in the case of “crowding in,” it’s not a conservative assumption: by motivating them to fund this, they may not be funding something else, and that something else may have had some impact (presumably less, if they’re correct to reallocate funding, but still some).
If we believe that GiveWell’s opportunities are generally “much better than average”, perhaps it makes sense to round all these things to zero. For example, suppose the project targeted is 10x better than the marginal project, then failing to account for the impact the marginal project would have had is just the difference between that estimate being 10x or 9x, which though not a tiny effect seems probably smaller than other uncertainties in the analysis.
But if the org you’re displacing is (say) specifically a malaria control org, it seems more plausible that the effectiveness gap between the best and the marginal project isn’t so large?
Hi Ben,
Thanks for your questions! We don’t assume the money is wasted. We have assumptions in the models about the cost-effectiveness of the crowded out money—many of those assumptions are captured in this spreadsheet.
Reference points:
– We assign 116 ‘units of value’ to averting the death of a child under the age of 5 from malaria, and 1.44 units of value to increasing ln(consumption) by one unit for one person for one year (~doubling their income for a year).
– This translates into 0.0034 units of value per dollar spent on unconditional cash transfers. Our cost-effectiveness bar is 10x (or 8x for top charities) that value, i.e. 0.034 units of value per dollar.
Here are a couple examples of assumptions about value of crowded out funding:
– We assume 0.0070 units of value per dollar for Gavi funding that’s crowded out on the margin. That’s about 2x unconditional cash transfers. That’s driven by Gavi’s historic success in fundraising, which we may need to revisit after its fundraising round in the coming year.
– We assume 0.015 units of value per dollar for the Global Fund to fight AIDS, Tuberculosis, and Malaria, based on an analysis of how unused funding was reallocated. That’s about 4.5x unconditional cash transfers.
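The conversions above can be checked with a bit of arithmetic. This is just a sketch using the numbers quoted in this comment; the constants are GiveWell's stated reference points, not a full model:

```python
# Arithmetic check on the reference points quoted above.
CASH_VALUE_PER_DOLLAR = 0.0034  # units of value per $ of unconditional cash transfers
BAR_MULTIPLE = 10               # GiveWell's cost-effectiveness bar (10x cash)

# The funding bar in units of value per dollar: 10 * 0.0034 = 0.034
bar = BAR_MULTIPLE * CASH_VALUE_PER_DOLLAR

def multiple_of_cash(value_per_dollar):
    """Express a funding stream's value per dollar as a multiple of cash transfers."""
    return value_per_dollar / CASH_VALUE_PER_DOLLAR

gavi = multiple_of_cash(0.0070)         # ~2.1x cash, matching "about 2x"
global_fund = multiple_of_cash(0.015)   # ~4.4x cash, matching "about 4.5x"
```

So crowded-out Global Fund dollars are modeled as substantially more valuable than cash transfers, but still well below the 10x bar, which is why displacing them reduces but doesn't eliminate a grant's estimated impact.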