The GiveWell Blog

Update on Against Malaria Foundation’s costs

New cost estimates for AMF’s 2012 distributions

In a blog post in February, we noted that our cost estimates had missed some costs incurred by AMF’s distribution partner, Concern Universal. We have since assessed these costs through discussions with Concern Universal.

In the course of our assessment, we revisited our estimates of all other distribution costs as well, and decided that the most informative cost estimates for donors are 2012 projected distribution costs. The reason for this is that as of November 2011, AMF has shifted to larger-scale distributions, which it will continue in 2012; these distributions are more cost-effective than previous distributions.

We have now calculated the 2012 projected costs. The total cost per net is lower than our previous estimate, even including the extra distribution partner costs mentioned above. We estimate a total cost of $5.54 per net for 2012 distributions, compared to a previous estimated total cost of $6.31 per net. This figure includes estimates of all costs incurred by all organizations participating in the distribution, including AMF, AMF’s distribution partners and local actors that work with AMF’s distribution partners.

The bulk of the change is due to the fact that AMF expects to distribute a million nets in 2012 – over twice the number it distributed in any previous year – while its organizational costs are likely to remain stable. Another contributor to the lower cost is that the equivalent cost of the donated services that AMF receives has decreased (both in the past year and projected for 2012). See our updated AMF review for full details.

We also calculated the marginal cost per net, which is projected to be $5.15 for 2012. The marginal cost excludes AMF organizational costs, because we believe these are unlikely to rise as additional nets are distributed (details in our updated AMF review). The marginal cost per net is slightly higher than our previous estimate (which was about $5 per net), since it includes an extra $0.15 in costs incurred by the distribution partners (for details on these costs, see below).

Updated cost per life saved

Using the 2012 projected costs per LLIN, we estimate the cost per child life saved through an AMF LLIN distribution at about $1,600 using the marginal cost ($5.15 per LLIN) and about $1,700 using the total cost ($5.54 per LLIN).

See our spreadsheet analysis for details of our cost per life saved estimate.

Missing distribution partner costs

We have now gathered information on the missing costs from AMF’s distribution partner, Concern Universal. These missing costs have added an additional $0.15 per net. They consist of costs for salaries and office overhead that were incurred by both Concern Universal and by the Malawi government (which pays the salaries of health workers who assisted in the net distribution). Concern Universal did not initially tell us about these costs because they were costs that it incurred regardless of whether the distribution took place. However, we prefer to include all costs incurred to carry out a project, because we believe that this gives the best view of what it costs to achieve a particular impact (such as saving a life), and also avoids the lack of clarity and complications of leverage in charity.

Full details on these costs are available in our costs spreadsheet and our updated AMF review.

Millennium Villages Project

Several people have emailed us in the past few days asking about the new evaluation of the Millennium Villages Project (MVP), published in The Lancet last week. It has received significant attention in the development blogosphere (see, e.g., here, here, here, and here).

The evaluation argues that the MVP was responsible for a substantial drop in child mortality. However, we see a number of problems.

Summary

  • Even if the evaluation’s conclusions are taken at face value, insecticide-treated net distribution alone appears to account for 42% of the total effect on child mortality (though there is high uncertainty).
  • The MVP is much more expensive than insecticide-treated net distribution – around 45x on a per-person basis. Therefore, we believe that in order to make an argument that the MVP is the best available use of dollars, one must demonstrate effects far greater than those attained through distributing bednets. We believe the evaluation falls short on this front, and that the mortality averted by the MVP could have been averted at about 1/35th of the cost by simply distributing bednets. Note that the evaluation does not claim statistically significant impacts beyond health; all five of the reported statistically significant impacts are fairly closely connected to childhood mortality reduction.
  • There are a number of other issues with the evaluation, such that we believe the child mortality effect should not be taken at face value. We have substantial concerns about both selection bias and publication bias. In addition, a mathematical error, discovered by the World Bank’s Gabriel Demombynes and Espen Beer Prydz, overstates the reduction in child mortality, and the corrected effect appears similar to the reduction in child mortality for the countries as a whole that the MVP works in (though still greater than the reduction in mortality for the villages the MVP chose as comparisons for the evaluation). The MVP published a partial retraction with respect to this error (PDF) today.

We would guess that the MVP has some positive effects in the villages it works in – but for a project that costs as much per person as the MVP, that isn’t enough. We don’t believe the MVP has demonstrated cost-effective or sustainable benefits. We also don’t believe it has lived up (so far) to its hopes of being a “proof of concept” that can shed new light on debates over poverty.

Also see coverage of the Millennium Villages Project by David Barry, Michael Clemens, Lee Crawfurd, and Gabriel Demombynes and Espen Beer Prydz, much of which we’ve found helpful in thinking about the MVP and some of which we cite in this post.

Background

The Millennium Villages Project attempts to make significant progress towards achieving the Millennium Development Goals through a package of intensive interventions in 13 clusters of villages in rural Africa. It further aims to serve as a demonstration of the potential of integrated development efforts to cost-effectively improve lives in rural Africa. In its own words, the MVP states, “Millennium Villages are designed to demonstrate how the Millennium Development Goals can be met in rural Africa over 10 years through integrated, community-led development at very low cost.”

The drop in child mortality, and the comparison to insecticide-treated nets

The new evaluation concludes:

“Baseline levels of MDG-related spending averaged $27 per head, increasing to $116 by year 3 of which $25 was spent on health. After 3 years, reductions in poverty, food insecurity, stunting, and malaria parasitaemia were reported across nine Millennium Village sites. Access to improved water and sanitation increased, along with coverage for many maternal-child health interventions. Mortality rates in children younger than 5 years of age decreased by 22% in Millennium Village sites relative to baseline (absolute decrease 25 deaths per 1000 livebirths, p=0.015) and 32% relative to matched comparison sites (30 deaths per 1000 livebirths, p=0.033). The average annual rate of reduction of mortality in children younger than 5 years of age was three-times faster in Millennium Village sites than in the most recent 10-year national rural trends (7.8% vs 2.6%).”

In a later section, we question the size and robustness of this conclusion; here we argue that even taken at face value, it does not imply good cost-effectiveness for the MVP compared to insecticide-treated net distribution alone.

The MVP’s own accounting puts the cost per person served in the third year of treatment, including only field costs, at $116 (see the quote above). Assuming linear ramp-up of the program, we take the average of baseline ($27/person) and third-year ($116/person) spending and estimate that MVP spent roughly $72 per person per year during the first three years of the project. Michael Clemens has argued that this spending amounts to “roughly 100% of local income per capita.”
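The averaging step above can be sketched in a few lines; the baseline and year-3 figures come from the MVP evaluation quoted earlier, and the linear ramp-up is our assumption.

```python
# Per-person spending estimate, assuming linear ramp-up from baseline
# to year-3 spending (both figures are from the MVP evaluation).
baseline_spending = 27   # $/person/year at baseline
year3_spending = 116     # $/person/year in year 3, field costs only

# Under linear ramp-up, average spending over the period is the midpoint.
avg_spending = (baseline_spending + year3_spending) / 2
print(round(avg_spending))  # roughly $72 per person per year
```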

We should expect that amount of spending to make a difference in the short term, especially since some of it is going to cheap, proven interventions, like distributing bednets. In fact, it appears that the biggest and most robust impact of the 18 reported was increasing the usage of bednets.

The proportion of under-5 children sleeping under bednets in the MVP villages in year 3 was 36.7 percentage points higher than the proportion in the comparison villages. The Cochrane Review on bednet distribution estimates that “5.53 deaths [are] averted per 1000 children protected per year.” (See note.) If we assume that 80% of bednets distributed are used, the additional bednet usage rate (36.7 percentage points) found in MVP’s survey indicates that MVP’s program led to 46 percentage points (36.7 / 80%) more villagers receiving bednets than in the control villages. (Note that using a figure lower than 80% for usage would imply a higher impact of bednets because of the way the estimate works.) Therefore, we’d estimate that for every 1000 children living in an MVP village, the bednet portion of MVP’s program alone would be expected to save 2.54 lives per year ((5.53 lives saved per year / 1000 children who receive a bednet) * 0.46 additional children receiving a bednet per child in an MVP village). Said another way, the bednet effect of the MVP program would be expected to reduce a child’s chances of dying by his or her fifth birthday by roughly 1.27 percentage points (0.254% reduction in mortality per year over 5 years). The total reduction in under-five mortality observed in the evaluation was 3.05 percentage points (30.5 per 1000 live births). Thus the expected effect of increasing bednet usage in the villages accounts for 42% of the observed decrease in under-5 mortality, and is within the 95% confidence interval for the total under-5 mortality reduction. (We can’t say with 95% confidence that the true total effect of the MVP on child mortality is larger than just its effect due to increased bednet distribution.)
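The chain of arithmetic above can be sketched as follows; all inputs come from the text (the Cochrane estimate and MVP’s usage survey), and the 80% usage rate is the assumption stated there.

```python
# Back-of-the-envelope: how much of the observed under-5 mortality drop
# could bednets alone explain? All inputs are from the text; the 80%
# usage rate is an assumption.
deaths_averted_per_child_year = 5.53 / 1000  # Cochrane: per child protected per year
usage_rate = 0.80                            # assumed share of distributed nets in use

extra_usage = 0.367                          # 36.7 pp more under-5s under nets (year 3)
extra_coverage = extra_usage / usage_rate    # ~0.46 more children receiving a net

lives_saved_per_1000_per_year = deaths_averted_per_child_year * extra_coverage * 1000
# ~2.54 lives saved per year per 1000 children in an MVP village

observed_reduction_per_1000 = 30.5           # under-5 deaths per 1000 live births
share_explained = (lives_saved_per_1000_per_year * 5) / observed_reduction_per_1000
# ~0.42: bednets alone account for roughly 42% of the observed decrease
```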

Insecticide-treated nets cost roughly $6.31 (including all costs) to distribute and cover an average of 1.8 people and last 2.22 years (according to our best estimates). That works out to about $1.58 per person per year. At $72 per person per year, the MVP costs about 45 times as much (on a per-person-per-year basis) as net distribution. Although we would expect bednets to achieve a smaller effect on mortality than MVP on a per-person-per-year basis, we estimate that the MVP could have attained the same mortality reduction at ~1/35 of the cost by simply distributing bednets (see our spreadsheet for details of the calculation).
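The cost comparison in the preceding paragraph works out as below; the net cost, coverage, and lifespan figures are our estimates from the text, and the $72/person/year MVP figure is the ramp-up average derived earlier.

```python
# Cost per person per year of bednet distribution vs. the MVP,
# using the estimates stated in the text.
net_cost = 6.31          # $ per net, all costs included
people_per_net = 1.8     # average people covered per net
net_lifespan = 2.22      # years a net lasts

cost_per_person_year = net_cost / (people_per_net * net_lifespan)
# ~$1.58 per person per year for bednet distribution

mvp_cost_per_person_year = 72   # from the ramp-up average above
cost_ratio = mvp_cost_per_person_year / cost_per_person_year
# ~45x: the MVP costs about 45 times as much per person per year
```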

If the MVP evaluation had shown other impressive impacts, then perhaps the higher costs would be well justified, but 3 of the 5 statistically significant results from the study are on bednet usage, malaria prevalence, and child mortality. (The other two are access to improved sanitation and skilled birth attendance, both of which would also be expected to manifest benefits in terms of reductions in under-5 mortality.) There were no statistically significant benefits in terms of poverty or education.

Other issues with the MVP’s evaluation

Lack of randomization in selecting treatment vs. comparison villages. The evaluation uses a comparison group of villages that were selected non-randomly at the time of follow-up, so many of the main conclusions of the evaluation are drawn based simply on comparing the status of the treated and non-treated villages in year 3 of the intervention, without controlling for potential initial differences between the two groups. If the control villages started at a lower baseline level and improved over time at exactly the same rate as the treatment villages, then the treatment would appear to have an impact equal to the initial difference, before the intervention began, between the treatment and control groups, even though it actually had none. Even in cases in which baseline data is available from the control groups, it is possible that the group of villages selected as controls could improve more slowly than the treatment group for reasons having nothing to do with the treatment. Accordingly, there are strong structural reasons to regard the evaluation’s claims with skepticism.

Michael Clemens has written more about this issue here and here. We agree with his argument that the MVP could and seemingly should have randomized its selection of treatment vs. control villages instead, especially given its goal of serving as a proof of concept.

Publication bias concerns. The authors report 18 outcomes from the evaluation; results on 13 of them are statistically insignificant at the standard 95% confidence level (including all of the measures of poverty and education). Even if results were entirely random, we’d expect roughly one statistically significant result out of 18 comparisons. The authors find five statistically significant results, which implies that the results are unlikely to be just due to chance, but they could have explicitly addressed the fact that they checked a number of hypotheses and performed statistical adjustments for this fact, which would have increased our confidence in their results. The authors did register the study with ClinicalTrials.gov, but the protocol was first submitted in May 2010, long after the data had been collected for this study.
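The multiple-comparisons point above can be made concrete with a quick sketch; the binomial calculation assumes (unrealistically, since the outcomes are correlated) that the 18 tests are independent.

```python
from math import comb

# How many of 18 independent tests would be "significant" at the 95%
# level by pure chance, and how surprising is finding 5 or more?
# (Assumes independence, which the 18 outcomes do not actually satisfy.)
n_tests, alpha = 18, 0.05

expected_by_chance = n_tests * alpha   # ~0.9 spuriously significant results

# Binomial probability of 5+ significant results if all effects were null:
p_five_or_more = 1 - sum(
    comb(n_tests, k) * alpha**k * (1 - alpha)**(n_tests - k) for k in range(5)
)
# well under 1%, so 5 significant results are unlikely to be chance alone
```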

We also note that the registration lists 22 outcomes, but the authors only report results for 18 in the paper. They explain the discrepancy as follows: “The outcome of antimalarial treatment for children younger than 5 years of age was excluded because new WHO guidelines for rapid testing and treatment at the household level invalidate questions used to construct this indicator. Questions on exclusive breast-feeding, the introduction of complementary feeding, and appropriate pneumonia treatment were not captured in our year 3 assessments.” But this only accounts for three of the four missing outcomes. This does not explain why the authors do not report results for mid-upper arm circumference (a measure of malnutrition), which the ClinicalTrials.gov protocol said they would collect.

Mathematical error in estimating the magnitude of the child-mortality drop.

Note: the MVP published a partial retraction with respect to this error (PDF) today.

At the World Bank’s Development Impact Blog, Gabriel Demombynes and Espen Beer Prydz point out a mathematical error in the evaluation’s claim that “The average annual rate of reduction of mortality in children younger than 5 years of age was three-times faster in Millennium Village sites than in the most recent 10-year national rural trends (7.8% vs 2.6%).”

Essentially, they used the wrong time frame in calculating the decline in Millennium Villages: to estimate the per-year decline in childhood mortality, they divided the difference between the average childhood mortality during the 3-year treatment period and the preceding 5-year baseline period by three. As Demombynes and Prydz point out, however, this mistakenly assumes that the time difference between the 3-year average and the 5-year average is 3 years, when it is in fact 4 years:

[When we originally published this post in 2012, we included a link here to an image stored on a World Bank web server. In 2020, we learned that this image link was broken and were unable to successfully replace it. We apologize for the omission of this image.]

This shifts the annual decline in child mortality from 7.8% to 5.9% (though see David Barry and Michael Clemens’ comments here for more discussion of the assumptions behind these calculations).
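The correction can be sketched as follows, assuming the published annual rate was obtained by dividing the total decline by 3 years when the midpoints of the two averaging windows are actually 4 years apart.

```python
# Correcting the annual rate of decline for the wrong time frame.
# The published 7.8%/year implicitly spread the total decline over
# 3 years; the averaging-window midpoints are actually 4 years apart.
reported_annual_decline = 7.8      # percent per year, as published
assumed_span, actual_span = 3, 4   # years between window midpoints

implied_total_decline = reported_annual_decline * assumed_span
corrected_annual_decline = implied_total_decline / actual_span
# ~5.85, i.e. roughly 5.9% per year after the correction
```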

The adjusted figure for child mortality improvement is no better for the MVP villages than for national trends. Demombynes and Prydz go on to argue that using a more appropriate and up-to-date data set for the national trends in childhood mortality gets an average trend of -6.4% a year, better than in the Millennium Villages, and that the average reductions in rural areas are even higher.

Note, however, that this argument is saying that the comparison group in the study is not representative of the broader trend, not that the Millennium Villages did not improve relative to the comparison group.

Conclusion

The Millennium Villages Project is a large, multi-sectoral, long-term set of interventions. The new evaluation suggests, though it does not prove, that the MVP is making progress in reducing childhood mortality, but at great cost. It does not provide any evidence that the MVP is reducing poverty or improving education, its other main goals. These results from the first three years of implementation, if taken seriously, are discouraging. The primary benefits of the intervention so far – reductions in childhood mortality – could have been achieved at much lower costs by simply distributing bednets.

Note: the Cochrane estimate of 5.53 deaths averted per 1,000 children protected per year does not assume perfect usage. Our examination of the studies that went into the Cochrane estimate found that most studies report usage rates in the range of 60-80%, though some report 90%+ usage.

GiveWell Labs update and priority causes

[Added August 27, 2014: GiveWell Labs is now known as the Open Philanthropy Project.]

Over the past few months, the main focus of GiveWell Labs has been strategic cause selection. Before diving into a particular cause, we want to make sure we’ve done a reasonable amount of work looking at all our options and picking our causes strategically.

We’ve published our take on what information we can find on philanthropy’s past successes and our observations on what foundations work on today (both with spreadsheets so others can examine our data), and we’ve published our framework for identifying a good cause. With these in mind, this post lists causes we’re planning to focus on over the short term.

We are not at all confident that these causes represent the most promising ones; we see our list of priority causes as a starting point for learning. By publishing our reasoning, along with all data we’ve used, we hope to elicit feedback at this early stage; in the course of investigating our priority causes, we expect to learn more about these causes and about the best way to choose causes in general. And we have prioritized our causes partly based on the potential for learning, not just based on how promising we would guess that they are. Also note that these causes do not represent restrictions – we will consider outstanding giving opportunities in any category – but rather areas of focus for investigation.

We currently believe that no established philanthropist engages in strategic cause selection – the practice of listing all the causes one might work on, and choosing them based on a combination of “potential impact” and “underinvestment by other philanthropists.” (This is not to say that no established philanthropist picks good causes – we believe many have picked excellent causes, perhaps through more implicit “strategy” – it is just to say that we know of no established philanthropist applying the sort of explicit strategic selection we envision.) So we believe we are in uncharted territory; thus, we expect to hit a fair amount of dead ends and to do a lot of revision and learning, but we also hope that strategic cause selection will eventually become a valuable tool for having maximal impact with one’s giving.

Summary of our priority causes (details follow):

  • Global health and nutrition is an area we know well and believe has many good giving opportunities. It is our current top priority. We seek to find more opportunities for donors along the lines of our top charities; we also seek to learn from existing foundations about the best higher-risk projects they are unable to fund.
  • Funding scientific research is a good conceptual fit for philanthropy, accounts for many of philanthropy’s most impressive success stories, and may provide bang-for-the-buck as good as or better than global health and nutrition.
  • Meta-research is our term for trying to improve the systematic incentives that academic researchers face, to bring them more in line with producing maximally useful work. We believe there is substantial room for improvement in this alignment, and that this cause is therefore promising as a high-leverage way to get the benefits of funding research; current philanthropic attention to this cause appears very low.
  • Averting and preparing for global catastrophic risks (GCRs) including climate change is a good conceptual fit for philanthropy and may provide bang-for-the-buck as good as or better than global health and nutrition. Today’s philanthropy appears to invest moderately in climate change, but very little in other GCRs.

We also briefly discuss popular causes that we aren’t currently prioritizing.

Top-priority causes

Global health and nutrition
Based on our past work seeking outstanding charities, we feel that global health and nutrition is the strongest area within the category of “directly helping the disadvantaged.” It’s also an area that we know fairly well (again, because of our past work), so we expect to be able to find strong giving opportunities more quickly here than in areas we’re less familiar with. Because of this, global health and nutrition is our top priority for GiveWell Labs.

Our plans:

  • As discussed in our 2011 research outline, we are investigating the idea of restricted funding to large organizations in order to fund proven, cost-effective interventions that we can’t fund otherwise. Our goal here would be to, in a sense, “create new top charities” – create funding vehicles that allow individual donors to deliver proven, cost-effective health and nutrition interventions. (One could think of this project as trying to create an “AMF for vaccines, nutrition, or other promising interventions.”)
  • We are also interested in higher-risk, higher-upside projects within this area. We are aware of some major foundations that pursue these sorts of opportunities and have more investigative capacity and relevant background than we do. So our ideal would be to leverage these foundations’ investigative work, by working with them to identify the best giving opportunities that they have sourced but cannot fully fund. We are currently looking into the possibility of doing this. If it proves unworkable, we may seek other ways to investigate high-risk, high-upside opportunities in this area.

Funding scientific research
As discussed previously, we believe many of the most impressive “success stories” in the history of philanthropy are in the category of funding research, particularly biomedical research. We also find research funding to be a good conceptual fit for philanthropy, as well as something that could plausibly get better “bang for the buck” than global health and nutrition interventions (since it involves creating global public goods – once developed, a new insight can be applied on a global scale and potentially for a long time).

In philanthropy currently, it appears that biomedical research is a moderately popular area, while natural sciences are less popular but still have some philanthropic presence. Of course, much of the funding for (early-stage) research comes via government and/or university money, but we hypothesize that philanthropy may be able to play a special role in supplementing these systems, by specifically aiming to support the kind of work that the traditional academic system and government funders cannot or will not. (We believe that there may be ways in which the traditional system falls short of maximum value-added, as discussed in the next section.) When we look at the activities of current philanthropic players (see our notes on the biomedical research activities of the top 100 foundations), it seems possible to us that relatively few of these players are specifically looking to supplement or improve on the government and university systems (by contrast, we believe that many efforts within U.S. education and global health seek to improve on and contrast with government programs in these areas).

So we see funding research as a potentially high-impact area, and we’re especially interested in the possibility of opportunities that the government/university systems systematically underfund. In addition, funding research is fundamentally different from the sort of direct-aid-oriented work we’ve focused on in the past, and we feel that investigating it will be an important learning experience.

Our next steps will be to

  • Seek out conversations with the major foundations that fund scientific research
  • Ask researchers about under-invested-in opportunities, while conducting “meta-research” conversations (see next section)

Meta-research
In the course of our research on outstanding charities, we’ve come to the working conclusion that academic research – at least on topics relevant to us – is falling far short of its maximum value-added to society, largely due to problematic incentives. We laid out some of our views last year in Suggestions for the Social Sciences; we also think that GiveWell Board member Tim Ogden’s recent SSIR piece is worth reading on this topic.

In brief, we believe that (a) academic incentives do not appear fully aligned with what would be most useful (for example, replicating studies is highly useful but does not appear to be popular in academia); (b) academics rarely engage in practices – such as preregistration, and sharing of data and code – that could make their research easier for outsiders to evaluate and use in decisionmaking; (c) too much academic research is restricted to pay-access journals, rather than being in a format and place that would allow maximum accessibility. Based on informal conversations, we believe these issues are present across academia generally, not just in the areas we’ve examined, though we intend to investigate more.

We have seen some philanthropy focused on (c). Two of the 82 foundations we’ve examined have program areas that we’ve categorized as “scholarship and open access”; the Wellcome Trust in the UK is also pushing for open access. However, we’re not aware of any foundation making a concerted push to improve (a) and (b), aligning academic incentives with what would be most useful to society.

As discussed in the previous section, we think of research as a highly promising and important area for philanthropy, based both on history and on the conceptual possibility of impact-per-dollar-spent. If problematic incentives are causing academic research to systematically fall short of its maximum potential value-added to society, investments in meta-research could have highly leveraged impact. That’s sufficient to think that this cause has some potential; the fact that it appears to be largely absent from today’s large-scale philanthropy increases its appeal.

We will write more in the future about our plans for investigating meta-research, which overlap strongly with our plans for investigating direct funding of research (the previous section). We are aiming to speak to a broad range of academics about whether, and how, the work being done in their fields – and the general practices of their field – diverge from what would add maximum value to society.

Global catastrophic risks (GCRs), including climate change
Foundations work to address a variety of threats – such as climate change, nuclear weapons proliferation, and bioterrorism – that could conceivably lead to major global catastrophes.

We see this work as an excellent conceptual fit for philanthropy, because the potential catastrophes are so far-reaching that it is hard to articulate any other actor that has good incentives to invest sufficiently in preparing for and averting them. (Governments do have some incentives to avert catastrophic risks, but catastrophic risk preparation has no natural “interest groups” to lobby for it, and it is easy to imagine that governments may not invest sufficiently or efficiently.) As with research, we find it plausible that opportunities in this area could have good “bang for the buck” relative to international aid, simply because they seek to avert such large catastrophes.

In philanthropy currently, working on climate change is moderately popular, but work on other risks is extremely rare. Out of 82 foundations we examined, two work on nuclear non-proliferation and one works on biological threats; none work on other potential threats.

One concern about this area is that gauging the success or failure of projects seems extremely difficult to do, even in a proximate way, because projects are so focused on low-probability events.

We are currently reviewing the literature on climate change and will be posting more in the future. We are also advising Nick Beckstead and a few volunteers from Giving What We Can as they collect information on the organizations working on GCRs other than climate change.

A note on policy advocacy
A long-term goal of ours is to learn more about policy advocacy, which is a general philanthropic tactic (an option for funding in almost any cause) that we know very little about. For the near future, we do not plan on recommending any policy advocacy funding; we plan on allocating small amounts of time to conversations with people in the space to learn more about how it works in general.

Popular causes we don’t plan to prioritize
Our survey of the current state of philanthropy highlighted the following as particularly popular causes that aren’t listed above. We will be writing more about them; for now, we provide very brief thoughts and relevant links to some work we’ve done in the past.

  • U.S./developed-world education: we perceive this as perhaps the most popular cause in philanthropy today. Many major foundations and philanthropists are working on it, and have worked on it in the past, yet progress seems slow on achieving – and rolling out – evidence-backed ways to improve educational outcomes. For more, see our report on U.S. charities.
  • U.S. poverty alleviation (including health care): we see a lot of philanthropy focused in these areas today, yet we believe the bang-for-the-buck is poor relative to international aid. For more, see our report on U.S. charities, Your Dollar Goes Further Overseas, Poor in the U.S. = rich, and Hunger Here vs. Hunger There.
  • Arts and culture. We don’t see GiveWell as having much potential value-added in this area. (We’ll be elaborating in a future post.)
  • Animal welfare; environmental conservation (not including climate change-related work). Current GiveWell staff are primarily interested in humanitarian giving, and we don’t see these areas as being directly enough connected to humanitarian values to merit a high priority. At one point we advised a volunteer who did some work investigating animal welfare charities, and we may later discuss this work.
  • Funding social entrepreneurs and social enterprise. We do not find this area promising; we will be writing about it more in the future. Also see Acumen Fund and Social Enterprise Investment and When Donations and Profits Meet, Beware.
  • Developing-world aid outside of health and nutrition. From what we’ve seen so far, health and nutrition are the most promising areas within developing-world aid. However, we remain open on this point, and are certainly more interested in this area than in the other areas listed in this section. We’re particularly interested in learning more about the “transparency/accountability/democracy” sector, which is moderately popular among today’s foundations and which we currently know very little about. Also see our writeups on microfinance, developing-world economic empowerment, disaster relief, agriculture, and education (as well as our summary of why we prefer global health and nutrition).

What large-scale philanthropy focuses on today

[Added August 27, 2014: GiveWell Labs is now known as the Open Philanthropy Project.]

We think there are two key questions for someone trying to do strategic cause selection: (1) What is the history of philanthropy – what’s worked and what hasn’t? (2) What is the current state of philanthropy – what are philanthropists focused on and what might they be overlooking?

We started to answer (1) in our discussion of foundation “success stories.” This post addresses (2). We first discuss the data sets we have used, which we are making publicly available and linking from this post. We then make some observations from these data sets.

The data sets we’ve used

  • Dollar allocation data. The Foundation Center maintains a database of grant amounts, dates, descriptions and more for over 100,000 foundations (over 2.4 million grants). It also tags these grants by category in ways that we’ve found helpful. The Foundation Center provided us with a breakdown by category of 2009-2010 grants that it had selected as an efficient representative sample, totaling about $20 billion (equivalent to about half of 2010 foundation giving, according to the Foundation Center). We went through the 923 categories provided by the Foundation Center and applied our own tags to them, resulting in a breakdown of spending across 33 “GiveWell categories” (106 total subcategories). When we were unclear on the nature of a Foundation Center category (or simply found one interesting), we pulled the top 100 grants for that category using our paid subscription to Foundation Directory Online.

    “GiveWell categories” simply refers to a set of tags we created, which we found helpful for thinking about the breakdown of giving from our perspective. When we discuss dollar allocations to different categories in this post, we are referring to “GiveWell categories,” not to the categories maintained by the Foundation Center. In some cases, GiveWell defines a term differently from the Foundation Center, so our figure for that term will differ from what the Foundation Center publishes (for example, we break out “museums” as a separate category from “arts and culture,” so the figure we would give for foundation spending on “arts and culture” is different from the figure the Foundation Center would give). This does not mean there is an actual contradiction between our data and the Foundation Center’s; we are using the Foundation Center’s data and consider its reported funding allocations to be correct according to its own term definitions.

    We provide a spreadsheet that includes both the data provided directly to us by Foundation Center (“FDO categories”) and the breakdown according to our own category definitions (“GiveWell categories”). It also makes it possible to see exactly how we defined “GiveWell categories” and thus how these might be different from “FDO categories.”

    Dollar allocation data (XLS)
  • Data from the top 100 foundations’ websites, compiled by Victoria Dimond (GiveWell volunteer) and Good Ventures, which has been working closely with GiveWell on GiveWell Labs. Victoria and Good Ventures visited the websites of the top 100 independent foundations in the U.S. (we generated this list using Foundation Center data; we found sufficiently informative websites for 82 of the 100) and created a spreadsheet with the names and descriptions of their program areas and sub-program areas. We then created summary sheets that rank program area types based on how many foundations work on them, and rank foundations by their “unusualness” (the extent to which they work on program areas that few other foundations work on).
    Program Areas for Top 100 U.S. Foundations (XLS)
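For readers who want to work with the raw spreadsheets, here is a minimal sketch (in Python) of the two computations described above: summing grant dollars under a custom category mapping, and ranking foundations by “unusualness.” All category names, grant amounts, and the specific scoring rule below are our own illustrative assumptions, not the actual spreadsheet logic:

```python
from collections import Counter

# --- Dollar allocation data (illustrative) ---
# Hypothetical mapping from Foundation Center ("FDO") categories to
# coarser "GiveWell categories", plus a few made-up grants.
fdo_to_givewell = {
    "Museums": "Museums",
    "Performing arts": "Arts and culture",
    "Elementary education": "U.S. education",
}
grants = [
    ("Museums", 2_000_000),
    ("Performing arts", 1_000_000),
    ("Elementary education", 5_000_000),
]

totals = Counter()
for fdo_category, amount in grants:
    totals[fdo_to_givewell[fdo_category]] += amount
# Because "Museums" is kept separate from "Arts and culture" in this
# mapping, the two taxonomies report different "arts" totals.

# --- Top-100-foundations data (illustrative) ---
# Hypothetical input: foundation -> set of program areas it lists.
foundations = {
    "Foundation A": {"U.S. education", "Arts and culture"},
    "Foundation B": {"U.S. education", "Environment"},
    "Foundation C": {"Disease surveillance"},
}

# Rank program areas by how many foundations work on them.
area_counts = Counter(a for areas in foundations.values() for a in areas)

# One possible "unusualness" score: the average rarity of a foundation's
# program areas (an area listed by fewer foundations counts as rarer).
def unusualness(areas):
    return sum(1 / area_counts[a] for a in areas) / len(areas)

by_unusualness = sorted(
    foundations, key=lambda f: unusualness(foundations[f]), reverse=True
)
```

Any monotone weighting of area rarity would serve for the unusualness score; average inverse popularity is just one simple choice. In this toy data, Foundation C ranks as most unusual because its only program area is listed by no other foundation.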

In categorizing giving for both of these, we deliberately used categories tailored to our own interests (rather than trying to come up with a universally useful taxonomy). For example, since we have pretty well-defined views on the best ways to help the disadvantaged, we tended to lump many different things together under headings such as “Helping the disadvantaged” or “U.S. poverty” (this includes human services, youth development services, and more). By contrast, we tended to separate out any kind of work we found particularly interesting. So if you are seeking a picture of how foundations give for your own purposes, you may consider going back to the raw data (which we provide in the files linked above) and creating your own categories.

Our observations
Popular areas (according to GiveWell’s taxonomy)

Highly popular areas include:

  • U.S. education (K-12/preschool) – 46 of 82 foundations in the “top 100 foundations” set list this as a program area; it accounts for over 7% of giving (in dollar terms) according to dollar allocation data.
  • U.S. higher education (scholarships, increasing access to higher education, or general/capital support) – 25 of 82 foundations, around 8% of giving according to dollar allocation data (the latter is harder to interpret on this point, since it may include other activities within higher education).
  • U.S. poverty alleviation – 42 of 82 foundations, ~5% of giving according to dollar allocation data (this figure was obtained by adding human services and youth development, both of which appear primarily focused on the U.S.; other areas should also be partially counted, but they are a mix of international and U.S. giving).
  • Arts & culture – 30 of 82 foundations, ~5% of total giving according to dollar allocation data.
  • Environment (conservation) – 25 of 82 foundations, ~4% of total giving according to dollar allocation data.
  • Health care and biomedical research funding (including support of hospitals) – 17 of 82 foundations work on health care delivery and 14 of 82 work on biomedical research. This category (in which research and delivery can be difficult to separate) accounts for ~20% of total giving according to dollar allocation data.
  • Climate change and/or energy – 14 of 82 foundations work in these areas, though they account for only ~1% of total giving according to dollar allocation data.

This set of areas accounts for about half of all of the giving in the dollar allocation data (and much of what remains is difficult to categorize). It includes every area that is listed by 9 or more of the 82 foundations we examined.

International causes

Causes focused on helping other countries – or on international relations – appear less common than the above causes, but are still fairly common. Each of the following is included in the work of 8-9 of the 82 foundations we examined:

  • Developing-world poverty
  • Developing-world health
  • Developing-world transparency/accountability/democracy
  • Foreign policy analysis

Total “international affairs” tagged giving is around 3% of all giving (in dollar terms) according to dollar allocation data, though this includes many international-aid grants that may be tagged as university support (for relevant research), health, agriculture, etc.

While we’ve done substantial investigation of the first two causes listed above, the latter two have largely not been on our radar. Some of the largest foundations emphasize their work in these areas.

Less popular causes (according to GiveWell’s taxonomy)

Among the causes that are less popular, we find the following particularly interesting (not necessarily promising, but worth noting for later discussion). Here we focus on the “top 100 foundations” set, since less-popular causes like these are difficult to isolate in the dollar allocation data.

  • Natural sciences and mathematics, excluding biomedical sciences – 7 of 82 foundations list program areas in this category.
  • Immigration (advocacy and integration) – 4 foundations.
  • Promoting specific topics in higher education – 4 foundations. (We note that many of philanthropy’s putative success stories are in this category.)
  • Developing-world education – 3 foundations.
  • Reproductive health/rights – 3 foundations.
  • Social entrepreneurship – 3 foundations.
  • Mitigation/prevention of global catastrophic risks other than climate change – 2 foundations focus on nuclear nonproliferation, and 1 on biological threats; this category accounts for 0.1% of all giving dollars according to dollar allocation data.
  • Scholarship and open access – 2 foundations.
  • Education and technology – 2 foundations.
  • Information access (cellphones, Internet) – 2 foundations.
  • Social sciences – 2 foundations.
  • Disease surveillance – 1 foundation.

Strategic cause selection

[Added August 27, 2014: GiveWell Labs is now known as the Open Philanthropy Project.]

Our picture of how most major foundations work is as follows:

  1. First, broad program areas or “causes” – such as “U.S. education” and “environment” – are chosen. This step is almost entirely “from the heart” – no systematic review is conducted, but rather the philanthropist (or foundation President) chooses areas s/he is passionate about.
  2. Foundation staff speak to relevant people in the field and lay out a foundation strategy. This process may lead to direct identification of potential grantees or to RFPs/guidelines for open applications.
  3. Foundation staff continually work with and evaluate grantees and potential grantees.

(Our recent conversation with Paul Brest of the Hewlett Foundation, which funds GiveWell, gives one example.)

Steps #2 and #3 make sense, and seem likely to lead to at least reasonable results if carried out by people who listen well and keep their minds open. We see some potential room for improvement in terms of documentation and transparency – we believe that our own commitment to writing up and sharing our reasoning and results (rather than just discussing them internally) leads us to better-considered decisions and generates information that can inform other givers as well.

However, our working hypothesis is that the biggest room for improvement lies in step #1 – picking causes. This is where existing philanthropists seem to be least thoughtful and to ask the fewest critical questions; yet this is where we’d guess the bulk of variation in “how much good a philanthropist accomplishes” comes from.

So as we work on GiveWell Labs, we’re interested in seeing whether we can approach the “What cause should I work on?” question in a more systematic, thoughtful way, and get better results (in terms of overall good accomplished). This is what we refer to as “strategic cause selection.” We have just started this effort, and we expect a long time and multiple iterations before we feel we have a truly strong and effective approach; this post lays out our approach so far, as a starting point.

Key investigations for strategic cause selection
We’ve started our work on strategic cause selection by trying to understand the following two things:

  • The history of philanthropy. What are philanthropy’s biggest success stories, and why did they succeed? What has gone well and what has gone poorly, and why? Are there patterns in what successful philanthropy looks like? We have previously posted our analysis of the single best source we know of on this question, a set of 100 “philanthropic success stories” published as a companion volume to The Foundation: A Great American Secret. We’ve been looking for all the books we can find on the history of philanthropy (there don’t seem to be many, which itself suggests that there isn’t much interest today in strategic cause selection) and intend to review several of them.
  • The current state of philanthropy. What are the causes that today’s major foundations work in? What sort of work are they doing in these causes? We are currently examining data from the Foundation Center’s database of foundation grants, and will be publishing our analysis in the future. We are also systematically reviewing the websites of the top 100 foundations (looking at what their causes are and how they describe them) and will be discussing this as well.

What makes a good philanthropic cause?
Reflecting on the examinations above, we’ve started to maintain a list of qualities that seem, logically, to make for a “good philanthropic cause.” We expect this list to evolve significantly in the future. For the moment, here are the qualities we look for in a philanthropic cause:

  • An articulable vision for the world as it could and should be, and a large gap between this and the world as it is now. (This quality may seem obvious, but we include it for completeness; one can think of it as a measure of how “big” or “ambitious” a cause is.) For example, the cause of global health and nutrition involves the following gap: it should be the case that the vast bulk of the world’s population receives adequate nutrition (certainly enough to prevent being clinically underweight or stunted), as well as any medical treatment/preventive measures that are relatively cheap and effective. We know that this vision of the world is possible, because it describes large parts of the world (such as the U.S.) today. Yet we also know that today’s world is very far from this vision – there is a lot of room for improvement, which philanthropy can pursue. Other causes involve a vision of the world that may or may not be possible (e.g., a world in which no one dies of cancer).
  • A shortage of “constituents” who can achieve change through non-philanthropic means. As we’ve written before, most of the good in the world is accomplished through methods other than philanthropy. A good cause should be accompanied by a clear explanation of why the sought-after change cannot happen through for-profit work (people who need help pay for it directly), constituent-led government work (people who need help exercise political pressure to get it), or local philanthropy. As we noted previously, philanthropy commonly works on (a) helping the people with the least money and power; (b) basic research, top-level education reform, and other global public goods with long time horizons. Both of these seem to lack non-philanthropic constituents.
  • A shortage of other strong philanthropic actors. We have been told, for example, that one philanthropist wishes to stay away from global health because the Gates Foundation is probably finding most of the best opportunities there, so the opportunities it doesn’t fund are likely to be worse. This reasoning is partly valid, though it is mitigated by the point below.
  • Good performance by the other strong philanthropic actors. If the other strong funders in a cause area seem to be consistently funding excellent projects and/or getting excellent results, this gives some reason to believe that there is room for more strong philanthropy in the cause.

In future posts, we will list some of the causes we find most promising; we will also give our views on some of the most popular causes in today’s philanthropy.

Microfinance and cookstoves

Two interventions that command a lot of attention are microfinance (financial services, particularly small loans, for the very poor) and improved cookstoves (with the hope of reducing air pollution). We’ve recently seen a couple of helpful summaries of relevant research:

  • David Roodman summarizes the most rigorous research on microfinance. There are now five randomized controlled trials of microlending that have published at least preliminary results; they find very little in the way of direct poverty reduction or improvement in wellbeing, though there is some positive impact on “stimulating enterprise.”
  • Charles Kenny discusses a recent study that randomized heavy subsidies of cookstoves in India, and found that “Households failed to use the stoves regularly or appropriately, did not make the necessary investments to maintain them properly, and use ultimately declined further over time,” leading to no significant positive impact. According to Mr. Kenny, this result is consistent with previous literature on the matter. On the other hand, Aid Thoughts points to another study in Senegal reporting, after one year, that “households receiving an improved cooking stove used less wood, spent less time cooking meals, reported better indoor air quality and (for women, who presumably did all the cooking) were significantly less likely to have respiratory disease symptoms, eye problems. Nearly all recipients of a stove used it at least seven times a week.” We note that the latter study discusses only one-year effects, while the India study found “a meaningful reduction in smoke inhalation in the first year [but] no effect over longer time horizons.” Note that we haven’t carefully examined these papers and that cookstoves are not a focus of ours, but since the recent studies are both fairly rigorous we thought it was worth noting them and their conflicting results for interested readers.