The GiveWell Blog

Why we can’t take expected value estimates literally (even when they’re unbiased)

While some people feel that GiveWell puts too much emphasis on the measurable and quantifiable, there are others who go further than we do in quantification, and justify their giving (or other) decisions based on fully explicit expected-value formulas. The latter group tends to critique us – or at least disagree with us – based on our preference for strong evidence over high apparent “expected value,” and based on the heavy role of non-formalized intuition in our decisionmaking. This post is directed at the latter group.

We believe that people in this group are often making a fundamental mistake, one that we have long had intuitive objections to but have recently developed a more formal (though still fairly rough) critique of. The mistake (we believe) is estimating the “expected value” of a donation (or other action) based solely on a fully explicit, quantified formula, many of whose inputs are guesses or very rough estimates. We believe that any estimate along these lines needs to be adjusted using a “Bayesian prior”; that this adjustment can rarely be made (reasonably) using an explicit, formal calculation; and that most attempts to do the latter, even when they seem to be making very conservative downward adjustments to the expected value of an opportunity, are not making nearly large enough downward adjustments to be consistent with the proper Bayesian approach.

This view of ours illustrates why – while we seek to ground our recommendations in relevant facts, calculations and quantifications to the extent possible – every recommendation we make incorporates many different forms of evidence and involves a strong dose of intuition. And we generally prefer to give where we have strong evidence that donations can do a lot of good rather than where we have weak evidence that donations can do far more good – a preference that I believe is inconsistent with the approach of giving based on explicit expected-value formulas (at least those that (a) have significant room for error (b) do not incorporate Bayesian adjustments, which are very rare in these analyses and very difficult to do both formally and reasonably).

The rest of this post will:

  • Lay out the “explicit expected value formula” approach to giving, which we oppose, and give examples.
  • Give the intuitive objections we’ve long had to this approach, i.e., ways in which it seems intuitively problematic.
  • Give a clean example of how a Bayesian adjustment can be done, and can be an improvement on the “explicit expected value formula” approach.
  • Present a versatile formula for making and illustrating Bayesian adjustments that can be applied to charity cost-effectiveness estimates.
  • Show how a Bayesian adjustment avoids the Pascal’s Mugging problem that those who rely on explicit expected value calculations seem prone to.
  • Discuss how one can properly apply Bayesian adjustments in other cases, where less information is available.
  • Conclude with the following takeaways:
    • Any approach to decision-making that relies only on rough estimates of expected value – and does not incorporate preferences for better-grounded estimates over shakier estimates – is flawed.
    • When aiming to maximize expected positive impact, it is not advisable to make giving decisions based fully on explicit formulas. Proper Bayesian adjustments are important and are usually overly difficult to formalize.
    • The above point is a general defense of resisting arguments that both (a) seem intuitively problematic (b) have thin evidential support and/or room for significant error.

The approach we oppose: “explicit expected-value” (EEV) decisionmaking
We term the approach this post argues against the “explicit expected-value” (EEV) approach to decisionmaking. It generally involves an argument of the form:

    I estimate that each dollar spent on Program P has a value of V [in terms of lives saved, disability-adjusted life-years, social return on investment, or some other metric]. Granted, my estimate is extremely rough and unreliable, and involves geometrically combining multiple unreliable figures – but it’s unbiased, i.e., it seems as likely to be too pessimistic as it is to be too optimistic. Therefore, my estimate V represents the per-dollar expected value of Program P.
    I don’t know how good Charity C is at implementing Program P, but even if it wastes 75% of its money or has a 75% chance of failure, its per-dollar expected value is still 25%*V, which is still excellent.

Examples of the EEV approach to decisionmaking:

  • In a 2010 exchange, Will Crouch of Giving What We Can argued:

    DtW [Deworm the World] spends about 74% on technical assistance and scaling up deworming programs within Kenya and India … Let’s assume (very implausibly) that all other money (spent on advocacy etc) is wasted, and assess the charity solely on that 74%. It still would do very well (taking DCP2: $3.4/DALY * (1/0.74) = $4.6/DALY – slightly better than their most optimistic estimate for DOTS (for TB), and far better than their estimates for insecticide treated nets, condom distribution, etc). So, though finding out more about their advocacy work is obviously a great thing to do, the advocacy questions don’t need to be answered in order to make a recommendation: it seems that DtW [is] worth recommending on the basis of their control programs alone.

 

  • The Back of the Envelope Guide to Philanthropy lists rough calculations for the value of different charitable interventions. These calculations imply (among other things) that donating for political advocacy for higher foreign aid is between 8x and 22x as good an investment as donating to VillageReach, and the presentation and implication are that this calculation ought to be considered decisive.
  • We’ve encountered numerous people who argue that charities working on reducing the risk of sudden human extinction must be the best ones to support, since the value of saving the human race is so high that “any imaginable probability of success” would lead to a higher expected value for these charities than for others.
  • “Pascal’s Mugging” is often seen as the reductio ad absurdum of this sort of reasoning. The idea is that if a person demands $10 in exchange for refraining from an extremely harmful action (one that negatively affects N people for some huge N), then expected-value calculations demand that one give in to the person’s demands: no matter how unlikely the claim, there is some N big enough that the “expected value” of refusing to give the $10 is hugely negative.

The crucial characteristic of the EEV approach is that it does not incorporate a systematic preference for better-grounded estimates over rougher estimates. It ranks charities/actions based simply on their estimated value, ignoring differences in the reliability and robustness of the estimates.

    Informal objections to EEV decisionmaking
    There are many ways in which the sort of reasoning laid out above seems (to us) to fail a common sense test.

    • There seems to be nothing in EEV that penalizes relative ignorance or relatively poorly grounded estimates, or rewards investigation and the forming of particularly well grounded estimates. If I can literally save a child I see drowning by ruining a $1000 suit, but in the same moment I make a wild guess that this $1000 could save 2 lives if put toward medical research, EEV seems to indicate that I should opt for the latter.
    • Because of this, a world in which people acted based on EEV would seem to be problematic in various ways.
      • In such a world, it seems that nearly all altruists would put nearly all of their resources toward helping people they knew little about, rather than helping themselves, their families and their communities. I believe that the world would be worse off if people behaved in this way, or at least if they took it to an extreme. (There are always more people you know little about than people you know well, and EEV estimates of how much good you can do for people you don’t know seem likely to have higher variance than EEV estimates of how much good you can do for people you do know. Therefore, it seems likely that the highest-EEV action directed at people you don’t know will have higher EEV than the highest-EEV action directed at people you do know.)
      • In such a world, when people decided that a particular endeavor/action had outstandingly high EEV, there would (too often) be no justification for costly skeptical inquiry of this endeavor/action. For example, say that people were trying to manipulate the weather; that someone hypothesized that they had no power for such manipulation; and that the EEV of trying to manipulate the weather was much higher than the EEV of other things that could be done with the same resources. It would be difficult to justify a costly investigation of the “trying to manipulate the weather is a waste of time” hypothesis in this framework. Yet it seems that when people are valuing one action far above others, based on thin information, this is the time when skeptical inquiry is needed most. And more generally, it seems that challenging and investigating our most firmly held, “high-estimated-probability” beliefs – even when doing so has been costly – has been quite beneficial to society.
    • Related: giving based on EEV seems to create bad incentives. EEV doesn’t seem to allow rewarding charities for transparency or penalizing them for opacity: it simply recommends giving to the charity with the highest estimated expected value, regardless of how well-grounded the estimate is. Therefore, in a world in which most donors used EEV to give, charities would have every incentive to announce that they were focusing on the highest expected-value programs, without disclosing any details of their operations that might show they were achieving less value than theoretical estimates said they ought to be.
    • If you are basing your actions on EEV analysis, it seems that you’re very open to being exploited by Pascal’s Mugging: a tiny probability of a huge-value expected outcome can come to dominate your decisionmaking in ways that seem to violate common sense. (We discuss this further below.)
    • If I’m deciding between eating at a new restaurant with 3 Yelp reviews averaging 5 stars and eating at an older restaurant with 200 Yelp reviews averaging 4.75 stars, EEV seems to imply (using Yelp rating as a stand-in for “expected value of the experience”) that I should opt for the former. As discussed in the next section, I think this is the purest demonstration of the problem with EEV and the need for Bayesian adjustments.

    In the remainder of this post, I present what I believe is the right formal framework for my objections to EEV. However, I have more confidence in my intuitions – which are related to the above observations – than in the framework itself. I believe I have formalized my thoughts correctly, but if the remainder of this post turned out to be flawed, I would likely remain in objection to EEV until and unless one could address my less formal misgivings.

    Simple example of a Bayesian approach vs. an EEV approach
    It seems fairly clear that a restaurant with 200 Yelp reviews, averaging 4.75 stars, ought to outrank a restaurant with 3 Yelp reviews, averaging 5 stars. Yet this ranking can’t be justified in an EEV-style framework, in which options are ranked by their estimated average/expected value. How, in fact, does Yelp handle this situation?
    Unfortunately, the answer appears to be undisclosed in Yelp’s case, but we can get a hint from a similar site: BeerAdvocate, a site that ranks beers using submitted reviews. It states:

    Lists are generated using a Bayesian estimate that pulls data from millions of user reviews (not hand-picked) and normalizes scores based on the number of reviews for each beer. The general statistical formula is:
    weighted rank (WR) = (v ÷ (v+m)) × R + (m ÷ (v+m)) × C
    where:
    R = review average for the beer
    v = number of reviews for the beer
    m = minimum reviews required to be considered (currently 10)
    C = the mean across the list (currently 3.66)

    In other words, BeerAdvocate does the equivalent of giving each beer a set number (currently 10) of “average” reviews (i.e., reviews with a score of 3.66, which is the average for all beers on the site). Thus, a beer with zero reviews is assumed to be exactly as good as the average beer on the site; a beer with one review will still be assumed to be close to average, no matter what rating the one review gives; as the number of reviews grows, the beer’s rating is able to deviate more from the average.

    To illustrate this, the following chart shows how BeerAdvocate’s formula would rate a beer that has 0-100 five-star reviews. As the number of five-star reviews grows, the formula’s “confidence” in the five-star rating grows, and the beer’s overall rating gets further from “average” and closer to (though never fully reaching) 5 stars.
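    As a concrete illustration, here is a minimal Python sketch (my own illustration, not code from BeerAdvocate) that computes the weighted rank for a beer with 0 to 100 five-star reviews, using the m = 10 and C = 3.66 values quoted above:

```python
def weighted_rank(R, v, m=10, C=3.66):
    """BeerAdvocate-style Bayesian estimate: blend a beer's review average R
    (from v reviews) with the site-wide mean C, as if every beer started out
    with m reviews at the site-wide average score."""
    return (v / (v + m)) * R + (m / (v + m)) * C

# A beer with a handful of five-star reviews stays close to average;
# with many such reviews its rating approaches (but never reaches) 5 stars.
for v in [0, 1, 3, 10, 30, 100]:
    print(f"{v:>3} five-star reviews -> weighted rank {weighted_rank(5.0, v):.2f}")
```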

    I find BeerAdvocate’s approach to be quite reasonable and I find the chart above to accord quite well with intuition: a beer with a small handful of five-star reviews should be considered pretty close to average, while a beer with a hundred five-star reviews should be considered to be nearly a five-star beer.

    However, there are a couple of complications that make it difficult to apply this approach broadly.

    • BeerAdvocate is making a substantial judgment call regarding what “prior” to use, i.e., how strongly to assume each beer is average until proven otherwise. It currently sets the m in its formula equal to 10, which is like giving each beer a starting point of ten average-level reviews; it gives no formal justification for why it has set m to 10 instead of 1 or 100. It is unclear what such a justification would look like. In fact, I believe that BeerAdvocate used to use a stronger “prior” (i.e., it used to set m to a higher value), which meant that beers needed larger numbers of reviews to make the top-rated list. When BeerAdvocate changed its prior, its rankings changed dramatically, as lesser-known, higher-rated beers overtook the mainstream beers that had previously dominated the list.
    • In BeerAdvocate’s case, the basic approach to setting a Bayesian prior seems pretty straightforward: the “prior” rating for a given beer is equal to the average rating for all beers on the site, which is known. By contrast, if we’re looking at the estimate of how much good a charity does, it isn’t clear what “average” one can use for a prior; it isn’t even clear what the appropriate reference class is. Should our prior value for the good-accomplished-per-dollar of a deworming charity be equal to the good-accomplished-per-dollar of the average deworming charity, or of the average health charity, or the average charity, or the average altruistic expenditure, or some weighted average of these? Of course, we don’t actually have any of these figures. For this reason, it’s hard to formally justify one’s prior, and differences in priors can cause major disagreements and confusions when they aren’t recognized for what they are. But this doesn’t mean the choice of prior should be ignored or that one should leave the prior out of expected-value calculations (as we believe EEV advocates do).

    Applying Bayesian adjustments to cost-effectiveness estimates for donations, actions, etc.
    As discussed above, we believe that both Giving What We Can and Back of the Envelope Guide to Philanthropy use forms of EEV analysis in arguing for their charity recommendations. However, when it comes to analyzing the cost-effectiveness estimates they invoke, the BeerAdvocate formula doesn’t seem applicable: there is no “number of reviews” figure that can be used to determine the relative weights of the prior and the estimate.

    Instead, we propose a model in which there is a normally (or log-normally) distributed “estimate error” around the cost-effectiveness estimate (with a mean of “no error,” i.e., 0 for normally distributed error and 1 for lognormally distributed error), and in which the prior distribution for cost-effectiveness is normally (or log-normally) distributed as well. (I won’t discuss log-normal distributions in this post, but the analysis I give can be extended by applying it to the log of the variables in question.) The more one feels confident in one’s pre-existing view of how cost-effective a donation or action should be, the smaller the variance of the “prior”; the more one feels confident in the cost-effectiveness estimate itself, the smaller the variance of the “estimate error.”

    Following up on our 2010 exchange with Giving What We Can, we asked Dario Amodei to write up the implications of the above model and the form of the proper Bayesian adjustment. You can see his analysis here. The bottom line is that when one applies Bayes’s rule to obtain a distribution for cost-effectiveness based on (a) a normally distributed prior distribution (b) a normally distributed “estimate error,” one obtains a distribution with

    • Mean equal to the average of the two means weighted by their inverse variances
    • Variance equal to the harmonic sum of the two variances

    The following charts show what this formula implies in a variety of different simple hypotheticals. In all of these, the prior distribution has mean = 0 and standard deviation = 1, and the estimate has mean = 10, but the “estimate error” varies, with important effects: an estimate with little enough estimate error can almost be taken literally, while an estimate with large enough estimate error ought to be almost ignored.
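    As a rough illustration of this adjustment, here is a minimal Python sketch (my own, not taken from the linked write-up) that applies the inverse-variance weighting to the hypotheticals used in the charts:

```python
import math

def bayesian_adjust(prior_mean, prior_sd, est_mean, est_sd):
    """Combine a normal prior with a normally distributed estimate.

    The posterior mean is the inverse-variance-weighted average of the two
    means; the posterior variance is the harmonic sum of the two variances,
    i.e. 1 / (1/prior_var + 1/est_var)."""
    prior_var, est_var = prior_sd ** 2, est_sd ** 2
    post_var = 1.0 / (1.0 / prior_var + 1.0 / est_var)
    post_mean = post_var * (prior_mean / prior_var + est_mean / est_var)
    return post_mean, math.sqrt(post_var)

# Prior: mean 0, sd 1. Estimate: mean 10, with varying amounts of estimate error.
for est_sd in [1, 5, 10, 20]:
    mean, sd = bayesian_adjust(0.0, 1.0, 10.0, est_sd)
    print(f"estimate error sd = {est_sd:>2}: posterior mean = {mean:.2f}, posterior sd = {sd:.2f}")
```

    With an estimate-error standard deviation of 1, the posterior mean is 5; with a standard deviation of 20, it is about 0.02, and the estimate is nearly ignored.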

    In each of these charts, the black line represents a probability density function for one’s “prior,” the red line for an estimate (with the variance coming from “estimate error”), and the blue line for the final probability distribution, taking both the prior and the estimate into account. Taller, narrower distributions represent cases where probability is concentrated around the midpoint; shorter, wider distributions represent cases where the possibilities/probabilities are more spread out among many values. First, the case where the cost-effectiveness estimate has the same confidence interval around it as the prior:

    If one has a relatively reliable estimate (i.e., one with a narrow confidence interval / small variance of “estimate error”), then the Bayesian-adjusted conclusion ends up very close to the estimate. When we estimate quantities using highly precise and well-understood methods, we can use them (almost) literally.

    On the flip side, when the estimate is relatively unreliable (wide confidence interval / large variance of “estimate error”), it has little effect on the final expectation of cost-effectiveness (or whatever is being estimated). And at the point where the one-standard-deviation bands include zero cost-effectiveness (i.e., where there’s a pretty strong probability that the whole cost-effectiveness estimate is worthless), the estimate ends up having practically no effect on one’s final view.

    The details of how to apply this sort of analysis to cost-effectiveness estimates for charitable interventions are outside the scope of this post, which focuses on our belief in the importance of the concept of Bayesian adjustments. The big-picture takeaway is that just having the midpoint of a cost-effectiveness estimate is not worth very much in itself; it is important to understand the sources of estimate error, and the degree of estimate error relative to the degree of variation in estimated cost-effectiveness for different interventions.

    Pascal’s Mugging
    Pascal’s Mugging refers to a case where a claim of extravagant impact is made for a particular action, with little to no evidence:

    Now suppose someone comes to me and says, “Give me five dollars, or I’ll use my magic powers … to [harm an imaginably huge number of] people.”

    Non-Bayesian approaches to evaluating these proposals often take the following form: “Even if we assume that this analysis is 99.99% likely to be wrong, the expected value is still high – and are you willing to bet that this analysis is wrong at 99.99% odds?”

    However, this is a case where “estimate error” is probably accounting for the lion’s share of variance in estimated expected value, and therefore I believe that a proper Bayesian adjustment would correctly assign little value where there is little basis for the estimate, no matter how high the midpoint of the estimate.

    Say that you’ve come to believe – based on life experience – in a “prior distribution” for the value of your actions, with a mean of zero and a standard deviation of 1. (The unit type you use to value your actions is irrelevant to the point I’m making; so in this case the units I’m using are simply standard deviations based on your prior distribution for the value of your actions). Now say that someone estimates that action A (e.g., giving in to the mugger’s demands) has an expected value of X (same units) – but that the estimate itself is so rough that the right expected value could easily be 0 or 2X. More specifically, say that the error in the expected value estimate has a standard deviation of X.

    An EEV approach to this situation might say, “Even if there’s a 99.99% chance that the estimate is completely wrong and that the value of Action A is 0, there’s still a 0.01% probability that Action A has a value of X. Thus, overall Action A has an expected value of at least 0.0001X; the greater X is, the greater this value is, and if X is great enough, then you should take Action A unless you’re willing to bet at enormous odds that the framework is wrong.”

    However, the same formula discussed above indicates that Action A actually has an expected value – after the Bayesian adjustment – of X/(X^2+1), or just under 1/X. In this framework, the greater X is, the lower the expected value of Action A. This syncs well with my intuitions: if someone threatened to harm one person unless you gave them $10, this ought to carry more weight (because it is more plausible in the face of the “prior” of life experience) than if they threatened to harm 100 people, which in turn ought to carry more weight than if they threatened to harm 3^^^3 people (I’m using 3^^^3 here as a representation of an unimaginably huge number).
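    Under the same assumptions (prior mean 0 and standard deviation 1, estimate-error standard deviation equal to the claimed value X), a minimal sketch of this adjustment, my own illustration rather than part of the original analysis, shows how the adjusted value shrinks as the claim grows:

```python
def mugging_adjusted_value(X):
    """Posterior mean when the prior is N(0, 1) and the estimate is X with
    error standard deviation X: inverse-variance weighting gives X / (X^2 + 1),
    which is just under 1/X for large X."""
    return X / (X ** 2 + 1)

for X in [1, 10, 100, 1_000_000]:
    print(f"claimed value {X:>9}: Bayesian-adjusted value = {mugging_adjusted_value(X):.6f}")
```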

    The point at which a threat or proposal starts to be called “Pascal’s Mugging” can be thought of as the point at which the claimed value of Action A is wildly outside the prior set by life experience (which may cause the feeling that common sense is being violated). If someone claims that giving him/her $10 will accomplish 3^^^3 times as much as a 1-standard-deviation life action from the appropriate reference class, then the actual post-adjustment expected value of Action A will be just under (1/3^^^3) (in standard deviation terms) – only trivially higher than the value of an average action, and likely lower than other actions one could take with the same resources. This is true without applying any particular probability that the person’s framework is wrong – it is simply a function of the fact that their estimate has such enormous possible error. An ungrounded estimate making an extravagant claim ought to be more or less discarded in the face of the “prior distribution” of life experience.

    Generalizing the Bayesian approach
    In the above cases, I’ve given quantifications of (a) the appropriate prior for cost-effectiveness; (b) the strength/confidence of a given cost-effectiveness estimate. One needs to quantify both (a) and (b) – not just quantify estimated cost-effectiveness – in order to formally make the needed Bayesian adjustment to the initial estimate.

    But when it comes to giving, and many other decisions, reasonable quantification of these things usually isn’t possible. To have a prior, you need a reference class, and reference classes are debatable.

    It’s my view that my brain instinctively processes huge amounts of information, coming from many different reference classes, and arrives at a prior; if I attempt to formalize my prior, counting only what I can name and justify, I can worsen the accuracy a lot relative to going with my gut. Of course there is a problem here: going with one’s gut can be an excuse for going with what one wants to believe, and a lot of what enters into my gut belief could be irrelevant to proper Bayesian analysis. There is an appeal to formulas, which is that they seem to be susceptible to outsiders’ checking them for fairness and consistency.

    But when the formulas are too rough, I think the loss of accuracy outweighs the gains to transparency. Rather than using a formula that is checkable but omits a huge amount of information, I’d prefer to state my intuition – without pretense that it is anything but an intuition – and hope that the ensuing discussion provides the needed check on my intuitions.

    I can’t, therefore, usefully say what I think the appropriate prior estimate of charity cost-effectiveness is. I can, however, describe a couple of approaches to Bayesian adjustments that I oppose, and can describe a few heuristics that I use to determine whether I’m making an appropriate Bayesian adjustment.

    Approaches to Bayesian adjustment that I oppose

    I have seen some argue along the lines of “I have a very weak (or uninformative) prior, which means I can more or less take rough estimates literally.” I think this is a mistake. We do have a lot of information by which to judge what to expect from an action (including a donation), and failure to use all the information we have is a failure to make the appropriate Bayesian adjustment. Even just a sense for the values of the small set of actions you’ve taken in your life, and observed the consequences of, gives you something to work with as far as an “outside view” and a starting probability distribution for the value of your actions; this distribution probably ought to have high variance, but when dealing with a rough estimate that has very high variance of its own, it may still be quite a meaningful prior.

    I have seen some using the EEV framework who can tell that their estimates seem too optimistic, so they make various “downward adjustments,” multiplying their EEV by apparently ad hoc figures (1%, 10%, 20%). What isn’t clear is whether the size of the adjustment they’re making has the correct relationship to (a) the weakness of the estimate itself (b) the strength of the prior (c) distance of the estimate from the prior. An example of how this approach can go astray can be seen in the “Pascal’s Mugging” analysis above: assigning one’s framework a 99.99% chance of being totally wrong may seem to be amply conservative, but in fact the proper Bayesian adjustment is much larger and leads to a completely different conclusion.
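    To see how far apart these two kinds of adjustment can be, here is a minimal sketch (my own illustration, reusing the prior and estimate-error assumptions from the Pascal’s Mugging example above) comparing a seemingly conservative 99.99%-chance-of-being-wrong discount with the Bayesian adjustment from the earlier formula:

```python
def discounted_eev(X, p_wrong=0.9999):
    """Ad hoc downward adjustment: keep a (1 - p_wrong) chance that the claimed value X is right."""
    return (1 - p_wrong) * X

def bayesian_adjusted(X):
    """Adjustment from the earlier model: prior N(0, 1), estimate error sd X."""
    return X / (X ** 2 + 1)

for X in [100, 10_000, 1_000_000]:
    print(f"X = {X:>9}: 99.99%-discounted EEV = {discounted_eev(X):10.2f}, "
          f"Bayesian-adjusted value = {bayesian_adjusted(X):.8f}")
```

    The discounted EEV keeps growing with X, while the Bayesian-adjusted value shrinks toward zero; for large enough claims the two approaches reach opposite conclusions.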

    Heuristics I use to address whether I’m making an appropriate prior-based adjustment

    • The more action is asked of me, the more evidence I require. Anytime I’m asked to take a significant action (giving a significant amount of money, time, effort, etc.), this action has to have higher expected value than the action I would otherwise take. My intuitive feel for the distribution of “how much my actions accomplish” serves as a prior – an adjustment to the value that the asker claims for my action.
    • I pay attention to how much of the variation I see between estimates is likely to be driven by true variation vs. estimate error. As shown above, when an estimate is rough enough so that error might account for the bulk of the observed variation, a proper Bayesian approach can involve a massive discount to the estimate.
    • I put much more weight on conclusions that seem to be supported by multiple different lines of analysis, as unrelated to one another as possible. If one starts with a high-error estimate of expected value, and then starts finding more estimates with the same midpoint, the variance of the aggregate estimate error declines; the less correlated the estimates are, the greater the decline in the variance of the error, and thus the lower the Bayesian adjustment to the final estimate (a brief sketch of this effect follows this list). This is a formal way of observing that “diversified” reasons for believing something lead to more “robust” beliefs, i.e., beliefs that are less likely to fall apart with new information and can be used with less skepticism.
    • I am hesitant to embrace arguments that seem to have anti-common-sense implications (unless the evidence behind these arguments is strong) and I think my prior may often be the reason for this. As seen above, a too-weak prior can lead to many seemingly absurd beliefs and consequences, such as falling prey to “Pascal’s Mugging” and removing the incentive for investigation of strong claims. Strengthening the prior fixes these problems (while over-strengthening the prior results in simply ignoring new evidence). In general, I believe that when a particular kind of reasoning seems to me to have anti-common-sense implications, this may indicate that its implications are well outside my prior.
    • My prior for charity is generally skeptical, as outlined at this post. Giving well seems conceptually quite difficult to me, and it’s been my experience over time that the more we dig on a cost-effectiveness estimate, the more unwarranted optimism we uncover. Also, having an optimistic prior would mean giving to opaque charities, and that seems to violate common sense. Thus, we look for charities with quite strong evidence of effectiveness, and tend to prefer very strong charities with reasonably high estimated cost-effectiveness to weaker charities with very high estimated cost-effectiveness.
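    As a rough illustration of the third heuristic above, here is a minimal sketch (my own, under the simplifying assumption of equally reliable estimates with a common error correlation) of how the error of an averaged estimate shrinks as more, and less correlated, lines of analysis are combined:

```python
def combined_error_sd(est_sd, n, correlation):
    """Standard deviation of the error of the average of n estimates, each with
    error sd est_sd and pairwise error correlation `correlation`:
    Var(mean) = est_sd^2 * (1/n + (n - 1)/n * correlation)."""
    variance = est_sd ** 2 * (1.0 / n + (n - 1) / n * correlation)
    return variance ** 0.5

for correlation in [0.0, 0.5, 0.9]:
    sds = [round(combined_error_sd(10.0, n, correlation), 2) for n in (1, 2, 5, 10)]
    print(f"error correlation {correlation}: sd of averaged estimate for n = 1, 2, 5, 10 -> {sds}")
```

    The smaller this aggregate estimate error, the less the Bayesian adjustment pulls the final view back toward the prior.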

    Conclusion

    • I feel that any giving approach that relies only on estimated expected-value – and does not incorporate preferences for better-grounded estimates over shakier estimates – is flawed.
    • Thus, when aiming to maximize expected positive impact, it is not advisable to make giving decisions based fully on explicit formulas. Proper Bayesian adjustments are important and are usually overly difficult to formalize.

 

Donating to the Somalia famine: A brief update

Since our initial post on the Somalia famine, we’ve continued our research to provide a stronger recommendation to donors. We do not yet have enough information to do so. At this point, we maintain our provisional recommendation for Doctors Without Borders (MSF).

Over the past 3 weeks, we’ve contacted many aid and UN-based organizations. We’ve spoken with representatives from Action Against Hunger, CARE, Doctors Without Borders (MSF), International Committee of the Red Cross, International Medical Corps, Oxfam, Save the Children, the World Food Programme, and UNICEF.

In our conversations with organizations, we’ve tried to answer the following questions:

  • What, specifically, are your activities in response to the emergency?
  • How do your expenses break down across these activities?
  • In what regions are you working? Are you primarily in the famine zone? In refugee camps? Other locations in the region?
  • Are you appealing for additional funding? If so, how much are you seeking? If you don’t raise all that you are appealing for, would you allocate unrestricted funding to your response?
  • How, specifically, would you spend additional funding?

We’ve also contacted funders such as the UN’s Central Emergency Response Fund (CERF), the Disasters Emergency Committee in the UK and USAID, but they have not been able to give us information about organizations or the situation on the ground that would inform our views of specific aid organizations.

We are waiting on information from several of the charities we’ve contacted to answer the questions above, and once we receive this information, we’ll be in a better position to make a stronger charity recommendation to donors.

For the time being, we maintain our provisional recommendation of Doctors Without Borders (MSF), which is publicly appealing for funds for the crisis in Somalia and its consequences.

Guest post from Vipul Naik

This is a guest post from Vipul Naik about how he decided what charity to support for his most recent donation. We requested this post along the lines of earlier posts by Eric Friedman, Jason Fehr, Ian Turner, and Dario Amodei. Note that this post was written before we published our most recent update on VillageReach.

Early giving: small amounts, based on whims

In September 2007, I joined the University of Chicago for graduate study in mathematics. For the first time, I was drawing a regular stipend that significantly exceeded my financial needs. I could now consider donating parts of my "own" money. Initially, I neither had a strong sense of what I should donate to, nor a burning desire to donate large parts of my savings, though I did have a vague feeling that donating money for worthwhile causes was a nice thing.

My initial "donations" made around December 2007 weren't really donations — they were more gratitude payments to non-profits and organizations that I think have made the world a better place — such as a $100 donation to the Wikimedia Foundation (the non-profit behind Wikipedia). I didn't consider myself a philanthropist trying to achieve specific large-scale change through my giving. Also, my savings weren't very high, and I hadn't mentally adjusted to the concept of making large donations.

Sponsor a kid!

I liked the idea of donating to organizations that serve poor people. However, I wasn't aware of any organization that I considered reliable, and finding one wasn't a priority. In June 2008, I was in downtown Chicago running some errands when I came across street fundraisers advertising for Children International (GiveWell review here), a Kansas-based international NGO that serves children across many developing countries through a one-on-one child sponsorship model. The idea appealed to me (my parents had participated in child sponsorship programs in India). I investigated Children International's website, and three weeks later (July 2008), I decided to sponsor a child for $22/month. A month later, I upped the number of children to two, for $44/month. I continued increasing the number of sponsored children until, around August 2009, the number had increased to 15 kids for $330/month.

Some neat — and life-changing — logic

I read a chapter in Steven Landsburg's book More Sex Is Safer Sex (an expansion of this Slate article) where Landsburg asserted that one should donate to only one charity rather than split one's donations across multiple charities. Landsburg argued that the size of a donation is usually too small to affect the relative merits of different charitable causes — and hence if you chose to give your first $1000 to Charity A rather than Charity B, the same reasoning should continue to apply to your next $1000. "Small" charities are somewhat different: if a donation has a large enough impact on the charity’s activities, the donation itself can alter the relative merits of different charities. However, for much of impersonal charitable giving to large causes/organizations, Landsburg's reasoning (and the accompanying mathematics) seemed valid, and I was convinced. (GiveWell has a similar philosophy — see this blog post on triage).

Landsburg's "one charity argument," on the surface, was more reason to keep donating to Children International and simply adjust the quantity donated rather than donating extra money to other charities. Or so I thought. But I gradually realized that the argument isn't merely about donating to one charity, rather, it is about donating to the best charity. I had no reason to suspect that Children International was bad, but I had no basis to conclude that they were the best (or anywhere near). Why did I continue donating to them?

Children International's sponsorship model (as opposed to simply making one-off grants/donations) made it psychologically hard for me to stop donating to them. At the time, I had no idea of candidates for substantially better charities. In hindsight, I should have stopped donating to Children International much earlier, even before I'd found a good charity.

Cutting the sponsorship cord

In late December 2009, I discovered a Bloggingheads diavlog (conversation) between William Easterly and Peter Singer. I'd already read Easterly's books The White Man's Burden and The Elusive Quest For Growth, and I also followed the Aid Watch blog to which he was a primary contributor. I was thus aware of Easterly's work and views on the shortcomings of official aid and development assistance. Peter Singer, a Princeton bioethicist and advocate of greater giving to meet the needs of the world's poorest, was new to me. In the diavlog, Singer mentioned GiveWell, and I followed the link to their website. GiveWell's research and philosophy impressed me. GiveWell did not recommend Children International, but recommended a handful of organizations based on extensive analysis. I wasn't sold on GiveWell's recommendations, but I now had some serious candidates that seemed substantially better than Children International.

I asked Children International to end my sponsorship in February 2010. I decided to not use a regular monthly donation model any more (with its implicit feeling of lock-in) but rather make periodic donation decisions, with due diligence done each time. I wasn't sure of the period: a long period has the advantage that the donation amount is sufficiently large to undertake a more thorough investigation, but this is also a disadvantage. Shorter periods between donations and smaller donation quantities reduce the risk of making a large donation to an organization that shuts down, or closes its room-for-more-funding gap, shortly after I donate.

Discovering VillageReach

I continued to follow GiveWell as well as other blogs on philanthropy, aid, poverty, and development. I was reasonably convinced that low-income country health systems were low-hanging fruit for donor money. The approach of GiveWell's top charity VillageReach (GiveWell review here) impressed me. I made donations of $1250 in March 2010 and $2000 in June 2010 to VillageReach through GiveWell's website.

Around this time, I started feeling that the one charity argument had exceptions. In some cases, I thought, making a donation tied to specific single projects can actually get those single projects done. Around August 2010, I got in touch with a researcher and talked about partially funding some research related to low-cost private education in the developing world. We had extensive correspondence and phone conversations and in September 2010, I made a donation covering part of the costs of a new research project, with the understanding that any cost overruns would be covered by him. The project was successful (albeit with cost overruns) though the research report is not yet published, so I cannot share details right now. I think this was a case where my willingness to come forward with initial money helped accelerate a project that may otherwise either not have happened or happened a year later.

However, such opportunities are rare and inherently risky. In October 2010, I returned to considering VillageReach for my next donation. I talked over the phone with Holden of GiveWell. I shared some concerns:

  • Did GiveWell have a sufficient incentive to critically re-evaluate their own top-rated charities in light of new data?

  • Why was there very little other information or news coverage about VillageReach other than their own website and GiveWell's evaluation of them?

  • Why hadn't any major donor or foundation agreed to cover VillageReach's funding gap?

Holden addressed my questions, and, shortly thereafter, GiveWell elaborated further in the blog posts Health system strengthening + sustainability + accountability and After "Extraordinary and Unorthodox" comes the Valley of Death.

In December 2010, I made a donation of $5100 to VillageReach, my largest to the organization, bringing my total to-date donations to VillageReach to $8350. After donating, I talked over the phone with VillageReach employee John Beale about VillageReach's activities, to help me in future donation decisions.

A new year

I planned to make my next donation around April 2011. GiveWell published an update on VillageReach in March 2011. The good news: GiveWell found no reason, based on VillageReach's latest activities, to modify its analysis of VillageReach's cost-effectiveness. However, the evidence at this stage wasn't sufficiently clear to conclude definitively that VillageReach's current programs would be as successful as (or more successful than) the pilot programs on which GiveWell had based its analysis.

GiveWell's recommendation was responsible for about $1.1 million of roughly $2 million that VillageReach raised in 2010. VillageReach had originally projected a need for slightly under $6 million for their Mozambique project that was to continue till 2014. They seemed to be on track to meet their funding needs. I was now unsure of the value of my marginal donation. I would still have reason to donate to VillageReach if either:

  1. They could deliver demonstrably greater benefits by rolling out their program much more quickly, and they could do so by getting funding more quickly.

  2. GiveWell could identify other top charities so that, once VillageReach's funding gap was closed, other donors could donate instead to these other top charities.

I talked again with VillageReach's John Beale in March 2011, and although I continued to be convinced about VillageReach's effectiveness, I was unconvinced about (1). The key hope was now (2) — could GiveWell identify more top charities soon? GiveWell had already identified finding top charities as their top priority for 2011 (see here and here). However, by the end of April 2011, I wasn't convinced that they'd be successful. Thus, I decided to hold off on my donation.

Independently, I started investigating other forms of philanthropy (such as those covered at the Breakthrough Philanthropy conference). I find some of them promising but don't yet feel confident enough to make a large donation to any of those organizations. In the meantime, I continue to check out GiveWell's updates on VillageReach and on their search for new top charities.

Somalia / East Africa famine relief donations

We’ve begun investigating the ongoing famine in Somalia / East Africa. We will be writing more on this topic as we learn more, but for the moment, we wanted to share a few preliminary thoughts:

  • This appears to be a very challenging situation for aid organizations, and it is difficult to determine who is in a position to use donations effectively.
  • That said, we see some reason to believe that it may be a promising giving opportunity for individual donors. It seems quite possible that donations from individuals are more helpful in a situation like this than in situations like the 2010 Haiti earthquake and 2011 Japan earthquake/tsunami.
  • At the moment, we recommend that donors wait until we publish more information, though if you’re looking to make your donation immediately, we provisionally recommend giving to Doctors Without Borders (MSF).

Details follow.

The situation, and why it is particularly challenging

On Wednesday, the United Nations declared a famine in the Bakool and Lower Shabelle regions of Somalia. The famine has caused extreme levels of acute malnutrition in southern Somalia. Much of the rest of the Horn of Africa (which includes Kenya, Ethiopia, Somalia, and Djibouti) is experiencing drought and a food crisis situation as well.

There are estimates that this is the worst famine in the region in 60 years and that it will only worsen in the next 2 months. About 4,000 to 5,000 Somalis per week are traveling across hundreds of miles of desert to reach the Dadaab refugee camps in eastern Kenya. Camps that were supposed to hold 90,000 people now hold about 380,000.

An Islamist militant group called al-Shabaab occupies regions that the famine has hit the hardest. Al-Shabaab has only allowed a few aid organizations to continue operating in southern Somalia and has killed WFP aid workers in the past. With safety concerns present, very few charities have access to the highest-need areas of Somalia.

Why this may be a promising opportunity

Despite the serious challenges, we want to note that

  • A consolidated appeal has been posted to Reliefweb and it is currently fairly far from being fully funded. This is a contrast with the recent earthquake in Japan, for which no such appeal was issued.
  • We’ve raised questions about whether Haiti relief had/has true room for more funding, due to the logistical difficulties in the aftermath of the earthquake – it seems possible that outside aid and money could have made some situations worse, not better. In this situation, there are concerns about the interactions between aid agencies and al-Shabaab, but if money reaches refugees (for example, in camps in Kenya), the same concerns about logistics would not seem to apply.

The combination of an unusually dire situation, and the absence of some of the issues that held us back from wholeheartedly recommending that donors give to recent earthquake relief efforts, marks this as a situation worth investigating from a maximizing-impact-of-donations perspective.

What we’ve done so far, and our provisional recommendation

We cross-referenced the lists at InterAction and FTS with our list of disaster relief charities, and chose to contact the following:

We’ve only spoken briefly with these organizations (and have not yet heard back from WFP) and can’t yet report on the details. As we learn more, we’ll post updates to our blog.

The representative of MSF in the UK with whom we spoke stated to us that

  • MSF is working on the ground in Somalia providing care to those affected by the famine.
  • The scaling up of aid into Somalia, and to Somali refugees in neighboring countries, is being restricted.
  • MSF is urgently calling for obstacles to humanitarian assistance to be removed.

We have recommended in the past that donors support MSF in response to disasters and, for the time being, we recommend MSF again now. However, we continue to investigate the situation and are trying to speak with other organizations, and we will be publishing updates fairly soon.

Josh Rosenberg is a summer intern at GiveWell. He is currently an undergraduate at Pomona College.

A charity to watch: GiveDirectly

Note: we sent GiveDirectly an early draft of this post and have made modifications after some back-and-forth.

We’ve been interested for a long time in the idea of simply giving out cash to the very poor, as a promising form of charity, and we’ve long been puzzled over why there don’t seem to be any charities focusing on this approach. In 2009, while acknowledging the potential drawbacks of cash transfers, we argued:

Which would you bet on to get water to people in Kenya: an organization funded by wealthy Americans (motivated by guilt and the wish to display generosity, among other things), or an organization funded by Kenyan customers (motivated by a need for water)?

Why do cash handouts seem to be so rare in the charity world? Perhaps it’s because extensive experience and study have shown this approach to be inferior to others. Or perhaps it has more to do with the fact that giving out cash fundamentally puts the people, rather than the charity, in control.

As of a few weeks ago, there is a charity focused on cash transfers: GiveDirectly. GiveDirectly plans to use the M-PESA system to transfer money using SIM cards, and hopes to give out 90% of its total expenses as cash transfers.

We encountered this group – and had some back-and-forth – before its launch.

GiveDirectly is a new organization and we have not done full due diligence on it: this is not a review. This post simply gives preliminary thoughts on a charity we consider to be worth watching. We have some concerns (see below) and the organization is too new for us to be able to assess these concerns and its general track record. That said, we think it’s worth noting a few things that stand out about GiveDirectly relative to other charities at this preliminary stage, and that we consider to be good examples of the sort of thing we’re looking for and trying to encourage. Most of these revolve around the idea of putting one’s plans on the record so one can be held accountable in the future.

Reasons to be optimistic

  • GiveDirectly has committed to a clear and exclusive focus on a promising intervention. It’s our view that cash transfers ought to be considered the “starting point” for charity: if we had no evidence about the effectiveness of any intervention, cash transfers would be the one we’d support because it most directly empowers low-income people. GiveDirectly states that there is also a substantial case in the literature that cash transfers are helpful, and it’s our impression that this is true (though our investigation of this literature is still in progress and we have not reviewed GiveDirectly’s page on the matter).

    We do think there are some interventions (for example, health programs) with a strong enough track record that it’s reasonable to bet on them over cash transfers. But we’re glad to see donors’ options increasing, especially when the new option is one of the most intuitive ways of helping.

  • GiveDirectly makes clear and specific statements related to room for more funding, i.e., the impact of marginal (as opposed to average) donations. It has committed to give away 90% of received funds as cash transfers and has provided documents for us (see below) regarding how much funding it has the capacity for. Along these lines, we like its statement on transparency:

    To us transparency means more than publishing financial statements. That tells you how we used money last year; real transparency means a clear commitment to how we will use the next dollar you give. It means focusing on providing one, easy-to-understand service; explaining exactly what it costs to provide that service; and doing so without vague language like “overhead” and “program expenses.”

  • Not only is GiveDirectly conducting a randomized controlled trial; it has pre-announced the design of the study. GiveDirectly appears relatively convinced of the merits of cash transfers (see above) but is still conducting a major study to examine the impact of its own work and see whether some versions of the program are more successful than others.

    Our #1 suggestion for making social science research more credible is to “pre-register it,” i.e., announce in advance what data will be collected and how it is intended to be analyzed, so that the final result can be compared with the initial plan and a reader can form their view of whether the results are an artifact of publication bias. We made this case to GiveDirectly and it sent us (see below) a template for the full survey it will be using and a plan for analyzing the data. Now that we have seen these and posted them publicly, GiveDirectly won’t be able to cherry-pick results in the same way that we suspect many studies do. (Of course it will still be possible for the researchers to perform different analysis than they had originally planned; but they won’t be able to sweep any unfavorable conclusions of their analysis under the rug.)

These are major points in GiveDirectly’s favor. We do have some concerns.

Our reservations

  • GiveDirectly is a new organization and does not yet have a track record. Its work could end up failing for many reasons – political, cultural, logistical – and right now its plans for monitoring, evaluation and transparency are just that, plans. We will become much more confident in GiveDirectly if it executes over the next couple of years, succeeds in meeting its commitment of 90% of expenses given out as cash, and continues to self-monitor and publish its data.

    (From GiveDirectly’s response on this point: “What readers should know about potential risks is that (a) while our organization is new, cash transfers are not (see discussion of conflict risk below); (b) we locate recipients prior to accepting donations, so there is little risk of our being unable to electronically transmit a donation; (c) we are therefore able to send 90% of donations to the poor as a commitment, not as a target, and would refund donations if we were unable to follow through.”)

  • We’re uneasy about the idea of giving out cash, transparently, to some people and not others within a community. Incitement of jealousy and conflict seems possible, and if power dynamics are imbalanced enough, we’re worried that more powerful village members may take the benefits away from others. We’ve discussed these concerns with GiveDirectly, and it appears they have taken some reasonable measures to put themselves in a position to assess the extent to which this becomes a problem, but we remain uneasy at the moment.

    (From GiveDirectly’s response on this point: “targeted cash transfers per se are not a new thing by any means, but in fact one of the most widely used and evaluated development interventions … cash transfers now cover between 750 million and 1 billion people worldwide (p. 10) and are one of most researched and evaluated form of development intervention (p. 30), though they remain uncommon in the charity world … What donors should know about us specifically is that we include detailed questions on conflict in our follow up surveys and also as part of our ongoing, independent impact evaluation, which will assess whether transfers increase serious conflicts (such as crime). In the follow up interviews we have conducted to date, we have learned that some people have complained about GiveDirectly because they did not receive a transfer, but we have not learned of any conflict between community members.”)

  • We wouldn’t be comfortable with GiveDirectly raising as much money as its stated capacity. In discussing room for more funding with GiveDirectly, we asked for a statement of the maximum amount that could be given out as (90%) cash grants in the short term. They responded:

    If one supervisor can add enough capacity each month to handle $0.5M for 4 years, or a total of $2M, then $2M per month is an upper bound on our capacity with a single supervisor (i.e. Jeremy). Based on that I’d say I’d start to worry if we were pulling in more than $20M per year — where by “worry” I mean hire more supervisors. I’ll be frank and say I don’t have an objective assessment of how many of these we can readily get our hands on; I don’t think 5 would be hard but I suspect 25 (to get us to $500M annual capacity) would.

    While GiveDirectly’s analysis is logical, we just wouldn’t be comfortable seeing a charity this new and unestablished raise more than a few million dollars (serving several thousand people at $500 per year) over the short term – the risks of poorly spending (or harmfully spending) those funds if its model hit unforeseen challenges would be too great.

    (From GiveDirectly’s response on this point: “Given that cash transfers have received arguably more extensive scrutiny than any other development intervention I think there is plenty of evidence to address such concerns, but of course readers should examine the evidence and decide for themselves.”)

Bottom line: GiveDirectly is too new for us to fully assess the concerns above, but it is definitely a charity to watch.

Attachments GiveDirectly has sent us:

A good volunteer is hard to find

We’ve written before about how volunteers often end up costing charities more in time and resources than the work they do for the organization is worth. (Charities seem to justify taking on these volunteers because they often become donors and informal fundraisers for the charity.)

In our experience, valuable volunteers are rare. The people who email us about volunteer opportunities generally seem enthusiastic about GiveWell’s mission, and motivated by a shared belief in our goals to give up their free time to help us. Yet, the majority of these people never complete useful work for us.

We ask new volunteers to first complete a test assignment that takes about 2-4 hours. The assignment involves fixing the formatting of our list of sources on two practice pages and allows us to get a sense of their attention to detail and their commitment to putting in volunteer hours. Of the 34 people who emailed us expressing an interest in volunteering between September 2010 (when we started keeping track) and May 2011, only 7 have completed the test assignment and gone on to complete valuable work for us.

Of the 34, 10 never responded to my email outlining what GiveWell volunteers do and asking them if they’d like me to send the first assignment. 13 responded to this email and I sent them the first assignment, but they didn’t complete it. The final 4 completed the test assignment, but didn’t send back the next (real) assignment I sent.

It seems rather surprising that almost 80% of people who take the initiative to seek us out and ask for unpaid work fail to complete a single assignment. But maybe this shouldn’t be surprising. Writing an email is quick and exciting; spending a few hours fixing punctuation is not.

Our overall success rate may be low, but I think the system works fairly well. Benefits include:

  • It allows us to concentrate our management resources on those individuals who provide a credible signal of their commitment and work ethic. This screen works well for vetting people interested in jobs and internships, as well as volunteering.
  • In cases where volunteers go through the initial screen but don’t turn into long-term contributors, they generally add value by giving us feedback on our work.
  • We’ve identified people who have added significant value. We’ve hired two volunteers: one who is now a full-time staff member and another who contributed useful part-time work for about 6 months and is now working with us full-time for the summer. Another volunteer has taken the lead on a difficult and important research project that wasn’t a good fit for any of our staff.

We’ll keep working with volunteers, not because the time is usually well spent, but because, in rare cases, it’s a great investment.