The GiveWell Blog

Weighing organizational strength vs. estimated cost-effectiveness

A major question we’ve asked ourselves internally over the last few years is how we should weigh organizational quality versus the value of the intervention that the organization is carrying out.

In particular, is it better to recommend an organization we’re very impressed by and confident in that’s carrying out a good program, or better to recommend an organization we’re much less confident in that’s carrying out an exceptional program? This question has been most salient when deciding how to rank giving to GiveDirectly vs giving to the Schistosomiasis Control Initiative.

GiveDirectly vs SCI

GiveDirectly is an organization that we’re very impressed by and confident in, more so than any other charity we’ve come across in our history. Reasons for this:

But we estimate that marginal dollars to the program it implements, direct cash transfers, are significantly less cost-effective than bednets and deworming programs. Excluding organizational factors, our best guess is that deworming programs (which SCI supports) are roughly 5 times as cost-effective as cash transfers. As discussed further below, our cost-effectiveness estimates are generally based on extremely limited information and are therefore extremely rough, so we are cautious about assigning too much weight to them.

Despite the better cost-effectiveness of deworming, we’ve had significant issues with SCI as an organization. The two most important:

  • To evaluate SCI’s track record, we originally relied on a set of studies showing dramatic drops in worm infection coinciding with SCI-run deworming programs; we later discovered flaws in the studies’ methodology that led us to conclude they did not demonstrate that SCI had a strong track record. We wrote about these flaws in 2013 and 2014.
  • We’ve seen limited and at times erroneous financial information from SCI over the years. We have seen some improvements in SCI’s financial reporting in 2016, but we still have some concerns, as detailed in our most recent report.

More broadly, both of these cases are examples of general problems we’ve had communicating with SCI over the years. And we don’t believe SCI’s trajectory has generated evidence of overall impressiveness comparable to GiveDirectly’s, discussed above.

Which should we recommend?

One argument is that GiveWell should only recommend exceptional organizations, and so the issues we’ve seen with SCI should disqualify it.

But we think that the ~5x difference in cost-effectiveness is meaningful. There’s a large degree of uncertainty in our cost-effectiveness analyses, which is something we’ve written a lot about in the past, but this multiplier appears somewhat stable: it has persisted in this range over time and is currently consistent with the individual estimates of many staff members. A ~5x difference gives SCI a fair amount of room to do more good even accounting both for possible errors in our analysis and for differences in organizational efficiency.
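
To make the arithmetic behind that room for error concrete, here is a minimal sketch in Python. The “organizational discount” values are purely hypothetical numbers chosen for illustration, not GiveWell estimates; the point is just to show how large a penalty a ~5x intervention-level multiplier can absorb:

```python
# Hypothetical illustration only: how large an "organizational discount"
# would it take to erase a ~5x intervention-level cost-effectiveness edge?
# The discount values below are made up for illustration; they are not
# GiveWell estimates.

CASH_VALUE = 1.0            # value per marginal dollar to cash transfers (baseline)
DEWORMING_MULTIPLIER = 5.0  # estimated intervention-level multiplier vs. cash

for org_discount in (1.0, 0.5, 0.25, 0.2, 0.1):
    # org_discount: hypothetical fraction of the intervention's potential
    # value actually realized, given organizational weaknesses
    adjusted = CASH_VALUE * DEWORMING_MULTIPLIER * org_discount
    verdict = "better than cash" if adjusted > CASH_VALUE else "no better than cash"
    print(f"discount {org_discount:.2f}: {adjusted:.2f}x cash ({verdict})")
```

On these made-up numbers, donations via SCI would fall to cash-transfer levels only if organizational problems destroyed roughly four-fifths of the program’s value, which is the intuition behind treating the ~5x estimate as leaving a fair amount of room for error.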

A separate argument that we’ve made in the past is that great organizations have upside that goes beyond the value of conducting the specific program they’re implementing. For example, early funding to a great organization may allow it to grow faster and increase the amount of money going to its program globally, either by proving the model or through its own fundraising. And GiveDirectly has shown some propensity for potentially innovative projects, as discussed above.

We think that earlier funding to GiveDirectly had this benefit, but it’s less of a consideration now that GiveDirectly is a more mature organization. We believe this upside exists for what we’ve called “capacity-relevant” funding: the type of funding need we consider most valuable when ranking the importance of marginal dollars to each of our top charities. Capacity-relevant gaps are those we expect to allow an organization to grow in an outsized way in the future, for instance by entering a new country.

Bottom line

Our most recent recommendations ranked SCI’s funding gap higher than GiveDirectly’s due to SCI’s higher estimated cost-effectiveness. We think that SCI is a strong organization overall, despite the issues we’ve noted, and we think that the “upside” for GiveDirectly is limited on the margin, so ultimately our estimated 5x multiplier looks meaningful enough to be determinative.

We remain conflicted about this tradeoff and regularly debate it internally, and we think reasonable donors may disagree about which organization to support.

Comments

  • Gregory Lewis on July 29, 2016 at 11:45 am said:

    I expect organizational performance to be a multiplier of impact given a particular intervention: a charity can reduce an in-principle outstanding intervention to zero impact by mismanagement, but an intervention with no impact is very unlikely to get much better no matter how effective the organization is at implementing it.

    Nonetheless, it seems likely that the intervention, rather than the organization, is the bigger factor for overall impact. From DCCP we find (pace regression to the mean) that global health interventions vary in cost-effectiveness by orders of magnitude: although many are less cost-effective than cash transfers, it is not surprising that some could beat cash transfers by 5-fold, 10-fold, or more.

    By contrast, I imagine organizational performance varies much less: probably roughly normally distributed, and finding one charity that is 10x better organized than another would be rare. Although I agree the factors noted favour GiveDirectly having better organizational performance than SCI, I’d be very surprised if this amounted to a factor of 2 (let alone 5), no matter how we cash out organizational effectiveness and what second-order variables we include.

    Given this, one may anticipate the list of ‘best charities’ in global health will be dominated by those pursuing outstanding interventions rather than those with exceptional organisational quality.

    One may still arrive at recommendations slanted towards a less important factor if that factor can be estimated more precisely. Yet for all the challenges GiveWell documents surrounding the evidence base for deworming, interventions at least allow, in principle, crisp empirical evidence of efficacy. By contrast, organizational efficacy is vague, and its estimation often relies upon fairly subjective interpretation of fairly distal proxies.

    I aver GiveWell’s recommendations (particularly of GiveDirectly) are thrown somewhat off by two errors: 1) mistakenly overweighting organisational quality in predicting impact (little variation is likely explicable by this metric, and the intervention explains far more), and 2) over-confidence in GiveWell’s ability to ascertain organisational quality.
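
    To sketch the first point with a toy model (hypothetical distributions and parameters chosen only to illustrate the intuition, not fitted to any data): suppose intervention cost-effectiveness is lognormal, so it varies over orders of magnitude, while organizational performance is a narrower normal multiplier.

    ```python
    # Toy simulation with hypothetical parameters: if intervention
    # cost-effectiveness is lognormal (orders-of-magnitude spread) and
    # organizational performance is a narrow normal multiplier, which
    # factor accounts for more of the spread in overall impact?
    import random

    random.seed(0)
    N = 100_000

    interventions = [random.lognormvariate(0, 1.5) for _ in range(N)]        # heavy-tailed
    org_multipliers = [max(0.0, random.gauss(1.0, 0.25)) for _ in range(N)]  # narrow

    impacts = [i * o for i, o in zip(interventions, org_multipliers)]

    def spread(xs):
        """Ratio of the 90th to the 10th percentile."""
        xs = sorted(xs)
        return xs[int(0.9 * N)] / xs[int(0.1 * N)]

    print("intervention spread (90th/10th pct):", round(spread(interventions), 1))
    print("organizational spread (90th/10th pct):", round(spread(org_multipliers), 1))
    print("overall impact spread (90th/10th pct):", round(spread(impacts), 1))
    ```

    On parameters like these, the spread in overall impact closely tracks the spread in intervention cost-effectiveness; that is the sense in which intervention choice, rather than organizational quality, would dominate the ranking.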

  • Ben Todd on August 13, 2016 at 1:17 am said:

    Hey Greg,

    I think there’s a lot of truth in your comment, especially if you think of organisational quality as things like operational efficiency and transparency. However, I could see definitions of ‘organisational quality’ under which you can get 10x variations in impact.

    e.g. a highly able team should be able to scale up much faster, increasing RFMF (room for more funding), which is highly relevant because GW will fund any charity that meets its criteria and is clearly more effective than GiveDirectly. It’s better to back these orgs because they have better growth potential, and therefore greater long-term impact.

    e.g.2. a highly able team is better able to take advantage of unforeseen upside opportunities in the vicinity of their project, e.g. GiveDirectly set up Segovia and is working to promote cash transfers as a new baseline in philanthropy, which could influence billions of dollars towards more effective approaches.

  • jamie cassidy on August 22, 2016 at 11:20 am said:

    One extra consideration that is likely relevant to the debate is whether GiveWell’s involvement in an organization can significantly improve the quality of the organization and the programs it runs.
    For example, if GiveWell can improve the effectiveness of SCI’s deworming programs by continuously questioning SCI’s financial information and internal quality control procedures, then this is definitely an argument in favour of staying involved with lower-quality organizations.

  • Gregory Lewis on September 1, 2016 at 5:00 am said:

    Ben,

    On reflection, I agree with your remarks: I can see how a less organizationally effective version of GiveDirectly may have turned out much less successfully. My hunch is that at the present time, though, this no longer is such a big multiplier for GD. I guess it might be analogous to start-ups versus more mature companies: one can see in the former case how marginal improvements in organizational performance may yield large variations in expected value, but less so as they become established on a given trajectory.

    My impression is GiveWell has so far looked at charities towards the ‘more established’ end of the spectrum, where I believe ‘fundamentals’ like the intervention are likely more important than general measures of organizational performance. Perhaps OpenPhil may get better results from a more ‘organization-first’ (or at least heavily organization-weighted) ethos.

  • Ben Todd on September 11, 2016 at 2:25 pm said:

    Greg, I agree it seems reasonable that team quality is a more effective predictor for startups than for established organisations (and I think that’s widely reflected in for-profit investment practice).
