Our goal with hosting quarterly open threads is to give blog readers an opportunity to publicly raise comments or questions about GiveWell or related topics (in the comments section below). As always, you’re also welcome to email us at email@example.com or to request a call with GiveWell staff if you have feedback or questions you’d prefer to discuss privately. We’ll try to respond promptly to questions or comments.
You can view previous open threads here.
Where are you up to in implementing suggestions from the Change Our Mind Contest winners or other submissions?
What are your main misgivings about the Happier Lives Institute’s approach to charity evaluations, and/or in the absence of major concerns, are you planning to redo your CEAs with WELLBYs as the comparative unit?
(FYI I am not affiliated with HLI, I just admire their work and think it should be more central to EA grantmaking.)
Hello communications expert,
I read the cogent summary of the cost-effectiveness of syphilis screening. My takeaway from the section was that this particular syphilis screening program might match top-rated charities in cost-effectiveness. That interpretation reinforced my confidence in GiveWell’s diligent accounting.
But one phrase was:
Only the above phrase detracted from that assurance. The article transparently advises readers, in many cases, to disregard the precision of the number 29, yet that precision still derailed my train of thought: 28 or 30 differ from 29 by less than 5%, so a reader might misinterpret the figure as carrying a margin of error under 5%. Reading the section carefully, one is advised not to misinterpret the use of 29 as meaning “greater than 28 and less than 30.” A couple of questions come to mind:
1. GiveWell published 29 rather than 28 or 30. Has GiveWell also published something like a margin of error or confidence interval around the number 29? Would a confidence interval clear up the misinterpretation that GiveWell is confident the number is not 28 or 30?
2. If such a margin or interval is out of scope for an analysis, has GiveWell considered treating a point estimate like 29 as also out of scope for an uncertain model compounded by uncertain evidence? In cases of such uncertainty, would a GiveWell reader find a qualitative summary of optimism clearer than the number 29?
First of all, thank you for all the work that you do in analyzing charities!
In your analyses of Helen Keller International and Malaria Consortium, you analyze the most effective program in each charity (vitamin A supplementation and seasonal malaria chemoprevention, respectively), and you base your recommendations on these single programs. On your donations page (https://www.givewell.org/donate/more-information#supportcharities), you recommend earmarking funds for these specific programs rather than giving unrestricted donations to the organizations.
However, there is a GiveWell blog post (https://blog.givewell.org/2009/12/16/room-for-more-funding-continued-why-donation-restricting-isnt-the-easy-answer) that advises against restricting donations. In particular, the post says this increases administrative burdens on charities and generally does not actually change the allocation, since money can be funged around internally. The post is old, but the logic still seems convincing to me.
This raises two related questions in my mind:
1) Why does GiveWell now recommend restricted donations in contrast to the older recommendation against them?
2) If funging negates the effects of any earmarks, is a donation to HKI (earmarked or not) less effective than one to Against Malaria Foundation, where the charity has a single program and thus funging is not an issue?
Thank you for any insights or recommendations you can provide here!
I’ve just learned about a charity called Family Empowerment Media which runs contraception education campaigns in Nigeria. During their initial rollout, Kano state (the area in which FEM’s radio programs were aired) saw a 75% increase in the use of contraceptives. FEM stated that, even if their messaging were only responsible for 15% of that increase (a very conservative estimate), the cost to save a life through reduced maternal mortality was around $2,200 USD. I would love to know if GiveWell has evaluated Family Empowerment Media before, and if so, what concerns it has considering the very low cost to save lives.
Thanks for your questions!
Re: the Change Our Mind Contest, we have concrete plans to address the topics of the two first-place winners, “An Examination of GiveWell’s Water Quality Intervention Cost-Effectiveness Analysis” and “GiveWell’s Uncertainty Problem.” These will likely take the form of some updates to our cost-effectiveness analysis for in-line chlorination and Dispensers for Safe Water, and a write-up on how we plan to approach uncertainty going forward. We don’t yet have a publication date set for these updates.
For the entries we recognized with an honorable mention (see the blog announcement for descriptions), we’ve added the research questions raised by these critiques to our queue. We’re currently considering how to prioritize these alongside other research questions that come up in the course of our grant investigations, so there’s no clear timeline for when those updates will be incorporated.
Re: the Happier Lives Institute’s recommendation of WELLBYs and its corresponding recommendation of StrongMinds (more here for anyone reading this who isn’t familiar with their work), we are working on a report summarizing our review of the evidence behind group interpersonal psychotherapy (IPT-G), the intervention carried out by StrongMinds. That report, which should be published soon, will be the best source of information for our views on this subject. We’ll post another comment here to let you know when that’s published (and feel free to sign up here if you’d like to be notified of future research reports).
I hope that’s helpful!
Thanks for your careful engagement with our work! Currently, we don’t have a margin of error or confidence interval to express our uncertainty about the 29x cash estimate for the grant you mention, or any of our other cost-effectiveness estimates. We also don’t currently have a plan to stop referring to these point estimates of cost-effectiveness in our public write-ups.
We have historically tended to factor uncertainty into our analyses “up front,” by applying adjustments for things like external validity (the applicability of study findings to real-world settings) or replicability (the likelihood of another study finding a result similar to the study results we’re drawing on). We try to think through and adjust for as many such variables as we feasibly can in our public cost-effectiveness analyses, so that the resulting number represents our best guess of cost-effectiveness, inclusive of our uncertainties. (And, as you noted, we heavily caveat the final number when we publish it.) It’s also important to note that we use these cost-effectiveness estimates mostly for comparative purposes, to decide what programs we should prioritize funding—not to serve as an absolute judgment of their impact.
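As a rough illustration of the “up front” approach described above, here is a minimal sketch (all numbers are hypothetical, not GiveWell’s actual figures) of how multiplying best-guess adjustments through an unadjusted figure yields a single point estimate, and how sampling those same adjustments from ranges would instead yield an interval:

```python
# Hypothetical illustration only: how multiplicative "up front" adjustments
# fold uncertainty into one point estimate, versus sampling the adjustments
# to produce an interval. None of these numbers are GiveWell's.
import random

unadjusted_multiple = 40.0   # hypothetical unadjusted "x cash" figure
replicability = 0.85         # hypothetical best-guess discount for replicability
external_validity = 0.85     # hypothetical discount for real-world applicability

# Point-estimate approach: multiply best-guess adjustments through.
point_estimate = unadjusted_multiple * replicability * external_validity
print(f"point estimate: {point_estimate:.0f}x cash")

# Interval approach: treat each adjustment as uncertain and sample it.
random.seed(0)
samples = [
    unadjusted_multiple
    * random.uniform(0.7, 1.0)   # replicability drawn from a hypothetical range
    * random.uniform(0.7, 1.0)   # external validity likewise
    for _ in range(10_000)
]
samples.sort()
low, high = samples[250], samples[-251]  # central ~95% of draws
print(f"~95% interval: {low:.0f}x to {high:.0f}x cash")
```

The point is that both outputs come from the same underlying judgments; the point estimate simply collapses the ranges to their best guesses.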
All that said, we have gotten a fair amount of feedback recently suggesting that we adopt a more systematic way of expressing uncertainty in our analyses. In fact, one of the top two winning entries in our Change Our Mind Contest was a critique of our approach to uncertainty. We’re currently exploring how we might update our approach based on this and similar critiques.
I hope this is helpful!
Thanks so much for your engagement! We are aware of Family Empowerment Media’s work and have had several conversations with them recently. We plan to continue investigating this program.
Thank you for your great work! While I believe it is far from a primary issue for GiveWell charities, I came to your website looking for a description of how you vet charities regarding internal corruption and general malpractice.
I found some related considerations on this page: https://www.givewell.org/charities/top-charities/2020/qualitative-assessments. However, I would be interested to hear if there is a page you would rather direct me toward that describes how GiveWell systematically approaches such issues.
Following up to share that we’ve just published our assessment of HLI’s analysis of StrongMinds here.
Thanks for your interest!
Thanks for your support of our work, and for this question! Apologies for the delay in responding.
We think our current top charity recommendations are not very susceptible to this funging risk, because we do a lot of research to ensure they meet the two conditions laid out in Holden’s post: they can productively use more funding, and unrestricted funds are not allocated to them. We conduct extensive room for more funding analyses on these programs, which include their funding needs, anticipated funding from sources outside of GiveWell, and how much unrestricted funding we expect them to receive from parent organizations. For example, in our room for more funding analysis for Helen Keller’s vitamin A supplementation (VAS) program, we estimate that Helen Keller allocates $0 in unrestricted funding to VAS (see here, including the cell note in C22).
In addition, our cost-effectiveness analyses include adjustments for “within-org fungibility,” which is the risk that if GiveWell directs funding to one of these programs, the organization might spend less time fundraising for that program than they would have otherwise. So this risk is to some extent baked into our overall cost-effectiveness estimate for charities we recommend that work on multiple programs. (In the case of Against Malaria Foundation and New Incentives, we put the risk at 0%.)
I hope that’s helpful!