The GiveWell Blog

June 2024 open thread

Our goal with hosting quarterly open threads is to give blog readers an opportunity to publicly raise comments or questions about GiveWell or related topics (in the comments section below). As always, you’re also welcome to email us at info@givewell.org or to request a call with GiveWell staff if you have feedback or questions you’d prefer to discuss privately. We’ll try to respond promptly to questions or comments.

You can view previous open threads here.

Comments

  • Dr. Georg Naggls on June 9, 2024 at 9:08 am said:

    R21/Matrix-M has been approved as safe and effective, the WHO now recommends it, and Gavi has funded the rollout. Will you have a look into this and reevaluate the malaria situation accordingly? Will Gavi receive a recommendation? Are there other charities distributing R21 more effectively? Could the vaccine be more effective than bed nets? Are the effects of the two complementary, or does one crowd out the other?

    • Chandler Brotak on July 1, 2024 at 4:32 pm said:

      Hi Dr. Naggls,

      Thanks for your question, and sorry for the delayed response here! Our research team is currently discussing the R21 vaccine and its implications for our work. We’ll follow up with a more detailed response when we can.

  • AnonymousAntelope on June 25, 2024 at 12:58 pm said:

    In December 2022, you announced the results of the “Change Our Mind Contest”, where you awarded joint first place to an essay that recommends using Monte Carlo simulation to properly model uncertainty in cost-effectiveness.

    That was 18 months ago. But as far as I can tell, you haven’t yet implemented this recommendation in any of your research. Is this correct? I thought the contest and prize money were great (as you know, writing high-quality critiques is a lot of work), but I’m a bit disappointed that little seems to have come of it.

    Are you still planning to do this? If not, why not? And if yes, why the long delay?

    Thanks! 🙂

    I also posted the same question with a bit more context here: https://forum.effectivealtruism.org/posts/3peM8FhzaFeGTMdEm/does-givewell-still-plan-to-model-the-uncertainty-of-their

    • Chandler Brotak on June 26, 2024 at 3:49 pm said:

      Hello,

      Thanks for your question! We did say we’d include a 25th/75th percentile range on bottom-line cost-effectiveness (in addition to the one-way sensitivity checks). We haven’t added that yet, and we should. We ran into some issues when running full sensitivity analyses (as opposed to the one-way sensitivity checks we do have), and we prioritized publishing updated intervention reports and cost-effectiveness analyses without them.

      We’ll add those percentile ranges to our top charity intervention reports (so the simple cost-effectiveness analyses will also include a 25th/75th percentile range on bottom-line cost-effectiveness, in addition to one-way sensitivity checks) and ensure that new intervention reports and grant pages include them before publishing. We think it’s worth emphasizing how uncertain our cost-effectiveness estimates are, and this is one way to do so (though it has limitations).

      That said, we’re not planning to base our decision-making on this uncertainty in bottom-line cost-effectiveness (as the “Change Our Mind Contest” post recommended) or to model uncertainty on every parameter. To guard against the Optimizer’s Curse, we prefer our approach of skeptically adjusting our inputs, rather than applying an all-in adjustment to bottom-line cost-effectiveness. We explain why in our post on uncertainty.
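
The percentile ranges discussed in this exchange can be produced with a simple Monte Carlo simulation: sample each uncertain input from a distribution, compute bottom-line cost-effectiveness for each draw, then read off the 25th and 75th percentiles of the resulting draws. Below is a minimal sketch in Python; all parameters and distributions are made-up illustrations, not taken from GiveWell’s actual models.

```python
import random

random.seed(0)  # reproducible draws for this illustration

def sample_cost_effectiveness():
    # Hypothetical inputs for an illustrative net-distribution-style
    # program; every number here is invented for demonstration.
    cost_per_unit = random.lognormvariate(1.6, 0.2)        # $ per net delivered
    coverage = random.betavariate(8, 2)                    # fraction of nets used
    deaths_averted_per_1k = random.lognormvariate(0.5, 0.4)  # per 1,000 nets used
    return (deaths_averted_per_1k * coverage) / (cost_per_unit * 1000)

# Draw many samples of bottom-line cost-effectiveness, then take percentiles.
draws = sorted(sample_cost_effectiveness() for _ in range(10_000))
p25 = draws[len(draws) // 4]        # 25th percentile
p75 = draws[3 * len(draws) // 4]    # 75th percentile
print(f"25th percentile: {p25:.6f}, 75th percentile: {p75:.6f}")
```

The 25th/75th range summarizes the spread of outcomes without committing to a full decision procedure based on that uncertainty, which matches the distinction drawn in the reply above.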

  • John Birchall on June 29, 2024 at 12:10 pm said:

    Measuring cost-effectiveness in mental health appears to depend on peer-reviewed publications. The “Assessment of Happier Lives” report calls for a 25% downward adjustment for publication bias. Consider a charity whose objective is to support people who feel that long-term dependence on psychiatric drugs has made their lives and their families’ lives a misery, that their potential usefulness to society has been drastically reduced, and that, on the available (largely anecdotal) evidence, a responsible and well-supported withdrawal program can undo that damage. Because drug producers finance much of the research, it is arguable that without a very large *upwards* adjustment for bias, reflecting absent or skewed research, the neglectedness of the cause will be missed. How can an organization seeking funding for an area that is neglected, owing to huge vested interests in keeping it neglected, overcome evaluators’ habit of making downward adjustments but rarely, if ever, substantial upwards ones?

    • Chandler Brotak on July 9, 2024 at 4:14 pm said:

      Hi John,

      Thanks for your question! We think it’s possible there are promising programs that don’t have strong evidence yet, for whatever reason. We’re on the lookout for that and have funded programs where we think there’s insufficient evidence (e.g., oral rehydration solution in Bauchi, Nigeria; spillover effects of GiveDirectly’s unconditional cash transfers in Kenya; Malengo’s educational migration program; etc.). We think the issue here is less about capture by industry, though, and more about a lack of funding for exploring promising global health and development programs. We’d be interested in finding cases where programs are not being studied due to poor incentives.

      Related to publication bias and internal validity, we do think there are meaningful differences in the quality of research and the likelihood of bias. For example, we apply only small adjustments to meta-analyses of SMC, which are based on several well-conducted RCTs with fairly objective outcomes. For psychotherapy programs, there seems to be more evidence of bias. We discuss this in our assessment of Happier Lives Institute’s cost-effectiveness analysis of StrongMinds.
