The GiveWell Blog

December 2023 open thread

Our goal with hosting quarterly open threads is to give blog readers an opportunity to publicly raise comments or questions about GiveWell or related topics (in the comments section below). As always, you’re also welcome to email us at info@givewell.org or to request a call with GiveWell staff if you have feedback or questions you’d prefer to discuss privately. We’ll try to respond promptly to questions or comments.

You can view previous open threads here.

Comments

  • Max Michalik on December 12, 2023 at 7:48 am said:

    I have recently seen multiple GiveWell sponsorships on YouTube. What is the rationale behind and the budget for these ads? Can I read about that anywhere?

    • Chandler Brotak on December 22, 2023 at 12:02 pm said:

      Hi Max,

      Thanks for your question! The primary goal of our advertising efforts is to drive more total dollars to our recommended organizations.

      This is the third year we’ve run ads on YouTube. We think that YouTube could be an effective way to find new GiveWell donors, and so far, we’ve been able to reach audiences who had not previously donated to GiveWell and who have since become donors.

      We haven’t published any information about our YouTube advertising specifically, but we have written about our podcast advertising, and many of those conclusions informed our YouTube advertising efforts.

      Our most recently published budget is from the attachments to our July 2022 board meeting. We expected to spend $2-$2.5MM per year on advertising and public relations in 2022 and 2023. We spent about $400K on YouTube advertising in 2023. We’ll assess the effectiveness of these ads as we close out the year.

  • Mitchell on December 18, 2023 at 12:20 am said:

    The WHO recommended the R21 malaria vaccine in October, and results of its phase 3 trial are available. Does GiveWell have updated analysis on this vaccine? It’d be awesome if the “Malaria Vaccines” page [1] could be updated with the latest thoughts on both RTS,S and R21, when available.

    In general, I’m curious how GiveWell thinks the recent emergence of two recommended vaccines may impact the cost-effectiveness of the current top charities’ SMC and bed net malaria interventions over the next few years. In the post about funding malaria vaccine rollout this year [2], it’s noted that GiveWell is funding a study to compare SMC plus vaccine to the vaccine on its own. Are there any preliminary thoughts on this?

    I’m worried that while SMC plus vaccine will surely be better than the vaccine alone, the marginal benefit of funding SMC may meaningfully decrease as vaccines get rolled out, and so it may no longer meet the 10x cash bar that it currently meets. In addition, the note for the latest grant to the Malaria Consortium [3] mentions that the funding provides three years of runway, which seems like plenty of time for things to change given that vaccines are such a recent and efficacious development.

    [1] https://www.givewell.org/international/technical/programs/malaria-vaccines#Phase-3_clinical_trial_results
    [2] https://blog.givewell.org/2023/05/12/why-givewell-funded-malaria-vaccine-rollout/
    [3] https://www.givewell.org/research/grants/Malaria-Consortium-SMC-renewal-Nigeria-Burkina-Chad-Togo-January-2023

    • Chandler Brotak on December 22, 2023 at 6:30 pm said:

      Hi Mitchell, thanks a lot for your questions!

      We have recently reviewed new evidence on the R21 vaccine from a phase III trial (preprint) and plan to incorporate these results into our intervention report in the near future. Broadly speaking, we believe that both vaccines offer similar levels of protection, but R21 is significantly cheaper to manufacture than RTS,S.

      Country demand for malaria vaccines is very high, and the supply of RTS,S and R21 will likely be sufficient to meet this demand. We believe that they are likely to be rolled out widely in the coming years. Because we believe both vaccines will decrease malaria mortality rates, they will most likely reduce the marginal impact of other antimalarial interventions, at least in countries with high vaccination rates. For example, recent results from a large-scale implementation pilot show a 13% reduction in all-cause mortality among children who were eligible for vaccination. While we haven’t modeled this explicitly yet, we expect it to have a moderate impact on the cost-effectiveness of nets and seasonal malaria chemoprevention (SMC).

      The study we directed funding to actually aims to compare perennial malaria chemoprevention (PMC) plus RTS,S to the vaccine by itself, and results are expected in late 2026. In the meantime, we reviewed trials that look at how a combination of malaria vaccines and SMC compares to either intervention alone (initial trial, extension study). The results are very encouraging: in settings with highly seasonal transmission, RTS,S offers similar levels of protection as SMC. On top of that, the combination of the two reduced malaria incidence by an additional 60% compared to either intervention on its own.

      Going forward, we think that the introduction of malaria vaccines at scale might affect our decision to fund SMC or nets in certain countries that are currently close to our funding threshold, although we expect both interventions to still be above our bar in many other cases. Layering interventions can still be very cost-effective since none of the interventions is perfect, and the malaria burden remains high in much of sub-Saharan Africa.

      I hope that answers your questions, but please feel free to follow up on this!

      • Mitchell on December 23, 2023 at 2:19 am said:

        Thanks for the detailed response and links! That answers all my questions for now – glad to hear how promising these new vaccines are looking.

  • What charities does GiveWell recommend to support Haiti? It can be in any category (e.g., poverty relief, housing, medicine, agricultural development, education, etc.).

    • Chandler Brotak on December 22, 2023 at 1:53 pm said:

      Hi there,

      Thanks for your question! We aim to recommend the most promising opportunities we can find, regardless of location. At this time, we are not supporting any programs working in Haiti specifically, but our research team is constantly identifying and evaluating new programs, and this could change in the future. Most (but not all) of the programs we support are in Africa and South Asia, which is where we’ve generally found the most cost-effective opportunities so far.

      The same program can vary a lot in cost-effectiveness depending on where it takes place. When we consider recommending funding, we evaluate the cost-effectiveness of a specific opportunity, rather than an entire program or organization. Cost-effectiveness matters because we aim to direct funding to the highest-impact opportunities we can identify. With our current cost-effectiveness threshold, this means recommending programs we believe are at least 10 times as impactful per dollar as unconditional cash transfers to people living in extreme poverty.

  • Ethan Kennerly on December 31, 2023 at 2:13 pm said:

    GiveWell has acknowledged both the sampling uncertainty of a statistic and the structural uncertainty of a model. This evolving consideration of uncertainty reaffirmed my confidence.

    The table showing the 25th percentile of cost-effectiveness for mosquito bed nets in Chad felt like a brilliant example. A link follows.

    https://www.givewell.org/how-we-work/our-criteria/cost-effectiveness/uncertainty-optimizers-curse#How_we_plan_to_do_this

    Two questions follow:

    1. The 25th percentile was expressed as 7x. On its own, 7x does not obviously or intuitively communicate how confident one should be that a bed net is a better choice than a cash transfer at all. On a scale from 50% to 100%, has a data analyst at GiveWell estimated the confidence that a mosquito bed net in Chad is (at all) more cost-effective than a cash transfer? To reiterate, this would be the confidence that bed nets clear even the smallest detectable relative cost-benefit compared to a cash transfer.

    2. GiveWell also acknowledged that the structure of the model itself is uncertain, and wisely compensated for structural uncertainty by considering other perspectives that sometimes overrule the spreadsheet entirely. Has a data analyst at GiveWell estimated a confidence, from 50% to 100%, in the structure of the spreadsheet model for mosquito bed nets in Chad?

    • Chandler Brotak on January 4, 2024 at 5:17 pm said:

      Hi Ethan,

      Thanks for your comment! In answer to your two questions:

      1. The 25th percentiles on the bottom-line cost-effectiveness number are the output of a Monte Carlo simulation. To do this, we essentially: i) specify 25th & 75th percentiles on key inputs in our models (e.g. costs/burden/treatment); ii) fit a distribution to these percentiles (e.g. uniform, lognormal); and iii) repeatedly draw from them. This gives us a distribution of final cost-effectiveness estimates, of which 7x is the 25th percentile. So, to answer your question, this figure is entirely the output of a Monte Carlo exercise, rather than one of our data analysts making an ‘all-in’ guess about how likely it is that a mosquito bed net is more cost-effective than a cash transfer. (A rough illustrative sketch of this kind of simulation follows after point 2 below.)

      2. No, we don’t try to quantify structural uncertainty by, e.g., assigning a percentage to how confident we are that our model captures the ‘true’ model of reality. We view the issue of structural uncertainty as pointing to broader limitations of a purely quantitative approach, and we think it’s best dealt with qualitatively (e.g. by considering perspectives ‘beyond the model’).
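
      As a rough sketch of the Monte Carlo exercise described in point 1, the Python snippet below fits lognormal distributions to stated 25th/75th percentiles, draws repeatedly, and reads off the 25th percentile of the resulting cost-effectiveness distribution. All input names and numbers here are hypothetical placeholders for illustration, not the actual parameters of our bed net model.

          # Minimal Monte Carlo sketch with made-up inputs (illustration only).
          import numpy as np

          rng = np.random.default_rng(0)
          Z75 = 0.6745  # 75th percentile of the standard normal distribution

          def lognormal_from_quartiles(q25, q75, size):
              # Fit a lognormal to stated 25th/75th percentiles, then sample from it.
              mu = (np.log(q25) + np.log(q75)) / 2
              sigma = (np.log(q75) - np.log(q25)) / (2 * Z75)
              return rng.lognormal(mean=mu, sigma=sigma, size=size)

          n = 100_000
          # i) + ii): quartiles on key inputs, each fit to a distribution (hypothetical values)
          cost_per_net = lognormal_from_quartiles(4.0, 6.0, n)                   # USD per net delivered
          deaths_averted_per_1000_nets = lognormal_from_quartiles(1.5, 3.0, n)
          value_per_death_averted = lognormal_from_quartiles(15_000, 25_000, n)  # in "dollars of cash transfers"

          # iii): repeatedly draw and compute cost-effectiveness as a multiple of cash transfers
          multiple_of_cash = (deaths_averted_per_1000_nets / 1000) * value_per_death_averted / cost_per_net

          print("25th percentile:", round(float(np.percentile(multiple_of_cash, 25)), 1))
          print("Median:         ", round(float(np.percentile(multiple_of_cash, 50)), 1))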

      Let us know if you have any further questions. Thanks again!

      • Ethan Kennerly on January 7, 2024 at 3:42 pm said:

        Chandler,

        The simulated sample makes perfect sense. In that article, the table reported that the 25th percentile was approximately 7x a cash transfer. Did a GiveWell data analyst also simulate the percentile at which a distribution of bed nets in Chad would be only slightly more cost-effective than a cash transfer? For example, 1.1x is greater than 1.0x, so 1.1x is more cost-effective than 1.0x. The percentile at which cost-effectiveness is only slightly better would indicate the confidence that a distribution of bed nets in Chad beats a cash transfer at all.

        • Chandler Brotak on January 17, 2024 at 4:45 pm said:

          Ethan,

          Thanks for clarifying – in the Monte Carlo simulation, the 6th percentile is 1x; that is, when we run the model multiple times, the Against Malaria Foundation (AMF) long-lasting insecticide-treated net (LLIN) campaign in Uganda comes out better than a direct cash transfer 94% of the time. (There was an error in the original post stating the AMF LLIN campaign depicted in Figure 1 was in Chad – it’s now correctly labeled for the campaign in Uganda.)
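
          For intuition, here is a small made-up sketch of that mapping: the percentile at which 1x falls in the simulated cost-effectiveness distribution corresponds directly to the share of simulated draws that beat a direct cash transfer. The distribution below is invented purely so the numbers roughly echo this example; it is not our actual model output.

              import numpy as np

              rng = np.random.default_rng(1)
              # Invented cost-effectiveness draws, expressed as multiples of cash transfers
              multiple_of_cash = rng.lognormal(mean=np.log(7.0), sigma=1.25, size=100_000)

              percentile_of_1x = 100 * np.mean(multiple_of_cash <= 1.0)  # where 1x falls in the distribution
              share_above_cash = np.mean(multiple_of_cash > 1.0)         # share of draws beating cash

              print(f"1x sits at roughly the {percentile_of_1x:.0f}th percentile")
              print(f"=> better than a direct cash transfer in {share_above_cash:.0%} of draws")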

          • Ethan Kennerly on January 21, 2024 at 7:21 pm said:

            Chandler,

            Reading the percentile of the simulated benefit from a mosquito bed net reaffirmed my confidence. One interpretation is that an extra bed net benefited a family in Uganda more than extra cash in roughly 9 out of 10 simulated trials. That frequency of 9 out of 10 would be an upper-bound estimate; a lower-bound estimate depends on the confidence in the simulation itself.

            The article acknowledged uncertainty in the model that informed the simulation, and it mentioned that GiveWell applies other criteria that sometimes overrule the estimated cost-effectiveness. That humility and practical strategy deeply resonated with me. For example, the article cited a 2014 blog post titled “Sequence thinking vs. cluster thinking.” Under uncertainty, applying non-quantified criteria seems sound.

            Is GiveWell aware of an example from any year between 2020 and 2023 where GiveWell judiciously overruled the estimated cost-effectiveness when distributing from the Top Charities Fund?

            Ethan

          • Chandler Brotak on February 7, 2024 at 3:31 pm said:

            Ethan,

            One example is the funding for Togo in a grant to Malaria Consortium’s SMC program in January 2023. At the time of the grant decision, we estimated that the SMC program in Togo was around 9x, only slightly under our 10x cost-effectiveness threshold. There was also a pending change to our SMC cost-effectiveness model that was not finalized in time for this grant decision but that would take the Togo SMC program over the 10x threshold; we believed it was highly probable that this change would ultimately be incorporated into our model. This, together with the comparatively small size of Togo’s SMC program (5% of the grant), justified our decision to maintain funding for the program despite its lower cost-effectiveness than the programs in Burkina Faso and Nigeria.

  • Magnus on January 8, 2024 at 8:24 am said:

    Have you looked into the effectiveness of landscape restoration / water bunds charities, such as Justdiggit (justdiggit.org) and other similar organizations?

    • Chandler Brotak on January 11, 2024 at 4:23 pm said:

      Hi Magnus,

      Thanks for your question! We did a preliminary analysis of a land restoration program early last year. At that time, an evaluation of the program was ongoing, and we’ll be interested to see those results. Generally speaking, we don’t look at every program; instead, we recommend a small number of evidence-backed programs that we estimate to be the most cost-effective.

  • Does GiveWell do any modelling on loss of trust or the chance of corruption in either GiveWell or any of its recommended charities (which would then impact GiveWell’s credibility)?

    For example, has the team thought about whether it would be worth auditing AMF on site x times a year to see whether its claims are legitimate?

    Appreciate your work!

    • Chandler Brotak on February 12, 2024 at 10:09 am said:

      Hi Steve,

      Thanks for your question! We consider fraud/corruption risks where they seem salient (e.g. conditional cash transfers), and we’re aware that large-scale health and development programs like the ones we fund often carry some risk of fraud. Two examples where we’ve incorporated this risk into our modeling are New Incentives (repeat enrollments and fraud at the expense of program participants) and IRD’s vaccination program (fraud generally). We consider this when investigating programs and prefer to fund organizations that seem to have robust processes in place for detecting, preventing, and responding to fraud, but we want to do more work on this in the future.
