Research Strategy: Cross-Cutting

GiveWell’s cross-cutting team works to improve GiveWell’s grantmaking by tackling complex research questions that cut across grantmaking areas, reviewing our research and grants, and ensuring transparency and legibility in our findings. This post explains more about our team’s role, how we think our work helps GiveWell’s grantmaking, and our current areas of focus.

What does the cross-cutting team do?

GiveWell’s research team aims to find and fund the most cost-effective giving opportunities in global health and development. While our grantmaking teams are focused on funding programs in their specific areas (malaria, vaccines, nutrition, water, livelihoods, and new areas), the cross-cutting team addresses research questions that span multiple areas of our work.

We do this in a few ways:

  • Tackling thorny questions that cut across grantmaking areas. This includes: How should we use cost-effectiveness estimates to make grant decisions when these estimates have so much uncertainty?[1] Are we overestimating the cost-effectiveness of our top charities by double counting lives saved each year?[2] Are programs that increase subjective well-being more cost-effective than our top charities?[3] How large are non-mortality benefits of prevention programs (e.g., long-term income increases[4] or medical costs averted[5])?
  • Rigorously testing our conclusions. We “red team” our grantmaking to find holes in our research,[6] make grants to organizations to test key assumptions,[7] consult outside experts on our findings,[8] engage with and respond to critiques of our work,[9] and experiment with different approaches for soliciting feedback, such as criticism competitions.[10]
  • Making our research clear and transparent. We set standards for the legibility of grant pages and other research write-ups.[11] This includes providing simple versions of our cost-effectiveness analyses, explicit estimates of uncertainty in key parameters, and “outside-the-model” considerations (such as learning value or organizational track record).[12]

Why do we think this work is important?

The cross-cutting team’s priorities rely on several hypotheses about what makes for good grantmaking decisions across GiveWell. We think:

  • Addressing cross-cutting questions may change our grantmaking. We think we’ve overlooked some important considerations while prioritizing research that more immediately affects grantmaking. Turning our attention to these questions may change our minds or reveal mistakes.[13]
  • External views and outside research can change our minds. We think more input from a range of experts will strengthen our analysis and help us identify oversights, and we should be on the lookout for ways to fund research that could test our conclusions.[14]
  • Transparency and legibility are essential to making good decisions. We think clearly explaining our reasoning exposes gaps and invites scrutiny. When our logic is hard to follow, that’s a red flag. We think it’s important to legibly explain both the rationale for specific grants and the broader aspects of our worldview.[15]
  • Consistency checks are valuable. Having teams specialize by program area may risk inconsistent decision-making across grantmaking areas.[16] We think systematic comparisons may surface discrepancies that we should investigate.

What are some questions we expect to share more about in the near future?

We’re prioritizing work in the following areas:

  • “Lookbacks” that assess how well previous grants have gone. We’ll work with grantmaking teams to publish lookbacks on roughly $190 million in recent grants to programs for chlorination, vitamin A supplementation, and conditional cash transfers for vaccination. We also plan to publish lookbacks on how successful our approximately $120 million in technical assistance (TA)[17] grantmaking has been (in areas like syphilis screening and treatment for malnutrition).[18] These lookbacks will examine whether grants achieved their intended impact, whether TA addressed key implementation barriers, and whether our support increased intervention coverage.
  • Some long-standing questions on our research. This includes:
    • How should we deal with ongoing questions about the quality of disease and mortality burden data underlying our cost-effectiveness models?[19]
    • How should we model the effect of programs like seasonal malaria chemoprevention, nets, vitamin A supplementation, distribution of oral rehydration solution, and others when they’re delivered concurrently in the same area?
    • How should we compare the impact of programs that provide modern contraception to those that aim to increase income, reduce morbidity, and avert deaths?
    • How concerned should we be that the organizations we fund are diverting healthcare workers from the government?
    • How concerned should we be about insecticide-treated nets being used for fishing?

Plans for follow-up

We plan to report back in 2025 to share what we’ve done and what we’ve learned. We’d be eager to hear any feedback or questions in the meantime!
