Last week, we published an updated analysis on “spillover” effects of GiveDirectly’s cash transfer program: i.e., effects that cash transfers may have on people who don’t receive them but who live near those who do.((For more context on this topic, see our May 2018 blog post.)) We concluded: “[O]ur best guess is that negative or positive spillover effects of cash are minimal on net.” (More)
Economist Berk Özler posted a series of tweets expressing concern over GiveWell’s research process for this report. We understood his major questions to be:
- Why did GiveWell publish its analysis on spillover effects before a key study it relied on was public? Is this consistent with GiveWell’s commitment to transparency? Has GiveWell done this in other cases?
- Why did GiveWell place little weight on some papers in its analysis of spillover effects?
- Why did GiveWell’s analysis of spillovers focus on effects on consumption? Does this imply that GiveWell does not value effects on other outcomes?
These questions apply to GiveWell’s research process generally, not just our spillovers analysis, so the discussion below addresses topics such as:
- When do our recommendations rely on private information, and why?
- How do we decide which evidence to review in our analyses of charities’ impact?
- How do we decide which outcomes to include in our cost-effectiveness analyses?
Finally, this feedback made us realize a communication mistake: our initial report did not state as clearly as it should have that we were specifically estimating spillovers of GiveDirectly’s current program, not commenting on spillovers of cash transfers in general. We will now revise the report to clarify this.
Note: It may be difficult to follow some of the details of this post without having read our report on the spillover effects of GiveDirectly’s cash transfers.