The GiveWell Blog

Why we do donor calls

As part of our plan to allocate more staff capacity to outreach efforts this year, we’ve been reaching out to a number of our donors to proactively offer a phone call with a GiveWell staff member to discuss our work. For the first time, we are also offering donors and other interested individuals the option to sign up for a call on our website. You can do so here.

We have connected with donors over the phone at various times in the past. Due to limited staff capacity, however, we were only able to hold calls and meetings with a handful of our very largest donors, or occasionally with someone who had a specific question or concern they wanted to discuss. As staff capacity allowed, we have also tried to call donors to thank them for their donations, in addition to sending thank-you emails.

Still, we’ve had relatively few conversations with the individuals who support GiveWell or our top charities over the years. This is partly because we have not prioritized outreach in the past. Now that our staff capacity has grown substantially, we plan to spend more time discussing GiveWell’s work with individuals outside of the organization in 2016.

We have a few reasons to believe that offering additional calls will be an important aspect of GiveWell’s outreach going forward:

  1. We recognize that GiveWell has a very information-dense website, and that it can be challenging to stay on top of all of our past and current work, even if you’re a fairly close follower of our blog or other website updates. Offering calls provides:
    • The chance for donors to ask any questions they have about our research and recommendations, including plans for giving.
    • The chance for us to identify and clear up misunderstandings. In past calls, donors have sometimes asked questions that indicated they’d misunderstood something about our research. These calls have helped us see which parts of our research we haven’t communicated well (which has led us to publish that information more explicitly) and have helped the donors we talk to better understand our thinking.
    • The chance for us to update our donors on GiveWell’s recent work and priorities.

    We consider these updates an important part of our work because a core part of our mission is to share our research so that donors can make informed giving decisions.

  2. The calls are an opportunity for us to learn more about the donors who use our work, and to receive feedback on our priorities, strategies, and more.

Like other nonprofits, GiveWell has an ongoing goal of building relationships with the supporters who use our work. We would guess that providing more information to individuals who use our research, clearing up misunderstandings, and answering questions is likely to strengthen the connection between donors, GiveWell, and our recommended charities. That, in turn, may make donors more likely to support our top charities and/or spread the word about giving to the causes we recommend.

We also imagine that some donors may be hesitant to take our call, either because (a) they don’t have questions for us and worry a call wouldn’t be worth our time, or (b) they fear we’ll solicit additional donations over the phone. We hope that this post helps alleviate these concerns: we value the opportunity to talk to donors who use our research, and we don’t directly solicit funds over the phone.

If you’d be interested in receiving a call from us, please fill out this short form.

New posts on the Open Philanthropy Blog: Our approach to grantmaking so far, and the launch of the Alliance for Safety and Justice

We are now posting content relevant to the Open Philanthropy Project to the Open Philanthropy Blog, rather than the GiveWell Blog. For a period of time, we will be posting notices here when new content appears, in order to ease the transition. We encourage those interested in the Open Philanthropy Project to follow it via RSS, Facebook or Twitter.

The newest two posts on the Open Philanthropy Blog are:

  • Our approach to grantmaking so far
  • The launch of the Alliance for Safety and Justice

New post on the Open Philanthropy Blog: David Roodman on whether there’s been a notable recent crime wave

We are now posting content relevant to the Open Philanthropy Project to the Open Philanthropy Blog, rather than the GiveWell Blog. For a period of time, we will be posting notices here when new content appears, in order to ease the transition. We encourage those interested in the Open Philanthropy Project to follow it via RSS, Facebook or Twitter.

The newest post on the Open Philanthropy Blog is: America’s recently heralded urban “crime wave” may already have peaked

The importance of “gold standard” studies for consumers of research

There’s been some interesting discussion on how the world of social science experiments is evolving. Chris Blattman worries that there is too much of a tendency toward large, expensive, perfectionist studies, writing:

each study is like a lamp post. We might want to have a few smaller lamp posts illuminating our path, rather than the world’s largest and most awesome lamp post illuminating just one spot. I worried that our striving for perfect, overachieving studies could make our world darker on average.

My feeling – shared by most of the staff I’ve discussed this with – is that the trend toward “perfect, overachieving studies” is a good thing. Given the current state of the literature and the tradeoffs we perceive, I wish this trend were stronger and faster than it is. I think it’s worth briefly laying out my reasoning.

Our relationship to academic research is that of a “consumer.” We don’t carry out research; we try to use existing studies to answer action-relevant questions. The example question I’ll use here is “What are the long-term benefits of deworming?”

In theory, I’d prefer a large number of highly flawed studies on deworming to a small number of “perfectionist” studies, largely for the reasons Prof. Blattman lays out – a large number of studies would give me a better sense of how well an intervention generalizes, and what kinds of settings it is better and worse suited to. This would be my preference if flawed studies were flawed in different and largely unrelated ways.

The problem is my fear that studies’ flaws are systematically similar to each other. I fear this for a couple of reasons:

1. Correlated biases in research methods. One of the most pervasive issues with flawed studies is selection bias. For example, when trying to assess the impact of the infections treated by deworming, it’s relatively easy to compare populations with high infection levels to populations with low infection levels, and attribute any difference to the infections themselves. The problem is that differences in these populations could reflect the impact of deworming, or could reflect other confounding factors: the fact that populations with high infection rates tend to be systematically poorer, have systematically worse sanitation, etc.

If researchers decided to conduct 100 relatively easy, low-quality studies of deworming, it’s likely that nearly all of the studies would take this form, and therefore nearly all would be subject to the same risk of bias; practically no such studies would have unrelated, or opposite, biases. In order to conduct a study without this bias, one needs to run an experiment (or identify a “natural experiment”), which is a significant step in the direction of a “perfectionist” study.

Even if we restrict the universe of studies considered to experiments, I think analogous issues apply. For example:

  • Poorly conducted experiments tend to risk reintroducing selection bias. If randomization is poorly enforced, a study can end up comparing “people motivated to receive deworming” with “people not as motivated,” a difference that might also be confounded with general wealth, education, sanitation, etc.
  • There are many short-term randomized studies of deworming, but few long-term randomized studies. If my concern is long-term effects, having a large number of short-term studies isn’t very helpful.
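To make the selection-bias worry concrete, here is a minimal simulation sketch (all numbers invented, not real deworming data): a hidden factor drives both infection and low income, so a naive comparison of infected and uninfected populations “finds” an effect even though the true effect of deworming is set to zero, while randomized assignment does not.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical data-generating process: poverty raises infection risk and
# lowers income; the true causal effect of deworming is set to zero.
poverty = rng.normal(size=n)
infected = (poverty + rng.normal(size=n)) > 0.5
income = -poverty + rng.normal(size=n)

# Naive observational comparison (uninfected vs. infected populations):
# this picks up the poverty difference, not a deworming effect.
naive_estimate = income[~infected].mean() - income[infected].mean()

# Randomized experiment: treatment is assigned independently of poverty,
# so the difference in means recovers the true (zero) effect.
treated = rng.random(n) < 0.5
rct_estimate = income[treated].mean() - income[~treated].mean()

print(f"naive observational estimate: {naive_estimate:+.2f}")  # biased away from 0
print(f"randomized estimate:          {rct_estimate:+.2f}")  # approximately 0
```

The point is that 100 studies built on the naive comparison would all inherit the same bias, so averaging them would not wash it out.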

2. Correlated biases due to academic culture. The issue that worries me most about academic research is publication bias: the fact that researchers have a variety of ways to “find what they want to find,” from selective reporting of analyses to selective publication. I suspect that researchers have a lot in common in terms of what they “want to find”; they tend to share a common culture, common background assumptions, and common incentives. As a non-academic, I’m particularly worried about being misled because of my limited understanding of such factors. I think this issue is even more worrying when studies are done in collaboration with nonprofits, which systematically share incentives to exaggerate the impact of their programs.

The case of microlending seems like a strong example. When we first confronted the evidence on microlending in 2008, we found a large number of studies, almost all using problematic methodologies, and almost all concluding that microlending had strong positive effects. Since then, a number of “gold standard” studies have pointed to a very different conclusion.

Many of the qualities that make a study “perfectionist” or “overachieving” – such as pre-analysis plans, or collection of many kinds of data that allow a variety of chances to spot anomalies – seem to me to reduce researchers’ ability to “find what they want to find,” and/or improve a critical reader’s ability to spot symptoms of selective reporting. Furthermore, the mere fact that a study is more expensive and time-consuming reduces the risk of publication bias, in my view: it means the study tends to get more scrutiny, and is less likely to be left unpublished if it returns unwanted results.
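As a rough illustration of the mechanism (a sketch with invented parameters, not a claim about any particular literature): suppose many cheap studies are run on an intervention with zero true effect, and only “significant, positive” results get written up.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_effect, n_per_arm, n_studies = 0.0, 50, 1000

published_effects = []
for _ in range(n_studies):
    treatment = rng.normal(true_effect, 1.0, n_per_arm)
    control = rng.normal(0.0, 1.0, n_per_arm)
    t_stat, p_value = stats.ttest_ind(treatment, control)
    # Selective publication: only significant positive results see print.
    if p_value < 0.05 and t_stat > 0:
        published_effects.append(treatment.mean() - control.mean())

print(f"published: {len(published_effects)} of {n_studies} studies")
print(f"mean published effect: {np.mean(published_effects):.2f} (true effect: 0.0)")
```

The few studies that clear the significance filter report a substantial average effect despite a true effect of zero, and a reader who sees only the published record can’t tell. Pre-analysis plans and the extra scrutiny attached to expensive studies attack exactly this filter.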

Some further thoughts on my general support of a trend toward “perfectionist” studies:

Synthesis and aggregation are much more feasible for perfectionist studies. Flawed studies are not only less reliable; they’re more time-consuming to interpret. The ideal study has high-quality long-term data, well-executed randomization, low attrition, and a pre-analysis plan; such a study has results that can largely be taken at face value, and if there are many such studies it can be helpful to combine their results in a meta-analysis, while also looking for differences in setting that may explain their different findings. By contrast, when I’m looking at a large number of studies that I suspect have similar flaws, it seems important to try to understand all the nuances of each study, and there is usually no clearly meaningful way to aggregate them.
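To show what this kind of aggregation looks like when studies can be taken at face value, here is the standard fixed-effect inverse-variance calculation (the effect sizes and standard errors below are invented for illustration):

```python
import numpy as np

# Invented effect estimates and standard errors from three hypothetical
# well-identified studies; more precise studies get proportionally more weight.
effects = np.array([0.12, 0.08, 0.15])
std_errs = np.array([0.05, 0.04, 0.06])

weights = 1.0 / std_errs**2
pooled_effect = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"pooled effect: {pooled_effect:.3f} +/- {1.96 * pooled_se:.3f} (95% CI)")
```

Pooling of this kind is only informative if each input estimate is unbiased; applied to systematically flawed studies, the weighted average is just a more precise estimate of the shared bias.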

It’s difficult to assess external validity when internal validity is so much in question. In theory, I care about external validity as much as internal validity, but it’s hard to learn anything about the former when I am so worried about the latter. When I see a large number of studies pointing in the same direction, I fear this reflects correlated biases rather than genuine generalizability; when I see studies finding different things, I suspect this may be a product of different researchers’ goals and preferences rather than of anything to do with differences in setting.

On the flipside, once I’m convinced of a small number of studies’ internal validity, it’s often possible to investigate external validity by using much more limited data that doesn’t even fall under the heading of “studies.” For example, when assessing the impact of a deworming campaign, we look at data on declines in worm infections, or even just data on whether the intervention was delivered appropriately; once we’ve been sold that the basic mechanism is plausible, data along these lines can fill in an important part of the external validity picture. This approach works best for interventions that seem inherently likely to generalize (e.g., health interventions).

Rather than seeing internal and external validity as orthogonal qualities that can be measured using different methods, I tend to see a baseline level of internal validity as a prerequisite to examining external validity. And since this baseline is seldom met, that’s what I’m most eager to see more of.

Bottom line. Under the status quo, I get very little value out of literatures that consist of large numbers of flawed studies, because I tend to suspect the flaws of running in the same direction. On a given research question, I tend to base my view on the very best, most expensive, most “perfectionist” studies, because I expect these studies to be the most fair and the most scrutinized, and I think focusing on them leaves me in a better position than trying to understand all the subtleties of a large number of flawed studies.

If there were more diversity of research methods, I’d worry less about pervasive and correlated selection bias. If I trusted academics to be unbiased, I would feel better about looking at the overall picture presented by a large number of imperfect studies. If I had the time to understand all the nuances of every study, I’d be able to make more use of large and flawed literatures. And if all of these issues were less concerning to me, I’d be more interested in moving beyond a focus on internal validity to broader investigations of external validity. But as things are, I tend to get more value out of the 1-5 best studies on a subject than out of all others combined, and I wish that perfectionist approaches were much more dominant than they currently are.

Update on GiveWell’s web traffic / money moved

In addition to evaluations of other charities, GiveWell publishes substantial evaluation of itself, from the quality of its research to its impact on donations. We publish quarterly updates regarding two key metrics: (a) donations to recommended charities and (b) web traffic. This post is being published late due to staff focusing on updating GiveWell’s charity recommendations in the fourth quarter; it also includes a preliminary view of our money moved since the end of our third quarter.

Preliminary estimate of 2015 money moved (since February 1, 2015)

As of early January 2016, we have tracked about $98 million in money moved to our recommended charities. Excluding Good Ventures, we have tracked about $28 million (of which, roughly half has come from donors giving $1 million or more).

These data are preliminary. We expect that in some cases we are currently overstating our impact (e.g. due to double counting or incorrect attribution of our influence) and in other cases we are understating our impact (since there are several weeks left in our metrics year and there are delays entering data); overall, we would guess that we are currently underestimating our annual money moved. We plan to publish our annual metrics (covering February 1, 2015 – January 31, 2016) in March, at which point we will have more confidence in our data and be able to share more details.

GiveWell’s web traffic / money moved through Q3 2015

The tables and chart below present basic information about our growth in money moved and web traffic in the first three quarters of 2015 compared to the previous two years (note 1).

Money moved and donors: first three quarters

[Table: money moved and donors, first three quarters of 2013–2015 (Table_2015Q3MoneyMoved.png)]

Money moved by donors who have never given more than $5,000 in a year increased about 80% to $2.44 million. The total number of donors in the first three quarters increased about 80% to about 8,300 (note 2). These growth rates are reasonably consistent with the growth we previously reported in our first and second quarter metrics.

Web traffic through October 2015

[Table: web traffic through October 2015 (Table_2015Q3WebTraffic.png)]

Web traffic excluding Google AdWords grew about 25% in the first three quarters. Last year, we saw a drop in total web traffic because we removed ads on searches that we determined were not driving high-quality traffic to our site (i.e., searches with very high bounce rates and very low pages per visit).

GiveWell’s website receives elevated web traffic during “giving season” around December of each year. To adjust for this and emphasize the trend, the chart below shows the rolling sum of unique visitors over the previous twelve months, starting in December 2009 (the first period for which we have 12 months of reliable data due to an issue tracking visits in 2008).

[Chart: rolling 12-month sum of unique visitors, December 2009–October 2015 (Chart_2015Q3WebTraffic.png)]
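For readers who want to replicate this kind of smoothing on their own analytics exports, a minimal sketch (with invented visitor counts, following note 3’s convention of counting each visitor once per calendar month):

```python
import numpy as np
import pandas as pd

# Invented monthly unique-visitor counts: a flat baseline plus a December
# "giving season" spike, mimicking the seasonality described above.
months = pd.period_range("2013-01", "2015-10", freq="M")
uniques = pd.Series(np.where(months.month == 12, 90_000, 30_000), index=months)

# Rolling sum over the trailing twelve months: every window covers exactly
# one giving season, so the seasonal spike stops dominating the trend.
trailing_12m = uniques.rolling(window=12).sum().dropna()
print(trailing_12m.tail())
```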

We use web analytics data from two sources: Clicky and Google Analytics (except for those months for which we only have reliable data from one source). The raw data we used to generate the chart and table above (as well as notes on the issues we’ve had and adjustments we’ve made) is in this spreadsheet (note 3, on how we count unique visitors).



Note 1: Since our 2012 annual metrics report we have shifted to a reporting year that starts on February 1, rather than January 1, in order to better capture year-on-year growth in the peak giving months of December and January. Therefore, metrics for the “first three quarters” reported here are for February through October.

Note 2: Our measure of the total number of donors may overestimate the true number. We identify individual donors based on the reported name and email address. Donors may not share all of this information or may update it (for example, by using a different email address), in which case we may mistakenly treat a donation as if it were made by a new donor. We plan to investigate how large the overstatement may be and possibly adjust the total for our next annual metrics report.

Note 3: We count unique visitors over a period as the sum of monthly unique visitors. In other words, if the same person visits the site multiple times in a calendar month, they are counted once. If they visit in multiple months, they are counted once per month.

December 2015 update on GiveWell’s funding needs

This post provides an update on GiveWell’s operating budget and funding needs. It is aimed at close followers of GiveWell, particularly those who have a high degree of trust in and alignment with us and are primarily seeking to make the highest-impact gift according to our (admittedly biased) opinion.

In brief:

  • We are in a relatively stable financial situation: we project revenues remaining roughly in line with expenses over the next 12 months.
  • This relies on the assumptions that (a) most donors who have supported our operations in the past will continue to do so and (b) some new donors will choose to support our operations.
  • We spent $3.0m in the 12 months from December 1, 2014 to November 30, 2015. We currently project expenses of $4.9m for December 1, 2015 to November 30, 2016.
  • The overall effect of our growth over the last couple of years has been to substantially grow the Open Philanthropy Project (which now accounts for approximately 70% of our overall budget), while maintaining (or slightly increasing) the amount of capacity we put into top charities. Much of our staff growth is also fairly recent and hasn’t yet translated into increased output, but it’s likely that this pattern (substantial growth in the Open Philanthropy Project accompanied by less growth for our work on top charities) will continue to hold.
  • We plan to separate GiveWell from the Open Philanthropy Project financially in the next year, at which point they will be separate organizations, but we haven’t gotten there yet. We currently ask Good Ventures to provide 50% of the Open Philanthropy Project’s budget. It is possible that once we separate GiveWell from the Open Philanthropy Project, we will ask Good Ventures to provide additional support. Funding for GiveWell’s operations now gives us flexibility as we figure out (over the next year) how we should support each entity. Update (August 2016): At our June 2016 board meeting, we told the board that we had asked Good Ventures to cover 100% of the Open Philanthropy Project’s costs for the period April 1, 2016–September 30, 2016. We expect that the Open Philanthropy Project will become an independent organization (separate from GiveWell) within the next year, possibly by the end of 2016, and we tentatively plan to continue asking Good Ventures to cover 100% of the Open Philanthropy Project’s costs going forward.
  • For donors who have a high degree of trust in us and are looking to give as effectively as possible from our perspective, we recommend donating to support GiveWell’s operations. Such donations allow us to maintain a diversified donor base and continue operating as we wish to with minimal distractions. Note that we have a policy in place to ensure that we don’t accumulate reserves excessively (we don’t expect this policy to come into play).

Below, we provide more details on our current funding situation. For more background on our philosophy on fundraising, see our October 2013 post. We are planning to make a future post – in a month or two, while doing our annual self-evaluation – that goes into more detail on the past and future effects of our increased staff size.

Details

We currently project expenses of $4.9m for December 1, 2015-November 30, 2016. We project $4.9m in revenues over this period.

We currently hold about $3.3m in reserves (and project holding $4.4m after December’s “giving season”).

This file (.xlsx) provides more detail on our forecasts.

Revenue projection

Our revenue projection includes (the numbers below don’t add up due to rounding):

  • $1.8m from Good Ventures, consistent with our request that Good Ventures fund 50% of the costs of the Open Philanthropy Project and 20% of non-Open Philanthropy GiveWell costs.
  • $1.9m from donors giving more than $10,000, of which:
    • $1.5m comes from donors who have given previously. The 11 largest of these donors (including two institutional donors) account for approximately 85% of this total. Based on our knowledge of each donor, we estimate the likelihood that each will give again.
    • $0.4m from new donors (projected based on past growth).
  • $0.8m from many donors giving less than $10,000.
  • $0.1m in “one-off” gifts, i.e., donations that are due to special circumstances. We do not anticipate any of these donations recurring, but we project receiving $0.1m of this type of donation based on past experience.

The above implies that if donors who have supported us in the past continue to do so and new donors continue to support us at rates similar to what we have experienced in the past, we will remain in a stable financial position.

Expenses

Our expenses have grown significantly over the past year.

We spent $3.0m in the 12 months from December 1, 2014 to November 30, 2015. We currently project expenses of $4.9m for December 1, 2015 to November 30, 2016. In brief:

  • As of December 1, 2014, we had 18 full-time staff members, of whom 5 were dedicated to the Open Philanthropy Project. Several other staff members spent some time on Open Phil-specific work, adding up to the equivalent of approximately 0.75 additional full-time staff.
  • As of December 1, 2015, we had 31 full-time staff members, of whom 9 are dedicated to the Open Philanthropy Project, and we are working with two full-time trial hires whom we hope to convert to permanent employees. Many staff members spend some time on Open Phil-specific work, which now adds up to the equivalent of 3.5 additional full-time staff. We currently estimate that the Open Philanthropy Project accounts for approximately 70% of our total expenses.
  • We are planning to make a future post – in a month or two, while doing our annual self-evaluation – that goes into more detail on the past and future effects of our increased staff size. In brief, the overall effect of our growth over the last couple of years has been to substantially grow the Open Philanthropy Project, while maintaining (or slightly increasing) the amount of capacity we put into top charities. Much of our staff growth is also fairly recent and hasn’t yet translated into increased output, but it’s likely that this pattern (substantial growth in the Open Philanthropy Project accompanied by less growth for our work on top charities) will continue to hold. We plan to separate GiveWell from the Open Philanthropy Project financially in the next year, at which point they will be separate organizations, but we haven’t gotten there yet. We currently ask Good Ventures to provide 50% of the Open Philanthropy Project’s budget. It is possible that once we separate GiveWell from the Open Philanthropy Project, we will ask Good Ventures to provide additional support.

At what point would we consider our funding gap closed?

At the point where we hit the threshold set by our excess assets policy, we would regrant any additional funds given to GiveWell to our recommended charities.

Using a conservative revenue projection (which we believe is appropriate when considering granting out funds), we project 12-month-forward expenses as of November 2016 (i.e., expenses we would incur from November 2016 to October 2017) of $5.0 million more than what we project holding in reserves. Therefore, we would require $5.0 million in additional funding before we would begin to grant out funds.
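The arithmetic here is simple; as a minimal sketch (the post states only the resulting $5.0 million gap, so both inputs below are hypothetical placeholders chosen to reproduce it):

```python
# Hypothetical inputs, in $ millions; only the resulting gap is stated above.
# Per the excess assets policy as described, regranting begins once reserves
# reach 12-month-forward projected expenses.
forward_expenses = 9.4    # hypothetical 12-month-forward expenses as of Nov 2016
projected_reserves = 4.4  # hypothetical projected reserves at that date

funding_gap = forward_expenses - projected_reserves
print(f"additional funding before regranting begins: ${funding_gap:.1f}m")
```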

What will we do if we raise more or less funding than we anticipate?

Raising more funding than we anticipate would reduce the likelihood that senior staff have to spend significant time fundraising in the next year. Staff time put into fundraising is currently quite low (approximately 10 hours/year for each of Elie, Holden and Natalie and some additional support from more junior staff). We currently plan to maintain this limited time commitment to fundraising and are optimistic that posts like this enable us to raise the funding we need without devoting more time to fundraising.

If we raise less funding than we anticipate, Elie and Holden would spend more time on fundraising. If this step didn’t succeed in raising the funding we need, we would consider the following options (likely in this order): (a) slowing or halting planned staff expansion, (b) requesting additional funding from Good Ventures, and (c) laying off staff. Note that we believe these scenarios are highly unlikely given our current situation, but we require continued, growing support to ensure that we avoid them.

Are GiveWell’s projected operating expenses reasonable or excessive in light of its impact?

For 2015, we anticipate having moved more than $90m to our top charities, and we anticipate the Open Philanthropy Project funding approximately $16.5m in grants.

We spent $3.2m in 2015 and project total expenses in 2016 of $4.9m. We estimate that 70% ($3.4m) of the 2016 figure is attributable to the Open Philanthropy Project and 30% ($1.5m) to GiveWell. We previously wrote that we believe expenses that are 15% of money moved are well within the range of normal.

What is our recommendation?

For donors who have a high degree of trust in us and are looking to give as effectively as possible from our perspective, we recommend donating to support GiveWell’s operations. Such donations allow us to maintain a diversified donor base and continue operating as we wish to with minimal distractions.

For donors who want to support our work because they value it but are otherwise primarily interested in supporting charities based on neutral recommendations, strong evidence, etc., we recommend giving a portion of the donation to GiveWell. If you’d like to give GiveWell a 10% “tip” to support our operations, you can do so by selecting the box labeled “Add 10% to help fund GiveWell’s operations” on our donations to charities page or by sending us a check and filling out our check donation form with the allocation for your donation. Admittedly, in light of the role the Open Philanthropy Project is playing in our budget, it’s possible that the right figure for a “tip” this year is something under 10%.