The GiveWell Blog

Sharing notes from conversations: Case study in pursuing transparency in philanthropy

Early in our history, we got most of our information in the form of documents: grant applications and other documents sent to us by charities, academic literature, etc. Being transparent about why we believe what we believe was thus relatively straightforward: we sought to publish whatever documents we could.

Over time, phone and in-person conversations have become a much larger part of our process. The biggest reason for this is that our “access” – our ability to get people to talk to us – has improved (see our annual review for 2011). Conversations add a lot of value to our process, above and beyond what we could get from documents, but being transparent about this source of information – i.e., publicly sharing what we’ve learned in conversations – has been a challenge.

Below, we outline the trial-and-error by which we arrived at our current process for sharing notes from conversations; lay out what that process is at the moment; and discuss this case as an illustration of a general dynamic that we think applies fairly broadly when it comes to transparency in philanthropy:

  • It is possible to arrive at reasonable processes for conducting philanthropic investigations transparently.
  • But developing these processes can take time and can have significant short-term costs.
  • A funder that holds transparency as a high value can often find ways to fulfill the value. But a funder that has traditionally operated without transparency as a high value will not necessarily have an easy time making the switch.
  • We see ourselves as “trailblazing for transparency”: while being transparent isn’t easy for most philanthropic funders today (and it presents challenges for us), we are developing and writing publicly about our experiences and processes, so that future philanthropists will find it easier to emphasize transparency from day one.

Good Ventures has had a similar goal of sharing notes from conversations, and it has been an active participant in the evolution we describe below.


Early on, when we had a particularly informative conversation, we would often write up a summary (or rough transcript) of it and ask the person we had spoken with what they would think of our publishing it publicly. In all cases, we offered the person the opportunity to make edits to the document beforehand – our intent was not to capture the conversation word for word, but to share the insights and knowledge of the person in question.

There were cases in which this worked out (for example, a 2009 conversation with VisionSpring), but there were also a couple of cases in which the person we spoke with was taken aback at our request. In the latter cases, the problem seemed to stem from the fact that we had asked about publishing notes after the conversation rather than before it (we had been inconsistent on this point): we received complaints along the lines of “I spoke with you frankly because I didn’t know you would be seeking to publish anything publicly; now I feel ambushed.” In response to these cases, we determined that we needed to standardize our process for conducting conversations to ensure that we always raised the possibility of publishing notes before the conversation began. We also sought to clean up our notes before sending them for review, to reduce the editing burden on the person we sent them to and increase the odds of their being approved.

As our volume of conversations increased, taking, cleaning and sending notes for review became a major burden on our capacity, particularly because it was hard to specialize the note-taking role (since notes generally had to be put together by someone who had been present for the conversation).

We discussed the idea of simply recording conversations (with permission) and posting the recordings, but in the few cases where we had raised this as a possibility, it had generally seemed to make people uncomfortable (though there were exceptions, such as a conversation with William Easterly). Instead, we started asking to record the conversations for internal purposes only, and we dedicated one of our staff members to transcribing these conversations after the fact. We settled on relatively standardized language for making our requests, and adopted a procedure of putting this language in pre-conversation emails (usually when finalizing the time/date/venue for a conversation):

A goal of ours is to share as much as possible about our research publicly, so that others can learn from our work. So if you’re up for it, we’d like to take notes on this call for the purpose of posting them on our website later, pending your review and approval.

After the call, we’d run the notes by you, and if there were anything you wanted to keep confidential, or any changes you wanted to make, we’d be happy to do so. If you decided you’d rather we not publish the notes at all, we’d be happy to oblige, because we never want to create a disincentive for people to speak frankly with us. What do you think?

Along the same lines, would it be OK to record the call? The recording would be for our internal use only, so that we can focus on listening rather than on taking notes.

Good Ventures also currently uses (and helped to develop) the language and procedure described above.

We found that implementing this process solved a lot of our former problems: we no longer had cases where people were taken aback at our requests (people who weren’t comfortable with notes or recording could tell us so in advance of the call), and we had a dedicated staff member producing transcripts and saving the rest of the staff’s time. The dedicated staff member has, over time, become better at taking notes that concisely hit the main points of the call and are highly likely to be approved with few changes (at one point we experimented with simply producing full transcripts, but these weren’t well received because they required the people we had spoken with to do a lot of reading and editing).

So we now have a process that we feel works fairly well, with the result that the vast majority of our conversations lead to public conversation notes capturing the highlights, and the costs in terms of relationships and our capacity seem quite manageable. But as detailed above, it took a good amount of time and trial and error to get to this point.

Notes from our conversations are available at our conversations page and via our new content feed.

Update on GiveWell’s web traffic / money moved: Q3 2012

In addition to evaluating other charities, GiveWell publishes substantial evaluation of itself, from the quality of its research to its impact on donations. We publish quarterly updates on two key metrics: (a) donations to top charities and (b) web traffic.

The charts below present basic information about our growth in money moved and web traffic thus far in 2012. Website traffic tends to peak in December of each year (circled in the chart below). Growth in web traffic has generally remained strong in 2012.

So far in 2012, there have been 342,551 monthly unique visitors (calculated as the sum of unique visitors in each month) to the website, compared with 238,172 at this time in 2011, or 44% annual growth. A significant contributor to this growth was an increase in our Google Grants budget (which provides us free advertising via Google AdWords). Excluding this, we have had 23% annual growth in web traffic.
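As a quick check of the arithmetic behind that growth figure (using only the numbers above):

\[
\frac{342{,}551 - 238{,}172}{238{,}172} \approx 0.438 \;\approx\; 44\% \text{ year-over-year growth.}
\]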

Growth in money moved has remained strong as well. The majority of the funds GiveWell moves come from a relatively small number of donors giving larger gifts. These larger donors tend to give in December, and we have found that growth in donations from smaller donors throughout the year tends to provide a reasonable estimate of the growth from the larger donors by the end of the year.

Below, we show two charts illustrating growth among smaller donors.

Thus far in 2012, GiveWell has directed $604,327 to our top charities from donors giving less than $10,000. This is approximately 2.4x the amount we had directed at this point last year.

Most donors give less than $1,000; the chart below shows the growth in the number of smaller donors giving to our top charities.

Overall, 1,531 donors have given to GiveWell’s top charities this year (compared with 624 donors at this point last year).

In total, GiveWell donors have directed $1,394,071 to our top charities this year, compared with $766,561 at this point in 2011. For the reason described above, we don’t find this number to be particularly meaningful at this time of year.

US Cochrane Center (USCC) gets our first “quick grant” recommendation

Recently, we did something that may strike many GiveWell followers as out of character. We recommended a $100,000 grant to the US Cochrane Center, despite the fact that we have so far done relatively little investigation of it (compared with our investigations of current top charities) and have many unanswered questions. Good Ventures, which helped with our investigation and therefore followed it closely, was a part of the conversation in which we came to the conclusion that this grant represented a good giving opportunity, and it committed the funds shortly afterward (before we had finalized our writeup; we considered this appropriate since, as we discuss below, speed was desirable in this situation).*

This post covers two topics:

  • Why we believe it is important to be able to make quick grants (i.e., grants with far less than our usual level of investigation) when warranted, and how we are developing principles for doing so.
  • Why we believe that the grant discussed in this post meets our working criteria for a quick grant.

In brief:

  • We believe that in certain cases, speed is valuable in grantmaking: sometimes because circumstances demand action by a certain date (for example, some projects involve close coordination between multiple entities, including governments, and may need to happen on timelines that work for these entities), and sometimes because speed in giving can help organizations make better planning decisions.
  • We have developed working principles for when a “quick grant” is called for, which are discussed in this post. We intend to experiment with “quick grants” while ensuring that they represent only a small portion of our money moved in the near future.
  • We see the USCC as an ideal recipient for our first “quick grant” partly because we are continuing to investigate the USCC as a potential top charity (in which case we would recommend it to individual donors). The grant discussed in this post meets our working criteria for “quick grants,” and in addition, we are confident that we will learn more over time about the extent to which this “quick grant” was warranted.

Note that the bulk of this post was written in July, following the grant recommendation and the grant itself. We held off on publishing the post after drafting it for a variety of reasons, including (a) the desire to get and respond to feedback from all parties discussed in this post (as we generally do) and (b) the desire to publish more public communications about our evolution as an organization, in order to give more context on how this recommendation fits in.

The importance of making “quick grant” recommendations

To date, our usual approach to making recommendations has been:

  • Survey an entire field, looking for the organizations that perform best according to various heuristics.
  • Deeply investigate the top-contender organizations, making a major effort to answer all major questions to the degree that we reasonably can.
  • Set a deadline (usually giving season) by which we must make or refresh our recommendations; this ensures that we avoid perfectionism and make the best recommendations we can with the information we have.

We’ve put an increasing amount of effort into investigating contenders for our #1 ranking; we’ve found it very important that we be able to fully stand behind any such recommendation, with reasonable answers to any question that one might raise.

There are a lot of positive things about this approach, and we intend to retain it for the recommendations that comprise the bulk of our money moved. We believe that being systematic, deliberate and thorough is likely to lead to much better giving, in general, than relying on happenstance and intuition alone; at the same time, having regular deadlines leads to regular giving (which we favor).

That said, we also believe that in certain cases, speed is valuable. Sometimes speed is valuable because circumstances demand action by a certain date (for example, some projects involve close coordination between multiple entities, including governments, and may need to happen on timelines that work for these entities). Sometimes the argument for speed is more subtle – for example, giving quickly can help organizations make better decisions about how to plan their budgets as well as how much time to invest in fundraising from different sources.

We’ve long been intrigued by the ideas of people like Bill Somerville, and wondered whether the approach of taking calculated risks after a shorter period of research than has been our norm has something to recommend it at times. Since connecting with Good Ventures, this issue has become more salient to us for a couple of reasons:

  • We were the recipient of one such “quick grant.” After our first interaction with Cari and Dustin (a 90-minute meeting between them and me in February 2011), they expressed an intention to contribute $100,000 over two years in operating support to GiveWell. At that time, we were facing a substantial projected deficit and were investing more time than usual in fundraising. This “quick grant” was extremely helpful for our planning and efficiency, in a way that it couldn’t have been if it had required the kind of investigation that GiveWell normally puts in. Noticing this further highlighted the advantages of “quick grants.”
  • In addition, our relationship with Good Ventures means that we are able to make “quick grant” recommendations that can be taken up quickly; while we have some other connections who provide similar opportunities, the bulk of our other money moved tends to be concentrated in the month of December.

On May 8, two GiveWell representatives (Stephanie Wykstra and I) visited the US Cochrane Center (the USCC) (note that we have posted notes from the visit). For reasons discussed below, we came away with the impression that (a) the USCC plays an important role in meta-research; (b) the USCC faces a drastic shortage of operating funding, even to the point where it might not be able to maintain uninterrupted, minimal staff support. Over the next six weeks (as discussed below), we tried to find persuasive counterarguments to this viewpoint and failed to do so. While I still had many unanswered questions, it struck me that a grant to the USCC, made sooner rather than later, could potentially do an enormous amount of good in terms of helping the USCC plan intelligently; the mere fact that I wasn’t confident in the value of such a grant didn’t seem to change the fact that the expected value of making such a grant quickly seemed high.

While principles such as thoroughness and transparency are important, the principle of doing as much good as possible with one’s giving is more core to GiveWell than any other. We strongly believe that we should never find ourselves passing up what we see as an opportunity to do maximal good, just because the opportunity is an awkward fit with our existing habits and processes. We are constantly watching the actions of other funders and asking whether they are finding great opportunities to do good, in a way that our existing approach would fail to (and if so, what we can do about it).

Thus, once we saw the potential value of a “quick grant” to the USCC, we started actively considering the idea of modifying our approach to facilitate such a grant (rather than passing on the grant because it didn’t fit well with our existing approach). At the same time, we wanted to make sure that we were adopting and discussing a principled modification to our general approach, not simply recommending a “quick grant” on a whim. As such, we worked to develop the best set of general principles we could for when a “quick grant” is called for. These principles were inspired by the case of the USCC, but we took our best shot at making them reasonable for general application.

Principles and process for making “quick grant” recommendations

This section discusses (a) the key questions we feel are appropriate for a potential “quick grant”; (b) the process we intend to follow for answering these questions quickly and efficiently and making “quick grants.”

Key questions for a potential “quick grant”

  1. Is there a reason that speed, in and of itself, is valuable for this grant? Some possible reasons that speed may be valuable:
    • There may be a specific reason that funds are needed by a particular date in order to go forward with a particular project.
    • There is always a potential argument that (as outlined above) an early commitment can help an organization plan better and use its time more efficiently. This argument will generally be stronger in the case when (a) a relatively small grant can make a big difference to an organization’s planning (usually because the organization has little in the way of unrestricted support), and (b) we have a positive view of the organization and its people overall (as opposed to a small subset of the work it does).
    • An early grant can also be helpful for learning purposes. If an organization is highly promising from our perspective, but hesitant to engage in our process because of how it perceives the costs and benefits of doing so, an early grant may be valuable as a way to improve our access and ability to learn.
  2. Is the organization/project in question focused on work that seems valuable, reasonably cost-effective, suited to philanthropy (as opposed to other approaches) and thus “worth doing” overall? For the purposes of a “quick grant,” the approach to this question will be highly intuitive and perhaps unsatisfying compared to the cost-effectiveness analysis we often do. Explicit cost-effectiveness analysis is very difficult to do quickly without sacrificing robustness, and when comparing radically different approaches to doing good it can be nearly impossible to say with much confidence how they compare in terms of “bang for the buck.” This question acts only as a basic screen: is the organization/project in question filling a valuable role in an important ecosystem, leveraging the work of others, and overall taking an approach to helping people that is at least defensible from a cost-effectiveness standpoint?
  3. Do we see a convincing reason that this organization would not be able to raise the funding it needs in the relevant time frame, even if it made a good case for such funding? As discussed below, one of the things that excites us about the USCC as a funding opportunity is the sense that the USCC is “structurally underfunded”: it struggles to raise funds not because major funders have thoughtfully considered and rejected its case, but because major funders largely have program areas and issue focuses that don’t leave room for the kind of work the USCC does.

    This is not often the case. We often find ourselves saying, “It seems that if this work were as valuable as the organization claims, funder X would support it.” When such a statement can be convincingly refuted, the case for a “quick grant” (as for a recommendation in general) becomes much stronger.

  4. Are the people involved impressive, competent, and capable of making good on-the-fly decisions, such that we’re comfortable with grants that we have fairly little visibility into the specific intended use of? We’ll be writing more in the future about how we evaluate people. This question is important for all recommendations but is particularly important for “quick grants,” since it generally takes us a long time to feel confident about the specifics of how funds will be used; we are much more likely to recommend “quick grants” when these specifics aren’t necessary in order to feel the funds will be used well.
  5. How much will we learn in the future about the organization/project and the extent to which the “quick grant” was a good idea? Since “quick grants” are likely to be smaller than the amounts we move to our top charities, it isn’t a given that we will engage in the same sort of followup on them that we do for our top charities. In addition, we’ve found that it’s much easier to learn about an organization when we invest up front in defining goals, metrics, etc., which is more difficult to do in the case of a “quick grant.” So by default, there is a risk that we won’t learn much from or about a “quick grant”; we need to be attentive to this and have a strong preference for grants with more learning potential.

    We expect that there will often be cases in which we consider “quick grants” to organizations that we also plan to evaluate more thoroughly (with the possibility of moving significantly more money to them). In these situations, the outlook for learning whether our grant was a good one – and what ultimately came of it – is much better. The USCC is one of these cases.

Our process for making “quick grants”

We’ve provisionally agreed to the following process for making quick grants:

  • The process starts when we have some unusually strong signs that the answers to the above key questions are positive. We do not need a definitive case; the case may be largely intuitive and suggestive, but it should be unusually strong relative to the opportunities we come across.
  • We then pick any “low-hanging fruit” in terms of further investigating the answers to our questions – any investigative work that can be done quickly and is likely to lead to a substantially better understanding of the situation.

    While all of the above key questions are important, we are likely to have a fairly quick and intuitive read on questions #2, #4, and #5, and the questions that we are most likely to focus on investigating are #1 (is there a reason that speed is likely to be helpful?) and particularly #3 (are there other funders who are a logical fit to fund this organization/project, or is it underfunded for structural reasons?).

  • We actively seek out counterarguments to our views on key questions, as effectively and efficiently as we can. The main approach we used for the USCC – an approach we are likely to use for future “quick grants” as well – is to seek out conversations with funders who seem like the closest logical fit for the funding opportunity, and try to understand whether they are (a) planning to fund the project/organization; (b) planning not to fund the project/organization, for reasons we find compelling; (c) planning not to fund the project/organization, for reasons we don’t find compelling. (A “quick grant” should be made when (c) holds, not when (a) or (b) holds.)
  • Before any “quick grant,” we hold a meeting or conference call that combines (a) all staff who have been highly involved with the investigation; (b) some staff who haven’t; (c) funders who are particularly likely to follow the recommendation for a “quick grant.” The staff recommending the “quick grant” summarize the answers to key questions as well as what investigations have been done to learn more about these questions and to identify counterarguments. Others on the call focus on (a) evaluating the strength of the arguments (answers to key questions) given the information already available; (b) determining whether there is other information that would be likely to quickly and substantially shift answers to the key questions (i.e., whether there is “low-hanging fruit” in investigative terms).
  • If the basic case for a “quick grant” is accepted, the next step is to determine the size of the grant recommendation (i.e., the dollar amount past which we would stop recommending a “quick grant”). When the “quick grant” is to meet a specific need or fund a specific project, we should understand the nature of the time sensitivity, the size of the need, and the current funding status before recommending the award. When the “quick grant” is more along the lines of general support for purposes of helping the organization plan and/or improving our access to the organization, we should pay more attention to the size and variance of the organization’s budget (as well as any available “room for more funding” analysis) and try to aim for something that provides substantial benefit (in terms of planning and/or access) but doesn’t come close to meeting all the organization’s needs.
  • When we do recommend a “quick grant,” we publicly write up the recommended grant amount, recipient, and a summary of our process and reasoning in making the recommendation.

How we decided to recommend a “quick grant” to the USCC

We’ve long been familiar with the work of the Cochrane Collaboration, having used it in our research. We’ve noted before that

We have found that its reports generally review a large number of studies and are very clear about the findings, strengths and weaknesses of these studies. For health programs, when there are often many high-quality studies available, we therefore use Cochrane as our main source of information on “micro” evidence when possible.

In April of this year, I attended a meeting on preregistration in development economics and encountered Kay Dickersin of the USCC, who told me that (a) the USCC is struggling to attract unrestricted support; (b) if the USCC had sufficient funds, it would provide general support to US-based Cochrane entities, including direct financial grants in cases where these entities appeared underfunded. We scheduled a full-day visit to the USCC in Baltimore to learn more, because (a) we have long respected the Cochrane Collaboration’s work and were surprised to hear that the USCC was struggling to attract unrestricted funding; (b) this was the first concrete giving opportunity we’d encountered in the area of meta-research, which is a new high-priority focus area for us and which we’re seeking to learn more about; (c) the USCC showed a high level of interest in engaging with us and offered to put together a full-day meeting with multiple representatives, which both raised our expectations about how much we could learn and served as an additional signal that the USCC was struggling to attract sufficient operating funding.

Our notes from the full-day meeting are published online (DOC). We came away from the meeting feeling there was a strong preliminary case for the USCC based on the five key questions above (though we had not yet formalized these questions as the key ones for “quick grants”):

  • Is there a reason that speed, in and of itself, is valuable for this grant? The USCC appeared to have a concrete and time-sensitive need for unrestricted funding, including an immediate need for funds to maintain uninterrupted, minimal staff support. In our view, this situation is notable not just because of the specific consequences that a grant might have (allowing the USCC to retain core staff), but also because it more broadly illustrates that the USCC does not have a stable situation in terms of unrestricted funds, and thus that support could help it to plan and set priorities more effectively.
  • Is the organization/project in question focused on work that seems valuable, reasonably cost-effective, suited to philanthropy and thus “worth doing” overall? We are positive on the quality of the Cochrane Collaboration’s work, as discussed above; at the meeting, we also came away with preliminary reasons to believe the work is influential (though we plan to investigate this more). As for the USCC’s role in the Cochrane Collaboration, we saw fairly strong arguments on this point. The Cochrane Collaboration relies on training and supporting volunteers, many of whom are academics. The U.S. has many potential volunteers, including those based within the country’s large university system. But in the U.S. there is far less funding for Cochrane infrastructure (i.e., to train and support volunteers) than in other English-speaking countries such as the UK, Canada and Australia. Jeremy Grimshaw, co-chair of the Cochrane Collaboration’s International Steering Group and Director of Canada’s Cochrane Center, was present by phone at the meeting and supported the message that the USCC is a point of particularly high leverage and importance for the Cochrane Collaboration as a whole. For the reasons discussed above, we don’t feel that formal cost-effectiveness analysis is likely to be helpful in this case.
  • Do we see a convincing reason that this organization would not be able to raise the funding it needs in the relevant time frame, even if it made a good case for such funding? We questioned the USCC about many potential sources of funding and were told that the lack of funding was largely for structural, not substantive, reasons. That is, the funding needed would support the infrastructure required for the Cochrane Collaboration’s work – such as staff for training and methodological support for reviews – rather than hypothesis-testing research. As such, the Collaboration’s needs do not fit into the pre-defined categories and issue areas of major funders, and potential funders have considered and declined Cochrane requests and applications for general operating support based on “not a good fit” rather than on the overall quality or importance of the work. This was the point we felt we most needed to examine further after the meeting, and we did so, as discussed below.
  • Are the people involved impressive, competent, and capable of making good on-the-fly decisions, such that we’re comfortable with grants that we have fairly little visibility into the specific intended use of? Multiple representatives were present, and overall we felt they answered our questions reasonably clearly and well; the USCC also appears comfortable with transparency, having signed off quickly and permissively on our notes from the meeting. Our general positive impression of the Cochrane Collaboration’s work is also relevant here. We currently have moderate confidence on this point; we anticipate learning more as we investigate Cochrane further.
  • How much will we learn in the future about the organization/project and the extent to which the “quick grant” was a good idea? We are currently performing an in-depth investigation of the USCC, considering recommending it for more funding than the initial “quick grant,” so we believe that we will learn a great deal about the extent to which this “quick grant” was warranted.

After the meeting, we agreed that the USCC was a promising organization, and our top priority became looking efficiently for counterarguments to the case for funding it. With Good Ventures’s help, we sought out conversations with the major funders that seemed to us like potential fits for the USCC, based both on our prior knowledge and on conversations with the USCC, hoping that we would gain more context on (a) whether it’s true that the USCC doesn’t fit into the issue areas of existing major funders; (b) whether there were general counterarguments to our preliminary views on the USCC’s value and need for more funds.

Feedback was solicited from:

  • Representatives of the Gates and Hewlett Foundations (we had no reason to believe they were a fit for the USCC, but thought they might know who would be, and we see them as relatively impact-oriented funders in general; we are not cleared to share notes about these interactions).
  • A representative from the Wellcome Trust, a large medical research funder (notes available as DOC).
  • Representatives from U.S. government agencies: the National Institute of Child Health and Human Development (which contracts with one of the U.S.-based Cochrane review groups but does not provide unrestricted support to the USCC), the NIH Office of Medical Applications of Research in the Office of the Director (which we were pointed to in order to explore whether the USCC might be a fit for funding from the Office of the Director), and the Agency for Healthcare Research and Quality (which has funded the USCC in the past and, like the Cochrane Collaboration, commissions systematic reviews). We are not cleared to share our notes from these interactions.
  • A representative from the HIV/AIDS department of the World Health Organization. We are not cleared to share our notes from this interaction.

In these interactions, we asked for general impressions of the Cochrane Collaboration, thoughts on what sorts of funders might be structurally able to support the USCC, thoughts on what other groups do the sort of work that the Cochrane Collaboration does, and (when relevant) the reasoning behind an entity’s support (or lack thereof) for the USCC. We came away with the impression that the Cochrane Collaboration’s work is widely respected and seen as high-quality and important, that we can’t easily identify any major funders that are a structural fit for the USCC, and that the main other group focused on systematic reviews is AHRQ, which we plan to investigate further. (AHRQ’s role relative to Cochrane’s is discussed in the notes from our visit to the USCC; our takeaways from other conversations were broadly consistent with these notes.)

We also spoke to:

  • Jeremy Grimshaw, co-chair of the Cochrane Collaboration’s International Steering Group and Director of Canada’s Cochrane Center. We sought to press the question of whether the USCC, specifically, is the best entity to fund in order to support the overall mission of the Cochrane Collaboration. Dr. Grimshaw conferred with other international Cochrane Collaboration representatives, including the other Steering Group co-chair, the interim Executive Director and the Editor-in-Chief of the Cochrane Library, and informed us that they endorsed supporting the USCC and would seek the formal endorsement of The Cochrane Collaboration Steering Group. Since then (following the grant), we have further pressed the issue of whether there might be other opportunities to support the Cochrane Collaboration that are higher-leverage than the USCC, and we are continuing to speak with Dr. Grimshaw about how best to work with international representatives to investigate this question. We intend to investigate other Cochrane entities as well, and we believe that doing so will take a significant amount of time. Given that, at the time of the grant recommendation, the USCC had been endorsed as a strong opportunity in terms of potential leverage for a donation (though no single opportunity within the Cochrane Collaboration was put forth as the “best”), we feel it was the right decision to move forward.
  • John Ioannidis, whom we see as a leading figure in the field of meta-research generally (and meta-research for medicine in particular). We have published extensive notes from this conversation in transcript form (DOC); Dr. Ioannidis has been involved in the Cochrane Collaboration in the past and believes the USCC to be a strong funding opportunity.
  • Professor Steven Goodman of the Stanford School of Medicine, a referral from one of the funders we spoke with (summary forthcoming).

Finally, we obtained a detailed “room for more funding” analysis from the USCC; this analysis is now available online (DOCX).

Having done the above investigations, we felt that

  • We had strong – though far from conclusive – reasons to believe that the USCC has strong answers to our key questions. In particular, we believed that it had an urgent need for more unrestricted funding to assist with its planning; that its struggle to attract unrestricted funding could largely be attributed to structural issues (the fact that major funders often focus on particular diseases and do not prioritize meta-research) rather than to substantive objections to the USCC’s work; and that it was a strong candidate for “best leverage point for supporting the overall mission of the Cochrane Collaboration.”
  • We had many remaining questions about the USCC, and planned a thorough investigation to answer them. However, we didn’t see any “low-hanging fruit” remaining on the investigative end, and believed it would take a lot of work to obtain more satisfying answers to our key questions.

We discussed a $100,000 grant – enough to replace a specific source of support the USCC expects to lose, and enough to ensure that it can maintain uninterrupted, minimal staff support. We came to the conclusion that $100,000 was enough to make a significant difference to the USCC without coming anywhere near meeting its funding needs (as expressed in the “room for more funding” analysis, linked above), and thus that such a grant could be justified not only based on its specific effect (allowing the USCC to retain uninterrupted, minimal staff support) but also on the more general principle (discussed above) of helping a highly underfunded organization with its ability to plan and prioritize. We checked in one more time with the USCC to make sure its funding situation had not changed materially, and recommended the grant.

*There is usually a substantial lag between our coming to a conclusion about a giving opportunity and our writing up & publishing our reasoning. In this case, for reasons discussed below, we did not want to accept the lag of writing up & publishing our reasoning before driving donations, so we made our recommendation via a discussion with Good Ventures and are publishing our reasoning now. In the future, we will generally publish our reasoning publicly before making a recommendation to any particular donor in cases where the funding gap is large and we are seeking to drive donations from many donors, and/or where a lag between the recommendation and the funding commitment is acceptable, but we may act as we have in this case when these conditions do not hold.

Recent board meeting on GiveWell’s evolution

[Added August 27, 2014: GiveWell Labs is now known as the Open Philanthropy Project.]

This year, GiveWell has been evolving in a couple of significant ways:

  • We’ve been exploring giving opportunities that may involve restricted/project-specific funding (as opposed to unrestricted support of charities), as well as giving opportunities that could be relatively speculative, hard to evaluate, and high-risk (in contrast with our previous focus on “proven cost-effective” charities). (Previous discussion)
  • We’ve been working closely with Good Ventures, a major funder (previous discussion). We’ve also been reflecting on whether we ought to be focusing our outreach efforts more on major funders (relative to our current target audience of people giving $250,000 or less per year).

We recently held a Board meeting to discuss these shifts, and some of the potential challenges and decisions that may come up as a result. We have now published audio from this meeting, as well as the attachment featured in it that summarizes the issues we see ourselves as facing. This post gives a high-level overview of the issues we discussed and what we’ve concluded for the time being.

Summary:

  • GiveWell continues to prioritize research aimed at finding outstanding giving opportunities for individual donors. GiveWell continues to place high importance on providing enough of these opportunities to keep up with demand, i.e., the amount of money we expect to move from individual donors to our top charities.
  • GiveWell’s research process is evolving in ways that we feel are necessary in order to find the best giving opportunities possible for all donors, both small and large. Since GiveWell’s staff capacity is increasing, it is able to increase its work on “proven cost-effective” interventions while also exploring other areas.
  • GiveWell will continue to work closely with Good Ventures, and may prioritize outreach to other potential major-donor partners. However, it does not plan to become a consultant to Good Ventures or any other “major donor.” The purpose of GiveWell’s working with Good Ventures, and of outreach to potential major donors, is to find people who share GiveWell’s core values and seek to support its mission – not to customize or alter its work to suit major donors. And transparency remains a core value of GiveWell’s; it continues to seek to publish as much as possible of what goes into its reasoning and recommendations.

Evolution of our research process
As discussed previously, we feel that we’ve hit diminishing returns to our approach of focusing on no-strings-attached donations to organizations focused on proven cost-effective interventions. We’ve begun broadening the universe of giving opportunities we will consider.

We previously aimed to draw a bright line between our “traditional” research and GiveWell Labs, which is open to any giving opportunity regardless of form or sector. However, because our traditional approach has hit diminishing returns, we now are focusing the bulk of our research capacity on investigations that are “experimental” in some sense – either because they may involve project-specific funding or because they are in sectors outside of “direct aid.” Accordingly, we no longer find it helpful to draw a bright line between “GiveWell traditional” and “GiveWell Labs” – instead, we have laid out a research agenda and focus area for GiveWell as a whole.

That said, we are still committed to

  • Finding the most proven, cost-effective giving opportunities for individual donors to support. We believe that finding more of these opportunities (beyond our current top charities) requires being open to project-specific funding, as discussed previously.
  • Continuing to provide regular updates on previously recommended giving opportunities, including both good news and bad.
  • Continuing to maintain, assess, and update our top charities list, and clearly communicating the difference between this list (which focuses on proven cost-effective charities) and any giving opportunities we may recommend that fall into other categories.
  • Doing everything we can to provide enough “proven cost-effective” giving opportunities to meet the demand for them (i.e., the amount of money we expect to move to them) from our audience.

Since GiveWell’s staff capacity is increasing, it is able to increase its work on “proven cost-effective” interventions while also exploring other areas.

Relationship with Good Ventures
As we wrote previously, we have been working closely with Good Ventures in multiple ways. We find the relationship to be highly mutually beneficial; at the same time, it is important to us that

  • We retain our independence: the ability to prioritize giving opportunities based on what we find most promising, and to allocate our resources in line with our own prioritization.
  • We retain our transparency, continuing to publish the full details of our analysis and other items of interest to donors.
  • We are not perceived as being unduly influenced – in our research direction, our use of resources, or otherwise – by Good Ventures.
  • We continue to serve other donors and to bring them enough outstanding giving opportunities to meet demand (i.e., the amount of money we expect to move from them).
  • We remain open to working closely with other major funders, as we are with Good Ventures.

In order to accomplish the above goals, we are planning to develop and publish some general guidelines regarding how we work with major donors, including policies for ensuring that we retain our independence and for ensuring that the role of any major donor in our research process is made transparent.

In addition, as discussed previously, we are thinking of putting more of our outreach efforts into reaching major funders (relative to our current target audience of people giving $250,000 or less per year). However, this concerns only our outreach efforts, not our research efforts or our commitment to transparency.

Seeking your feedback
If you’re a user of GiveWell’s research, we’d like to hear your thoughts on the above. We’d particularly like to hear from you if you have any concerns or see any risks to GiveWell’s value for you as a source of independent, in-depth research on how to accomplish the most good possible with your giving.

The ideal form of feedback (from our perspective) would be comments on this blog post, since that allows anyone to see the exchange, but we are also happy to be contacted privately.

Updated thoughts on our key criteria

For years, the three key things we’ve looked for in a charity have been (a) evidence of effectiveness; (b) cost-effectiveness; (c) room for more funding. Over time, however, our attitude toward all three – and the weight that we should put on our analysis of each – has changed. This post discusses why:

  • On the evidence of effectiveness front, we used to look for charities that collected their own data that could make a compelling case for impact. We no longer expect to see this in the near future. We believe that the best evidence for effectiveness is likely to come from independent literature (such as academic studies). We believe that if a program does not have a strong independent case, there is unlikely to be a charity that can demonstrate impact with such a program.
  • We have continually lowered our expectations for how much role cost-effectiveness analysis will play in our decisions. We still believe that doing such analysis is worthwhile when possible – partly because of the questions it raises – but we believe the cases where it can meaningfully distinguish between two interventions are limited.
  • We have continually raised our expectations for how much role room for more funding analysis will play in our decisions. Questions around “room for more funding” are now frequently the first – and most core – questions we ask about a giving opportunity.

Evidence for effectiveness
In our 2007-2008 search for outstanding charities, we took applications and asked charities to make their own case for impact. In 2009, we identified evidence-backed “priority programs” using independent literature, but still actively looked for charities (even outside these programs) with their own evidence of effectiveness. In 2011, we continued this hybrid approach.

In all of these searches, we’ve found very little in the way of “charities demonstrating effectiveness using their own data.”

We believe the underlying dynamic is that

  • Evidence on these sorts of interventions is very difficult and expensive to collect.
  • It’s particularly difficult to collect such evidence in a way that addresses various concerns that we believe to be very common and important in the context of evaluating charitable programs.
  • Studies that can adequately address these issues are generally “gold-standard” studies, and are therefore of general interest (and can be found by searching independent/academic literature).

Accordingly, our interest in “program evaluation” – the work that charities do to systematically and empirically evaluate their own programs – has greatly diminished. We are skeptical of the value of studies that fall below the “gold standard” bar that usually accompanies high-reputation independent literature.

This shift in our thinking has greatly influenced how our process works and what we expect it to find. Rather than putting a lot of time into scanning charities’ websites for empirical evidence, as we did previously, we now are focused on identifying the evidence-backed interventions, then finding the vehicles by which donors can fund these interventions.

Cost-effectiveness
The ultimate goal of a GiveWell recommendation is to help a donor accomplish as much good as possible, per dollar spent. Accordingly, we have long been interested in trying to estimate how much good is accomplished per dollar spent, in terms such as lives saved per dollar or DALYs averted per dollar.
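For reference (the post doesn’t define the term, so this is the standard definition from the global-health literature): a disability-adjusted life year (DALY) combines mortality and morbidity into a single measure, and cost-effectiveness is then expressed as cost per DALY averted:

\[
\text{DALYs} = \text{YLL} + \text{YLD}, \qquad \text{cost-effectiveness} = \frac{\text{total cost}}{\text{DALYs averted}},
\]

where YLL is years of life lost to premature death and YLD is years lived with disability, weighted by severity.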

Over the years, we’ve put a lot of effort into this sort of analysis, and learned a lot about it. In particular:

  • In sectors outside of global health and nutrition, it is generally impractical to connect measurable outcomes to meaningful outcomes (for example, we may observe that an education program raises test scores, but it is very difficult to connect this to something directly related to improvements in quality of life). Not surprisingly, the vast majority of attempts to do cost-effectiveness analysis (including both GiveWell’s attempts and others’ attempts) have been in the field of global health and nutrition.
  • Within global health and nutrition, even the most prominent, best-resourced attempts at cost-effectiveness analysis have had questionable quality and usefulness.
  • Our own attempts to do cost-effectiveness analysis have turned out to be very sensitive to small variations in basic assumptions. This sensitivity is directly relevant to how much weight we should put on such estimates in decision-making. (A stylized illustration appears after this list.)
  • That said, we continue to find cost-effectiveness analysis to be very useful when feasible, partly because it is a way of disciplining ourselves to make sure we’ve addressed every input and question that matters on the causal chain between interventions (e.g., nets) and morally relevant outcomes (e.g., lives saved). In addition, cost-effectiveness analysis can be useful for extreme comparisons, identifying interventions that are extremely unlikely to have competitive cost-effectiveness (for example, see our comparison of U.S. and international aid).
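To illustrate the sensitivity point above with a stylized example (all numbers here are hypothetical, chosen for arithmetic convenience, not drawn from our actual estimates): suppose bednets cost $5 each, and the analysis assumes one child death averted per 500 nets distributed. Then

\[
\text{cost per life saved} = \$5 \times 500 = \$2{,}500.
\]

If the mortality assumption shifts to one death averted per 1,000 nets – a plausible-seeming alternative input – the estimate doubles to $5,000 per life saved. A single uncertain input can swing the bottom line by a factor of two or more, which is why we are cautious about using such estimates to distinguish between roughly comparable giving opportunities.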

While we still intend to work hard on cost-effectiveness analysis, and we still see value in it, we do not see it as holding out much promise for helping to resolve difficult decisions between one giving opportunity and another. We find other criteria to be easier to make distinctions on – criteria such as strength of evidence (discussed above) and room for more funding (discussed below).

Room for more funding
For the first few years of our history, we knew that the issue of room for more funding was important, but we made little headway on figuring out how to assess it. We tried asking charities directly how additional dollars would be used, but didn’t receive very helpful answers (see applications received for our 2007-2008 process).

In 2010, as a result of substantial conversations with VillageReach, we developed the basic approach of scenario analysis, and since then we’ve used this approach to reach some surprising conclusions, such as the lack of short-term room for more funding for the Nurse-Family Partnership and recommending KIPP Houston rather than the KIPP Foundation due to “room for more funding” issues.

By now, room for more funding is in some ways the “primary” criterion we look at, in the sense that it’s often the first thing we ask for and sits at the core of our view on an organization. This is because

  • Asking “what activities additional dollars would allow” determines what activities we focus on evaluating.
  • Many of the charities and programs that may seem to have the most “slam-dunk” case for impact also seem – not surprisingly – to have their funding needs already met by others. We’ve found it relatively challenging to find activities that are both highly appealing and truly underfunded.
  • In the absence of reliable explicit cost-effectiveness analysis, an alternative way of maximizing impact is to look for the most appealing activities that have funding gaps. The analytical, “sector-agnostic” approach we bring to giving seems well-suited to doing so in a way that other funders can’t or won’t.

Many people – including us early in our history – may be inclined to think that maximizing impact consists of laying out all the options, estimating their quantified impact-per-dollar, and ranking them. We’ve seen major limitations to this approach (though we still utilize it). We’ve also, however, come across another way of thinking about maximizing impact: finding where one can fit into the philanthropic ecosystem such that one is funding the best work that others won’t.

Surveying the research on a topic

We’ve previously discussed how we evaluate a single study. For the questions we try to answer, though, it’s rarely sufficient to consult a single study; studies are specific to a particular time, place, and context, and to get a robust answer to a question like “Do insecticide-treated nets reduce child mortality?” one should conduct – or ideally, find – a thorough and unbiased survey of the available research. Doing so is important: we feel it is easy (and common) to form an inaccurate view based on a problematic survey of research.

This post discusses what we feel makes for a good literature review: a report that surveys the available research on a particular question. Our preferred way to answer a research question is to find an existing literature review with strong answers to these questions; when necessary, we conduct our own literature review with the same questions in mind.

Our key questions for a literature review

  • What are the motivations of the literature reviewer? A biased survey of research can easily lead to a biased conclusion, if the reviewer is selective about which studies to include and which to focus on. We are generally highly wary of literature reviews commissioned by charities (for example, a 2005 survey of studies on microfinance commissioned by the Grameen Foundation) or advocacy groups. We prefer reviews that are done by parties with no obvious stake in coming to one recommendation or another, and with a stake in maintaining a reputation for neutrality (these can, in appropriate cases, include government agencies as well as independent groups such as the Cochrane Collaboration).
  • How did the literature reviewer choose which studies to include? Since one of the ways a literature review can be distorted is through selective inclusion of studies, we take interest in the question of whether it has included all (and only) sufficiently high-quality studies that bear on the question of interest. In some cases, there are only a few high-quality studies available on the question of interest, such that the reviewer can discuss each study individually, and the reader can hold the reviewer accountable if s/he knows of another high-quality study that has been left out. However, for a topic like the impact of insecticide-treated nets on malaria, there may be many high-quality studies available. In these cases, we prefer literature reviews in which the reviewer is clear about his/her search protocol, ideally such that the search could be replicated by a reader.
  • How thoroughly and consistently does the literature review discuss the strengths and weaknesses of each study? As we wrote previously, studies can vary a great deal in quality and importance. When we see a literature review simply asserting that a particular study supports a particular claim – without discussing the strengths and weaknesses of this study – we consider it a low-quality literature review and do not put weight on it. In our view, a good literature review is one that provides a maximally thorough, consistent, understandable summary of the strengths and weaknesses of each study it includes.
  • Does the literature review include meta-analysis, attempting to quantitatively combine the results of several studies? In some cases it is possible to perform meta-analysis: combining the results from multiple studies to get a single “pooled” quantitative result (a sketch of one common pooling method appears after this list). In other cases a literature review limits itself to summarizing the strengths and weaknesses of each study reviewed and giving a qualitative conclusion.
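For readers unfamiliar with pooling, here is a minimal sketch of the fixed-effect (inverse-variance) method, one standard approach (a given review may instead use random-effects or other models). Each study’s estimated effect $\hat{\theta}_i$ is weighted by the inverse of its estimated variance, so more precise studies count for more:

\[
\hat{\theta}_{\text{pooled}} = \frac{\sum_i w_i \, \hat{\theta}_i}{\sum_i w_i}, \qquad w_i = \frac{1}{\widehat{\operatorname{Var}}(\hat{\theta}_i)}, \qquad \operatorname{SE}(\hat{\theta}_{\text{pooled}}) = \frac{1}{\sqrt{\sum_i w_i}}.
\]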

Strong and weak literature reviews
In general, we feel that the Cochrane Collaboration performs strong literature reviews by the criteria above. Examples of its reviews include a review we discussed previously on deworming and a review on insecticide-treated nets to protect against malaria.

  • The Cochrane Collaboration is an independent group that aims to base its brand on unbiased research, and does not take commercial funding.
  • Cochrane reviews generally explicitly lay out their search strategy and selection criteria in their summaries.
  • Cochrane reviews generally list all of the studies considered along with relatively in-depth discussions of their methodology, strengths and weaknesses (full text is required to see these).
  • Cochrane reviews generally perform quantitative meta-analysis and include the conclusions of such analysis in their summaries.

An example of a more problematic literature review is King, Dickman and Tisch 2005, cited in our report on deworming. This review does well on some of our criteria: it is clear about its search and inclusion criteria (see Figure 1 on page 1562), and it performs quantitative meta-analysis (see Table 1 on page 1565). However,

  • It provides a list of all studies included, but unlike the Cochrane reviews we’ve seen, it does not provide any information for these studies (methodology, sample size, etc.) other than the reference.
  • It does not discuss individual studies’ strengths and weaknesses at all.
  • It does not make it possible for the reader to connect the study’s conclusions (in Table 5) to specific studies. (Figures 2-4 break down a few, but not all, of the study’s conclusions with lists of individual studies.) Since over 100 studies were included, we do not see a practical way for a reader to vet the literature review’s conclusions.
  • There is also ambiguity in what the reported conclusions mean: for example, Table 5 does not specify whether it is examining the impact of deworming on the level or change of each listed outcome (i.e., impact on weight vs. impact on change in weight over time).

We have at times seen advocacy groups and/or foundations put out literature reviews that are far more flawed than the study discussed above. Though we generally don’t keep track of these, we provide one example, a paper entitled “What can we learn from playing interactive games?” A representative quote from this paper:

There is also evidence that game playing can improve cognitive processing skills such as visual discernment, which involves the ability to divide visual attention and allocate it to two or more simultaneous events (Greenfield et al., 1994b); parallel processing, the ability to engage in multiple cognitive tasks simultaneously (Gunter, 1998); and other forms of visual discrimination including the ability to process cluttered visual scenes and rapid sequences of images (Riesenhuber, 2004). Experiments have also found improvements in eye-hand coordination after playing video games (Rosenberg et al., 2005).

The paper does not discuss the selection, inclusion, strengths, or weaknesses of studies, or even their basic design and the nature/magnitude of their findings (for example, how is “parallel processing” measured?).

All else equal, we would prefer a world in which all literature reviews were more like Cochrane reviews than like the more problematic reviews discussed above. However, it’s worth noting that Cochrane reviews appear to be quite expensive, upwards of $100,000 each. Conducting a truly thorough and unbiased literature review is not necessarily easy or cheap, but we feel it is often necessary to get an accurate picture of what the research says on a given question.