The GiveWell Blog

GiveWell’s history of philanthropy/philanthropy journalism project

Programs’ track records have always been a major input into our research process. For example, when assessing the case for distributing nets to prevent malaria, we’ve looked for information about the track record of similar programs.

As we begin to research other areas where philanthropy could play a role, we similarly want to learn from history about philanthropy’s track record. We’ve done some minimal work looking for literature, but what we’ve found was either not on the topic we’re most interested in (i.e., what has philanthropy accomplished?) or wasn’t at a sufficient level of depth to adequately answer the question “what role did philanthropy, as opposed to other factors, play in the outcome in question?” (For more, see our 2012 post on the best source we’ve found so far for this sort of information.)

Because we’ve struggled to find relevant literature, we’ve begun a project to investigate the possibility of funding someone to do a more thorough job of synthesizing what already exists – or to create better literature. We think it’s possible that we might seek to fund this type of work in the future. Such funding would be modest in size, at least to start, and would be thought of more as “costs of research” than as “top giving opportunities.” We would view this work, at least in the short term, as a potential way to increase our total “research capacity” by answering questions that we would otherwise try to answer internally.

Some examples of projects we might consider include:

  • An annotated bibliography of what relevant materials already exist – materials written by academics and think tanks as well as materials available in foundation archives (e.g., the Rockefeller Archives house archival information from multiple foundations and make this information available to researchers).
  • Literature reviews of topics covered by existing literature.
  • A list of the 20 most important philanthropic “success stories” (policy changes, scientific/technical contributions, or other) of the last 25 years.
  • A list of 20 major philanthropic failures (e.g., cases in which philanthropists spent large amounts of money with disappointing results).
  • In-depth case studies of the above, aiming to answer questions such as “What role did philanthropy play in this change?”, “What other, non-philanthropic factors played a major role?”, “Who (if anyone) was opposed to the change in question and how did they try to prevent it from occurring?” and “What was the social impact of this change?” These case studies could take the form of ~10,000-word “long-form journalism,” academic papers, or think-tank white papers.
  • We could also imagine supporting work that’s more focused on reporting on events as they develop. For example, we could fund a journalist to visit NGOs and report back, much in the way we’ve done on our site visits to our top charities. Or, we could support a journalist to report, on an ongoing basis, on how philanthropists develop strategies and how those strategies play out.

We’re very early in our investigations. We still aren’t sure whether this work would be best suited to academics, think tanks, journalists, or someone else, and we have little idea of what scope (or how much funding) we will eventually find worthwhile. As always, we plan to be fully transparent with the results of this work, so whatever output we produce will be publicly available.

What we’ve done so far and plans for 2013

Thus far, I’ve spoken with about 15 people including journalists, academics, and people who have worked (or work) at think tanks. The conversations have generally been short and we haven’t produced notes from individual conversations. What we have taken away from these conversations is a broad consensus that (a) there isn’t much information of the type we’re looking for already available and (b) this is an interesting project that people would be excited to participate in. The book that’s been most frequently recommended to me as fitting what we’re looking for is Steven Teles’s The Rise of the Conservative Legal Movement.

The people I’ve spoken with (who gave me permission to put their names in this post) are:

While this project may become something much bigger, our goals for 2013 are to undertake several small projects (as a very rough benchmark, we’ve thought of potentially funding up to $250,000 this year) to see if we’re able to produce the type of information we’re looking for.

Deep value judgments and worldview characteristics

One purpose of this blog is to be explicit about some of the deep value judgments and worldviews that underlie our analysis and recommendations. As we raise the priority of expanding our research into new causes, this seems like a good time to lay out some of the things we believe – and some of the things we’re unsure about – on topics that could be of fundamental importance for the question of where to give.

In general, the statements below broadly describe the values of the GiveWell staff who have final say over our research. Different individuals might assign somewhat different levels of weight or confidence to the various statements than I have here, but at a high level we expect these statements to be a reasonably good guide to the values underlying GiveWell’s research.

Values

We don’t believe it would be productive to try to produce a complete explicit characterization of the fundamental values that guide our giving recommendations, but we think it’s worth noting some things about them.

  • We are global humanitarians, believing that human lives have equal intrinsic value regardless of nationality, ethnicity, etc. We do believe there may be cases where helping some people will create more positive indirect effects than helping others (for example, I stated in 2009 that I preferred helping people in urban areas for this reason, though this represents my view and not necessarily the view of others at GiveWell). However, we do not agree with the principle that “giving begins at home”: we do not assign more moral importance to people in our communities and in our country than to others.
  • The primary things we value are reducing suffering and tragic death, and improving humans’ control over their lives and self-actualization. We also place value on reducing animals’ suffering, though our guess is that the type of suffering animals experience is of a kind that we would not weigh as heavily as the type of suffering that humans experience. (We do not have clear consensus views on how to weigh these things against each other.) This bullet point edited for clarity on Sep. 5, 2013.
  • We do not put strong weight on “achievements” (artistic endeavors, space exploration, etc.) as ends in themselves, though these may contribute to the things we do value (details above). We also don’t put strong weight on things like “justice,” “equality,” “fairness,” etc. as ends in themselves (though again, these may contribute to the things we do value).
  • We are broadly consequentialist: we value actions according to their consequences.
  • We are operating broadly in an “expected value” framework; we are seeking to “accomplish as much good as possible in expectation,” not to “ensure that we do no harm” or “maximize the chance that we do some good.” (See the stylized illustration below this list.)
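
As a stylized illustration of the distinction (hypothetical numbers, not a comparison we have actually made): under an expected-value framework, an option with a 10% chance of helping 1,000 people is preferred to an option that helps 50 people with certainty, even though the former is more likely to accomplish nothing at all, whereas a “maximize the chance that we do some good” rule would rank them the other way:

\[
\mathbb{E}[\text{risky option}] = 0.10 \times 1000 = 100 \quad > \quad \mathbb{E}[\text{safe option}] = 1.00 \times 50 = 50 .
\]

As the worldview section below discusses, we put only limited weight on explicit calculations of this kind; the point here is simply that, other things equal, we evaluate options by their expected impact rather than by the probability of having some impact.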

There are many questions that we do not have internal consensus on, or are individually unsure of the answers to, such as:

  • How should one value increasing empowerment vs. reducing suffering vs. averting deaths?
  • How should one value helping animals in comparison to helping humans? This line edited for clarity on Sep. 5, 2013.
  • Is it better to bring someone’s quality of life from “extremely poor” to “poor” or from “good” to “extremely good?”
  • Is creating a new life a good thing? Can it be a bad thing? How “desirable” or “undesirable” must the life be for its creation to count as a good/bad thing? Should we value “allowing future lives to exist that would never come into existence otherwise” similarly to “lives saved?”
  • Is it better to save the life of a five-year-old or fifteen-year-old?

We don’t believe it is practically possible to come to confident views on these sorts of questions. We also aren’t convinced it is necessary. We haven’t encountered situations in which further thought on these questions would be likely to dramatically change our giving recommendations. When we have noticed a dependency, we’ve highlighted it and encouraged donors to draw their own conclusions.

Worldview

We view the questions in the previous section as being largely “fundamental,” in that empirical inquiry seems unlikely to shift one’s views on them. By contrast, this section discusses views we have that largely come down to empirical beliefs about the world, but are very wide-ranging in their consequences (and thus in their predictions).

There are two broad worldview characteristics that seem, so far, to lie at the heart of many of our disagreements with others who have similar values.

1. We are relatively skeptical. When a claim is made that a giving opportunity can have high impact, our default reaction is to doubt the claim, even when we don’t immediately see a specific reason to do so. We believe (based partly on our experiences investigating charities) that most claims become less impressive on further scrutiny (and the more impressive they appear initially, the steeper the adjustment that happens on further scrutiny). As a result, we tend to believe that we will accomplish more good by recommending giving opportunities we understand relatively well than by recommending giving opportunities that we understand poorly and look more impressive from a distance.

We have written about this aspect of our worldview previously, and have done some rudimentary work on formalizing its consequences:

  • A Conflict of Bayesian Priors? lays out the basic fact that we have a skeptical prior (by default, we expect that a strong claim will not hold up to scrutiny).
  • Why We Can’t Take Expected-Value Estimates Literally does some basic formalization of this aspect of our worldview and explores some of the consequences, defending our general preference for giving where we have strong evidence that donations can do a lot of good rather than where we have weak evidence that donations can do far more good. It also explains why we put only limited weight on formal, explicit calculations of “expected lives saved” and similar metrics. (A simplified sketch of the kind of adjustment that post describes appears after this list.)
  • Maximizing cost-effectiveness via critical inquiry expands on this framework, laying out how it can be vital to understand a giving opportunity “from multiple angles.”
  • We will likely post more in the future on this topic.
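
As a rough sketch of the kind of adjustment described in that post (a simplified normal-normal model with hypothetical symbols, not a description of GiveWell’s actual calculations): suppose our skeptical prior about a giving opportunity’s true cost-effectiveness \(\theta\) is \(\theta \sim N(\mu_0, \sigma_0^2)\), and an explicit estimate \(x\) of \(\theta\) comes with error variance \(\sigma^2\). The Bayesian posterior mean is then

\[
\mathbb{E}[\theta \mid x] \;=\; \frac{\sigma^2}{\sigma^2 + \sigma_0^2}\,\mu_0 \;+\; \frac{\sigma_0^2}{\sigma^2 + \sigma_0^2}\,x ,
\]

a precision-weighted average that pulls the estimate back toward the skeptical prior mean \(\mu_0\). The noisier the estimate (the larger \(\sigma^2\) is relative to \(\sigma_0^2\)), the stronger the pull – which is why an impressive-looking but weakly evidenced estimate can end up less compelling, after adjustment, than a modest but well-evidenced one.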

2. We believe that further economic development, and general human empowerment, is likely to be substantially net positive, and that it is likely to lead to improvement on many dimensions in unexpected ways. This is a view we haven’t written about before, and it has strong implications for what causes to investigate. While we see great value in directly helping the poorest of the poor, we’re also open to the viewpoint that contributing to general economic development may have substantial benefits for the poorest of the poor (and for the rest of the world). And while we are open to arguments that particular issues (such as climate change) are particularly important to the future of humanity, we also believe that by default, we should expect contributions to economic development and human empowerment to be positive for the future of humanity; we don’t feel that one must necessarily choose between improving lives in the short and long term. (This view is part of why we put more weight on helping humans than on helping animals.)

Because of this view, we are open to outstanding giving opportunities across a wide variety of causes; we aren’t convinced that the best opportunities must be in developing-world aid, or mitigation of global catastrophic risks, or any other particular area. Even if a particular problem is, in some sense, the “most important,” it may be possible to accomplish more good by working in another cause where there is more room for more funding. We will discuss this view more in a future post.

Why we recommend so few charities

This post seeks to address a common misconception about our work, and will in the future be linked from an FAQ.

We often encounter confusion around the fact that we recommend so few charities. Some take this as a statement that “very few charities are accomplishing good,” but this is very much an incorrect interpretation. We recommend few charities by design, because we see ourselves as a “finder of great giving opportunities” rather than a “charity evaluator.” In other words, we’re not seeking to classify large numbers of charities as “good” or “bad”; our mission is solely to identify, and thoroughly investigate, the best.

The upshot is that the charities we don’t recommend may be doing great work, and our lack of recommendation shouldn’t be taken as evidence to the contrary. However, our top charities are the ones that we believe best fit our criteria: evidence-backed, cost-effective, and capable of effectively using more funding.

We take this approach because:

  • Thoroughly understanding even one charity is a great deal of work. We’ve put hundreds of hours into each of our current top charities. Our investigations include thoroughly reviewing the research behind charities’ programs, researching possible concerns about these programs, extensive back-and-forth with charities to gain a full understanding of their processes and past and future uses of funds, multi-day site visits to charities’ operations in the field, and ongoing updates, as well as extremely time-intensive cost-effectiveness analysis (estimating how much good is accomplished per dollar spent). (For example, see this 2011 blog post about our due diligence on former #1 charity VillageReach; since then, our process has intensified significantly.)
  • Thoroughly understanding our top charities makes a substantial difference to our donors. One of the ways in which our money moved has increased is that donors have given higher percentages of their income to our top charities over time, as their confidence in our recommendations has grown. As evidenced by the public discussions we’ve had, donors tend to have many questions about our top charities and to value the work we put into being thorough.
  • Thoroughly understanding a charity is harder when the charity is less outstanding. Our top charities are characterized partly by qualities that make them easy to understand: extreme transparency (which makes it relatively easy to communicate with them, understand them, and write about them) as well as strong relevant evidence bases (which make it relatively easy to answer key questions). By contrast, the experiences we’ve had attempting thorough reviews of less outstanding charities have frequently involved time-consuming confusion over answers to key questions and contentious, time-consuming discussions about what is appropriate for us to publish.

    Note that establishing that a charity is failing would be even more difficult than establishing that a charity is succeeding. The latter is difficult because even well-done studies generally have many complexities and shortcomings. The former is more difficult because there is generally little relevant data available (and even when such data exists, it is unlikely to be made available to us).

  • Thoroughly reviewing less outstanding charities – while more difficult than reviewing top charities – would be of far less benefit to our mission. We have written about this issue previously. In the past, we’ve tried recommending larger numbers of charities, with lower confidence; these efforts have included recommendations in popular causes such as microfinance and U.S. equality of opportunity. But we’ve seen the vast majority of dollars go to the charities that win our highest recommendations, and the lower-tier recommendations have attracted relatively little in the way of donations. It may be true that, all else equal, having more recommended charities raises our total money moved. It may be true that if we could comprehensively review all major charities, giving them a definitive positive or negative rating, our product would have more appeal. But these activities would take more resources than we have (or, in the case of the latter, more resources than we believe we could ever realistically have). The return on investment of reviewing less outstanding charities is far inferior to the return of focusing on the best giving opportunities.
  • We need all the capacity we have for the goal of finding the best giving opportunities possible. As discussed previously, we are currently trying to increase the time we spend trying to explore giving opportunities outside of our traditional criteria. Doing so while maintaining confidence in our current charity recommendations takes all the capacity we have. And if we had more capacity, we would try to improve the speed and quality of our efforts to identify even more outstanding giving opportunities, long before we tried to review less outstanding charities.

Our process is built around having maximal confidence in our recommendations for how to do as much good as possible with one’s giving. This goal is distinct from the goal of reviewing and/or recommending large numbers of charities, and trying harder to accomplish the latter would detract from our success at the former.

Trying (and failing) to find more funding gaps for delivering proven cost-effective interventions

There are interventions that we believe are – or may be (pending a literature review) – very well supported by evidence, that we’ve been unable to find charities focused on. In 2012, we put a significant amount of effort into trying to find ways donors could pay for further delivery of these interventions, even if it required working with a large organization (such as UNICEF or GAVI) rather than with a small charity dedicated to the intervention in question. Good Ventures played a major role in these investigations and was particularly helpful in getting engagement from these larger organizations.

The bulk of our efforts focused on immunizations – which we consider to have the strongest evidence base of any intervention we know of – and micronutrient supplementation (particularly salt iodization and vitamin A supplementation, which we perceive as the most evidence-supported micronutrient interventions; writeups on these interventions are forthcoming).

Despite substantial effort, we did not find any such giving opportunities. The basic pattern we saw was that:

  • Government and multilateral funders provide substantial funding for these interventions.
  • It often appears that the greatest obstacle to universal coverage is a logistical bottleneck rather than a simple lack of funding for more direct execution.
  • We asked persistently for areas where more funding was needed to do more direct delivery. In many cases, we were told that there were opportunities, but then (a) these opportunities became funded by others while we were investigating them, or (b) we tried to follow up on these opportunities and ultimately were met with unresponsiveness, and/or concluded that funding was not the primary bottleneck to progress in these cases.

Overall, this pattern of observations fits a model in which the most proven, cost-effective interventions are often already being appropriately funded by the international community (though not in every single case; LLIN distribution is the clearest exception we’ve found).

Unfortunately, many of the details of our investigations cannot be shared, because the organizations we worked with sometimes shared information only under condition of confidentiality. What we can share is the following:

  • Immunizations. Our immunization landscape writeup shares most of what we’ve learned about immunization funding. In brief, it appears that developing-world governments fund much of the basic costs of routine immunization, while GAVI provides substantial support for routine immunizations, newer more expensive vaccines, and campaigns (additional opportunities for children to receive a few key vaccines, as a way to reach children missed by the routine vaccination system and to provide additional doses to increase immunity to the targeted diseases). GAVI exceeded its fundraising target for 2011-2015 as of June 2011 (with much of the funding coming from developed-world governments, as seen on page 4 of this PDF) and is currently raising funds for 2016 and beyond (as discussed in this conversation (DOC)). We looked into several other organizations, speaking with UNICEF, the World Health Organization, and several campaign-focused organizations, and in no cases found the sorts of funding opportunities we were looking for.
  • Salt iodization. We investigated two apparent funding opportunities in depth: (a) a project with GAIN in Ethiopia, in which we concluded that the key issue to be resolved was not one of funding (more detail at the link), and (b) the possibility of funding in Eastern Europe (which a conversation with UNICEF highlighted to us), which we investigated but have not produced publicly available information on.
  • Vitamin A supplementation. It appears to us that the Canadian government is a major funder of vitamin A supplementation (for example, it recently granted ~$150 million for this purpose), and that UNICEF is a major supporter/implementer (see this link). We spoke with UNICEF and others in an attempt to find areas where more funding was needed to directly deliver more vitamin A supplementation, and were unable to identify such funding opportunities. The details are mostly confidential, with the exception of an initial conversation with UNICEF.
  • Other programs. Our 2012 efforts also included looking into the evidence behind zinc supplementation (both therapeutic and non-therapeutic), which had been highlighted to us as an area with large funding gaps by the Micronutrient Initiative. We concluded that the evidence case and likely cost-effectiveness were inferior to those of our top charities’ interventions.

For the time being, we’ve provisionally concluded that:

  • The path of trying to fund the most proven interventions, when we can’t find charities that focus on them, doesn’t appear promising in the short term. This is partly because a lack of charities focusing on an intervention may be correlated with a lack of room for more funding to deliver the intervention directly; it is also because we’ve found it very difficult to work with and learn from large diverse organizations. We do expect to return to this path at some point, but we don’t expect to make it a major priority over the coming year.
  • In general, it appears that the most proven interventions do attract substantial funding from governments and others. There are some funding gaps (the best example being bednets); but overall, it appears that the most proven, cost-effective interventions are often already being appropriately funded by the international community.

Update on GiveWell’s plans for 2013

[Added August 27, 2014: GiveWell Labs is now known as the Open Philanthropy Project.]

Previously, we wrote about the need to trade off time spent on (a) our charities that meet our traditional criteria vs. (b) broadening our research to include new causes (the work we’ve been referring to as GiveWell Labs). This post goes into more detail on the considerations in favor of assigning resources to each, and lays out our working plan for 2013.

Key considerations in allocating resources to traditional criteria vs. GiveWell Labs
We see major advantages to upping our allocation to GiveWell Labs:

  • Most importantly, we would guess that the best giving opportunities are likely to lie outside of our traditional work, and our mission and passion is to find the best giving opportunities we can. Our traditional criteria apply only to a very small subset of possible giving opportunities, and it’s a subset that doesn’t seem uniquely difficult to find funders for. (While there are certainly causes that are easier to raise money for than global health, it’s also the case that governments and large multilateral donors such as GFATM put large amounts of money into the interventions we consider most proven and cost-effective, including vaccines – hence our inability to find unfunded opportunities in this space – as well as bednets, cash transfers and deworming.) While we do believe that being able to measure something is a major plus holding all else equal – and that it’s particularly important for casual donors – we no longer consider ourselves to be “casual,” and we would guess that opening ourselves up to the full set of things a funder can do will eventually lead to substantially better giving opportunities than the ones we’ve considered so far.
  • We believe that we are hitting diminishing returns on our traditional research. We have been fairly thorough in identifying the most evidence-supported interventions and looking for the groups working on them, and we believe it’s unlikely that there are other existing charities that fit our criteria as well as or better than our current top charities. We have previously alluded to such diminishing returns and now feel more strongly about them. We put a great deal of work into our traditional research in 2012, both on finding more charities working on proven cost-effective interventions (nutrition interventions and immunizations) and on more deeply understanding our existing top charities (see Revisiting the case for insecticide-treated nets, Insecticide resistance and malaria control, Revisiting the case for developmental effects of deworming, New Cochrane review of the Effectiveness of Deworming). Yet none of this work changed anything about our bottom-line recommendations; the only change to our top charities came because of the emergence/maturation of a new group (GiveDirectly).

    Putting in so much work without coming to new recommendations (or even finding a promising path to doing so) provides, in our view, a strong sign that we have not been using our resources as efficiently as possible for the goal of finding the best giving opportunities possible. We believe that substantially broadening our scope is the change most likely to improve the situation.

  • GiveWell Labs also has advantages from a marketing perspective – improving our chances of attracting major donors – as discussed previously.

We also see major considerations in favor of maintaining a high level of quality for our more traditional work:

  • GiveWell Labs is still experimental, and we haven’t established that this work can identify outstanding giving opportunities or that there would be broad demand for the recommendations derived from such work. By contrast, we have strong reason to believe that we do our traditional work well and that there is broad and growing demand for it.
  • When giving season arrives this year, many donors (including us) will want concrete recommendations for where to give. At this point, the best giving opportunities we know of are the ones identified by our traditional process, and continuing to follow this process is the best way we know of to find outstanding giving opportunities within a year (though we believe that GiveWell Labs is likely to generate better giving opportunities over a longer time horizon).
  • The survey we recently conducted has lowered the weight that we place on a third potential consideration. We previously believed that “a sizable part of our audience values our traditional-criteria charities and does not value our cause-broadening work.” However, in going through our survey responses, we found that over 95% of respondents were interested in or open to (i.e., marked “1” or “2” for) at least one category of research that falls clearly outside our traditional work (we consider the first two categories on the survey to fall within our traditional “proven cost-effective” framework). Furthermore, over 90% of respondents were interested in or open to at least one category of research that would not be directly connected to evidence-backed interventions at all (i.e., would involve neither funding evidence-backed interventions nor funding the creation of better evidence). Even when looking at the four areas that we believe to be most controversial (the three political-advocacy-related areas and the “global catastrophic risks” area), ~70% of respondents expressed interest in or openness to these categories. These figures were broadly the same whether considering the entire set of survey respondents or subsets such as “donors” and “major donors.”

    More information on the survey is available at the end of this post.

Our working plan for 2013
The items that we consider essential for our “traditional” work are:

  • Continuing to do charity updates (example) on our existing top charities.
  • Reviewing any charity we come across that looks like it has a substantial chance of meeting our traditional criteria as well as, or better than, our current #1 charity (which would require not only that the charity itself has outstanding transparency, but also that the intervention it works on has an outstanding academic evidence base). We have created an application page for charities that believe they can meet these criteria.
  • Hiring. As mentioned previously, we believe our process has reached a point where we ought to be able to hire, train and manage people to carry it out with substantially reduced involvement from senior staff. We are currently hiring for the Research Associate role, and if we could find strong Research Associates we would be able to be more thorough in our traditional work at little cost to GiveWell Labs.

We plan to execute on all three of these items. We do not plan, in 2013, to prioritize (a) looking more deeply into the academic case for our top charities’ interventions; (b) searching for, and investigating, charities that are likely to be outstanding giving opportunities but less outstanding than our current #1 charity; (c) investigating new ways of delivering proven cost-effective interventions, such as partnering with large organizations via restricted funding; (d) reviewing academic evidence for possibly proven cost-effective interventions that we have not found outstanding charities working on. All four of these items may become priorities again in the future, depending largely on our staff capacity.

Between the above priorities and other aspects of running our organization (administration, processing and tracking donations, outreach, etc.) we have significant work to do that doesn’t fall under the heading of GiveWell Labs research. However, we expect to be able to raise our allocation to GiveWell Labs, to the point where our staff overall puts more total research time into GiveWell Labs than into our traditional work.

For similar reasons to those we laid out last year, we continue to prioritize expanding and maintaining our research above other priorities. We do not expect to put significant time into research vetting (see this recent post on the subject) or exploring new possibilities for outreach (though we will continue practices such as our conference calls and research meetings). We are continuing to see strong growth in money moved without prioritizing these items.

Survey details:
We published our feedback survey publicly and linked to it in a blog post and an email to our email list. We also emailed most people we had on record as having given $10,000 or more in a given year to our top charities, specifically asking them to complete the survey if they hadn’t already done so.

In analyzing the data, we looked at the results for all submissions (minus the ones we had identified as spam or solicitations); for people who had put their name and reported giving to our top charities in the past; for people who reported giving $10k+ to our top charities in the past, regardless of whether they put their name; and for people whom we could verify (using their names) as past $10k+ donors. The results were broadly similar with each of these approaches.

We do not have permission to share individual entries, but for questions asking the respondent to provide a 1-5 ranking (which comprised the bulk of the survey), we provide the number of responses for each ranking, for each of the four groups discussed in the previous paragraph. These are available at our public survey data spreadsheet.

GiveWell annual review for 2012: Details on GiveWell’s money moved and web traffic

This is the final post (of five) focused on our self-evaluation and future plans.

This post lays out highlights from our metrics report for 2012. For more detail, see our full metrics report (PDF).

1. In 2012, GiveWell tracked $9.57 million in money moved based on our recommendations, a significant increase over past years.

2. Our #1 charity received about 60% of the money moved and our #2 and #3 charities each received over $1 million as a result of our recommendation. Organizations that we designated “standouts” until November (when we decided not to use this designation anymore) received fairly small amounts. $1.1 million went to “learning grants” (details here and here) that GiveWell recommended to Good Ventures.

3. Growth was robust for every donor size. As in 2011, a majority of growth in overall money moved came from donors giving $10,000 or more.

[Table omitted. The table excludes Good Ventures and donations for which we don’t have individual information; more detail is in our full metrics report.]

4. Web traffic continued to grow. A major driver of this growth was Google AdWords, which we received for free from Google as part of the Google Grants program. As in prior years, search traffic (both organic and AdWords) provided the majority of the traffic to the website. Traffic tends to peak in December of each year. [Chart of web traffic omitted.]

5. The group of donors giving $10,000 or more has grown from 55 to 96, but the characteristics of this group have changed little. Most of these donors are young (for major donors) and work in finance or technology. Of the $3.2 million donated by major donors who responded to our survey, $1.9 million (60%) came from donors under the age of 40. The most common ways they find GiveWell are through online searches, referral links, and Peter Singer.

6. As in 2011, the most common response when we asked donors giving $10,000 or more ‘how has GiveWell changed your giving?’ was ‘I would have given a similar amount to another international charity.’

[Charts omitted: “What effect has GiveWell had on your giving?” and “For donors who responded that GiveWell caused them to reallocate their giving, where would you have given in GiveWell’s absence?”]

7. GiveWell’s website now processes more than twice as much giving as GuideStar’s and about 80% as much as Charity Navigator’s, though it offers far fewer charities as options. This comparison provides evidence that the growth we saw in 2012 is due not to generalized increases in online giving or use of charity evaluators, but rather to GiveWell-specific factors. (Note that the GiveWell figure in this comparison includes only what was processed through our website – not all money moved – in order to provide a valid comparison to the others, for which we have only online-giving data.)