The GiveWell Blog

Some stats on GiveWell’s web traffic and influence on donations

Before we start giving our answers to the questions posed in this post, I wanted to share some of the raw data we look at to gauge how things are going.

The charts/tables below cover the following:

  • “Money moved,” i.e., donations made to GiveWell-recommended charities due to GiveWell’s research
  • Website traffic

This is just a subset of the information we have. We’ll be releasing a more complete set of charts/tables/data shortly.

The table below shows the support each of our recommended charities received in 2009. Update, 1/8/2010: VillageReach sent us an updated file that includes donors through the end of 2009. The updated table is below. Note: VillageReach’s total fell as we discovered that we had erroneously double-counted some funds.

You can view the original table we posted here.

  • “Pledgers” refers to people who made GiveWell Pledges (advance commitments to give based on our research) in 2008, before our recent report was completed, and followed through on these commitments in 2009.
  • “Large gifts” refers to donors who made large gifts, and directly told us (and the charities they were giving to) that GiveWell’s research had been the key factor in where they gave.
  • “Economic empowerment grant” refers to a grant made directly by GiveWell, with funds from a single donor.
  • “Through website” refers to gifts made through the “Donate Now” buttons on GiveWell.net (some through Google and some through Network for Good (NFG)).
  • “Grants” refers to grants made directly by GiveWell, mostly with funds that were restricted by donors for regranting (we also granted just under $15,000 in unrestricted funds).
  • “Direct to charity” refers to donations that VillageReach received outside GiveWell’s website and believes it can confidently attribute to GiveWell (VillageReach is a relatively small organization that does not get many donations from unfamiliar individuals). We are still awaiting data for 12/22/09-12/31/09, so we expect the final version of this number to be higher.

The following two charts show the amount donated and number of donors through the GiveWell site, comparing 2007-2009 (and the beginning of 2010).


Finally, we show monthly web traffic to the GiveWell site and blog. Two notes: (1) we unfortunately lost tracking for much of 2008, which explains the lack of data during that period. (2) We had an immense spike on 12/20/2007 due to media coverage; we’ve purposely scaled the vertical axis to make the rest of the chart easier to view.

GiveWell’s self-evaluation and plan

Our current top priority is assessing the state of GiveWell: what we’ve accomplished, where we stand, and where we should focus our limited resources next. Over the coming weeks, we’ll be trying to examine ourselves as dispassionately and critically as possible, and sharing our self-review in something close to real time via this blog.

GiveWell’s mission is to find outstanding charities and publish the full details of our analysis to help donors decide where to give. The ultimate goal is to have significant impact on the flow of donations, moving toward a world in which donors reward charities for success in improving lives. The major questions about GiveWell, as I see them, are as follows.

Questions for “customers,” i.e., people considering using GiveWell’s research to decide which charities to support

  • Does GiveWell provide quality research that highlights truly outstanding charities in the areas it has covered?
  • Is it practical for donors to evaluate and use GiveWell’s research in the areas it has covered?
  • Has GiveWell covered enough areas to be useful?

Additional questions for stakeholders, i.e., people considering giving their time, money and other support directly to GiveWell (these include the GiveWell Board and staff)

  • Is GiveWell’s research process “robust,” i.e., can it be continued and maintained without relying on the co-founders?
  • Does GiveWell present its research in a way that is likely to be persuasive and impactful (i.e., is GiveWell succeeding at “packaging” its research)?
  • Does GiveWell reach a lot of potential customers (i.e., is GiveWell succeeding at “marketing” its research)?
  • Is GiveWell a healthy organization with an active Board, staff in appropriate roles, appropriate policies and procedures, etc.?
  • What is GiveWell’s overall impact, particularly in terms of donations influenced? Does it justify the expense of running GiveWell?

For all of these questions, we intend to discuss

  • The progress we’ve made since November 2008 (when we last laid out a business plan)
  • Where we stand today, relative to where we need/hope to be to consider GiveWell a success
  • What we can do to improve

Our self-review won’t be entirely comparable to the reviews we perform of other organizations. The latter tend to focus on the “end product”; we stay agnostic on the progress other organizations have made and on how they can improve. When evaluating ourselves, it is essential that we examine “intermediate indicators” as well as our ultimate impact, and that we think critically about the different paths we can take to improve.

Follow up re: Philanthropedia

Philanthropedia has responded to our take on its microfinance report:

We find the response to be encouragingly straightforward. For the most part, it agrees with the concerns we have raised and commits to addressing them. Good Intentions Are Not Enough argues that the response raises more questions than it answers, and we agree in substance, though to us the most important part of the response is an honest recognition of shortcomings and an expressed intent to improve.

A couple of responses on points where there is some disagreement:

Re: incentives. Our original post argued that Philanthropedia, if it became highly influential under its current model, would create bad incentives for experts (allowing them to continue to keep their thoughts under wraps) and charities (encouraging them to win over experts in ways unrelated to improving their social impact).

  • Philanthropedia states, “Even if only 31 experts agreed to publish their bios and stand behind the results, that is exactly 31 more than before.” We disagree that this is a good thing. The experts have not been linked with recommendations (as far as we can tell) – all we know is that they shared their votes, and we know the aggregate result. The problem here is that no one is individually accountable for their own recommendations and reasoning. If it became influential, this model would allow experts to have the benefits of transparency (i.e., influence over individual donations) without the accountability, which we feel would have a net negative impact on the incentives for experts to be transparent.
  • Philanthropedia states that it intends to rigorously guard against “gaming of the system.” But the dynamics we are concerned about are less related to “rule-breaking” or outright conflict of interest (for example, a charity pays an expert for a positive rating) than to distortion of ratings in a softer sense. Becoming “popular in a certain crowd” can be accomplished in a lot of ways that have nothing to do with improving impact. That’s why it’s so important that people who recommend charities put as much as possible of their reasoning out in public, where others can ask, “Are these reasons strong enough to explain this person’s support?”

Should donors use Philanthropedia’s current report?

In its current form, we feel that Philanthropedia amounts to a set of recommendations unlinked to either people or reasoning. That’s essentially asking donors to trust anonymous people, which we feel is a bad idea and not helpful for impact-focused donors. (And if donors need a “Who’s Who in microfinance,” that information is already available.)

For context: we are generally sympathetic to the “It’s not perfect, but it’s better than nothing” argument. We are acutely aware that donors have very limited resources today, and we ourselves have generally erred on the side of publishing/sharing our research as quickly as possible. Our research process and our own knowledge have enormous room for improvement (and this was more true when we first published research in November 2007). But we have not gone as far as publishing recommendations while in the middle of our process – we wanted to make sure we could clearly present our criteria and the charities considered, so that people could check our process and hold us accountable if they wished to do so.

There are good arguments for Philanthropedia’s sharing/promoting its research even in its current state, and it’s ultimately Philanthropedia that makes that call.

Roundup of recent blogging

Now that giving season has ended, we will be shifting our priorities and slowing down the pace of blog posts. Here’s a quick overview of the highlights from our last few months:

Philanthropedia’s report on microfinance

I believe that Philanthropedia has a promising model. We were glad to join with them on a recent press release urging donors to look beyond administrative expense ratios. We are potentially interested in collaborating with them and/or incorporating their work in the future. I examined their new report on microfinance with great interest.

At this point, I feel that Philanthropedia’s execution of the model has too many serious problems to make it of use to a donor, or to make it a positive influence on the sector. Philanthropedia is a very young organization and we expect/hope to see major improvement, which is why we are sharing these detailed thoughts (we have also discussed them with Philanthropedia).

Overview:

  • The definition of a Philanthropedia “expert” is unclear, and we are concerned that the distinction between experience and expertise is being neglected.
  • Only 31 of 131 “experts” even appear to have their names accessible on the site. The little we know about the “experts” gives us substantial concerns about their representativeness and credibility.
  • Because of concerns about who the experts are, how they are chosen, and what criteria they are applying, we find that we can’t make sense of the final output (i.e., recommended microfinance organizations). We disagree strongly with the recommendations, but cannot see enough of what went into them to examine the sources of the disagreement.
  • We would not want donors to give based on this report, not only because of the problems listed above, but because we believe that doing so would reinforce bad incentives.

Details follow. This is a long post, but if you are a nonprofit professional interested in the Philanthropedia model – or a donor considering following its recommendations – I urge you to read it all. The concerns are more substantial than I can convey briefly.


The definition of a Philanthropedia “expert” is unclear

How does Philanthropedia define an “expert”? The most specific answer I can find to this question is via the Philanthropedia blog: “To find these experts, we do a variety of things such as research thought leaders in the space, look for program officers at foundations who specialize in this area, research universities who have faculty focused on these issues, identify journalists who write extensively about the topic, look for executive directors or heads of nonprofits working in the space, etc.” The Philanthropedia FAQ adds that “We target experts in a social cause through a combination of cold emails and warm referrals (on the basis of professional and personal connections).”

But the numerical breakdown of experts cites 8% academics, 8% funders, 53% nonprofit executives, and 24% unspecified “others.” (We’re not sure why these percentages add up to only 93% rather than 100%, but consider this a minor issue.)

  • Why such a large proportion of nonprofit executives and “others”? Is this proportion deliberate, based on any particular view of the right proportions, or has it simply “fallen out” of which people Philanthropedia was able to reach and which chose to participate? Is it a function of the “personal connections” mentioned by the Philanthropedia FAQ?
  • It seems clear that these numbers don’t come close to the total numbers of academics, nonprofit executives, etc. who would have relevant perspectives. So why these “experts” and not others? Philanthropedia’s description of its process is too vague to give an answer to this question.
  • Who are the “others”? Are they all “journalists who write extensively on the topic,” or do many fall into the “etc.” category?

Philanthropedia is all about trust. It provides very little information that donors can assess for themselves – the premise is that experts are better positioned to make recommendations. So it is essential to be crystal clear about just what constitutes the “expertise” that makes someone’s opinion so much more valuable and credible than an individual donor’s. We feel that Philanthropedia needs to add a substantial amount of information before it can be considered clear enough.

And as we have written before, we believe there is a huge difference between “experience” and “expertise” – particularly in the nonprofit sector with its broken feedback loops.

Who are the experts?

When we look at what information is available about the “experts” behind the microfinance report, we note:

  • The report appears to have surveyed 131 total “experts.” As mentioned above, the vast majority of these are nonprofit executives and “other” and only 8% (~10) are academics.
  • This leaves very little room for representation of microfinance skeptics. Nearly everyone, if not everyone, surveyed is likely coming from a worldview that embraces the core assumptions behind the pro-microfinance mentality, and thus embraces microfinance the way it is currently carried out. It seems fairly clear how this issue can lead to bias: for example, people who are already committed enough to microfinance’s potential to be funders/nonprofit executives seem likely to focus more on scale (reaching as many people as possible) rather than social impact (asking critical questions about microfinance’s effect on clients).
  • Names and affiliations are provided for only 31 of the 131 “experts.” For 21 of the 31, the only information given is the name, position and organization name (and not all of these organizations can easily be found on the web – for example, is this the WAVE Foundation that employs Anwar Hossain, who is listed as one of the “experts”?)
  • What we can see about these “experts” sheds very little light on how they were chosen and how representative they are.
    • Some are top-level executives and some are not.
    • 3 of the 31 represent Unitus; one represents Tearfund, a very large charity working in diverse areas; but other major U.S. microfinance-focused charities such as Grameen Foundation and FINCA have no representatives in this set. (Perhaps Unitus’s heavy representation has something to do with its low rank, since Philanthropedia requires that nonprofit professionals not vote for their own organization.)
    • In what regions have these “experts” done their work? With what kinds of programs? In what ways might their experience and allegiances be biased? Do they have valuable experience and impressive accomplishments or merely many years of being employed in this area? We don’t have the information that could address these questions.

The recommended charities

As stated above, Philanthropedia provides very little information on the recommended charities aside from their names, links to their websites, and figures whose exact meaning is unclear but which seem to indicate the level of “expert support.”

Our main observation is that the list reads like a “who’s who in large U.S. microfinance charities.” This pertains to our above observation that microfinance skeptics don’t seem well represented.

Most of the ratings are clustered fairly close together (7%-11% range). Accion is at the top with 19%, and CARE, FINCA and Unitus are at the bottom. It’s unclear exactly what these numbers mean – should we take from this that Accion is particularly outstanding or that Unitus has problems? Probably not, since Philanthropedia states that it doesn’t rank nonprofits.

We would not, at this point, recommend any of the charities listed.

The “expert quotes” provided by Philanthropedia do not address these concerns, and in fact raise questions about whether the “experts” are even trying to help donors accomplish good. Take the Kiva page as an example. (Click “Expert assessment” from that link to view quotes.) Nearly all strengths listed pertain to Kiva’s marketing ability: “They have a cute model,” “They are appealing to donors,” and more. In other words, you are being advised to give to a charity because it’s good at making you want to give to it.

The few comments that pertain to other aspects of Kiva raise other concerns:

  • “They are truly innovative as a capital source for microfinance institutions.” The exact nature of the innovation is unspecified, but this probably amounts to more praise for the marketing.
  • “Kiva charges 0% interest rate on capital.” Kiva charges 0% to its partners, but borrowers have to pay interest. Is that a good thing, and if so, why?
  • “They provide a direct link from funder to borrower” – this is simply false.
  • “They are not neglecting the US domestic poverty issue” – this comment comes up in more than one organization’s profile. We have argued that it is better not to focus on US domestic poverty; in any case, this is clearly a difficult, value-laden judgment call.

The “expert quotes” on other organizations raise similar concerns. Overall, it is not apparent that the “experts” are giving advice based on how an individual donor can have the most impact. It is not clear what their recommendations mean.

As for our own take on which microfinance charities are best, we have deep concerns about this sector that have led us to go beyond U.S. charities and seek out outstanding microfinance institutions. We’ll be writing more about the best we’ve found, the Small Enterprise Foundation.

We are coming from an attitude of skepticism toward microfinance in general. Our skepticism is certainly debatable, but we have been explicit about it and discussed our thoughts on microfinance at length. A donor who wants to know where our advice is coming from and what it’s based on can find out. We feel it is important that donors have similar context on the people cited by Philanthropedia.

Reinforcing bad incentives

We pay a lot of attention to incentives. When discussing how we should rate charities, we constantly ask, “If charities all knew we were handing out ratings on this basis and expected donors to give based on our ratings, how would they work the system? What would we expect to change?” (For example, we give negative reviews to charities that publish no information, not just charities whose materials raise concerns; we don’t want to provide incentives to hide information.)

We feel that if Philanthropedia stuck to its current approach and donors used it:

  • “Experts” would not have any incentives to increase their own transparency above current, unacceptable levels. They could influence others’ giving by giving recommendations without substantive reasons and without even disclosing their names.
  • Charities would have strong incentives to do whatever got them recommended by Philanthropedia’s set of experts. This could include networking, flattery, wining/dining, outright conflicts of interest, and learning to speak of their programs in ways that would appeal to Philanthropedia experts, all of which might end up being more cost-effective from the charities’ perspective than optimizing and demonstrating their social impact.

Bottom line
I urge Philanthropedia to:

  • Publish exhaustive details of how “experts” are defined, selected and invited.
  • Publish names and affiliations of all experts who are invited to participate, whether or not they accept.
  • In particular, publish names and affiliations of all who do participate; donors should not be asked to trust anonymous people.
  • Address the concerns we have raised about the representativeness and credibility of experts, if these concerns still appear valid once the identities and process are disclosed.
  • Publish the templates for all surveys sent to experts. Be explicit about the criteria by which experts are asked to recommend charities.
  • Ask experts to make their best case publicly and to be clear about whether they’re recommending charities to impact-focused donors, praising charities for marketing abilities, or something else.

We believe the Philanthropedia model has a lot of potential. We would find great use, for donors and for ourselves, in a site that could credibly claim to have assembled the charities with the most support from people with relevant experience.

Philanthropedia has accomplished a lot in a short time, but we don’t believe its content is yet appropriate for distribution to individual donors looking for help with their giving decisions.

You can save a life

We ask you, as a donor, to turn down some great pitches – “Your interest-free loan will help this person escape poverty forever,” “You can give a cow to a poor family for Christmas,” etc. – and give instead to charities that aren’t terribly good at storytelling. Why?

It comes down to this. We think that most of those stories are just that – stories. (For more, see our summary of recent posts on “big-name” charities, which we feel are representative of the full set of charities we’ve reviewed.) But if you give to one of our top charities, you really can save (or dramatically change) a human life.

It hasn’t been easy to find charities that we can honestly say this about. That’s what our process is built around and where most of our energy goes. This week we’ve blogged about the best we’ve found, VillageReach and the Stop TB Partnership. There is plenty of room for doubt even with them, but overall we think there is a strong case – even for the skeptic – that your donation to them can save a life.

What do we mean when we say “save a life?”

By “you can save a life,” we don’t mean anything as simple, concrete, or easy to grasp as the stories charities usually tell.

  • Your gift can’t literally be linked to an individual. It will help an organization that, all things considered, is achieving a lot of impact for what it spends.
  • If you must know what “your” dollars are doing, it’s likely that they’ll be sitting in reserves to ensure financial stability, or enabling a slightly larger travel budget for evaluators, or something similarly unexciting.
  • It’s even quite possible that your donation will be wasted, and that the charity you give to – even the best you can find – will fail. We don’t think there are true guarantees in aid.
  • Even if these charities are succeeding, it’s very likely that your donation won’t ultimately result in the charity doing anything differently. It’s hard to see how $1,000, by itself, could really change anything about the Stop TB Partnership’s plans for next year.
  • Yet donations add up. 50-100 of these donations could mean a significantly larger grant, more people getting tuberculosis treatment … and that could mean families staying intact instead of being struck by sudden death.

The truth is that it takes a lot of abstraction and analytical thinking to really grasp how your donation saves a life. The life you can save is an “expected” life (“expected” in the sense of probabilistic expected value) – it isn’t a person we can point to or show you a picture of. More than with typical charities, you have to use your imagination. But more than with typical charities, your impact is real.
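To make the “expected life” framing concrete, here is a minimal sketch of the arithmetic. Every number in it is hypothetical – the cost-per-life figure and success probability are illustrative assumptions, not GiveWell estimates for any charity:

```python
# Hypothetical illustration of "expected lives saved."
# These numbers are NOT estimates for any real charity; they exist
# only to show how probabilistic expected value works for a donor.

COST_PER_LIFE_IF_PROGRAM_WORKS = 1000.0  # assumed dollars per life saved
PROBABILITY_PROGRAM_SUCCEEDS = 0.5       # assumed chance the program works at all

def expected_lives_saved(donation_dollars: float) -> float:
    """Expected value: the donation saves lives only in the scenario
    where the charity's program actually works."""
    lives_if_it_works = donation_dollars / COST_PER_LIFE_IF_PROGRAM_WORKS
    return PROBABILITY_PROGRAM_SUCCEEDS * lives_if_it_works

# A single $1,000 gift has an expected value of half a life...
print(expected_lives_saved(1_000))        # 0.5
# ...while 100 such gifts pooled together are expected to save 50 lives.
print(expected_lives_saved(100 * 1_000))  # 50.0
```

The point of the sketch is that no single donation maps to a named person, yet the expected impact scales linearly as small gifts pool into something that can change a charity’s plans.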

With opportunity comes responsibility

In The Life You Can Save (which prominently features GiveWell and which we have reviewed), Peter Singer writes:

By donating a relatively small amount of money, you could save a child’s life … we all spend money on things we don’t really need, whether on drinks, meals out, clothing, movies, concerts, vacations, new cars, or house renovation. Is it possible that by choosing to spend your money on such things rather than contributing to an aid agency, you are leaving a child to die, a child you could have saved? (p. 5)

Our corollary: is it possible that you are leaving a child to die when you choose to donate to a charity with a “feel-good” story rather than a charity with a great case for real impact?

It is true that, as our critics often point out, a charity can be impactful without being demonstrably impactful. But when one charity proves itself and another leaves you guessing, it seems clear to us which one offers the “better bet” – and more “expected lives saved” – given the information available. When you have the option of giving to an outstanding charity that demonstrably can save a life, how do you justify giving to a charity whose true impact is essentially a big question mark?

I’ll leave this blog’s last words for 2009 to Natalie, a relatively recent GiveWell hire (she has been working full-time on research since July).

Sometimes I’m almost tempted to give to a charity I know less about. I’ve been over VillageReach and I’ve seen how complex the situation is and how many questions there are. If I gave to some charity I know nothing about, I could just think about the story they tell and feel good and not have these nagging doubts. But I’m not going to do that – in the end it’s more important to me that I really make a difference.

GiveWell’s top-rated charities