The GiveWell Blog

Update on GiveWell’s web traffic / money moved: Q2 2013

In addition to evaluations of other charities, GiveWell publishes substantial evaluation of itself, from the quality of its research to its impact on donations. We publish quarterly updates regarding two key metrics: (a) donations to top charities and (b) web traffic.

The table and chart below present basic information about our growth in money moved and web traffic in the first half of 2013 (note 1).

Summary statistics: first two quarters

Growth in money moved accelerated in the second quarter. At the end of the first quarter, donations from donors giving less than $5,000 per year were up 45% year over year (note 2). This figure rose to 100% by the end of the second quarter. This increase is likely due mostly to media attention in the second quarter, including a TED Talk by Peter Singer, a piece on Washington Post’s Wonkblog and subsequent debate on various blogs, and a New York Times article.

A caveat to the above is that it is based solely on small donors. In the past we’ve seen that growth in small donors earlier in the year provides an indication of overall growth at the end of the year, but because a significant proportion of our money moved comes from a relatively small set of large donors, we don’t place significant weight on this projection.

Website traffic tends to peak in December of each year (circled in the chart below). Growth in web traffic has generally remained strong in 2013. So far in 2013, there have been 408,782 unique visitors to the website (calculated as the sum of unique visitors in each month), compared with 233,550 at this point in 2012, representing 75% year-over-year growth.


Note 1: Since our 2012 annual metrics report we have shifted to a reporting year that starts on February 1, rather than January 1, in order to better capture year-on-year growth in the peak giving months of December and January. Therefore metrics for the first half of 2013 reported here are for February through July.

Note 2: The majority of the funds GiveWell moves come from a relatively small number of donors giving larger gifts. These larger donors tend to give in December, and we have found that, in past years, growth in donations from smaller donors throughout the year has provided a reasonable estimate of the growth from the larger donors by the end of the year.

In total, GiveWell donors have directed $1,433,139 to our top charities this year, compared with $1,018,625 at this point in 2012. For the reason described, we don’t find this number to be particularly meaningful at this time of year.
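The growth figures above can be reproduced with simple arithmetic. A minimal sketch (the dollar and visitor figures are those reported in this post):

```python
def yoy_growth(current, prior):
    """Year-over-year growth rate, expressed as a percentage."""
    return (current / prior - 1) * 100

# Unique visitors, summed across the months reported so far
traffic_growth = yoy_growth(408_782, 233_550)
print(f"Web traffic growth: {traffic_growth:.0f}%")   # 75%

# Total money moved to top charities at this point in the year
money_growth = yoy_growth(1_433_139, 1_018_625)
print(f"Money moved growth: {money_growth:.0f}%")     # 41%
```

As the post notes, the ~41% money-moved figure is not especially meaningful at this point in the year, since most large donors give in December.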

 

Responses to objections on cash transfers

Since our recommendation of GiveDirectly last year, we’ve seen a fair amount of pushback and skepticism. We’ve recently been speaking with donors who have supported our other top charities – and not GiveDirectly – to get a better sense of what their reservations are.

This post lays out what we see as the most common objections people have expressed to our recommendation of GiveDirectly, and our responses to such objections. Most of our responses have already been written up previously, so to a large extent this post simply attempts to consolidate them.

At this point, we feel that we have put substantial effort into understanding and responding to people’s reservations about cash transfers, and after considering all objections we fully stand behind our ranking of GiveDirectly. We encourage those who continue to disagree with us to comment on this post, highlighting which objections they find most important (including any we may have missed) and laying out what they see as weaknesses in our responses.

  • Objection 1: the case for cash relies on intuition, while the case for bednets and deworming relies on rigorous research. We disagree with this, and have written that the evidence bases for cash transfers and deworming are comparable.
  • Objection 2: the studies used to support the case for cash transfers aren’t applicable to the case of GiveDirectly. For example, key studies were of conditional cash transfers, while GiveDirectly makes unconditional cash transfers. We acknowledge this concern but believe that it does not apply to cash transfers any more than to deworming. In both cases (and to a lesser degree in the case of bednets), there are important differences between the programs that were studied and the programs that are being carried out today, but there are also important reasons not to dismiss the studies that are available. More at the same post linked above: Evidence of Impact for Long-term Benefits.
  • Objection 3: it’s intuitively implausible that $1000 in cash for a single family (much of which is often spent on a metal roof) can do as much good as, say, 200 distributed bednets or 2000 deworming treatments. We believe that a closer study of the evidence behind all three interventions makes the case much more plausible. While we do believe bednets and deworming have strong evidence behind them, the evidence points to very small per-person effects that add up to a lot only when looked at across a large population. (We aren’t confident that deworming’s benefits are non-negligible.) Our cost-effectiveness comparisons imply that bednets and deworming are around 2-5x more cost-effective than cash, which isn’t a large multiplier: if deworming cost $2.50 instead of $0.50, or if bednets cost $25 each, we believe the calculation would weigh in favor of cash transfers (though we would guess that the same intuitive arguments would be voiced).
  • Objection 4: GiveWell concedes that cash transfers are 2-5x less cost-effective (in terms of “good accomplished per dollar”) than bednets and deworming; therefore, there would need to be overwhelming considerations on other factors (such as “upside” and “learning opportunities”) to justify giving to GiveDirectly instead. Broadly speaking, we think this objection overstates the reliability, and importance, of (a) abstract estimates of how much good an intervention does relative to (b) confidence in the organization and people behind implementation. Aside from the very real considerations of “upside” and “learning opportunities” (discussed briefly here), we think that the details of implementation matter greatly, and we don’t believe it’s wise to be confident in or dismissive of such details when one has little window into them. For more, see
  • Objection 5: giving out cash has more potential to do harm than bednet distribution or deworming programs. We broadly agree with this claim, but we also think that bednets and deworming each have higher probabilities of having negligible positive impact. Because bednets and deworming are very specific solutions to very specific problems, they’re less likely to empower people to do self-damaging things, but also more likely to turn out to be unhelpful if the details of the scenario are different from what our analysis suggests. (To give some specific examples: bednets may be ineffective in areas of high insecticide resistance, and deworming ultimately may have negligible impact overall.) In addition, large-scale government cash transfer programs are widespread and largely well regarded, implying that the scope of any harms that have emerged is limited. More to the point, the evidence we’ve reviewed is designed to capture average total impacts (positive and negative), and (as stated above) we believe that the evidence suggests a positive net impact for cash transfers that is of the same ballpark magnitude as the positive net impacts of bednets and deworming. We also don’t find the specific concerns that have been raised about cash transfers to be highly compelling, especially when juxtaposed with the data from GiveDirectly’s followup surveys.
  • Objection 6: cash transfers have worked poorly, or would work poorly, for the U.S. poor; therefore they are not a promising approach for the developing-world poor. We disagree with this objection and addressed it at length in a previous post, The Case for Cash.
  • Objection 7: cash transfers are inferior to loans, because loans are more leveraged (the money lent is repaid and can be lent again) and because loans encourage productive investment. We discussed these issues in a post entitled Cash Transfers vs. Microloans.
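The break-even arithmetic in Objection 3 above can be sketched as follows. The dollar figures come from the post; the "advantage vs. cash" multipliers are the post's rough 2-5x range, and the calculation is an illustrative simplification (the actual cost-effectiveness estimates involve many more inputs):

```python
# Illustrative sketch: a cost-effectiveness advantage over cash
# scales inversely with cost per treatment, all else held equal.

def relative_value(advantage_vs_cash, actual_cost, assumed_cost):
    """Advantage vs. cash after adjusting for a different unit cost.

    advantage_vs_cash: assumed multiplier at assumed_cost
    (e.g. 5x at $0.50 per deworming treatment).
    """
    return advantage_vs_cash * assumed_cost / actual_cost

# Deworming at $0.50/treatment, assumed 5x as cost-effective as cash:
print(relative_value(5, actual_cost=0.50, assumed_cost=0.50))  # 5.0

# At $2.50/treatment, even the 5x upper estimate shrinks to parity:
print(relative_value(5, actual_cost=2.50, assumed_cost=0.50))  # 1.0

# With the 2x lower-bound estimate, $2.50/treatment favors cash:
print(relative_value(2, actual_cost=2.50, assumed_cost=0.50))  # 0.4
```

This is the sense in which a 2-5x multiplier "isn't large": a fivefold change in unit cost is enough to flip the comparison.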

A final objection to our recommendation of GiveDirectly is along the lines of, “Even if GiveDirectly has important advantages relative to other groups you’ve looked at, it just doesn’t pass the smell test that giving money directly to the poor is the 2nd-best way to help them. It seems like an overly simple solution; there must be something (other than bednets) that’s better.”

In some sense we agree with this: we believe there is probably some giving opportunity out there that beats all of our current top charities, and we’re looking actively for it via GiveWell Labs. Given the information we have and the approach we’ve taken today, however – looking for interventions that have strong evidence behind them and concrete room for more funding (taking into account that some of the best-proven interventions have already attracted the funding needed for straightforward rollouts) – we think it’s fairly clear that GiveDirectly’s work makes the short list.

We’ve frankly been puzzled by the amount of pushback we’ve received on GiveDirectly, relative to SCI, since the evidence on deworming looks no better than the evidence on cash transfers and since we’ve voiced what we see as more serious concerns about SCI. We’ve seen a level of skepticism applied to evidence on cash transfers that we haven’t seen applied to anything else we’ve written – which is largely a good thing (we want skepticism applied to our work), but also raises the question of whether there are deeper-seated, more intuitive objections to GiveDirectly than what’s been explicitly voiced. One guess we’ve made is that to many, what’s exciting about GiveWell is the idea of using extraordinary analysis to produce extraordinary results. People expect “the best option of all” to look more like “saving lives for absurdly low amounts of money” than like “getting money directly to the poor and letting them spend it as they will.”

Our response to this line of thinking is that the challenges of analyzing and solving problems half a world away, at scale, are real and significant – not so significant that we should drop all attempts to do better than cash transfers, but significant enough that we shouldn’t assume we’ll see much better options than cash transfers either. Having looked far and wide for underfunded yet evidence-backed interventions, we’ve concluded that having a high enough level of technocratic knowledge to do “better than cash” isn’t impossible, but it’s extremely difficult. The bar is high, and we’ve only found one charity that (not overwhelmingly) clears it. And to us, doing extraordinary analysis means being willing to embrace that result, as many less informed donors (who end up taking charities’ bold claims at face value) will not.

With that said, we also don’t think cash transfers should be seen as either an “easy” or an “unexciting” intervention. The difference between wealthy developed-world citizens and the world’s poorest people is massive, and I find it continually stunning how high a percentage of someone’s income I can provide by giving a small percentage of my own. To me, being able to send my dollars directly to the world’s poorest people, living half a world away – with only ~10% diverted to costs along the way – is an astonishing opportunity.

Empowerment and catastrophic risk

In previous posts, I have:

  • Laid out the view that in general, further economic development and general human empowerment are likely to be substantially net positive, and are likely to lead to improvement on many dimensions in unexpected ways.
  • Listed possible global catastrophic risks that provide a potential counterpoint to this view, while also noting “global upside possibilities” in which progress could lead to a future that is far brighter than the present.

This post attempts to lay out my reasons for thinking that speeding the pace of global development and empowerment should be thought of as increasing humanity’s odds of an extremely bright future, relative to its odds of a future that is worse than the present. Note that

  • I focus here on slightly to moderately speeding or slowing the pace of global development and empowerment relative to what it is today; this takes for granted that we can expect to see substantial development and empowerment in our future, and simply asks whether it is desirable that this development/empowerment happen more quickly or more slowly.
  • I focus on the odds of an extremely bright future relative to the odds of a future that is worse than the present. This means that I’m not only considering the contribution of empowerment and development to catastrophic risk; I’m also considering their contribution to “global upside possibilities.”

1. Some catastrophic risks seem clearly reduced, and not exacerbated, by technological/economic progress. These include “non-anthropogenic” risks, such as asteroids, supervolcanoes, and non-engineered pandemics. Development may give us better tools for anticipating and responding to these risks, and is unlikely to make them worse. In addition, risks like #4 and #5 from the previous post on this topic – which involve risks of slowing growth due to shortage of a particular resource, or a slowdown in innovation – seem clearly mitigated by a faster pace of development.

2. Even for the catastrophic risks that seem exacerbated by development, I believe that faster development is likely safer than slower development (or, at worst, the net effect is highly ambiguous). This belief is based on the previously articulated concept of “global upside possibilities” – the belief that sufficient development may make the world not only better, but less at risk for major disruption by global catastrophe. If one accepts this view, it follows that faster overall development would mean less time between (a) the emergence of a given danger and (b) other developments that dramatically reduce risks. For example, faster development may bring the day closer when a highly dangerous synthetic pandemic can be designed, but it will also bring the day closer when we have the technologies and resources to manage such a risk (as well as potentially speeding the improvement of decision-making abilities and mental health worldwide, improving the capabilities of those who would mitigate such a risk and reducing the number of people who would contribute to it). Likewise, faster development may lead to higher carbon emissions, but is also likely to lead to better progress on alternative energy sources, more resources for adaptation mechanisms (much of the impact of climate change depends on these resources), and generally an environment more favorable to investing in climate change prevention.

There are certainly limitations to this reasoning. For one thing, it addresses “general” economic/technological development; the point remains that empowering people and developing technologies that are particularly likely to exacerbate risks can increase net risk, and that for any given risk there are particular kinds of growth that are more and less problematic in terms of that risk. (For example, the ideal scenario for dealing with climate change is one in which we see strong growth but also reduce carbon emissions.)

In addition, if there is a particular risk that has been clearly identified before it is yet technologically possible, and there is a promising plan for averting such a risk, it could be safer to experience slower development while the promising plan is executed. However, I know of no compelling examples of such dynamics today. (And in general, it is likely to be much easier to design a plan for responding to a risk when the risk is real and concrete rather than hypothetical.)

3. I believe that a large proportion of the risk of global catastrophe comes from the category of “risks that remain unarticulated and unimagined.” I don’t believe the list we made previously – or any list that can be constructed with today’s available information – is close to comprehensive: I expect that many of the most threatening risks are simply outside what we are able to anticipate today.

I would guess that some such risks become nearer as economic/technological development progresses, while some do not. But in all cases, I believe that economic/technological development is likely to improve our resources for anticipating, preventing and adapting to global catastrophes, and that for the reasons articulated above, faster development is more likely to reduce the lag between the emergence of risks and responses to them (including “global upside possibilities” that dramatically reduce risks).

4. A key part of my view is the belief that there are few outstanding cases in which it is clear that very particular actions need to be taken to avert particular risks. If there were a more compelling set of cases in which the right course of action were known, I would be more likely to believe that “slowing development until the right course of action can play out reduces risks, and generically speeding development increases them.” But as it is, I don’t see such clear-cut cases. The cases in which the necessary actions are clearest to me are those of asteroids (which I think is a clear-cut case in which development reduces risks) and climate change (which I see as highly ambiguous regarding the question of whether faster development is desirable, as discussed above). Thus, I don’t see a strong case for safety benefits to slower development.

I remain highly open to the possibility that particular risks represent excellent giving opportunities, and that focusing on them may do more good than simply focusing on increasing development and empowerment. But I am not aware of what I consider a strong case for believing that development in general increases the odds of a badly disrupted future relative to an extremely bright one, and I believe there are strong reasons to believe that development improves our prospects on net.

Our landscape of the open science community

[Added August 27, 2014: GiveWell Labs is now known as the Open Philanthropy Project.]

We previously wrote about a decision to complete a “medium-depth investigation” of the cause of open science: promoting new ways of producing, sharing, reviewing, and evaluating scientific research. The investigation broadly fits under the heading of GiveWell Labs research, which we are conducting in partnership with Good Ventures.

We have now completed the “medium-depth investigation,” led by Senior Research Analyst Alexander Berger, and have written up the investigative process we followed and the output that process produced (XLS). This post synthesizes that output, and gives our current views on the following questions:

  • What is the problem? The traditional journal system, which plays a key role in academic research across a variety of fields, has many limitations that might be addressed by a less traditional, more web-based, more generally “open” approach to publishing research.
  • What are possible interventions? Organizations are engaged in a wide variety of approaches, including building tools that facilitate new ways of publishing and evaluating research and conducting campaigns to increase the extent to which researchers share useful (but not generally traditional-journal-worthy) information.
  • Who else is working on this? Some for-profit organizations have gotten significant funding; on the nonprofit side, there are several foundations working on various aspects of the problem, though most are relatively new to the space. We have the sense that there is currently little funding available for groups focused on changing incentives and doing advocacy (as opposed to building tools and platforms), though we don’t have high confidence in this view.
  • What are possible steps for further investigation? If we were to investigate this cause more deeply, we’d seek a better understanding of the positive consequences that a shift to “open science” might bring, the rate at which such a shift is already occurring, and the organizations and funders that are currently in this space.

Note that these questions match those we have been asking in our shallow investigations.

Overall, we feel that we’ve significantly improved our understanding of this space, though major questions remain. Our main takeaways are as follows:

  • We see less “room for more philanthropy” in the space of supporting tools and platforms than we expected, partly because of the presence of for-profit organizations, some of which have substantial funding.
  • We see more such room in the space of “advocacy and incentives” than we expected, as most of the organizations in that category seem to have relatively little in terms of funding.
  • We still have some major questions about this space. One set of questions regards how beneficial a transition to “open science” would be, and how much a philanthropist might hope to speed it along; we think we could gain substantial ground on this question with further work. Another set of questions, however, involves how new funders who are entering this space will approach the problem. These questions will be hard to answer without letting time pass.

Details follow.

    What is the problem?
    The general picture that we felt emerged from our conversations was as follows:
    The traditional journal system plays a crucial role in modern academic research. Academics seek to publish in prestigious journals; academics largely assess each other (for purposes of awarding tenure among other things) by their records of publishing in prestigious journals. Yet the traditional system is problematic in many ways:

    • Journals usually charge fees for access to publications; an alternative publication system could include universal open access to academic research.
    • Journals use a time-consuming peer-review process that doesn’t necessarily ensure that a paper is reliable or error-free.
    • Journals often fail to encourage or facilitate optimal sharing of data and code (as well as preregistration), and the journal system gives authors little reason to go out of their way to share.
    • Journals often have conventions that run counter to the goal of producing as much social value as possible. They may favor “newsworthy” results, leading to publication bias; they may favor publishing novel analysis over replications, reanalyses and debates; they may have arbitrary length requirements that limit the amount of detail that can be included; they may have other informal preferences that discourage certain forms of investigation, even when those investigations would be highly valuable. This is particularly problematic because considerations about “what a top journal might publish” appear to drive much of the incentive structure for researchers.

    It is not difficult to imagine a world in which scientists habitually publish their work in online venues other than (or in addition to) traditional journals, and follow substantially different practices from those encouraged by the journal system. Depending on the norms and tools that sprang up around such a practice, this could lead to:

    • More widespread sharing of data and code.
    • More and better replications, and therefore potentially improved reproducibility.
    • More online debate and discussion that could provide alternatives to peer review in terms of evaluating the value of research. Such alternative evaluation methods could be faster, more reliable, and more flexible than peer review, thus encouraging many of the valuable practices that peer review does not sufficiently encourage.
    • More efficient and flexible collaboration, as researchers could more easily find other researchers working on similar topics and could more easily synthesize the work relevant to theirs.

    A unifying theme is the possibility of science’s becoming more “open” – of sharing academic research both widely (such that anyone can access it) and deeply (sharing far more information than is in a typical journal article) – leading to more possibilities for both critique and collaboration.

    Such changes could span a wide range of fields, from biology to development economics to psychology, leading to many difficult-to-forecast positive impacts. If we were to recommend this cause, we would ultimately have to do the best we could to evaluate the likely size of such benefits, but we haven’t undertaken to do so at this time, focusing instead on the landscape of people, organizations and approaches working to bring this transition about. (Much of our investigation to date on “open science” has focused on biomedical research because we believe that biomedical research is likely to deliver significant humanitarian value over the long term—and because it constitutes roughly half of all research funded in the U.S.—but this is something we would investigate further before committing to this area.)

    What are possible interventions?
    The “Organizations” sheet of our landscape spreadsheet (XLS) lists groups working on many different aspects of open science:

    • Altmetrics – metrics for evaluating the use/influence/importance of research that go beyond the traditional measures of “where a paper is published and how many citations it has.”
    • Post-publication peer review – tools that allow online critique and discussion of research, beyond the traditional journal-based prospective peer review process.
    • Innovative open access publishing, including preprints – models that facilitate sharing research publicly rather than simply publishing it in closed journals, sometimes prior to any peer review occurring.
    • Sharing data and code – projects that encourage researchers to share more information about their research, by providing tools to make sharing easier or by creating incentives to share.
    • Reproducibility – projects that focus on assessing and improving the reproducibility of research, something that the traditional journal system has only very limited mechanisms to address.
    • Attribution – tools allowing researchers to cite each other’s work in nontraditional ways, thus encouraging nontraditional practices (such as data-sharing).
    • Advocacy – public- or government-focused campaigns aiming to encourage open access, data/code sharing, and other practices that might have social benefits but private costs for researchers or publishers.
    • Alternative publication and peer review models – providing novel ways for researchers to disseminate their research processes and findings and have them reviewed (pre-publication).
    • Social networks – platforms encouraging researchers to connect with each other, and in the process to share their research in nontraditional forums.

    The process by which we found these groups and categorized them is described on our process page. We’ve posted an editable version of the spreadsheet on Google Drive, and we welcome any edits or additions to that version.

    Who else is working on this?
    The “Funders” sheet of our landscape spreadsheet (XLS) lists the major funders we’ve come across in this field.

    One important presence in the funding landscape is for-profit capital. Macmillan Publishers, owner of Nature (one of the most prestigious scientific journals), owns a group called Digital Science, which runs and/or funds multiple projects working to address these issues. In addition, there are three organizations we know of in the “social networks for researchers” category that have gotten substantial for-profit funding. According to TechCrunch, Academia.edu has raised several million dollars, ResearchGate has raised at least $35 million, and Mendeley was acquired by a major journal publisher this year for $69-100 million. It’s not clear to us just how we should think of for-profit capital; there seem to be large amounts of funding available for groups that are successfully changing the way researchers share their work, but it’s an open question how aligned the incentives of for-profit investors are with the vision of “making science more open” discussed in the previous section. All three of these companies do explicitly discuss efforts to “make science more open” as being an important part of their overall goals.

    Another important presence is the Alfred P. Sloan Foundation, which we have published conversation notes from. The Sloan Foundation appears to be mostly focused on funding platforms and tools that will make it easier for researchers to operate along the lines of “open science”:

    Researchers have various reasons for not sharing their data and code, but the difficulty of sharing it in a public context is often the easiest explanation for not doing so. If it became easier to share, then researchers might feel more pressure to share, because the technical excuse would cease to be credible.

    Other funders we encountered in this area were generally newer to the space:

    • The Gordon and Betty Moore Foundation is currently launching a five-year, $60 million Data-Driven Discovery Initiative.
    • The Laura and John Arnold Foundation recently made a $5 million grant through their Research Integrity program to launch the Center for Open Science.
    • The Andrew W. Mellon Foundation, which typically focuses on the humanities, has a Scholarly Communication and Information Technology program that spent $26 million in 2011 (big PDF), much of it going to support libraries and archives but some going to the kinds of novel approaches described above.

    In general, it seems to us that there is currently much more organizational activity on the “building tools and platforms” front than on the “changing incentives and advocating for better practices” front. This can be seen by comparing the “Advocacy” groups in our landscape spreadsheet to the other groups, as well as through the preceding two paragraphs, though the relative youth of the Moore and Arnold Foundations in this space is a source of significant uncertainty in that view. Another possibility is that much of the work being done to change incentives and improve practices happens at the disciplinary or journal level in ways that aren’t caught by the interview process that we conducted.

    What are possible next steps for further investigation?
    We are unlikely to put substantially more time into this cause until we’ve examined some other causes. A major justification for doing a “medium-depth” investigation of this cause was to experiment with the idea of a “medium-depth review” itself, and we intend to do more “medium-depth reviews” as our research progresses. That said, we are likely to take minor steps to improve our understanding and stay updated on the cause, and we are open to outstanding giving opportunities in this cause if they meet our working criteria.

    If we were to aim for the next level of understanding of this cause, we would:

    • Improve our understanding of the size, scope and consequences of the problems listed in the “What is the problem?” section, seeking to understand how much benefit we could expect from a transition from traditional to “open” science. We would also attempt to gauge the progress that has been made on this front so far, to get a sense of the likely returns to further funding (with the possibility that speedy progress to date may reflect an underlying inevitable process that may limit the need for much greater funding).
    • Try to improve our relationships with and understanding of other funders in the space. Since there are several funders that are relatively new and/or have agendas that we don’t know a great deal about, it is very important to understand how they’re thinking so that we can focus on underfunded areas.
    • Have further conversations with the organizations included in our landscape, with the hope of understanding their missions and funding needs.
    • General-purpose networking in order to deepen our understanding of the landscape and improve our odds of running into potential strong giving opportunities. Alexander plans to attend the Peer Review Congress in Chicago in September, since we see this as a relatively efficient way to interact with a lot of relevant people in a short amount of time. (We’re also hoping that the conference will give us more of a sense of the work going on in what we previously called the “efficiency and integrity of medical research” subset of the metaresearch community, which we have explicitly not included in this discussion.)

    We think these steps would be appropriate ones to take prior to committing substantial funding or undertaking a full-blown strategy development process, though we could envision recommending some funding to particular outstanding giving opportunities that we encountered in the process of learning more about this field.

Grant to Center for Global Development (CGD)

Via the grantmaking process described previously, Good Ventures has decided – with GiveWell’s input – to make a grant to the Center for Global Development (CGD) for general operating support. The grant will be $300,000 paid evenly over the next three years. This post lays out the thinking behind this grant. As mentioned previously, this grant is distinct from our charity recommendations in terms of the primary justification.

The observation that led to this grant – and underlies much of the reasoning behind it – is that we (GiveWell and Good Ventures) are relying on CGD substantially for help with our learning agenda. This has implications for all of the principles we previously laid out for making grants:

  • A grant to CGD is likely to have “learning value” via increasing our access to CGD.
  • Because we’re directly engaging with CGD’s “product,” we feel relatively well positioned to evaluate the quality of that “product” and by extension the quality of CGD. Though our view of CGD is far from exhaustive, what we have seen of the organization is quite positive, which implies to us that general operating support is a good giving opportunity.
  • Because CGD’s work is important and valuable to us and because CGD relies on (and seeks) philanthropic funding, we believe that supporting CGD falls under the heading of “good citizenship” discussed previously.

In general, we’re planning to frequently consider grants to organizations whose work is highly valuable to our research, because such situations tend to be associated with the above points: they tend to be situations in which we value access, in which we are reasonably positioned to have a favorable view of at least a part of the organization’s work, and in which “good citizenship” principles call for providing support.

Our experience as “customers” of CGD
We have benefited from CGD’s work for several years, dating back to when we were focused exclusively on direct aid.

  • Millions Saved has been a valuable resource for us in identifying interventions that have worked at scale. It’s the only work we’ve seen that has collected success stories like these into one place, and we’ve vetted the report and found it to be of reasonably good quality. We are currently engaged in an update of the report and have found this project to be one of the most promising “shovel-ready” opportunities within the category of history of philanthropy.
  • We have been extremely impressed with – and helped by – David Roodman, one of CGD’s resident scholars. We have made use of his critiques of nonexperimental studies of microfinance, review of the problems with literature on the macro effects of aid, and review of higher-quality evidence regarding microfinance. He’s the person we know of who has made the biggest contributions to examining the validity and reliability of research relevant to foreign aid (something we have looked for a great deal). In addition, I found his book Due Diligence to be the best discussion of microfinance I’ve seen, and to be generally a model of analysis that is simultaneously thoughtful and careful, holistic (looking at many different angles of a problem), and transparent (in the sense that it is always clear what his claims are based on).
  • More recently, we have had conversations with Todd Moss and Michael Clemens (both CGD staff) as part of our shallow investigations of new causes. We spoke to Dr. Clemens because we had been repeatedly pointed to him as a leading scholar and advocate on the topic of international migration (and his work in reviewing the literature was a major contributor to sparking our interest in this cause as a potentially high-impact one in the first place). We spoke to Dr. Moss because we had run across his work in our review of discussions of developing-world infrastructure (writeup forthcoming). In both cases, we found the conversations helpful and informative, though we have not independently investigated the accuracy of the statements made in these conversations.
  • As we continue to go down our list of potential shallow investigations, there are many more for which we anticipate that speaking to a CGD staffer will be a good starting point. This is because of CGD’s relatively unusual standing as an organization that examines both the intellectual and practical/political aspects of designing policy to help the global poor.
  • As a more minor point, we found our conversation with Lant Pritchett (also a CGD staffer) to be one of the more interesting open-ended conversations we’ve had on how a funder can accomplish as much good as possible. Dr. Pritchett had concrete ideas for areas that could plausibly be high-impact and do not seem to be already “crowded” with funders.

More on CGD as an organization
CGD describes itself as conducting “research and analysis on a wide range of topics related to how rich country policies impact people in the developing world.” Looking across CGD’s topics, initiatives, and experts, the consistent picture is of an organization doing analysis on practical policy ideas aimed at improving conditions for the world’s poorest. We know of few other organizations with similar missions.

We have not done an exhaustive review of CGD’s activities and the case for each, and in particular we know little about CGD’s influence. (We have also not done a room-for-more-funding analysis, though we do know that CGD is soliciting general operating support.) But the fact that the people and work we’ve seen so far are generally high-quality – and that CGD’s activities are both broad and consistently aimed at a population that we think is particularly appropriate as a target of philanthropic efforts – points to CGD as an organization that stands out on both mission and staff, and therefore can potentially do substantial good with general operating support.

One more point influencing our overall impression of CGD is its data disclosure policy, announced in 2011. We have previously written about our interest in data/code sharing, and we know of no other research organization (focused on social sciences) with a similar policy, even two years later.

Grant size and structure
We haven’t done an in-depth investigation of CGD, and since much of our goal is to fulfill the goals of “access” and “good citizenship” discussed previously, we have tried to settle on a grant large enough to show seriousness of support and to ensure that CGD will consider interacting with us to be worth its time. Via informal conversations with other funders and organizations, we have arrived at $100,000 per year as a reasonable figure to accomplish this for a fairly large (~$10 million per year) and established organization such as CGD.

The grant is a three-year grant ($100,000 each year for the next three years). As we will be discussing in a future post, we believe that providing multiple-year commitments is helpful for other organizations’ planning, and that a three-year grant is therefore substantially better for the grantee than a one-year grant renewed twice.

Our take on “earning to give”

GiveWell exists to help people do as much good as possible with their financial giving. We’re interested in the related question of how to do as much good as possible with one’s talents and career choice, and so we’ve been interested in the debate that has sprung up around last month’s article by Dylan Matthews on “earning to give.”

One of the reasons that we have chosen to focus our analysis on how to give well – rather than on how to choose a career well – is that we feel the latter is much harder to provide general insight about. Everyone’s dollars are the same, but everyone’s talents are different – so even if two people have identical views about the most important causes, the most promising solutions, and the best organizations, they may rightly end up doing two very different jobs if they have different abilities. As stated previously, we are generally skeptical of taking expected-value figures like “$2500 per life saved” literally in any context, and we don’t endorse choosing one’s career based on explicit quantification of expected good accomplished. I elaborated on this thinking in an interview with 80,000 Hours.

With that said, we believe that the “earning to give” idea has something very valuable about it: it represents a broadening of the set of options one considers as possibilities for doing good.

The conventional wisdom that “doing good means working for a nonprofit,” in our view, represents an “easy way out” – a narrowing of options before learning and deliberation begin to occur. We believe that many of the jobs that most help the world are in the for-profit sector, not just because of the possibility of “earning to give” but because of the general flow-through effects of creating economic value. Considering both nonprofit and for-profit jobs means that one will (hopefully) end up with a better-fitting, higher-impact (and more personally satisfying) job in one area or the other.

In a previous post, I alluded to a distinction between extreme quantification (basing one’s decisions on shaky, guesswork-filled estimates of expected value) and systematicity (examining as many options as possible and being deliberate and transparent about choosing between them). That distinction is relevant here. We wouldn’t be happy to see more people basing their career decisions on things like “lifetime earnings divided by cost per life saved estimate.” But we would be happy to see more people – with their jobs as well as with their giving – being proactive rather than reactive and putting all the options on the table.

In both giving and working, we feel that most people consider too few options, do too little reflection, and place too little weight on helping others. They give to the charities that they happen to come into contact with, and they make early decisions about careers that often are not fully informed and are not later revisited. When we speak of an “effective altruism” movement, we picture people asking not “How can I feel good?” or even “How can I do good?” but “How can I do as much good as possible?” – not out of obligation or guilt, but out of genuine excitement at the thought of making a positive difference and hunger to make that difference as big as they can. That’s a movement we’re excited to see growing, and we’re excited about “earning to give” as one option among many.