The GiveWell Blog

Empowerment and catastrophic risk

In previous posts, I have:

  • Laid out the view that in general, further economic development and general human empowerment are likely to be substantially net positive, and are likely to lead to improvement on many dimensions in unexpected ways.
  • Listed possible global catastrophic risks that provide a potential counterpoint to this view, while also noting “global upside possibilities” in which progress could lead to a future that is far brighter than the present.

This post attempts to lay out my reasons for thinking that speeding the pace of global development and empowerment should be thought of as increasing humanity’s odds of an extremely bright future, relative to its odds of a future that is worse than the present. Note that

  • I focus here on slightly to moderately speeding or slowing the pace of global development and empowerment relative to what it is today; this takes for granted that we can expect to see substantial development and empowerment in our future, and simply asks whether it is desirable that this development/empowerment happen more quickly or more slowly.
  • I focus on the odds of an extremely bright future relative to the odds of a future that is worse than the present. This means that I’m not only considering the contribution of empowerment and development to catastrophic risk; I’m also considering their contribution to “global upside possibilities.”

1. Some catastrophic risks seem clearly reduced, and not exacerbated, by technological/economic progress. These include “non-anthropogenic” risks, such as asteroids, supervolcanoes, and non-engineered pandemics. Development may give us better tools for anticipating and responding to these risks, and is unlikely to make them worse. In addition, risks like #4 and #5 from the previous post on this topic – which involve risks of slowing growth due to shortage of a particular resource, or a slowdown in innovation – seem clearly mitigated by a faster pace of development.

2. Even for the catastrophic risks that seem exacerbated by development, I believe that faster development is likely safer than slower development (or, at worst, the net effect is highly ambiguous). This belief is based on the previously articulated concept of “global upside possibilities” – the belief that sufficient development may make the world not only better, but less at risk for major disruption by global catastrophe. If one accepts this view, it follows that faster overall development would mean less time between (a) the emergence of a given danger and (b) other developments that dramatically reduce risks. For example, faster development may bring the day closer when a highly dangerous synthetic pandemic can be designed, but it will also bring the day closer when we have the technologies and resources to manage such a risk (as well as potentially speeding the improvement of decision-making abilities and mental health worldwide, improving the capabilities of those who would mitigate such a risk and reducing the number of people who would contribute to it). Likewise, faster development may lead to higher carbon emissions, but is also likely to lead to better progress on alternative energy sources, more resources for adaptation mechanisms (much of the impact of climate change depends on these resources), and generally an environment more favorable to investing in climate change prevention.

There are certainly limitations to this reasoning. For one thing, it addresses “general” economic/technological development; the point remains that empowering people and developing technologies that are particularly likely to exacerbate risks can increase net risk, and that for any given risk there are particular kinds of growth that are more and less problematic in terms of that risk. (For example, the ideal scenario for dealing with climate change is one in which we see strong growth but also reduce carbon emissions.)

In addition, if a particular risk has been clearly identified before it becomes technologically possible, and there is a promising plan for averting it, it could be safer to experience slower development while that plan is executed. However, I know of no compelling examples of such dynamics today. (And in general, it is likely to be much easier to design a plan for responding to a risk when the risk is real and concrete rather than hypothetical.)

3. I believe that a large proportion of the risk of global catastrophe comes from the category of “risks that remain unarticulated and unimagined.” I don’t believe the list we made previously – or any list that can be constructed with today’s available information – is close to comprehensive: I expect that many of the most threatening risks are simply outside what we are able to anticipate today.

I would guess that some such risks become nearer as economic/technological development progresses, while some do not. But in all cases, I believe that economic/technological development is likely to improve our resources for anticipating, preventing and adapting to global catastrophes, and that for the reasons articulated above, faster development is more likely to reduce the lag between the emergence of risks and responses to them (including “global upside possibilities” that dramatically reduce risks).

4. A key part of my view is the belief that there are few outstanding cases in which it is clear that very particular actions need to be taken to avert particular risks. If there were a more compelling set of cases in which the right course of action were known, I would be more likely to believe that “slowing development until the right course of action can play out reduces risks, and generically speeding development increases them.” But as it is, I don’t see such clear-cut cases. The cases in which the necessary actions are clearest to me are those of asteroids (which I think is a clear-cut case in which development reduces risks) and climate change (which I see as highly ambiguous regarding the question of whether faster development is desirable, as discussed above). Thus, I don’t see a strong case for safety benefits to slower development.

I remain highly open to the possibility that particular risks represent excellent giving opportunities, and that focusing on them may do more good than simply focusing on increasing development and empowerment. But I am not aware of what I consider a strong case for believing that development in general increases the odds of a badly disrupted future relative to an extremely bright one, and I believe there are strong reasons to believe that development improves our prospects on net.

Our landscape of the open science community

[Added August 27, 2014: GiveWell Labs is now known as the Open Philanthropy Project.]

We previously wrote about a decision to complete a “medium-depth investigation” of the cause of open science: promoting new ways of producing, sharing, reviewing, and evaluating scientific research. The investigation broadly fits under the heading of GiveWell Labs research, which we are conducting in partnership with Good Ventures.

We have now completed the “medium-depth investigation,” led by Senior Research Analyst Alexander Berger, and have written up the investigative process we followed and the output that process produced (XLS). This post synthesizes that output, and gives our current views on the following questions:

  • What is the problem? The traditional journal system, which plays a key role in academic research across a variety of fields, has many limitations that might be addressed by a less traditional, more web-based, more generally “open” approach to publishing research.
  • What are possible interventions? Organizations are engaged in a wide variety of approaches, including building tools that facilitate new ways of publishing and evaluating research and conducting campaigns to increase the extent to which researchers share useful (but not generally traditional-journal-worthy) information.
  • Who else is working on this? Some for-profit organizations have gotten significant funding; on the nonprofit side, there are several foundations working on various aspects of the problem, though most are relatively new to the space. We have the sense that there is currently little funding available for groups focused on changing incentives and doing advocacy (as opposed to building tools and platforms), though we don’t have high confidence in this view.
  • What are possible steps for further investigation? If we were to investigate this cause more deeply, we’d seek a better understanding of the positive consequences that a shift to “open science” might bring, the rate at which such a shift is already occurring, and the organizations and funders that are currently in this space.

Note that these questions match those we have been asking in our shallow investigations.

Overall, we feel that we’ve significantly improved our understanding of this space, though major questions remain. Our main takeaways are as follows:

  • We see less “room for more philanthropy” in the space of supporting tools and platforms than we expected, partly because of the presence of for-profit organizations, some of which have substantial funding.
  • We see more such room in the space of “advocacy and incentives” than we expected, as most of the organizations in that category seem to have relatively little in terms of funding.
  • We still have some major questions about this space. One set of questions concerns how beneficial a transition to “open science” would be, and how much a philanthropist might hope to speed it along; we think we could gain substantial ground on this question with further work. Another set of questions, however, involves how new funders who are entering this space will approach the problem. These questions will be hard to answer without letting time pass.

Details follow.

    What is the problem?
    The general picture that we felt emerged from our conversations was as follows:
    The traditional journal system plays a crucial role in modern academic research. Academics seek to publish in prestigious journals; academics largely assess each other (for purposes of awarding tenure among other things) by their records of publishing in prestigious journals. Yet the traditional system is problematic in many ways:

    • Journals usually charge fees for access to publications; an alternative publication system could include universal open access to academic research.
    • Journals use a time-consuming peer-review process that doesn’t necessarily ensure that a paper is reliable or error-free.
    • Journals often fail to encourage or facilitate optimal sharing of data and code (as well as preregistration), and the journal system gives authors little reason to go out of their way to share.
    • Journals often have conventions that run counter to the goal of producing as much social value as possible. They may favor “newsworthy” results, leading to publication bias; they may favor publishing novel analysis over replications, reanalyses and debates; they may have arbitrary length requirements that limit the amount of detail that can be included; they may have other informal preferences that discourage certain forms of investigation, even when those investigations would be highly valuable. This is particularly problematic because considerations about “what a top journal might publish” appear to drive much of the incentive structure for researchers.

    It is not difficult to imagine a world in which scientists habitually publish their work in online venues other than (or in addition to) traditional journals, and follow substantially different practices from those encouraged by the journal system. Depending on the norms and tools that sprang up around such a practice, this could lead to:

    • More widespread sharing of data and code.
    • More and better replications, and therefore potentially improved reproducibility.
    • More online debate and discussion that could provide alternatives to peer review in terms of evaluating the value of research. Such alternative evaluation methods could be faster, more reliable, and more flexible than peer review, thus encouraging many of the valuable practices that peer review does not sufficiently encourage.
    • More efficient and flexible collaboration, as researchers could more easily find other researchers working on similar topics and could more easily synthesize the work relevant to theirs.

    A unifying theme is the possibility of science’s becoming more “open” – of sharing academic research both widely (such that anyone can access it) and deeply (sharing far more information than is in a typical journal article) – leading to more possibilities for both critique and collaboration.

    Such changes could span a wide range of fields, from biology to development economics to psychology, leading to many difficult-to-forecast positive impacts. If we were to recommend this cause, we would ultimately have to do the best we could to evaluate the likely size of such benefits, but we haven’t undertaken to do so at this time, focusing instead on the landscape of people, organizations and approaches working to bring this transition about. (Much of our investigation to date on “open science” has focused on biomedical research because we believe that biomedical research is likely to deliver significant humanitarian value over the long term—and because it constitutes roughly half of all research funded in the U.S.—but this is something we would investigate further before committing to this area.)

    What are possible interventions?
    The “Organizations” sheet of our landscape spreadsheet (XLS) lists groups working on many different aspects of open science:

    • Altmetrics – metrics for evaluating the use/influence/importance of research that go beyond the traditional measures of “where a paper is published and how many citations it has.”
    • Post-publication peer review – tools that allow online critique and discussion of research, beyond the traditional journal-based prospective peer review process.
    • Innovative open access publishing, including preprints – models that facilitate sharing research publicly rather than simply publishing it in closed journals, sometimes prior to any peer review occurring.
    • Sharing data and code – projects that encourage researchers to share more information about their research, by providing tools to make sharing easier or by creating incentives to share.
    • Reproducibility – projects that focus on assessing and improving the reproducibility of research, something that the traditional journal system has only very limited mechanisms to address.
    • Attribution – tools allowing researchers to cite each other’s work in nontraditional ways, thus encouraging nontraditional practices (such as data-sharing).
    • Advocacy – public- or government-focused campaigns aiming to encourage open access, data/code sharing, and other practices that might have social benefits but private costs for researchers or publishers.
    • Alternative publication and peer review models – providing novel ways for researchers to disseminate their research processes and findings and have them reviewed (pre-publication).
    • Social networks – platforms encouraging researchers to connect with each other, and in the process to share their research in nontraditional forums.

    The process by which we found these groups and categorized them is described on our process page. We’ve posted an editable version of the spreadsheet on Google Drive, and we welcome any edits or additions to that version.

    Who else is working on this?
    The “Funders” sheet of our landscape spreadsheet (XLS) lists the major funders we’ve come across in this field.

    One important presence in the funding landscape is for-profit capital. Macmillan Publishers, owner of Nature (one of the most prestigious scientific journals), owns a group called Digital Science, which runs and/or funds multiple projects working to address these issues. In addition, there are three organizations we know of in the “social networks for researchers” category that have gotten substantial for-profit funding. According to TechCrunch, Academia.edu has raised several million dollars, ResearchGate has raised at least $35 million, and Mendeley was acquired by a major journal publisher this year for $69-100 million. It’s not clear to us how we should think of for-profit capital; there seem to be large amounts of funding available for groups that are successfully changing the way researchers share their work, but it’s an open question how aligned the incentives of for-profit investors are with the vision of “making science more open” discussed in the previous section. All three of these companies do explicitly discuss efforts to “make science more open” as being an important part of their overall goals.

    Another important presence is the Alfred P. Sloan Foundation, which we have published conversation notes from. The Sloan Foundation appears to be mostly focused on funding platforms and tools that will make it easier for researchers to operate along the lines of “open science”:

    Researchers have various reasons for not sharing their data and code, but the difficulty of sharing it in a public context is often the easiest explanation for not doing so. If it became easier to share, then researchers might feel more pressure to share, because the technical excuse would cease to be credible.

    Other funders we encountered in this area were generally newer to the space:

    • The Gordon and Betty Moore Foundation is currently launching a 5-year, $60 million Data-Driven Discovery Initiative.
    • The Laura and John Arnold Foundation recently made a $5 million grant through their Research Integrity program to launch the Center for Open Science.
    • The Andrew W. Mellon Foundation, which typically focuses on the humanities, has a Scholarly Communication and Information Technology program that spent $26 million in 2011 (big PDF), much of it going to support libraries and archives but some going to the kinds of novel approaches described above.

    In general, it seems to us that there is currently much more organizational activity on the “building tools and platforms” front than on the “changing incentives and advocating for better practices” front. This can be seen by comparing the “Advocacy” groups in our landscape spreadsheet to the other groups, as well as through the preceding two paragraphs, though the relative youth of the Moore and Arnold Foundations in this space is a source of significant uncertainty in that view. Another possibility is that much of the work being done to change incentives and improve practices happens at the disciplinary or journal level in ways that aren’t caught by the interview process that we conducted.

    What are possible next steps for further investigation?
    We are unlikely to put substantially more time into this cause until we’ve examined some other causes. A major justification for doing a “medium-depth” investigation of this cause was to experiment with the idea of a “medium-depth review” itself, and we intend to do more “medium-depth reviews” as our research progresses. That said, we are likely to take minor steps to improve our understanding and stay updated on the cause, and we are open to outstanding giving opportunities in this cause if they meet our working criteria.

    If we were to aim for the next level of understanding of this cause, we would:

    • Improve our understanding of the size, scope and consequences of the problems listed in the “What is the problem?” section, seeking to understand how much benefit we could expect from a transition from traditional to “open” science. We would also attempt to gauge the progress that has been made on this front so far, to get a sense of the likely returns to further funding (with the possibility that speedy progress to date may reflect an underlying inevitable process that may limit the need for much greater funding).
    • Try to improve our relationships with and understanding of other funders in the space. Since there are several funders that are relatively new and/or have agendas that we don’t know a great deal about, it is very important to understand how they’re thinking so that we can focus on underfunded areas.
    • Have further conversations with the organizations included in our landscape, with the hope of understanding their missions and funding needs.
    • Engage in general-purpose networking to deepen our understanding of the landscape and improve our odds of running into potential strong giving opportunities. Alexander plans to attend the Peer Review Congress in Chicago in September, since we see this as a relatively efficient way to interact with a lot of relevant people in a short amount of time. (We’re also hoping that the conference will give us more of a sense of the work going on in what we previously called the “efficiency and integrity of medical research” subset of the metaresearch community, which we have explicitly not included in this discussion.)

    We think these steps would be appropriate ones to take prior to committing substantial funding or undertaking a full-blown strategy development process, though we could envision recommending some funding to particular outstanding giving opportunities that we encountered in the process of learning more about this field.


Grant to Center for Global Development (CGD)

Via the grantmaking process described previously, Good Ventures has decided – with GiveWell’s input – to make a grant to the Center for Global Development (CGD) for general operating support. The grant will be $300,000 paid evenly over the next three years. This post lays out the thinking behind this grant. As mentioned previously, this grant is distinct from our charity recommendations in terms of the primary justification.

The observation that led to this grant – and underlies much of the reasoning behind it – is that we (GiveWell and Good Ventures) are relying on CGD substantially for help with our learning agenda. This has implications for all of the principles we previously laid out for making grants:

  • A grant to CGD is likely to have “learning value” via increasing our access to CGD.
  • Because we’re directly engaging with CGD’s “product,” we feel relatively well positioned to evaluate the quality of that “product” and by extension the quality of CGD. Though our view of CGD is far from exhaustive, what we have seen of the organization is quite positive, which implies to us that general operating support is a good giving opportunity.
  • Because CGD’s work is important and valuable to us and because CGD relies on (and seeks) philanthropic funding, we believe that supporting CGD falls under the heading of “good citizenship” discussed previously.

In general, we’re planning to frequently consider grants to organizations whose work is highly valuable to our research, because such situations tend to be associated with the above points: they tend to be situations in which we value access, in which we are reasonably positioned to have a favorable view of at least a part of the organization’s work, and in which “good citizenship” principles call for providing support.

Our experience as “customers” of CGD
We have benefited from CGD’s work for several years, dating back to when we were focused exclusively on direct aid.

  • Millions Saved has been a valuable resource for us in identifying interventions that have worked at scale. It’s the only work we’ve seen that has collected success stories like these into one place, and we’ve vetted the report and found it to be of reasonably good quality. We are currently engaged in an update of the report and have found this project to be one of the most promising “shovel-ready” opportunities within the category of history of philanthropy.
  • We have been extremely impressed with – and helped by – David Roodman, one of CGD’s resident scholars. We have made use of his critiques of nonexperimental studies of microfinance, review of the problems with literature on the macro effects of aid, and review of higher-quality evidence regarding microfinance. He’s the person we know of who has made the biggest contributions to examining the validity and reliability of research relevant to foreign aid (something we have looked for a great deal). In addition, I found his book Due Diligence to be the best discussion of microfinance I’ve seen, and to be generally a model of analysis that is simultaneously thoughtful and careful, holistic (looking at many different angles of a problem), and transparent (in the sense that it is always clear what his claims are based on).
  • More recently, we have had conversations with Todd Moss and Michael Clemens (both CGD staff) as part of our shallow investigations of new causes. We spoke to Dr. Clemens because we had been repeatedly pointed to him as a leading scholar and advocate on the topic of international migration (and his work in reviewing the literature was a major contributor to sparking our interest in this cause as a potentially high-impact one in the first place). We spoke to Dr. Moss because we had run across his work in our review of discussions of developing-world infrastructure (writeup forthcoming). In both cases, we found the conversations helpful and informative, though we have not independently investigated the accuracy of the statements made in these conversations.
  • As we continue to go down our list of potential shallow investigations, there are many more for which we anticipate that speaking to a CGD staffer will be a good starting point. This is because of CGD’s relatively unique standing as an organization that examines both the intellectual and practical/political aspects of designing policy to help the global poor.
  • As a more minor point, we found our conversation with Lant Pritchett (also a CGD staffer) to be one of the more interesting open-ended conversations we’ve had on how a funder can accomplish as much good as possible. Dr. Pritchett had concrete ideas for areas that could plausibly be high-impact and do not seem to be already “crowded” with funders.

More on CGD as an organization
CGD describes itself as conducting “research and analysis on a wide range of topics related to how rich country policies impact people in the developing world.” Looking across CGD’s topics, initiatives, and experts, the consistent picture is of an organization doing analysis on practical policy ideas aimed at improving conditions for the world’s poorest. We know of few other organizations with similar missions.

We have not done an exhaustive review of CGD’s activities and the case for each, and in particular we know little about CGD’s influence. (We have also not done room for more funding analysis, though we do know that CGD is soliciting general operating support.) But the fact that the people and work we’ve seen so far are generally high-quality – and that CGD’s activities are both broad and consistently aimed at a population that we think is particularly appropriate as a target of philanthropic efforts – points to CGD as an organization that stands out on both mission and staff, and therefore can potentially do substantial good with general operating support.

One more point influencing our overall impression of CGD is its data disclosure policy, announced in 2011. We have previously written about our interest in data/code sharing, and we know of no other research organization (focused on social sciences) with a similar policy, even two years later.

Grant size and structure
We haven’t done an in-depth investigation of CGD, and since much of our goal is to fulfill the goals of “access” and “good citizenship” discussed previously, we have tried to settle on a grant large enough to show seriousness of support and to ensure that CGD will consider interacting with us to be worth its time. Via informal conversations with other funders and organizations, we have arrived at the figure of $100,000 per year as a reasonable figure to accomplish this for a fairly large (~$10 million per year) and established organization such as CGD.

The grant is a three-year grant ($100,000 each year for the next three years). As we will be discussing in a future post, we believe that providing multiple-year commitments is helpful for other organizations’ planning, and that a three-year grant is therefore substantially better for the grantee than a one-year grant renewed twice.

Our take on “earning to give”

GiveWell exists to help people do as much good as possible with their financial giving. We’re interested in the related question of how to do as much good as possible with one’s talents and career choice, and so we’ve been interested in the debate that has sprung up around last month’s article by Dylan Matthews on “earning to give.”

One of the reasons that we have chosen to focus our analysis on how to give well – rather than on how to choose a career well – is that we feel the latter is much harder to provide general insight about. Everyone’s dollars are the same, but everyone’s talents are different – so even if two people have identical views about the most important causes, the most promising solutions and the best organizations, they may rightly end up doing two very different jobs if they have different abilities. As stated previously, we are generally skeptical of taking expected-value figures like “$2500 per life saved” literally in any context, and we don’t endorse choosing one’s career based on explicit quantification of expected good accomplished. I elaborated on this thinking in an interview with 80,000 Hours.

With that said, we believe that the “earning to give” idea has something very valuable about it: it represents a broadening of the set of options one considers as possibilities for doing good.

The conventional wisdom that “doing good means working for a nonprofit,” in our view, represents an “easy way out” – a narrowing of options before learning and deliberation begin to occur. We believe that many of the jobs that most help the world are in the for-profit sector, not just because of the possibility of “earning to give” but because of the general flow-through effects of creating economic value. Considering both nonprofit and for-profit jobs means that one will (hopefully) end up with a better-fitting, higher-impact (and more personally satisfying) job in one area or the other.

In a previous post, I alluded to a distinction between extreme quantification (basing one’s decisions on shaky, guesswork-filled estimates of expected value) and systematicity (examining as many options as possible and being deliberate and transparent about choosing between them). That distinction is relevant here. We wouldn’t be happy to see more people basing their career decisions on things like “lifetime earnings divided by cost per life saved estimate.” But we would be happy to see more people – with their jobs as well as with their giving – being proactive rather than reactive and putting all the options on the table.

In both giving and working, we feel that most people consider too few options, do too little reflection, and place too little weight on helping others. They give to the charities that they happen to come into contact with, and they make early decisions about careers that often are not fully informed and are not later revisited. When we speak of an “effective altruism” movement, we picture people asking not “How can I feel good?” or even “How can I do good?” but “How can I do as much good as possible?” – not out of obligation or guilt, but out of genuine excitement at the thought of making a positive difference and hunger to make that difference as big as they can. That’s a movement we’re excited to see growing, and we’re excited about “earning to give” as one option among many.

Near-term grantmaking

[Added August 27, 2014: GiveWell Labs is now known as the Open Philanthropy Project.]

As stated previously, we expect that it will take quite a long time for us to reach the point of issuing major recommendations based on our GiveWell Labs work. That said, there have been – and will be – situations in which making a grant is appropriate and helpful. Since we are working closely with Good Ventures on Labs, our default approach has been – and will be – to jointly assess situations in which a grant may be called for, with the final call (and any grant) being made by Good Ventures. (If we encounter a point of disagreement, in which we feel it is important to make a grant and Good Ventures does not, we may approach other donors.) This post lays out the basic principles by which we (GiveWell and Good Ventures) decide when to make a grant.

Note that these grants are importantly different from our official recommendations. There is much less emphasis on thorough investigation and maximizing good accomplished per dollar (though the latter is a consideration), and much more weight placed on practical value to our agenda (particularly learning opportunities).

1. Giving to learn

We’ve written before about the concept of “giving to learn,” stating that “gaining information from an organization … is much easier to obtain as a ‘supporter’ (someone who has helped get funding to an organization in the past) than simply as an evaluator (someone who might help get funding to an organization in the future).”

To elaborate a bit on this idea, there are multiple forms that “giving to learn” can take:

  • A grant can improve our access to an organization that we want to learn more about, or an organization whose personnel are good sources of information. The work we’ve done on co-funding generally goes in this category.
  • A grant may directly pay for work that generates useful information, or may help us influence the direction that such work takes. Potential examples include any grants from our history of philanthropy project, including the recent $50,000 grant to the Millions Saved project.
  • In some cases a grant can be viewed as an “experiment” – a way to test a theory that a particular project will have a particular result, or will more generally be a worthwhile investment. In general, we believe that “betting on one’s beliefs, and seeing what happens” is a good way to learn about the world, though we also think that this approach has major and unusual limitations when it comes to philanthropy. In our experience, understanding the outcomes/results of a given philanthropic project is usually a major undertaking, and it’s easy to learn nothing from a grant if one does not commit to such an undertaking. Therefore, we try to pick “learning grants” of this type carefully. The giving that fits best into this category so far is the money we’ve moved to our top charities, which we believe to be excellent giving opportunities that we can follow, adjusting our views over time.

2. Strong giving opportunities

Because we believe that good accomplished compounds over time, we want to take advantage of unusually strong giving opportunities when we come across them. Doing so will sometimes have the added benefit of providing further “experiments” to learn from in line with the previous section.

We believe that it is usually difficult to assess the quality of a giving opportunity without having strong cause-level knowledge. As such, we expect to make fairly few grants in this category in the near future, though as we expand the set of causes we understand well, we expect to make more over time.

3. Good citizenship

We are just getting started in exploring many relevant areas; our reputation and relationships are important. Therefore, we think it is important to generally behave as “good citizens” when it comes to grantmaking. The idea of being a “good citizen” is a vague one that we’re still fleshing out, but it includes things like:

  • Being direct and open with potential funding partners and grantees, and not withholding information for the sake of saving money.
  • Not behaving in ways that “reward” potential funding partners/grantees for being less than direct and open with us, or “punish” potential funding partners/grantees for being direct and open with us.

Imagine that both we and another funder are considering making the same grant, and we have the feeling that the other funder might make the grant if we did not. In such a case, we could hold back and disguise our interest for the purpose of saving money, but we feel such an action would fail the “good citizenship” test. Rather, we intend to err on the side of making grants that we would have been willing to make under slightly different circumstances (concerning funding partners’ and potential grantees’ plans and preferences). If we value an organization’s help enough that we would be willing to make a “learning grant” to gain better access to it, we will err on the side of making such a grant even if we happen to believe that we could gain such access without a grant. If we are interested enough in a project that we would be willing to fund it if a potential partner weren’t, we will err on the side of contributing to funding even if we feel that the potential partner doesn’t need our help.

Weighing factors and making decisions

We plan to make grants when some combination of the above factors calls for doing so.

For any given grant, we will need to determine the appropriate level of investigation, as well as the appropriate level of followup and public discussion. In all cases, we will announce grants and give at least a basic characterization of the thinking behind them. But we also will be trying to make the level of investigation, followup and public discussion conceptually “proportional” to the size of the grant. The $50,000 grant to Millions Saved is simply too small – in the scope of the amount of funding we hope eventually to direct – to justify the sort of intensive investigation and followup we’ve done of our top charities. On the flip side, if we were contemplating a very large grant (in the millions of dollars), we would generally plan on serious investigation, and accordingly we would have a much higher bar that the grant would have to clear regarding the above criteria. We wouldn’t undertake a major investigation and major grant unless we felt an opportunity was highly outstanding (and/or in line with our learning agenda).

Over the coming months, GiveWell and Good Ventures expect to announce a reasonable number of grants. Such grants will not always be accompanied by exhaustive research or explicit cost-effectiveness analysis, but they will be carefully selected to fulfill the above criteria and further our mission of finding and funding the most outstanding giving opportunities possible.

The moral case for giving doesn’t rely on questionable quantitative estimates

In light of Peter Singer’s TED talk and Dylan Matthews’s piece on “earning to give,” there’s been a fair amount of discussion recently of what one might call “Peter Singer’s challenge,” which I’d roughly summarize as follows:

  • By giving $X to the right charity, you can save a human life.
  • This fact has multiple surprising consequences, such as (a) you morally ought to give as much as possible, and (b) a reasonable path to doing as much good as possible is to pick a maximally high-paying job, to facilitate giving more to charity.

A common response to this reasoning – which one can see in Felix Salmon’s recent post – is to attack the first bullet point. This means disputing the robustness of the “$X saves a life” figure (a figure that is often quoted based on GiveWell’s analysis), and questioning the quantification exercise that generates this figure as being distortive and costly.

We believe that these objections to quantification have serious merit, and in fact we have produced a great deal of content that supports such objections. GiveWell is about giving as well as possible, not specifically about quantifying the expected value of donations. This distinction has become increasingly important to us since the start of our project, and we’ve continually moved in the direction of making our evaluations more holistic. (Some details on how we’ve done so below.)

But we also believe that these objections miss the real heart of Peter Singer’s challenge. In many ways we think that Peter and others do their own argument a disservice when they rely on the “$X saves a life” figure: such a figure is both open to reasonable attack and unnecessary to make the core point.

To us, the strongest form of the challenge is not “How much should I give when $X saves a life?” but “How much should I give, knowing that I have massive wealth compared to the global poor?” Perhaps the most vivid illustration comes not from Against Malaria Foundation (our #1-rated charity) but from GiveDirectly (our #2). If you give $1000 to GiveDirectly, ~$900 will end up in the hands of people whose resources are a tiny fraction of yours. GiveDirectly’s estimate – which we believe is less sensitive to guesswork than “cost per life saved” figures – is that recipients live on ~65 cents per day, implying that such a donation could roughly double the annual consumption of a family of four, not counting any long-term benefits. We may not know exactly how many lives that saves, if any, but we find it a compelling figure nonetheless, and one that calls for far more generous giving than what’s “normal.”
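The “roughly double” claim is simple arithmetic, and it’s worth seeing it spelled out. The sketch below uses only the figures from this post (~$900 of a $1000 gift delivered, ~65 cents per person per day, a family of four); the variable names are our own.

```python
# Back-of-envelope check of the "roughly double annual consumption" claim.
# All input figures come from the post; this is illustrative, not GiveDirectly's model.

transfer_delivered = 1000 * 0.90        # ~$900 of a $1000 gift reaches recipients
consumption_per_person_per_day = 0.65   # recipients live on ~65 cents/day
family_size = 4

annual_family_consumption = consumption_per_person_per_day * 365 * family_size
boost_ratio = transfer_delivered / annual_family_consumption

print(f"Annual family consumption: ${annual_family_consumption:.0f}")
print(f"Transfer as a share of it: {boost_ratio:.0%}")
```

A family of four at 65 cents per person per day consumes about $949 per year, so a ~$900 transfer is about 95% of annual consumption – close enough to justify the “roughly double” framing.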

Those figures aren’t precise, and we believe our #1 charity accomplishes even more good per dollar, but we believe the broad point to be quite robust: whether or not the money I spend on luxuries could have literally saved a life, it’s money that could do a lot more for someone else than it does for me. Jason Trigg’s attitude is, in my view, defensible based on this consideration alone.

This version of Peter Singer’s challenge relies not on the fragile estimates GiveWell produces, but on an extremely robust and nearly undisputed set of observations about extraordinary global inequalities. And it challenges us to give not just money, but time, thought, and whatever else we can spare.

We believe strongly in the value of healthy skepticism toward charities and toward cost-effectiveness estimates. What we don’t believe in is using such skepticism as an excuse to dodge questions about the appropriate level of generosity. We fear that Peter Singer and his advocates sometimes enable this dodge by relying so heavily on “cost per life saved” type figures.

The global distribution of wealth is mind-bogglingly uneven, and the readers of this blog are mostly on the privileged side of the divide. We have the informational and technological tools to help others enormously just by writing checks. These are facts that are hard to dispute, and they’re facts that raise some uncomfortable questions about how we should manage our lives and our budgets. We welcome (and instigate) debates over both our methodology and our particular recommendations, but such debates shouldn’t distract us from the moral case for giving.

Some notes on GiveWell’s relationship to “quantified giving”

We think it’s worth addressing some of the specific objections that Felix Salmon gave to the methodology of “quantified giving,” because in many cases we feel that we have not only acknowledged such objections but have put substantial work into fleshing them out, supporting them, and embracing their consequences. Specifically:

We believe that “systematically examining all options with the aim of doing as much good as possible, and being highly transparent about our reasoning” is often conflated with “making decisions based on explicit quantifications of good accomplished.” As long as the two are held equivalent, the project of “effective altruism” will be on shaky ground. But we believe the two are not equivalent – that it is possible to be simultaneously holistic, systematic and transparent. We will be writing more about the distinction.