The GiveWell Blog

Our take on “earning to give”

GiveWell exists to help people do as much good as possible with their financial giving. We’re interested in the related question of how to do as much good as possible with one’s talents and career choice, and so we’ve been interested in the debate that has sprung up around last month’s article by Dylan Matthews on “earning to give.”

One of the reasons that we have chosen to focus our analysis on how to give well – rather than on how to choose a career well – is that we feel the latter is much harder to provide general insight about. Everyone’s dollars are the same, but everyone’s talents are different – so even if two people have identical views about the most important causes, the most promising solutions and the best organizations, they may rightly end up doing two very different jobs if they have different abilities. As stated previously, we are generally skeptical of taking expected-value figures like “$2500 per life saved” literally in any context, and we don’t endorse choosing one’s career based on explicit quantification of expected good accomplished. I elaborated on this thinking in an interview with 80,000 Hours.

With that said, we believe that the “earning to give” idea has something very valuable about it: it represents a broadening of the set of options one considers as possibilities for doing good.

The conventional wisdom that “doing good means working for a nonprofit,” in our view, represents an “easy way out” – a narrowing of options before learning and deliberation begin to occur. We believe that many of the jobs that most help the world are in the for-profit sector, not just because of the possibility of “earning to give” but because of the general flow-through effects of creating economic value. Considering both nonprofit and for-profit jobs means that one will (hopefully) end up with a better-fitting, higher-impact (and more personally satisfying) job in one area or the other.

In a previous post, I alluded to a distinction between extreme quantification (basing one’s decisions on shaky, guesswork-filled estimates of expected value) and systematicity (examining as many options as possible and being deliberate and transparent about choosing between them). That distinction is relevant here. We wouldn’t be happy to see more people basing their career decisions on things like “lifetime earnings divided by cost per life saved estimate.” But we would be happy to see more people – with their jobs as well as with their giving – being proactive rather than reactive and putting all the options on the table.

In both giving and working, we feel that most people consider too few options, do too little reflection, and place too little weight on helping others. They give to the charities that they happen to come into contact with, and they make early decisions about careers that often are not fully informed and are not later revisited. When we speak of an “effective altruism” movement, we picture people asking not “How can I feel good?” or even “How can I do good?” but “How can I do as much good as possible?” – not out of obligation or guilt, but out of genuine excitement at the thought of making a positive difference and hunger to make that difference as big as they can. That’s a movement we’re excited to see growing, and we’re excited about “earning to give” as one option among many.

Near-term grantmaking

[Added August 27, 2014: GiveWell Labs is now known as the Open Philanthropy Project.]

As stated previously, we expect that it will take quite a long time for us to reach the point of issuing major recommendations based on our GiveWell Labs work. That said, there have been – and will be – situations in which making a grant is appropriate and helpful. Since we are working closely with Good Ventures on Labs, our default approach has been – and will be – to jointly assess situations in which a grant may be called for, with the final call (and any grant) being made by Good Ventures. (If we encounter a point of disagreement, in which we feel it is important to make a grant and Good Ventures does not, we may approach other donors.) This post lays out the basic principles by which we (GiveWell and Good Ventures) decide when to make a grant.

Note that these grants are importantly different from our official recommendations. There is much less emphasis on thorough investigation and maximizing good accomplished per dollar (though the latter is a consideration), and much more weight placed on practical value to our agenda (particularly learning opportunities).

1. Giving to learn

We’ve written before about the concept of “giving to learn,” stating that “gaining information from an organization … is much easier to obtain as a ‘supporter’ (someone who has helped get funding to an organization in the past) than simply as an evaluator (someone who might help get funding to an organization in the future).”

To elaborate a bit on this idea, there are multiple forms that “giving to learn” can take:

  • A grant can improve our access to an organization that we want to learn more about, or an organization whose personnel are good sources of information. The work we’ve done on co-funding generally goes in this category.
  • A grant may directly pay for work that generates useful information, or may help us influence the direction that such work takes. Potential examples include any grants from our history of philanthropy project, including the recent $50,000 grant to the Millions Saved project.
  • In some cases a grant can be viewed as an “experiment” – a way to test a theory that a particular project will have a particular result, or will more generally be a worthwhile investment. In general, we believe that “betting on one’s beliefs, and seeing what happens” is a good way to learn about the world, though we also think that this approach has major and unusual limitations when it comes to philanthropy. In our experience, understanding the outcomes/results of a given philanthropic project is usually a major undertaking, and it’s easy to learn nothing from a grant if one does not commit to such an undertaking. Therefore, we try to pick “learning grants” of this type carefully. The giving that fits best into this category so far is the money we’ve moved to our top charities, which we believe to be excellent giving opportunities that we can follow and adjust our views of over time.

2. Strong giving opportunities

Because we believe that good accomplished compounds over time, we want to take advantage of unusually strong giving opportunities when we come across them. Doing so will sometimes have the added benefit of providing further “experiments” to learn from in line with the previous section.

We believe that it is usually difficult to assess the quality of a giving opportunity without having strong cause-level knowledge. As such, we expect to make fairly few grants in this category in the near future, though as we expand the set of causes we understand well, we expect to make more over time.

3. Good citizenship

We are just getting started in exploring many relevant areas; our reputation and relationships are important. Therefore, we think it is important to generally behave as “good citizens” when it comes to grantmaking. The idea of being a “good citizen” is a vague one that we’re still fleshing out, but it includes things like

  • Being direct and open with potential funding partners and grantees, and not withholding information for the sake of saving money.
  • Not behaving in ways that “reward” potential funding partners/grantees for being less than direct and open with us, or “punish” potential funding partners/grantees for being direct and open with us.

Imagine that both we and another funder are considering making the same grant, and we have the feeling that the other funder might make the grant if we did not. In such a case, we could hold back and disguise our interest for the purpose of saving money, but we feel such an action would fail the “good citizenship” test. Rather, we intend to err on the side of making grants that we would have been willing to make under slightly different circumstances (concerning funding partners’ and potential grantees’ plans and preferences). If we value an organization’s help enough that we would be willing to make a “learning grant” to gain better access to it, we will err on the side of making such a grant even if we happen to believe that we could gain such access without a grant. If we are interested enough in a project that we would be willing to fund it if a potential partner weren’t, we will err on the side of contributing to funding even if we feel that the potential partner doesn’t need our help.

Weighing factors and making decisions

We plan to make grants when some combination of the above factors calls for doing so.

For any given grant, we will need to determine the appropriate level of investigation, as well as the appropriate level of followup and public discussion. In all cases, we will announce grants and give at least a basic characterization of the thinking behind them. But we also will be trying to make the level of investigation, followup and public discussion conceptually “proportional” to the size of the grant. The $50,000 grant to Millions Saved is simply too small – in the scope of the amount of funding we hope eventually to direct – to justify the sort of intensive investigation and followup we’ve done of our top charities. On the flip side, if we were contemplating a very large grant (in the millions of dollars), we would generally plan on serious investigation, and accordingly we would have a much higher bar that the grant would have to clear regarding the above criteria. We wouldn’t undertake a major investigation and major grant unless we felt an opportunity was highly outstanding (and/or in line with our learning agenda).

Over the coming months, GiveWell and Good Ventures expect to announce a reasonable number of grants. Such grants will not always be accompanied by exhaustive research or explicit cost-effectiveness analysis, but they will be carefully selected to fulfill the above criteria and further our mission of finding and funding the most outstanding giving opportunities possible.

The moral case for giving doesn’t rely on questionable quantitative estimates

In light of Peter Singer’s TED talk and Dylan Matthews’s piece on “earning to give,” there’s been a fair amount of discussion recently of what one might call “Peter Singer’s challenge,” which I’d roughly summarize as follows:

  • By giving $X to the right charity, you can save a human life.
  • This fact has multiple surprising consequences, such as (a) that you morally ought to give as much as possible, and (b) that a reasonable path to doing as much good as possible is to pick a maximally high-paying job, to facilitate giving more to charity.

A common response to this reasoning – which one can see in Felix Salmon’s recent post – is to attack the first bullet point. This means disputing the robustness of the “$X saves a life” figure (a figure that is often quoted based on GiveWell’s analysis), and questioning the quantification exercise that generates this figure as being distortive and costly.

We believe that these objections to quantification have serious merit, and in fact we have produced a great deal of content that supports such objections. GiveWell is about giving as well as possible, not specifically about quantifying the expected value of donations. This distinction has become increasingly important to us since the start of our project, and we’ve continually moved in the direction of making our evaluations more holistic. (Some details on how we’ve done so below.)

But we also believe that these objections miss the real heart of Peter Singer’s challenge. In many ways we think that Peter and others do their own argument a disservice when they rely on the “$X saves a life” figure: such a figure is both open to reasonable attack and unnecessary to make the core point.

To us, the strongest form of the challenge is not “How much should I give when $X saves a life?” but “How much should I give, knowing that I have massive wealth compared to the global poor?” Perhaps the most vivid illustration comes not from Against Malaria Foundation (our #1-rated charity) but from GiveDirectly (our #2). If you give $1000 to GiveDirectly, ~$900 will end up in the hands of people whose resources are a tiny fraction of yours. GiveDirectly’s estimate – which we believe is less sensitive to guesswork than “cost per life saved” figures – is that recipients live on ~65 cents per day, implying that such a donation could roughly double the annual consumption of a family of four, not counting any long-term benefits. We may not know exactly how many lives that saves, if any, but we find it a compelling figure nonetheless, and one that calls for far more generous giving than what’s “normal.”
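As a rough sanity check on the arithmetic above – using only the ~$900 transfer figure and the ~65-cents-per-day consumption estimate cited in the paragraph, and treating the family size of four as an illustrative assumption:

```python
# Back-of-the-envelope check of the GiveDirectly figure.
# Assumptions (illustrative, taken from the paragraph above):
# ~90% of a $1000 donation reaches recipients, who live on
# roughly $0.65 per person per day, in a family of four.
donation = 1000
transfer = donation * 0.90                 # ~$900 reaches recipients
daily_consumption_per_person = 0.65        # dollars per person per day
family_size = 4

# Annual consumption for the whole family, pre-transfer.
annual_consumption = daily_consumption_per_person * family_size * 365

print(f"Transfer received: ${transfer:.0f}")
print(f"Family's annual consumption: ${annual_consumption:.0f}")
print(f"Transfer as share of annual consumption: {transfer / annual_consumption:.0%}")
```

The transfer comes out to roughly 95% of the family’s estimated annual consumption, which is the basis for the “roughly double” claim.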

Those figures aren’t precise, and we believe our #1 charity accomplishes even more good per dollar, but we believe the broad point to be quite robust: whether or not the money I spend on luxuries could have literally saved a life, it’s money that could do a lot more for someone else than it does for me. Jason Trigg’s attitude is, in my view, defensible based on this consideration alone.

This version of Peter Singer’s challenge relies not on the fragile estimates GiveWell produces, but on an extremely robust and nearly undisputed set of observations about extraordinary global inequalities. And it challenges us to give not just money, but time, thought, and whatever else we can spare.

We believe strongly in the value of healthy skepticism toward charities and toward cost-effectiveness estimates. What we don’t believe in is using such skepticism as an excuse to dodge questions about the appropriate level of generosity. We fear that Peter Singer and his advocates sometimes enable this dodge by relying so heavily on “cost per life saved” type figures.

The global distribution of wealth is mind-bogglingly uneven, and the readers of this blog are mostly on the privileged side of the divide. We have the informational and technological tools to help others enormously just by writing checks. These are facts that are hard to dispute, and they’re facts that raise some uncomfortable questions about how we should manage our lives and our budgets. We welcome (and instigate) debates over both our methodology and our particular recommendations, but such debates shouldn’t distract us from the moral case for giving.

Some notes on GiveWell’s relationship to “quantified giving”

We think it’s worth addressing some of the specific objections that Felix Salmon gave to the methodology of “quantified giving,” because in many cases we feel that we have not only acknowledged such objections but have put substantial work into fleshing them out, supporting them, and embracing their consequences. Specifically:

We believe that “systematically examining all options with the aim of doing as much good as possible, and being highly transparent about our reasoning” is often conflated with “making decisions based on explicit quantifications of good accomplished.” As long as the two are held equivalent, the project of “effective altruism” will be on shaky ground. But we believe the two are not equivalent – that it is possible to be simultaneously holistic, systematic and transparent. We will be writing more about the distinction.

How to help GiveWell

We often get the question, “how can I most help GiveWell?”

First and foremost, you can donate to our top charities. Giving to our top charities accomplishes good directly, and it also really helps GiveWell (as long as we know about your gift). You may not have much to give, but by giving what you can, you’re helping GiveWell commensurate with your resources.

Second, you can spread the word about GiveWell. You can introduce your friends to effective altruism and point them to helpful starting points like Peter Singer’s book The Life You Can Save, his recent TED talk, or our Giving 101.

You can also like/share our Facebook posts with your network or re-tweet what we post on Twitter. Looking out for opportunities to share interesting things we write with your network will really help us.

Third, if you have time, you can use it to help us. You can help most by reading the content we put out and letting us know — via blog comments or emailing our public email list — if you have questions, comments, or concerns. We work hard to make our research available to everyone and we really appreciate active engagement from our audience.

Alternatively, if you have a blog or other audience of your own, we always really appreciate when others discuss the work we’re doing. The best way to view all the content we’re publishing is our newly published materials list, which you can follow via email, RSS, or Twitter.

In very rare cases, we’re open to working with volunteers or even hiring people as full-time staff. If you’re passionate about what we do, and have specific skills or interests that could help us, let us know by following the instructions on our jobs page.

Meta-research update

As mentioned previously, we are currently conducting an in-depth investigation of meta-research, with the hopes of producing our first “medium-depth” report on the giving opportunities in a cause.

Our investigation isn’t yet complete, but it has taken several turns that we’ve found educational, and our vision of what it means to investigate a “cause” has evolved. This post gives an update on how we’ve gone from “investigating meta-research in general, starting with development economics” to “specifically investigating the issue of reproducibility in medical research” to “investigating alternatives to the traditional journal system.”

The big-picture takeaway is that if one defines a “cause” the way we did previously – as “a particular set of problems, or opportunities, such that the people and organizations working on them are likely to interact with each other, and such that evaluating many of these people and organizations requires knowledge of overlapping subjects” – then it can be difficult to predict exactly what will turn out to be a “cause” and what won’t. We started by articulating a broad topic – a seeming disconnect between the incentives academics face and the incentives that would be in line with producing work of maximal benefit to society – and looking for people and organizations who do work related to this topic, but found that this topic breaks down into many sub-topics that are a better match for the concept of a “cause.”

Simply identifying which sub-topics can be approached as “causes” is non-trivial. We believe it is important to do so, if one wishes to deliberately focus in on the most promising causes that can be understood in a reasonable time frame, rather than spreading one’s investigative resources across several causes at once.

From development economics to medicine

In a previous meta-research update, we focused on the field of development economics. Following that update, we collaborated for several months with an institutional funder that supports a significant amount of development economics work and has expressed similar “meta-research” interests; we also explored some other fields, as discussed in a recent post. We ultimately came to the working conclusion that

  • Meta-research in medicine-related fields is “further along” than in social sciences, in the sense that there are more established organizations and infrastructure around meta-research (for example, the Cochrane Collaboration and the EQUATOR Network) and there has been more research on related issues (particularly the work of John Ioannidis).
  • With that said, meta-research in medicine-related fields still has a long enough way to go – and little enough in the way of existing funders working on it – to make it a potentially promising area.
  • In social sciences, studies are often so expensive and lengthy to conduct (the deworming study we’ve discussed before took over a decade to produce what we consider its most relevant results) that the prospects for robustly establishing conclusions to inform policy generally seem distant. By contrast, we believe that improving the reliability of medical research would likely have fairly direct and quick impacts on medical practice.
  • The institutional funder we have collaborated with continues to work in social sciences (specifically development economics), and we believe its approach and attitude are similar enough to ours that our value-added in this area would be limited.

With these points in mind, we decided to shift our focus and deeply investigate meta-research in medicine-related fields rather than meta-research in development economics. This was a provisional decision; we remain interested in the latter.

Exploring meta-research in medicine

Alexander Berger led an investigation of meta-research in medicine, beginning in February. His basic approach was to start with the leads we had – contacts at Cochrane as well as individuals suggested by John Ioannidis – and get referrals from them to other people he should be speaking with.

In early May, we paused the investigation to take stock of where we were. It occurred to us that the people and organizations we had come across were divided into a few categories, which didn’t necessarily overlap:

1. The “efficiency and integrity of medical research” community. This community focuses on improving the efficiency with which medical research funding is translated into reliable, actionable evidence, by promoting practices such as (a) systematic reviews, which synthesize many studies to provide overall conclusions that can inform medical practitioners; (b) data sharing, especially of clinical trial data; (c) preregistration; and (d) replications of existing studies to check their reliability. This community includes the Cochrane Collaboration.

People in this community that we spoke to include:

2. The “open science” community. This community focuses on new tools for producing, sharing, reviewing, and evaluating research, many of them focusing on the idea of a transition from traditional paper journals to more powerful and flexible online applications. Some such tools (such as Open Science Framework) are produced by nonprofits, while others (such as ResearchGate and JournalLab) are produced by for-profits.

People in this community that we spoke to include:

Widespread adoption of tools such as those listed above could eventually make it much easier for researchers to share their data, check the reliability of each others’ work, and synthesize all existing research on a given question – in other words, such adoption could eventually resolve many of the same issues that the “efficiency of medical research” community deals with. Many of the people in the “open science” community emphasize the same problems with today’s research world that people in the “efficiency of medical research” community emphasize – so it’s not surprising that, when we expressed interest in these issues, we were pointed to people in both categories.

That said, there is little overlap between communities #1 and #2, and we believe that this is largely for good reason. Community #1 focuses on medical research; community #2 is generally working across many fields at once. Community #1 focuses on actions that could directly and quickly improve the usability of medical research; community #2 is largely working on a longer time horizon, and hopes to see dramatic improvements when widespread adoption of its tools takes place. (Despite this, among organizations that have a disciplinary bent, we’ve continued to focus on the more biomedically relevant ones, as opposed to those focused on, e.g., astronomy or geosciences.)

3. Other communities. Some other communities that could fall under the heading of “meta-research relevant to medical practice” include:

  • The evidence-based medicine community, which seeks to improve the usefulness of evidence for medical practice by increasing the extent to which available high-quality evidence is used in medical practice. (We see this community as distinct from the “efficiency and integrity of medical research” community because it focuses on the use, as opposed to the production, of evidence, though many of the practitioners overlap.)
  • People seeking to improve the practice of epidemiology (whose methods and issues are quite distinct from those of the sort of research that the Cochrane Collaboration synthesizes). One such group is the Observational Medical Outcomes Partnership (OMOP), about which we spoke with David Madigan.
  • John Ioannidis, whose work seems largely unique as far as we can tell. Prof. Ioannidis has studied a wide variety of “meta-research” issues in a wide variety of fields, including reproducibility of clinical research, bias, reliability of genome-wide association studies, and conformity vs. creativity in biology research.
  • Vannevar, a group started by Dario Amodei (who is a GiveWell fan and personal friend), which aims to improve the infrastructure around fields such as basic biology (which is distinct from both epidemiology and the sort of medical research that the Cochrane Collaboration addresses) and machine learning. Unlike most of the groups discussed above, Vannevar is focused on improving the ability of academia to produce high-risk, revolutionary work, rather than on improving its ability to efficiently produce immediately actionable recommendations for medical practitioners and policymakers.

Many of the individuals working in these communities may have cross-cutting interests and play some role in multiple communities, but we see the communities as having discrete identities. The characterization above is not meant to be exhaustive or to eliminate the possibility of other groupings, but rather to convey our understanding of the relationships between various problems, interventions, and individuals.

The path from here

At this point, the community we feel we have covered the most thoroughly is #2, the “open science” community. This hasn’t been an entirely deliberate decision: we’ve spoken to the people we’ve been pointed to and the people they’ve pointed us to, and only after many conversations have we noticed the patterns and distinct communities discussed above.

Because it is important to us to complete a medium-depth writeup, we’re currently aiming to complete such a writeup on open science. We will add the other communities discussed above to our list of potential shallow investigations.

In this process, we’ve learned that it can take a fair amount of work and reflection just to determine what counts as a “cause” in the relevant way. We think such work and reflection are worthwhile. Rather than speaking to everyone who is somehow connected to a problem of interest, we seek to identify different causes, deliberately pick the ones we want to focus in on, and cover those thoroughly.

Refining the goals of GiveWell Labs

[Added August 27, 2014: GiveWell Labs is now known as the Open Philanthropy Project.]

To date, our work on GiveWell Labs has been highly exploratory and preliminary, but we’ve recently been refining our picture of what we expect our output to look like in the reasonably near (~1 year) future. Our plans are still tentative, but have changed enough that an update seems worthwhile at this stage.

In brief,

  • Our main goal is to find the most promising charitable causes; we think of the “cause,” rather than the “charity” or “project,” as the most relevant unit of analysis for us at this point.
  • We expect to recommend causes that combine high potential for impact with low existing philanthropic resources.
  • We currently work closely with Good Ventures on this research. Cari Tuna is an active partner with us on these investigations, and we see Good Ventures as the initial target for our recommendations. Both GiveWell and Good Ventures anticipate other philanthropists (including some portion of GiveWell’s existing audience of individual donors) eventually participating in funding the opportunities we identify.
  • We expect to investigate potential cause recommendations for a substantial amount of time before releasing recommendations, but we are not holding ourselves to conducting a “comprehensive” investigation before releasing recommendations. At some point in the future, we will recommend causes based on the information we’ve gained so far, while continuing to explore more. This approach mirrors the approach we’ve taken in the past with charity recommendations: taking some time up front, releasing recommendations, then continuing to seek better recommendations even as we promote our existing ones.
  • For the near future, we will focus on exploring causes at limited depth, in order to identify the most promising ones. We are planning to explore many causes at the “shallow” level (~20 hours), and a smaller number of causes at a deeper level (~3 months, with the investigation outsourced to a contractor when possible). The causes we explore at a deep level will be based on the causes we find most promising at a given point in time.

Note that we continue to collaborate closely with Good Ventures on our work in these areas, which constitute a largely shared agenda, and “we” generally refers to “GiveWell and Good Ventures” in this post.

Details follow.

Charitable cause as fundamental unit of analysis

When GiveWell started, it focused on finding the best charities by certain criteria. These criteria were particularly well-suited to looking across a broad array of charitable causes: we weren’t sure yet what types of interventions we found most promising, but for any given charity we could ask whether it had room for more funding to carry out activities with established and quantifiable likely outcomes.

When we first launched GiveWell Labs, we shifted to the idea of finding the best projects. We had realized that many charities are extremely diverse organizations, and many philanthropic opportunities involve funding particular parts of them. (A particularly extreme case might be that of a university, whose professors could be funded to do promising research but which we wouldn’t want to provide unrestricted support to.) We laid out a set of criteria for such projects.

However, we’ve since moved to the cause as our fundamental unit of analysis. We’d roughly define a “cause” as “a particular set of problems, or opportunities, such that the people and organizations working on them are likely to interact with each other, and such that evaluating many of these people and organizations requires knowledge of overlapping subjects.”

Some reasons for this shift include:

  • It’s generally very difficult to evaluate a project in isolation from knowledge of the cause it sits within.
    • Even for the most “proven” interventions we’ve been able to find – such as bednet distribution – we’ve put in a great deal of work to understand the nuances of the evidence base and the funding landscape, and we now feel better positioned to assess other ideas that touch on these areas (for example, funding of research on insecticide resistance).
    • An instructive experience was when, last year, we sought to evaluate the Cochrane Collaboration, whose work we were already highly familiar with. Even to get a basic sense of its situation, we felt it necessary to do a miniature survey of the funding landscape, and doing so increased our interest in and understanding of meta-research for biomedical sciences. After this survey, we felt better positioned to understand funding opportunities in this area than in most others, which is why we prioritized it as our first medium-depth cause investigation (more on this in a future post).
    • In trying to evaluate giving opportunities in unfamiliar areas – whether brought to us by individuals, charities or foundations – we’ve found that our assessments are highly volatile and tend to change rapidly with new information, making it hard to form confidence without getting a better sense of the cause-level issues.
    • In particular, when evaluating a giving opportunity, we feel it’s important to have a sense of who the other funders are in the relevant space, and what sorts of projects they are and aren’t interested in.
  • We’ve also come to the view that committing to a cause can be necessary in order to find giving opportunities within that cause. At this point, we don’t think one can take the lack of “shovel-ready” projects within a cause as a sign that the cause doesn’t have room for more funding. More at a previous post on active vs. passive funding.
  • Between the above points, it seems to us that it may be appropriate to make a several-year commitment to a cause, in order to form the appropriate relationships, source giving opportunities, try different approaches and learn from them, etc. In speaking with foundations, we’ve generally gotten the sense that their approaches to the causes they’ve focused on have changed dramatically over time.
  • Another major input into our thinking has been the fact that nearly every major foundation (some of which we find impressive) seems to approach giving from this basic perspective, i.e., focusing on particular causes.

We do believe there are potential ways to give well without taking a “cause-focused” approach. These may include:

  • Focusing on interventions with strong formal evidence of effectiveness regardless of cause, as GiveWell has for most of its history. Our take at this point is that such interventions are rare, and that focusing on them largely ends up narrowing one’s attention to causes within global health and nutrition.
  • Focusing on finding and funding outstanding people. I believe that this approach can be very effective when one uses one’s own network (and thus effectively trades deep knowledge of causes for deep knowledge of people), but that it’s more difficult to carry out such an approach with scale and systematicity. Funders aiming to do the latter include Ashoka, Echoing Green, Draper Richards Kaplan, and the Skoll Foundation.
  • Funding prominent organizations and individuals whose prominence makes it relatively easy to assess them (e.g., by triangulating others’ opinions).

Our anticipated approach to making cause recommendations
Our tentative basic approach to recommending causes is continuous with the approach we’ve previously taken to recommending charities:

  • We seek to cast a wide net, considering many options. At the same time, we focus our resources on the options that seem most promising.
  • We will be asking a set of consistent critical questions of each cause we consider. At this point, these questions are tentatively: “What is the problem?”, “What are possible interventions?” and “Who else is working on it?”, and we are looking for causes where the philanthropic funding and presence are unusually low relative to the importance, tractability, and opportunities around the problem(s) in question.
  • We expect the answers to these questions to involve judgment calls, and we aim to be transparent about such judgment calls.
  • We seek to release recommendations at the point where we (a) have put substantial time into research, and (b) feel that our recommendations are highly likely to be better than what our audience can come up with on its own.
  • We do not seek perfect or comprehensive knowledge, preferring to issue recommendations once they’ve crossed a certain threshold of thoroughness and then continue to refine them over time. Early recommendations may have some element of arbitrariness in them (e.g., being sensitive to what we chose to prioritize), and we expect recommendations to become more systematically grounded over time.

In the past, this approach has applied to recommended charities; at this point, we tentatively anticipate applying it to charitable causes. While we aren’t yet ready to set a deadline, we hope to issue recommendations as soon as possible regarding which charitable causes are most promising for a major philanthropist to invest in. We expect to recommend to major philanthropists that they consider hiring specialized staff to explore these causes.

Cause investigations that are completed or in progress
Our work on GiveWell Labs can broadly be divided into:

  • Lower-depth investigations. So far we have published 3 of these, and they are available here. We have examined climate change, international migration (report forthcoming), promotion of in-country migration, and detection of near-Earth asteroids. Some investigations have taken relatively little time (in the range of 20 person-hours) while others have taken substantially longer (getting a basic feel for the climate change literature took a significant investment). In all cases, we’ve sought to get a basic sense of (a) the significance of the problem to be addressed; (b) broad possible avenues of intervention; (c) who else is working in this area. By collecting this basic information for many causes, we hope to be able to identify the ones that have a particularly strong combination of humanitarian significance, tractability, and “room for more philanthropy” (i.e., being under-invested in relative to other causes). We feel that most of the time we’ve spent on these investigations has been necessary to produce a basic understanding of these issues, and that it would take much more time to gain high confidence or a strong sense of the specific giving opportunities that are out there.
  • Higher-depth investigations. We are currently working on a higher-depth investigation of a particular sub-field of meta-research. The investigation has involved a large number of conversations and, unlike the lower-depth investigations, is aiming to give us a fairly clear sense of what the major players and the giving opportunities in this space look like. It is difficult to say how much longer this investigation will take; when it is complete, we hope it will become a template for future high-depth investigations, which we may look for contractors (e.g., subject matter experts) to work on.
  • Cross-cutting projects intended to put us in better position to look at large numbers of causes. These include our work on understanding the basics of scientific research and political advocacy (which we will write more about in the future), our work on history of philanthropy, and co-funding work.

So far, we have not prioritized areas solely on the basis of how promising they seem: we’ve also factored in how prepared we felt to investigate them, given our existing background knowledge. For example, as mentioned above, we felt that we were better grounded in the issues around meta-research than in most issues, so we chose this area for our first high-depth investigation. As we develop a better sense of what these investigations involve, such considerations will become less of a factor.

Our plan for conducting more cause investigations
Alexander Berger has, so far, led both the lower-depth and higher-depth investigations. He will continue conducting lower-depth investigations, hoping for an average of about one completed report per month, with output hopefully rising as we bring on more staff later this year.

When it comes to higher-depth investigations, we are hoping to try outsourcing them to contract researchers. We are planning to produce a meta-research writeup that can serve as a fairly concrete template for what we’re looking for, and we believe it’s possible that a contract researcher – perhaps a subject-matter expert in the relevant field, perhaps a consulting firm that has done this sort of work for other foundations – could create a similar writeup for other causes.

We expect finding such contractors to be challenging, and we expect working with such contractors to involve significant investment on our part in terms of specifying what we’re looking for and managing the process. For this reason, we’re not currently seeking to outsource our lower-depth investigations in the same way; we’d need a good deal of output to justify the investment we expect to make. Also for this reason, we’re hoping to begin experimenting with contractors soon, rather than waiting until we’re confident in which causes are most worth exploring at greater depth.

All of the above plans are tentative; we plan to move forward as outlined and change course if/when it makes sense to do so.