The GiveWell Blog

The path to biomedical progress

We’ve continued to look into scientific research funding for the purposes of the Open Philanthropy Project. This hasn’t been a high priority for the last year, and our investigation remains preliminary, but I plan to write several posts about what we’ve found so far. Our early focus has been on biomedical research specifically.

Most useful new technologies are the product of many different lines of research, which progress in different ways and on different time frames. I think that when most people think about scientific research, they tend to instinctively picture only a subset of it. For example, people hoping for better cancer treatment tend instinctively to think about “studying cancer” as opposed to “studying general behavior of cells” or “studying microscopy techniques,” even though all three can be essential for making progress on cancer treatment. Picturing only a particular kind of research can affect the way people choose what science to support.

I’m planning to write a fair amount about what I see as promising approaches to biomedical sciences philanthropy. Much of what I’m interested in will be hard to explain without some basic background and vocabulary around different types of research, and I’ve been unable to find an existing guide that provides this background. (Indeed, many of what I consider “overlooked opportunities to do good” may be overlooked because of donors’ tendencies to focus on the easiest-to-understand types of science.)

This post will:

  • Lay out a basic guide to the roles of different types of biomedical research: improving tools and techniques, studying healthy biological processes, studying diseases and conditions of interest, generating possible treatments, preliminarily evaluating possible treatments, and clinical trials.
  • Use the example of the cancer drug Herceptin to compare the roles of these different sorts of research more concretely.
  • Go through what I see as some common misconceptions that stem from overfocusing on a particular kind of research, rather than on the complementary roles of many kinds of research.

Basic guide to the roles of different types of biomedical research

Below are some distinctions I’ve found it helpful to draw between different kinds of research. This picture is highly simplified: many types of research don’t fit neatly into one category, and the relationships between the categories can be complex, since any type of research can influence any other. In the diagram to the right, I’ve highlighted the directions of influence I believe are generally most salient.

(A) Improving tools and techniques. Biomedical researchers rely on a variety of tools and techniques that were largely developed for the general purpose of measuring and understanding biological processes, rather than with any particular treatment or disease/condition in mind. Well-known examples include microscopes and DNA sequencing, both of which have been essential for developing more specific knowledge about particular diseases and conditions. More recent examples include CRISPR-related gene editing techniques, RNA interference, and using embryonic stem cells to genetically modify mice. All three of these provide ways of experimenting with changes in the genetic code and seeing what results. The former two may have direct applications for treatment approaches in addition to their value in research; the latter two were both relatively recently honored with Nobel Prizes. Improvements in tools and techniques can be a key factor in improving most kinds of research on this list. Sometimes improvements in tools and techniques (e.g., faster/cheaper DNA sequencing; more precise microscopes) can be as important as the development of new ones.

(B) Studying healthy biological processes. Basic knowledge about how cells function, how the immune system works, the nature of DNA, etc. has been essential to much progress in biomedical research. Many of the recent Nobel Prizes in Physiology or Medicine were for work in this category, some of which led directly to the development of new tools and techniques (as in the case of CRISPR-based gene editing, which is drawn from insights about bacterial immune systems).

(C) Studying diseases and conditions of interest. Much research focuses on understanding exactly what causes a particular disease or condition, as specifically and mechanistically as possible. Determining that a disease is caused by bacteria, by a virus, or by a particular overactive gene or protein can have major implications for how to treat it; for example, the cancer drug Gleevec was developed by looking for a drug that would bind to a particular protein, which researchers had identified as key to a particular cancer. Note that (C) and (B) can often be tightly intertwined, as studying differences between healthy and diseased organisms can tell us a great deal both about the disease of interest and about the general ways in which healthy organisms function. However, (B) may have more trouble attracting support from non-scientists, since its applications can be less predictable and clear.

(D) Generating possible treatments. No matter how much we know about the causes of a particular disease/condition, this doesn’t guarantee that we’ll be able to find an effective treatment. Sometimes (as with Herceptin – more below) treatments will suggest themselves based on prior knowledge; other times the process comes down largely to trial and error. For example, malaria researchers know a fair amount about the parasite that causes malaria, but have only identified a limited number of chemicals that can kill it; because of the ongoing threat of drug resistance developing, they continue to go through many thousands of chemicals per year in a trial-and-error process, checking whether each shows potential for killing the relevant parasite. (Source)

(E) Preliminarily evaluating possible treatments (sometimes called “preclinical” work). Possible treatments are often first tested “in vitro” – in a simplified environment, where researchers can isolate how they work. (For example, seeing whether a chemical can kill isolated parasites in a dish.) But ultimately, a treatment’s value depends on how it interacts with the complex biology of the human body, and whether its benefits outweigh its side effects. Since clinical trials (next paragraph) are extremely expensive and time-consuming, it can be valuable to first test and refine possible treatments in other ways. This can include animal testing, as well as other methods for predicting a treatment’s performance.

(F) Clinical trials. Before a treatment comes to market, it usually goes through clinical trials: studies (often highly rigorous experiments) in which the treatment is given to humans and the results are assessed. Clinical trials typically involve four different phases: early phases focused on safety and preliminary information, and later phases with larger trials focused on definitively understanding the drug’s effects. Many people instinctively picture clinical trials when they think about biomedical research, and clinical trials account for a great deal of research spending (one estimate, which I haven’t vetted, is that clinical trials cost tens of billions of dollars a year, over half of industry R&D spending). However, the number of clinical trials going on generally is – or should be – a function of the promising leads that are generated by other types of research, and the most important leverage points for improving treatment are often within these other types of research.

(A) – (C) are generally associated with academia, while (D) – (F) are generally associated with industry. There are a variety of more detailed guides to (D) – (F), often referred to as the “drug discovery process” (example).

Example: Herceptin
Herceptin is a drug used for certain breast cancers, first approved in 1998. Its development relied on relatively recent insights and techniques, and it is notable for its relative lack of toxicity and side effects compared to other cancer drugs. I perceive it as one of the major recent success stories of biomedical research (in terms of improving treatment, as opposed to gaining knowledge) – it was one of the best-selling drugs of 2013 – and its development is unusually easy to trace because there is a book about it, Her-2: The Making of Herceptin (which I recommend).

Here I list, in chronological order, some of the developments which seem to have been crucial for developing Herceptin. My knowledge of this topic is quite limited, and I don’t mean this as an exhaustive list. I also wish to emphasize that many of the items on this list were the result of general inquiries into biology and cancer – they weren’t necessarily aimed at developing something like Herceptin, but they ended up being crucial to it. Throughout this summary, I note which of the above types of research were particularly relevant, using the same letters in parentheses that I used above.

  • In the 1950s, there was a great deal of research focused on understanding the genetic code (B). For purposes of this post, it’s sufficient to know that a gene serves the function of a set of instructions for building a protein, a kind of molecule that can come in many different forms serving a variety of biological functions. The research that led to understanding the genetic code was itself helped along by multiple new tools and techniques (A) such as Linus Pauling’s techniques for modeling possible three-dimensional structures (more).
  • In the 1970s, studies on chicken viruses that were associated with cancer led to establishing the idea of an oncogene: a particular gene (often resulting from a mutation) that, when it occurs, causes cancer. (C)
  • In 1983, several scientists established a link between oncogenes and a particular sort of protein called epidermal growth factor receptors (EGFRs), which give cells instructions to grow and proliferate. In particular, they determined that a particular EGFR was identical to the protein associated with a known chicken oncogene. This work was a mix of (B) and (C), as it grew partly out of a general interest in the role played by EGFRs. It also required being able to establish which gene coded for a particular protein, using techniques that were likely established in the 1970s or later (A).
  • In 1986, an academic scientist collaborated with Genentech to analyze the genes present in a series of cancerous tumors, and cross-reference them with a list of possible cancer-associated EGFRs (C). One match involved a particular gene called HER2/neu; tumors with this gene (in a mutated form) showed excessive production of the associated protein, which suggested that (a) the mutated HER2/neu gene was overproducing HER2/neu proteins, causing excessive cell proliferation and thus cancer; (b) this particular sort of cancer might be mitigated if one could destroy or disable HER2/neu proteins. This work likely benefited from advances in being able to “read” a genetic code more cheaply and quickly.
  • The next step was to find a drug that could destroy or disable the HER2/neu proteins (D). This was done using a relatively recent technique (A), developed in the 1970s, that relied on a strong understanding of the immune system (B) and of another sort of cancer that altered the immune system in a particular way (C). Specifically, researchers were able to mass-produce antibodies designed to recognize and attach to the EGFR in question, thus signaling the immune system to destroy them.
  • At that time, monoclonal antibodies (mass-produced antibodies as described above) were seen as highly risky drug candidates, since they were produced from other animals and likely to be rejected by human immune systems. However, in the midst of the research described above, a new technique (A) was created for getting the body to accept these antibodies, greatly improving the prospects for getting a drug.
  • Researchers then took advantage of a relatively recent technique (A) for inserting human tumors into modified mice, which allowed them to test the drug and produce compelling preliminary evidence (E) that the drug might be highly effective.
  • At this point – 1988 – there was a potential drug and some supportive evidence behind it, but its ultimate effect on cancer in humans was unknown. It would be another ten years before the drug went through all relevant clinical trials (F) and received FDA approval, under the name Herceptin. Her-2: The Making of Herceptin gives a great deal of detail on the challenges of this period.

As detailed above, many insights essential to Herceptin’s development emerged long before the idea of Herceptin had been established. My impression is that most major biomedical breakthroughs of the last few decades rely to a similar degree on a large number of previous insights, many of them fundamentally concerning tools and techniques (A) or the functioning of healthy organisms (B) rather than disease-specific discoveries.

General misperceptions that can arise from over-focusing on certain types of research
I believe that science supporters often have misperceptions about the promising paths to progress, stemming from picturing only certain types of research. Below, I list some of these misperceptions as informal, non-attributed quotes.

  • “Publicly funded research is unnecessary; the best research is done in the for-profit sector.” My impression is that most industry research falls into categories (D)-(F). (A)-(C), by contrast, tend to be a poor fit for industry research, because they are so far removed from treatments in terms of both time and risk. Because it is so hard to predict the eventual use of a new tool/technique or insight into healthy organisms, it is likely more efficient for researchers to put such insights into the public domain rather than trying to monetize them directly.
  • “Drug companies don’t do valuable research – they just monetize what academia provides them for free.” This is the flipside of the above misconception, and I think it overfocuses on (A)-(C) without recognizing the challenges and costs of (D)-(F). Given the very high expenses of research in categories (D)-(F), and the current norms and funding mechanisms of academia, (D)-(F) are not a good fit for academia.
  • “The best giving opportunities will be for diseases that aren’t profitable for drug companies to work on.” This might be true for research in categories (D)-(F), but one should also consider research in categories (A)-(C); this research is generally based on a different set of incentives from those of drug companies, and so I’d expect the best giving opportunities to follow a different pattern.
  • “Much more is spent on disease X than disease Y; therefore disease Y is underfunded.” I think this kind of statement often overweights the importance of (F), the most expensive but not necessarily most crucial category of research. If more is spent on disease X than on disease Y, this may be simply because there are more promising clinical trial candidates for disease X than disease Y. Generally, I am wary of “total spending” figures that include clinical trials; I don’t think such figures necessarily tell us much about society’s priorities.
  • “Academia is too focused on knowledge for its own sake; we need to get it to think more about practical solutions and treatments.” I believe this attitude undervalues (A)-(B) and understates how important general insights and tools can be.
  • “We should focus on funding research with a clear hypothesis, preliminary support for the hypothesis, and a clear plan for further testing the hypothesis.” I’ve heard multiple complaints that much of the NIH takes this attitude in allocating funding. Research in category (A) is often not hypothesis-driven at all, yet can be very useful. More on this in a future post.
  • “The key obstacles to biomedical progress are related to reproducibility and reliability of studies.” I think that reproducibility is important, and potentially relevant to most types of research, but it is most central to clinical trials (F). Studies on humans are generally expensive and long-running, and so they may affect policy and practice for decades without ever being replicated. By contrast, for many other kinds of research, there is some cheap, effective “replication” – or re-testing of the basic claims – via researchers trying to build on insights in their own lab work, so a non-reproducible study might in many cases mean a relatively modest waste of resources. I’ve heard varying opinions on how much waste is created by reproducibility-related issues in early-stage research, and think it is possible that this issue is a major one, but it is far from clear that it is the key issue.

Thoughts on the Sandler Foundation

Note: Steve Daetz of the Sandler Foundation reviewed a draft of this post prior to publication.

Previously, we wrote about the tradeoff between expertise and breadth in philanthropy. We noted the traditional “program officer” model of philanthropy, in which staff specialize in particular causes, and we contrasted it with some other possible models that sacrifice true cause-level expertise, while allowing a philanthropist to work in more areas at once.

We cited the Sandler Foundation as an example of a foundation that appears to have a strong track record despite not following the traditional “program officer” model. Since then, we’ve had a couple of extended conversations with the Sandler Foundation’s Herb Sandler and Steve Daetz. We’ve tried to understand better how its approach differs from more traditional approaches, and what the pros and cons are. We’ve come out thinking that:

  • The Sandler Foundation appears to have an impressive track record; it has played major roles in the development of multiple impressive organizations. More
  • The Sandler Foundation does seem to have noticeable differences with the more traditional approach. Its staff are not subject matter experts specializing in particular causes, and they do not operate with fixed budgets for the amount of time and money spent on a cause. Rather, the Sandler Foundation is highly flexible and opportunistic, ready to put a lot of time and money into an idea when they find the right leadership, or stay out of a cause of interest entirely when they don’t. They often put a lot of time and energy into investigating and refining a grant early on, to the point where working on a single grant becomes a major part of their agenda; this is temporary, however, as they have a preference for reliable, recurring, flexible support (rather than continuously revisiting and revising the terms of grants). More
  • In many of the ways that the Sandler Foundation differs from traditional foundations, we think the Sandler model may be preferable. More

Notable Sandler Foundation grants
We discussed multiple interesting grants in our conversation with the Sandler Foundation. Below are some highlights:

I’m generally interested in cases where a foundation played a major role in the development of a strong and important institution, and at this point we’ve spoken with the heads of many major foundations and asked them about their major success stories. I think the above list compares favorably with comparable lists I’d be able to put together for other foundations’ work over the last decade (based in many cases on off-the-record conversations). This isn’t necessarily a fully appropriate comparison, since the Sandler Foundation explicitly prioritizes making large grants and helping to start organizations; it’s possible that other foundations have had equal or greater impact with larger numbers of smaller grants, and that it’s simply hard to put together comparable lists of highly tangible “success stories.” Still, my impression is that the Sandler Foundation has been quite successful in helping to build strong organizations, despite having a much smaller staff – and less subject-matter expertise – than traditional foundations.

The Sandler Foundation approach
From talking to the Sandler Foundation, I perceive it as diverging from traditional foundations on a couple of key dimensions:

1. The priority placed on funding strong leadership. The Sandler Foundation emphasized its preference for flexible, long-term support rather than constantly picking and prescribing projects. This sort of support is likely especially valuable to grantees, and even more so for new organizations trying to attract outstanding talent. At the same time, giving flexible and long-term support is a major “bet,” and seems most appropriate when one has very high confidence in the leadership one is supporting. The Sandler Foundation emphasized its extensive due diligence on leadership (for example, Sandler Foundation staff had over 30 conversations about John Podesta before supporting him to start the Center for American Progress), and its high expectations for leaders: it aims to support people who are highly strategic, highly receptive to criticism and interested in self-improvement, and highly aligned with the Sandler Foundation on values and communication (“good chemistry” was emphasized).

2. A high level of “opportunism”: being ready to put major funding or no funding behind an idea, depending on the quality of the specific opportunity. The Sandler Foundation emphasized its lack of well-defined “budgets” for either money or time: its staff are often exploring several ideas at once with a low level of time commitment, and ready to substantially raise their involvement when a good opportunity presents itself. In the case of ProPublica, the Sandler Foundation first developed the basic idea for a nonprofit newsroom in 2006, and had 15-20 conversations with potential leaders; in May of 2007, when they met Paul Steiger, they quickly became interested in funding him and started putting much more time into the idea. At the same time, there are some cases in which the Sandler Foundation has explored an idea or an issue for a considerable period of time, and ultimately decided not to make any major grants. The general pattern seems to be that the Sandler Foundation puts a great deal of “front-end energy” into promising grant opportunities they’ve identified, and spends relatively less time on (a) pursuing ideas for which strong leaders haven’t yet been identified; (b) following up on a given existing grant (though it still spends substantial time on those as well).

The Sandler Foundation believes that cause-specific “program officers” are a poor fit for this model. The Sandler model relies on strong assessment of organizational leadership, with relatively few, large grants to trusted leaders. Program officers tend to have incentives to make more, smaller grants, and are often not well positioned to assess organizational leadership in a way that funders can defer to. Program officers also typically want pre-specified budgets, which the foundation leadership worries would make them insufficiently opportunistic.

What can we learn?
We don’t think the Sandler Foundation’s model is obviously the best one, and we don’t plan on fully emulating it. Among other things,

  • We aren’t fully aligned with the Sandler Foundation’s values and priorities, and we believe that our set of policy priorities doesn’t map very well to today’s most common political platforms. Because of this, it could be particularly hard for us to find leaders whom we feel fully aligned with.
  • We believe the “expert philanthropy” model has much to recommend it (more), and we plan to experiment with it.
  • We believe there can be a good deal of value in relatively small, low-confidence, low-due-diligence grants that give a person/team a chance to “get an idea off the ground.” We’ve made multiple such grants to date and we plan on continuing to do so.
  • We have a favorable impression of the Sandler Foundation’s track record, but we don’t have enough information to be highly confident in this.

With that said, we see the Sandler Foundation as something of a proof of concept that high-impact grants can come from opportunistic generalists.

For reasons outlined previously, we’re highly interested in trying out a philanthropic model that looks across multiple issue areas for the very most outstanding opportunities, and we think that taking a highly opportunistic approach – scanning multiple areas, waiting for outstanding leadership, keeping the bar high, and being ready to get very involved when an opportunity comes up – makes a great deal of sense for this goal. By taking this attitude toward many of our focus areas, we might be able to make the most of our generalist staff, and be able to keep our bar high for the opportunities we get most involved in (something that would be more difficult to do if we pre-committed to a smaller number of particular issues and ideas).

Note: another perspective on the Sandler Foundation is available in a January piece from Inside Philanthropy.

Notes from November convening on our policy priorities

Last November, we held a day-long convening in Washington, D.C. to discuss possible priorities for Open Philanthropy Project work on U.S. policy.

Our main goal was to present our picture of several policy issues, as well as to receive input to inform upcoming decisions about which issue(s) we should focus on. For each issue, we laid out what sort of change we’d like to see, why we find the issue especially promising for philanthropy, what the current landscape looks like (including other funders), and what possible strategies might look like. We sought feedback on all of these points, as well as ideas for promising issue areas and promising strategies that haven’t occurred to us.

We’ve now posted a summary of points raised at the convening, a partial list of participants, and the briefing materials for the convening here:

Page on Nov. 10 policy convening

Many points were raised at the convening, and it served as an input into our overall strategy setting on U.S. policy (which we will be writing more about). Some of the highlights, from our perspective, were:

  • We had a fair amount of discussion of active vs. passive funding. Our discussion reinforced the importance of finding people we’re comfortable giving unrestricted support to if possible, while being willing to make compromises and engage in some degree of “active funding” on particular issues.
  • Reactions to the causes we’re considering varied considerably. Participants were generally quite positive on macroeconomic policy (feeling that aspects of it are under-attended to) and criminal justice reform (seeing, as we do, a window of opportunity). By contrast, there was a much more mixed and hesitant reaction to some other causes we’re considering, such as labor mobility. We aren’t necessarily inclined to favor the causes that received a more positive reaction, since we see a great deal of value in working on issues whose value isn’t widely recognized. However, hearing the different reactions helped us understand which of our potential causes might present particular challenges in terms of communications and coalition building.
  • We discussed the goal of strengthening the general community that shares our policy priorities (in particular, prioritizing both economic efficiency and global humanitarianism). One idea that came up in this regard was that of funding scholarships and fellowships, in order to encourage people to get interested in issues we consider important early in their careers. However, the convening also reinforced our view that this sort of goal will probably be easier to work on after we’ve done more concrete work and gained experience, strengthened our networks, etc.
  • We got many suggestions for potential causes to look into.

GiveWell is hiring

We’re resuming hiring to expand our ability to identify outstanding giving opportunities. Filling the roles below would make a substantial difference to our research.

If you follow GiveWell and want to help us out, please share this post with anyone whom you think might be a good fit for the jobs listed below.

  • Research Analyst. Research Analysts are GiveWell’s primary staff and work on all parts of our research process. We hope to add a few entry-level research analysts this year, and are open to hiring individuals later in their careers.
  • Summer Research Analyst. We offer summer positions to students entering their final year of undergraduate or graduate school, with the hope that they will become full-time employees following graduation.
  • Outreach Associate. Outreach Associate is a new position we’re hiring for. The role will have some overlap with that of a Research Analyst, but has a particular focus on outreach and communication with donors.
  • Conversation Notes Writer. Conversation notes are a key part of our research process. Conversation Notes Writers listen to conversations conducted by GiveWell staff and produce summaries. This position is flexible: it can be done from anywhere in the world at any time of day, but we ask for people who can commit at least 10 hours per week. We are currently looking to find 1-2 additional Conversation Notes Writers.

Putting the problem of bed nets used for fishing in perspective

A recent article in the New York Times describes people using insecticide treated bed nets for fishing instead of sleeping under the nets to protect themselves from malaria-carrying mosquitoes. The article warns that fishing with insecticide treated nets may deplete fish stocks, because the mosquito nets trap more fish than traditional fishing nets and because the insecticide contaminates the water and kills fish (“the risks to people are minimal, because the dosages are relatively low and humans metabolize permethrin [the insecticide] quickly”). We recommend donating to the Against Malaria Foundation (AMF), an organization that funds distributions of long-lasting insecticide treated bed nets, so we’d like to address the concerns raised in the article.

Net distributions funded by the Against Malaria Foundation

We have reasonably high confidence that most people properly use the nets funded by AMF, because AMF requires distribution partners to conduct follow-up surveys on net use. These surveys show that 80% to 90% of households have nets hung up 6 months after distributions (for more detail, see our charity report on AMF). The survey methodology also dictates that interviewers observe whether survey respondents have hung their nets by entering their houses rather than simply asking them if they’ve hung their nets. We believe that the concerns raised in the article largely don’t apply to net distributions funded by AMF.

The prevalence of unintended use of nets

For net distributions more generally, the best data available indicates that usage rates range from 60% to 80%. Surveys asking respondents if they use their nets generally show usage rates of around 90%, but respondents may not want to report that they use nets in ways unintended by donors. One small-scale study found a usage rate of around 70% based on spot visits to homes compared to a usage rate of around 85% based on asking people, so our best guess comes from adjusting the survey rates downwards to correct for overreporting (for more detail, see our intervention report on long-lasting insecticide treated nets). Even taking into account the fact that some people won’t sleep under their nets, the program remains one of the most cost-effective ways to save lives. Given the very large numbers of bed nets distributed, we do not find stories of unintended use in a few areas particularly surprising. We view the anecdotes related in the article as unlikely to be representative of a problem that would change our assessment of the program.

The evidence on possible harm to fish stocks

Besides the harm caused by some people contracting malaria because they don’t sleep under their nets, which we already account for in our cost-effectiveness analysis, the article warns that fishing with insecticide-treated nets may deplete fish stocks. In making this case, the article cites only one study, which reports that about 90% of households in villages along Lake Tanganyika used bed nets to fish. It doesn’t cite any studies examining the connection between bed nets and depleted fish stocks more directly. The article states that “Recent hydroacoustic surveys show that Zambia’s fish populations are dwindling” and “recent surveys show that Madagascar’s industrial shrimp catch plummeted to 3,143 tons in 2010 from 8,652 tons in 2002,” but declines in fish populations and shrimp catch may have causes other than fishing with mosquito nets.

It’s worth comparing the evidence presented by this article to the evidence available on the benefits of bed nets. Randomized controlled trials consistently show large declines in child mortality from distributing nets, and trends in malaria mortality and net coverage rates also suggest that mass distribution of mosquito nets has contributed to major declines in the burden of the disease. This evidence comprises one of the most robust cases for impact we’ve seen. The article’s case for possible harm to fish stocks, by contrast, relies on highly limited evidence.

Malaria control in waterside, food-insecure communities

The article does highlight a potential need to experiment with alternative approaches to malaria control in waterside, food-insecure communities that have very low net usage rates. In these areas, people shouldn’t have to choose between malaria and hunger. But again, we see this as likely an isolated problem, and a much smaller one than the problem of too few nets to prevent malaria.

Conclusion

We generally like to see reporting on both the successes and failures of foreign aid. However, we felt the reporting in this case presented an unbalanced view of the magnitudes of the benefits and harms of distributing bed nets.

Organizations promoting generous, effective giving

GiveWell focuses on doing high-quality research on where to give; we put relatively little effort into marketing, community building, or encouraging people to give more. We’d like to give a shout-out to some organizations – most of them relatively young – that do focus on this important work.

Giving What We Can is an international society dedicated to eliminating extreme poverty. It provides a variety of resources to encourage people to give generously, including a membership pledge for lifetime giving of 10% of income, a “try giving” program for shorter and more flexible giving commitments, and a variety of local chapters currently in the U.S., U.K. and Australia. It also encourages people to give as effectively as they can, with a similar definition of effectiveness to ours, and its charity recommendations draw on our research. Giving What We Can is part of the Centre for Effective Altruism, which engages in a variety of projects around the ideas of effective altruism.

The Life You Can Save is an organization founded by the philosopher Peter Singer (who has been one of the most influential advocates for using GiveWell’s research). It spreads awareness of things people can do to fight extreme poverty through a blog, outreach events, and a worldwide network of regional community groups. The Life You Can Save also provides a list of charity recommendations that draws on our research and encourages people to pledge a percentage of their income to these charities (the recommended percentage scales with income level).

Charity Science aims to educate the public about the “science of doing good.” It works to make research on good giving more accessible and entertaining, and encourages donations to our recommended charities. It does so by running small-scale experiments to see what works and what doesn’t in spreading the word; experiments have included encouraging birthday and Christmas fundraisers, where people ask for donations instead of material possessions. Charity Science also provides education through write-ups, infographics, and presentations.

Raising for Effective Giving (REG), a project of GBS Switzerland, is a community of poker players interested in making a positive impact. It encourages poker players to pledge at least 2% of their gross winnings (which REG states generally translates to 5-10% of net income) to its recommended charities. Its recommendations are a mix of GiveWell-recommended charities and effective-altruism-associated organizations. The first- and third-place finishers in the most recent World Series of Poker Main Event were REG members.