The GiveWell Blog

Statement from Kaitlyn Trigger & Mike Krieger: Why we’re partnering with the Open Philanthropy Project

Mike and I are committed to giving away a lot of our wealth during the course of our lifetime. It’s very early days, so one of our biggest goals is educating ourselves about the landscape and context of philanthropy today. For example: What issue areas are important and underfunded? How do we evaluate and compare giving opportunities? What are effective ways to structure grants? What role can or should funders play in a nonprofit’s operations?

That said, we don’t want to wait until we feel 100% informed before we start giving. It’s important for us to learn through doing as well.

When Mike and I met with Cari Tuna, we were immediately struck by how much her approach at the Open Philanthropy Project resonated with us. We sensed that the Open Philanthropy Project’s values aligned with ours: open-mindedness, rigorous analytical thinking, and transparency. We were impressed by their staunch commitment to making the greatest impact possible, through their evaluation framework incorporating importance, tractability, and crowdedness of causes.

We see this partnership as an opportunity to draw on all the knowledge the Open Philanthropy Project team has accrued over the past several years, rather than starting from scratch. We believe it’s a highly efficient way to learn, plus it allows us to help fund important causes sooner than we could on our own.

This partnership is an opportunity for both sides to experiment with a co-funding agreement, and hopefully pave the way for future similar partnerships. The Open Philanthropy Project team has been exceptionally welcoming and it’s clear they are invested in making this successful.

Our own philanthropic mission statement
Mike and I are still in the early stages of developing our giving strategy, but we have identified some key values and approaches to get started:

We believe that all people deserve a free, vibrant, and productive life. To support this vision, we identify and champion forward-thinking ideas, and help scale solutions that work. To create significant, sustainable change, we are committed to systems-level thinking and rigorous analysis. We advocate collaboration and transparency to engage a broader community and magnify our impact.

What other things are we giving to?
Mike and I love the arts, education and supporting the city we call home, San Francisco. We are getting increasingly involved with Bay Area institutions like SFMOMA and SFJAZZ, as well as after-school programs like Mission Bit and Little Opera. These organizations bring us a lot of personal pleasure, and we want to make them as accessible as possible to a diverse group of people.

Science policy and infrastructure

We’ve tried to approach scientific research funding – focusing initially on life sciences – by looking for gaps and deficiencies in the current system for supporting scientific research. We’ve identified several possibilities, including a set of systemic issues that make it difficult to support attempts at breakthrough fundamental science.

One way to respond to a gap in the system would be to fill it ourselves: support the kind of science that has trouble getting support from existing funding agencies, universities, etc. We believe this is the approach taken by organizations such as Howard Hughes Medical Institute. But another way to respond would be to try to improve the system directly, by funding the development of – and advocacy for – proposals for structural changes. Structural changes could include changes in how government agencies allocate funding, in how universities determine professorships, or in other practices that we believe are important in influencing what scientists are able to do. (We broadly refer to universities, journals, and other institutions that play an important role in scientists’ incentives and support as “infrastructure.”)

We find the latter idea intriguing. It appears to us that the strongest scientific funders have little interest in policy analysis and advocacy, while the strongest funders of policy analysis and advocacy tend not to take interest in the scientific research issues discussed in this post. We’re interested in the idea of combining – in a dedicated organization – great scientists and great policy analysts, in order to put in the substantial amount of work needed to develop and promote the best possible proposals for improving science policy and infrastructure. It would be a high-risk, potentially very high-return project to attempt. We aren’t aware of any attempts to do something along these lines at the moment, and we think it could be a risk worth taking.

The rest of this post outlines:

  • Examples of science policy and infrastructure issues we’d like to see more work on.
  • A brief sketch of how an organization dedicated to these issues might operate.
  • What we know about existing attempts to improve science policy and infrastructure, and why we believe a new organization (or a dedicated team within an existing organization) could be a significant addition.
  • Why we believe that supporting such an organization would be worthwhile.

Examples of science policy and infrastructure issues

We previously wrote about claims that the current life sciences system has trouble supporting attempts at breakthrough fundamental science, and we featured a PNAS paper on the subject. This paper gives multiple concrete suggestions for how changes in U.S. policy might reduce competitiveness between scientists, improve prospects for early-career scientists, and support higher-risk, higher-reward research:

  • Making the government budget for scientific funding more “predictable and stable,” in order to facilitate long-term planning and avoid the sorts of supply-demand imbalances described previously.
  • Making changes in what sorts of grants can be used for what sorts of expenses (in particular, putting restrictions on the ability to support graduate students and postdocs using research grant funds), in order to allow more deliberate control of the number of graduate students and postdocs who will end up competing for professorships.
  • Aiming to broaden the possible career paths for young scientists, including increasing the use of “staff scientists” rather than trainees to support lab research. These changes could further diminish the intensity of competition for professorships as well as improve the overall productivity of labs.
  • Increasing the size of grant programs such as the NIH Director’s New Innovator award, which may be more conducive to supporting attempts at breakthrough fundamental science.
  • Improving the quality of grant application evaluation by revising criteria and scoring methodologies, and making more effort to include top scientists in evaluation.

These ideas are, by and large, fairly concrete and (to my eyes) practical-seeming suggestions for policy change. I haven’t been able to find information on the extent to which they are being implemented or actively discussed (other than that the number of Pioneer Award recipients seems to have shrunk rather than grown from last year to this year). To my knowledge, none have been substantially implemented.

In addition to these sorts of ideas, I think the following could also be highly worthwhile:

Thinking through how universities could experiment with new models for determining professorships, as well as how journals could experiment with new processes for highlighting noteworthy science. Both processes are extremely important factors in what kind of work is supported and incentivized in academia. Universities and journals tend to follow certain common cultural norms today, but given the degree of apparent agreement about room for improvement in the current system, it’s plausible to me that a dedicated effort at developing and promoting new approaches could spur experimentation and change.

Examining existing regulations – regulations on research, regulations regarding sharing of data, etc. – from the perspective of optimizing the ability to gain new knowledge and reap the benefits of innovation. Both the paper linked above and the paper I previously discussed on declining pharmaceutical productivity have identified increasing regulatory burdens as a major issue. In addition, from my limited readings on the history of biomedical research, it seems to me that getting new medical technologies tested and approved used to be much easier than it is today, and that many key experiments were highly speculative and dangerous. Such experiments would have been much more difficult to carry out with today’s regulation and social norms. Work in this category could include the following (these ideas are fairly speculative and may overlap to some degree with work being done at existing institutions):

  • Improving the balance between patients’ privacy and scientists’ ability to access large amounts of data for research purposes.
  • Improving the FDA process with an emphasis on increasing scientists’ ability to experiment and innovate, especially if and when new tools for data sharing present new possibilities for demonstrating safety and efficacy of medical technologies.
  • Improving the balance between ethical considerations and scientists’ ability to run informative experiments without excessive overhead.
  • Bringing a science- and scientist-focused perspective to debates over intellectual property law.
  • Regulating data sharing practices in clinical trials with an eye to enabling “reverse translation” research.
  • Working on optimal regulation of emerging technologies, in a framework that emphasizes the importance of innovation’s benefits as much as the importance of caution.

An opportunity for impact?

It seems to me that there could be a great deal of value in an organization dedicated to bringing together great scientists and great policy analysts, in order to develop and promote the best possible proposals for improving science policy and infrastructure. I would see such work as primarily aiming to have influence on universities, journals, and government agencies via developing well-thought-through ideas and making the case for them on the merits, rather than via exerting political pressure based on grassroots mobilization, media, etc. This strategy of aiming for impact would be comparable to that of organizations such as Center for Global Development and Center on Budget and Policy Priorities. (Note that claims of impact are available for both CGD and CBPP; we have not vetted either list but find both lists quite plausible.)

I think this kind of activity could be quite influential, and that the difference between a dedicated effort to carry it out and the status quo could be substantial. This would be consistent with my understanding of many past cases of nonprofits influencing policy, as well as with my understanding of how both corporate and nonprofit actors often have influence.

I discussed this idea at some length in my conversation with Neal Lane.

Does such an organization already exist?

My impression is that there are no organizations playing the role described above. The policy issues I’ve laid out have been raised by scientists via op-eds (such as the PNAS paper discussed above) and committee reports (such as a recent piece released by the American Academy of Arts and Sciences), but are not the focus of any dedicated organizations (or teams within organizations).

It’s possible that such an organization or team does exist – I haven’t searched exhaustively – but I have several reasons to believe it does not:

  • Most importantly, I have discussed this topic with many people, including those mentioned in our previous post. I have generally asked explicitly whether the kind of organization I’m envisioning already exists, and sought referrals to others who are knowledgeable on the subject. None of the people I’ve asked have been aware of such an organization – so even if one does exist, it seems unlikely that it has achieved much prominence. Neal Lane seemed particularly interested in these issues, and stated that he is not aware of such an organization (see public conversation notes).
  • I’ve searched the web for groups focused on science policy. At the moment, my impression is that “science policy” (as commonly used) tends to refer to some combination of (a) promoting a high level of funding for science; (b) working on policy around science education and outreach; (c) working on a wide range of policy issues, such as climate change mitigation, in a way that is informed by science (and/or emphasizes the importance of scientific knowledge and evidence in decision-making). I’ve examined Wikipedia’s category page on science advocacy organizations, and these organizations seem to generally be in one of the aforementioned categories. None of them seem to be focused on the sorts of issues I’ve discussed in this post. Note that Wikipedia’s list excludes Research!America, an organization that focuses on making the case for a high level of government support for science.
  • I’ve examined a list of think tanks by category, and none of those listed under “Science and technology” appear to do significant work on the sorts of issues discussed in this post.

Speaking generally from conversations I’ve had with major funders, it appears that the strongest scientific funders have little interest in policy analysis and advocacy, while the strongest funders of policy analysis and advocacy tend not to take interest in the scientific research issues discussed in this post.

Could work along these lines be worthwhile?

In conversations about this idea so far, I’ve encountered a mix of enthusiasm and skepticism. (I’ve also generally heard from science funders that it would be outside of their model, regardless of merits, because of the focus on influencing policy rather than directly supporting research.) Most of the skepticism has been along the lines of, “The current system’s cultural norms and practices are too deeply entrenched; it’s futile to try to change them, and better to support the best research directly.”

This may turn out to be true, but I’m not convinced.

So far, we haven’t been able to find a person or organization who seems both qualified and willing to lead the creation of the sort of organization described in this post. We plan to continue looking for such a person or organization, while continuing to discuss, refine and reflect on these ideas.

April 2015 open thread

Following up on the inaugural open thread, we wanted to have another one.

Our goal is to give blog readers an opportunity to publicly raise comments or questions about GiveWell or related topics (in the comments section below). As always, you’re also welcome to email us at info@givewell.org if there’s feedback or questions you’d prefer to discuss privately. We’ll try to respond promptly to questions or comments.

Breakthrough fundamental science

We’ve been looking for gaps in the world of scientific research funding: areas that the existing system doesn’t put enough investment into, leaving potential opportunities to do unusually large amounts of good with philanthropic funding. We previously wrote about the alleged “valley of death” that makes it challenging to translate academic insights about biology into new medical technologies. This post is about a different issue, one that has come up in the vast majority of conversations I’ve had with scientists: it is believed to be extremely difficult to do what this post will call “breakthrough fundamental science” in the existing life sciences ecosystem.

Breakthrough fundamental science is the term I’m using for what I believe many of the people I’ve spoken to have meant when they’ve used terms such as “basic research,” “high-risk/high-reward research” and “revolutionary/path-breaking research.” My subject matter knowledge is extremely limited, so I can’t be confident that I’ve correctly classified the comments I’ve heard as having a consistent theme or that I’m correctly defining the theme, but I’m attempting to do so anyway because the theme has seemed consistent and important. In brief, “breakthrough fundamental science” (in the context of life sciences) refers to research that achieves important, broadly applicable insights about biological processes, such that the insights bring on many new promising directions for research, yet it is difficult to anticipate all the specific ways in which they will be applied and thus difficult to be assured of “results” in the sense of new clinical applications. This type of work stands in contrast to research that is primarily aimed at producing a particular new drug, diagnostic or other medical technology.

This definition doesn’t lend itself to fully objective classifications, but a couple of illustrative examples would be: (a) understanding the genetic code and the structure of DNA; (b) (more recently) work on the CRISPR/Cas system and developing it to the point where it can be used to “edit” an organism’s DNA. Each of these has opened up many possible directions for research, while not having immediately clear relevance for a particular disease, condition or clinical application.

This post will:

  • Give examples of the wide variety of people who have noted the difficulty of securing support for attempts at breakthrough fundamental science in the current system.
  • Discuss what the roots of this “gap” might be.

Comments about breakthrough fundamental science

Many of the conversations we’ve had, as I’ve interpreted them, have stressed the difficulty of securing support for attempts at breakthrough fundamental science in the current system. It has been a common theme in discussions with relatively junior scientists we interviewed as potential advisors (off the record), including those who serve as our advisors now. It has also been emphasized by a number of very senior scientists with substantial credentials and authority. Some examples of the latter follow. Quotes are not verbatim; they are taken from our public conversation notes, which paraphrase the points made by the speaker.

Susan Desmond-Hellmann, current CEO of the Gates Foundation (then Chancellor of UCSF, and formerly president of product development at Genentech):

The NIH faces a large number of applicants for a relatively small number of grants. Its current methods for selecting recipients have difficulty ensuring fairness and reliable support for good scientists. In addition, these methods are likely biased toward incremental and established research over higher-risk, higher-reward research. It is particularly difficult for young researchers to secure adequate funding.

Neal Lane, currently Provost at Rice University, who has headed both the National Science Foundation and the White House’s Office of Science and Technology Policy:

The National Science Foundation (NSF), the National Institutes of Health (NIH), as well as the Department of Energy’s Office of Science, NASA and other agencies support basic research. But, increasingly, these agencies have been challenged to ensure that the research they support has potential practical benefits for the country. As a result, support for bold, sometimes called “high risk,” research has suffered. There has been a growing pressure to identify outcomes, and that discourages potentially path-breaking investigations.

Bruce Alberts, currently of UCSF, formerly Editor-in-Chief of Science and President of the National Academy of Sciences:

The current funding system for scientific research is biased toward supporting short-term, translational research (research that looks for practical applications of basic science) … I am painfully aware of the huge gaps in our understanding of fundamental life processes. Many great opportunities to advance this understanding through basic research in biology are not receiving funding from the National Institutes of Health (NIH), the largest funder of biomedical research. Changing incentives to more effectively recognize the critical importance of such understanding would have a strong effect on researchers’ choices and help produce more outstanding basic research.

Robert Tjian and Cheryl Moore, President and Chief Operating Officer of Howard Hughes Medical Institute (which I believe is the largest private science funder in the U.S.):

One of the major issues in biomedical research is that biology is not understood well enough to get to the root of problems … There’s a lot of pressure to push science in applied or clinical directions before it’s ready, which can result in money being poorly spent.

A paper in PNAS co-authored by Bruce Alberts (listed above), Harold Varmus (former Director of the National Cancer Institute and former Director of the National Institutes of Health) and others:

The system now favors those who can guarantee results rather than those with potentially path-breaking ideas that, by definition, cannot promise success. Young investigators are discouraged from departing too far from their postdoctoral work, when they should instead be posing new questions and inventing new approaches. Seasoned investigators are inclined to stick to their tried-and-true formulas for success rather than explore new fields … Many surprising discoveries, powerful research tools, and important medical benefits have arisen from efforts to decipher complex biological phenomena in model organisms. In a climate that discourages such work by emphasizing short-term goals, scientific progress will inevitably be slowed, and revolutionary findings will be deferred (3).

A few notes based on my recollections, though largely not captured in public records:

  • My recollection is that many were particularly energized about the difficulty of funding research aiming to improve tools and techniques, which I discussed in a previous post (see classification (A) in that post).
  • Nobody claimed that the number of research projects attempting breakthrough fundamental science is small, only that there ought to be far more given its high importance.
  • In addition, it’s worth noting that breakthrough fundamental science is often greatly rewarded in the long run; for example, many relevant Nobel Prizes seem to be for work that broadly fits in this category. (That is to say, many of the Prizes seem to have gone to work with broad applications for understanding biological processes in general, but no obvious application to a particular disease, condition or applied medical technology.) But having a chance at a Nobel Prize decades down the line isn’t necessarily helpful for a scientist seeking to do breakthrough fundamental research; the work needs to be funded today in order to be practicable.
  • The concept of “risk” is somewhat ambiguous in some of the quotes above. It could refer to the risk that a project will fail on its own terms (e.g. failing to answer its own question or effectively test its own hypothesis). It could also refer to the uncertainty involved in the applications of particular research. My sense is that most attempts at breakthrough fundamental science are risky in both senses, but particularly the second. Regarding the first – it seems likely that attempts to make major breakthroughs will rarely be able to stick with familiar approaches and be assured of useful results. Regarding the second – when one’s goal is to achieve major insights useful for understanding biological processes in general, it may often be difficult to say in advance just what sorts of clinical applications these insights will have. This could be a problem for funders focused on the most direct, high-confidence paths to new drugs, diagnostics and other medical technologies.

What is the underlying dynamic?

As noted above, there are a good number of people voicing support for the idea of supporting more attempts at breakthrough fundamental science. However, the problem arguably derives from factors that are fairly deep and difficult to change.

The PNAS paper mentioned in the previous section lists multiple “systemic flaws” in the current system. The one it focuses most on is increasing competitiveness between scientists, brought about by an imbalance between supply and demand for academic positions:

There is now a severe imbalance between the dollars available for research and the still-growing scientific community in the United States. This imbalance has created a hypercompetitive atmosphere in which scientific productivity is reduced and promising careers are threatened … Now that the percentage of NIH grant applications that can be funded has fallen from around 30% into the low teens, biomedical scientists are spending far too much of their time writing and revising grant applications and far too little thinking about science and conducting experiments. The low success rates have induced conservative, short-term thinking in applicants, reviewers, and funders.

As this chart from the NIH shows, success rates for research project grants have fallen from ~30% to just under 20% since 1998, and the change has been driven by a growing number of applicants for a fairly constant number of annual awards. One might imagine that more applicants and more competitiveness would be a good thing, if the process consistently funded the most promising projects. However, my impression is that the NIH grant review process isn’t necessarily optimized for identifying the most promising projects and applicants, as opposed to simply eliminating the least promising ones. Thus, it may be poorly suited to such a high level of competitiveness. For example, grant applications are given scores on a 1-9 scale by all reviewers, and then ultimately funded (or not) based on the average; this arguably privileges incremental science (likely to appear clearly worthwhile to large numbers of people) over higher-risk science (which might appear extremely promising to some and not at all promising to others).
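As a toy illustration of this scoring dynamic (the scores below are hypothetical, not real NIH review data), consider how averaging treats a consensus proposal versus a polarizing one when only the top of the ranking gets funded:

```python
# Hypothetical reviewer scores on NIH's 1-9 scale, where 1 is the best score.
# Averaging favors the proposal everyone agrees is solid over the one that some
# reviewers see as exceptional and others see as unlikely to work.

def average(scores):
    return sum(scores) / len(scores)

incremental = [3, 3, 3, 3, 3, 3]   # broad agreement: clearly worthwhile, not exciting
high_risk   = [1, 1, 1, 6, 7, 8]   # split panel: potential breakthrough vs. little merit

print("incremental proposal average:", average(incremental))  # 3.0
print("high-risk proposal average:  ", average(high_risk))    # 4.0

# With success rates under 20%, only applications near the top of the ranking are
# funded, so the lower (better) consensus average wins even though half the panel
# rated the risky proposal as exceptional.
```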

The PNAS paper lists multiple problems brought about by high competitiveness, in addition to the risk aversion discussed above:

  • It argues that competing for publication in top journals has caused scientists to “rush into print, cut corners, exaggerate their findings, and overstate the significance of their work”, contributing to issues with reproducibility that we’ve written about before.
  • It points to the increasing domination of the field by later-career scientists, and states that early-career scientists now face poor prospects and long time frames for getting substantial support for their research. I believe this sort of dynamic risks driving out the most promising scientists (who may have other career options) while retaining less promising ones; it also risks mis-allocating support, by funding scientists whose most productive years are behind rather than ahead of them.
  • It discusses the “crippling demands on a scientist’s time” brought on by the increasing difficulty of grant applications (it also cites an increasing regulatory burden as being relevant here). It argues that in addition to reducing time for scientific reflection, the increasing administrative burdens on senior scientists reduce the time they have available for peer review, which worsens the quality of the peer review process.
  • It explicitly argues that there is excessive interest in translational science, and that this is another “manifestation of [a] shift to short-term thinking,” which in turn may be another outgrowth of increased competitiveness.

In my view, all of the above represent different aspects of distortion caused by the disconnect between what science is most valuable and what science is most straightforward to evaluate. Breakthrough fundamental science is characterized by being highly innovative (making it difficult to form a consistent framework for judging it), and by having far-in-the-future and difficult-to-predict ultimate impacts. It may be possible for top scientists to evaluate it using their judgment and intuitions, but any system that seeks consistent, well-defined, practically important outcome metrics will likely struggle to do so. Instead, such a system risks rewarding those who can game it, as well as those who can show more quick and consistent (even if ultimately less important) results.

It’s worth noting that the criticism of “rewarding the measurable, rather than the important” has often been leveled at GiveWell’s work on top charities. I have long felt that focusing on the measurable is quite appropriate when (a) serving individual donors seeking somewhere they can give without having to invest a lot of time in learning; (b) working on issues related to global health and development, where higher-risk/higher-reward approaches have a history of coming up empty. However, the world of scientific research is very different. In this environment, it seems to me that insisting on accountability to meaningful short-term metrics could easily do more harm than good.

Should we focus on funding breakthrough fundamental science?

The idea that breakthrough fundamental science is under-supported makes a good deal of sense to me, and I perceive a great deal of consensus on this point among scientists. However, evaluating – and implementing – the goal of “funding breakthrough fundamental science” is fraught with challenges. Defining just what constitutes potential “breakthrough fundamental science” seems to be extremely difficult and to require a good deal of scientific expertise and judgment. It would be a major challenge to estimate how much is being spent, vs. how much “should be” spent, on potential breakthrough fundamental science – far more so than with neglected goals, and more so even than with translational research.

In addition, it certainly isn’t the case that this type of work is highly neglected. After all, it appears that breakthrough fundamental science is well-represented among Nobel Prize winners, and as the quotes above show, it is a major concern of some very large funders. It’s highly possible that there are still far too few attempts at breakthrough fundamental science, but it’s far from clear how to determine this.

At this time, our biggest focus is on trying to improve our general capacity to investigate scientific research, which we’re working on as described previously. We’re also trying to get more context on the history of major breakthroughs in biomedical sciences, and the role of different kinds of science in these breakthroughs. We will hopefully be better equipped for more investigation of breakthrough fundamental science after we’ve made more progress on those fronts.

One more note: while the “breakthrough fundamental science” idea is often presented as a contrast to focusing on “translational research”, the two are not mutually exclusive. It could easily be the case that the existing system under-supports both, while focusing most of its resources on a particular kind of research that fits in neither category. My current picture is that, when looking at the stages of research I laid out earlier, the existing system is quite focused on stage (C) – identifying the cause of particular diseases and conditions of interest – while potentially underinvesting in multiple other stages (some of which might be classified as “breakthrough fundamental science” and some of which might be classified as “translational research”).

GiveWell’s money moved and web traffic in 2014

This is the final post in a series of six focused on our self-evaluation and future plans.

This post lays out highlights from our metrics report for 2014. For more detail, see our full metrics report (PDF). Note that we report on “metrics years” that run from February – January; for example, our 2014 data cover February 1, 2014 through January 31, 2015.

  1. In 2014, GiveWell influenced charitable giving in several ways. The following table summarizes the money that we were able to track.
    Table_Summary.png

  2. In 2014, GiveWell tracked $27.8 million in money moved to our recommended charities, about 60% more than in 2013. This total includes $14.8 million from Good Ventures (up from $9.3 million) and $13.0 million from other donors (up from $8.2 million).
    Chart_MoneyMoved.png

  3. As part of our work on the Open Philanthropy Project, we advised Good Ventures to make grants totaling $8.1 million (this was in addition to Good Ventures’ support for our top charities and standout charities). In addition, the Laura and John Arnold Foundation provided a commitment of up to $6 million to the Meta-Research Innovation Center at Stanford after we connected these organizations.
  4. Our total expenses were $1.8 million in 2014. We estimate that about half supported our traditional top charity work and about half supported the Open Philanthropy Project. Our expenses increased from about $960,000 in 2013 and about $560,000 in 2012 as the size of our staff grew.
  5. Our four top charities received the majority of the $28.1 million tracked to our recommended charities. Our four standout charities received about $1.7 million total (mostly from Good Ventures).

    Table_ByCharity.png

  6. In 2014, the number of donors and amount donated increased across each donor size category. Last year, we discussed a substantial decrease among the largest donors from 2012, which we expected might be somewhat temporary. While that category rebounded strongly, it was driven by donors who gave $50,000 or more to our recommended charities for the first time.

    Table_ByDonorSize.png

  7. In 2014, the total number of donors giving to our recommended charities or to GiveWell unrestricted did not grow significantly (up 9% to about 9,300). This is largely due to many new donors in 2013 (particularly donors who gave less than $1,000) not giving again in 2014.

    Table_Retention.png
    Our retention was stronger among donors who gave larger amounts or who first gave to our recommendations prior to 2013. Of larger donors (those who gave $10,000 or more in either of the last two years), about 80% who gave in 2013 gave again in 2014.

    Table_Retention10k.png

  8. Prior to 2013, GiveWell relied on a small number of donors to provide unrestricted support for our operations. Over the last two years, we’ve asked donors for more operational support. In 2014, we raised $3.0 million, up from $1.8 million in 2013 and $0.8 million in 2012. Four institutions and the nine largest individual donors contributed about 75% of GiveWell’s funding in 2014.

    Table_Unrestricted.png

  9. We continued to collect information on our donors. We found the picture of our 2014 donors to be broadly consistent with previous information. Based on reports from donors who gave $2,000 or more, we found:
    • The most common ways donors found us were via Peter Singer and personal referrals.
    • Many of the donors are under 40 and work in technology and finance.
  10. Excluding AdWords, unique visitors to our website increased by 9% in 2014 compared to 2013. Including AdWords, unique visitors decreased by 11%. In late 2013, we removed some AdWords campaigns that were driving substantial traffic but appeared to be largely resulting in visitors who were not finding what they were looking for (as evidenced by short visit duration and high bounce rates). Traffic directly to our website increased, but traffic from other non-paid sources was basically unchanged.

    Chart_WebTraffic.png

    Table_WebSources.png

  11. In the past, we compared GiveWell’s online money moved to that of Charity Navigator and GuideStar. This year, we did not find data from Charity Navigator and GuideStar, so we do not have an updated comparison.

For more detail, see our full metrics report (PDF).

Translational science and the “valley of death”

As we’ve looked for potential gaps in the world of scientific research funding – focusing for now on life sciences – we’ve come across many suggestions to look at the “valley of death” that sits between traditional academic research and industry research. Speaking very broadly, the basic idea is that:

  • The world of life sciences research has become increasingly complex, with a widening gulf between traditional academic research – which aims at uncovering fundamental insights – and industry work, which is focused on developing drugs and other marketable products.
  • There is a lot of work that could be done on figuring out how to apply insights from academia and therefore close this gap. However, this “translational science” tends not to fit well within the traditional academic ecosystem – perhaps because it focuses on useful applications rather than on intellectual novelty – and so may be under-supported.
  • As a result, the world is becoming increasingly inefficient at translating basic research into concrete applications, and this explains why drug development has seemingly been slowing despite increasing expenditures on biomedical research (though recent data suggests that this trend may be changing).

For examples of this basic argument, see Translational Research: Crossing the Valley of Death (Nature 2008) and Helping New Drugs Out of Research’s ‘Valley of Death’ (New York Times 2011). In particular, the Nature article contains a pair of charts giving a rough illustration of two basic trends that may represent the causes and consequences of the growing “valley of death”: (a) rising government expenditures on research, increasingly supporting pure academics as opposed to medical practitioners, and (b) declining drug output despite rising pharmaceutical R&D expenditures. (As noted above, more recent data may indicate that these trends are changing.)

We find this theory extremely challenging to assess for several reasons. One is that there doesn’t appear to be any one clear definition of “translational science” or of the “valley of death,” and some “translational science” seems quite well-suited to industry – to the point where it’s not entirely clear why we should think of it as a candidate for philanthropic or government funding at all. Another is that there has been growing interest in the issue over the last decade, including the 2011 debut of NCATS, a new institute at NIH dedicated to translational science; it’s hard to say whether translational science still represents one of the main “gaps” in the existing system.

Finally, there are other strong explanations for the observed decline in pharmaceutical output. The most comprehensive article I’ve seen on the subject names multiple possible explanations for the decline, many having to do with regulatory issues as well as the inherent challenges of improving on already-available drugs. The “valley of death,” as outlined above, doesn’t figure prominently in its account.

I am skeptical of some of the arguments people have made for the importance of translational science. These arguments often do not distinguish between different possible definitions of “translational science,” and often do not make a strong case that nonprofit funding (as opposed to industry funding) is what’s needed. In addition, it seems quite possible to me that the goals of promoting “translational science” might be better served by policy change (on regulatory and intellectual property law, for example) than by scientific research. With that said, I think the idea of translational science is worth keeping in mind, and that certain kinds of research in this category could be under-invested in because they do not fit cleanly into an academic or for-profit framework.

The rest of this post will:

  • List several different definitions of “translational science” that I’ve come across, noting that in some cases it isn’t clear why a proposed sort of research is a fit for the nonprofit as opposed to for-profit world.
  • Briefly discuss the recent creation of the U.S. government’s National Center for Advancing Translational Sciences (NCATS).
  • List some other potential reasons for the decline in pharmaceutical output, which may point to solutions outside of “translational science.”

Five different definitions of “translational science”

The Nature article on translational science states, “Ask ten people what translational research means and you’re likely to get ten different answers.” Here I give five definitions I’ve come across that seem quite distinct from each other – particularly in terms of what they imply about the appropriateness of nonprofit funding.

1. Not-for-profit preclinical research. “Preclinical research” here refers to categories D-E (mostly E) from my previous post on different phases of scientific research. A possible new medical treatment is often first tested “in vitro” – in a simplified environment, where researchers can isolate how it works. (For example, seeing whether a chemical can kill isolated parasites in a dish.) But ultimately, a treatment’s value depends on how it interacts with the complex biology of the human body, and whether its benefits outweigh its side effects. Since testing with human subjects is extremely expensive and time-consuming, it can be valuable to first test and refine possible treatments in other ways, including animal testing.

The idea of carrying out this kind of work outside of industry – both in vitro screening to identify potential new medical technologies, and other tests to improve estimates of their promise – appears to be one of the most common definitions of translational research.


  • The Nature article states “For basic researchers clutching a new prospective drug, it might involve medicinal chemistry along with the animal tests and reams of paperwork required to enter a first clinical trial … In some sense much translational research is just rebranding — clinical R&D by a different name.”
  • The NYT article states, “For a discovery to reach the threshold where a pharmaceutical company will move it forward what’s needed is called “translational” research — research that validates targets and reduces the risk. This involves things like replicating and standardizing studies, testing chemicals (potentially millions) against targets, and if something produces a desired reaction, modifying compounds or varying concentration levels to balance efficacy and safety (usually in rats). It is repetitive, time consuming work — often described as “grunt work.” It’s vital for developing cures, but it’s not the kind of research that will advance the career of a young scientist in a university setting.”
  • The examples of translational research listed by the Science: Translational Medicine journal seem to fit this basic framework, as does much of the activity described in the most recent annual report for NCATS (the recently created government institute focused on translational science).

I don’t feel that there’s a clear case for supporting this kind of work with nonprofit (government or philanthropic) funds. Unlike much basic research, this sort of work seems generally to have a very specific medical application in mind, and I believe that companies are often able to monetize the value created by new technologies they develop (especially drugs). Therefore, when looking at this kind of “translational science,” I think it is fair to ask: “If this research is generating more expected value than it costs, why isn’t industry investing in it? Why the need for nonprofit funds?”

There are a few possible answers. One is that this kind of research may have positive expected value, but it is too risky for any one investor to take on – even the large companies that consider investing in it. This may be true, but I’ve rarely seen it spelled out by comparing the level of risk in particular kinds of research to the level of risk that various industry players are likely to bear. In addition, if risk is the key issue, this doesn’t necessarily call for a nonprofit solution. An economics-and-finance-focused group at MIT has proposed that a large enough for-profit fund – perhaps made possible via financial engineering – could result in much more investment in this type of research. This group appears to be working on a collaboration with NCATS. I am unsure about whether (and if so, for what diseases) financial engineering could ever turn a set of biomedical research investments (which I believe will generally have fairly correlated odds of success) into a high-grade-bond-quality investment, but I think it is an interesting approach.
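To make the correlation point concrete, here is a minimal Monte Carlo sketch – not the MIT group’s model, just a toy one-factor simulation with made-up parameters – of a hypothetical portfolio of drug-development programs. Each program costs one unit and pays a fixed multiple if it succeeds, and successes share a single common risk factor. The takeaway is qualitative: with independent programs a large portfolio rarely loses money, but as the assumed correlation rises, pooling does much less to tame the risk.

```python
# Toy one-factor simulation of a pooled drug-development fund.
# All parameters (number of programs, success rate, payoff multiple, correlation)
# are hypothetical, chosen only to illustrate the effect of correlated outcomes.
import numpy as np
from statistics import NormalDist

def portfolio_loss_probability(n_projects, p_success, payoff_multiple, rho,
                               n_trials=100_000, seed=0):
    """Estimate the probability that the portfolio returns less than its cost.

    Each project costs 1 unit and pays `payoff_multiple` on success, 0 on failure.
    Successes are correlated through a single shared factor with correlation `rho`.
    """
    rng = np.random.default_rng(seed)
    threshold = NormalDist().inv_cdf(p_success)          # latent cutoff so P(success) = p_success
    common = rng.standard_normal((n_trials, 1))          # shared factor, one draw per simulated "world"
    idio = rng.standard_normal((n_trials, n_projects))   # project-specific factors
    latent = np.sqrt(rho) * common + np.sqrt(1 - rho) * idio
    successes = (latent < threshold).sum(axis=1)         # successes in each simulated world
    portfolio_return = successes * payoff_multiple / n_projects
    return (portfolio_return < 1.0).mean()               # share of worlds where the fund loses money

# Hypothetical numbers: 150 programs, a 10% success rate, and a 20x payoff on
# success (so the expected return is 2x cost). Loss probability rises sharply
# with correlation.
for rho in [0.0, 0.2, 0.5]:
    print(f"correlation {rho}: P(fund loses money) = "
          f"{portfolio_loss_probability(150, 0.10, 20, rho):.2f}")
```

Under these made-up numbers, the independent case loses money only in rare bad draws, while even moderate correlation makes losing outcomes common – which is why the degree of correlation across programs seems central to whether such a fund could ever approach bond-like reliability.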

There are other possible answers to the question. Perhaps industry can’t fully monetize the benefits its products bring, for reasons including the fact that (a) there may be many beneficiaries who can’t afford to pay (and don’t have insurance for paying) full price; (b) patents on medical products eventually expire. Taking existing health care and intellectual property law as a given, this could serve as some defense of investing nonprofit funds in “industry-style” research. I haven’t explicitly seen this argument made anywhere, except in cases where a disease has a clearly disproportionate impact on very low-income people.

In my limited readings on translational science, I’ve felt that this basic issue – the question of why we ought to support research with nonprofit funds when it appears to be a fairly good fit for industry – is rarely addressed.

2. Research on public goods – such as new tools and techniques – for preclinical research. The 2012-2013 annual report from NCATS cites several projects aimed at developing generally useful tools and insights that might be taken up by industry for a broad variety of purposes – for example, improving general methods for predicting how toxic a drug will end up being (page 7). In cases where such research aims to release public insights that others can build on, the case for a nonprofit model seems stronger than with the above category (targeted preclinical work with more specific aims).

3. Improving communication between clinical and academic professionals, via multidisciplinary groups as well as multidisciplinary career tracks. The idea here is that academics might do more useful research if they had more observations about how medical care works in practice – not only in terms of understanding the greatest needs, but also in terms of potentially drawing scientific inspiration from observing the effects of treatments on patients. It could be argued that there were more medical breakthroughs in the past, before academic biology and clinical medicine became as separated as they are today. A related idea is that it might be productive to provide academics with more support in understanding market demand for the kinds of technologies they’re working toward, via market research, competition analysis, etc.

The Nature article states,

Back in the 1950s and 60s, basic and clinical research were fairly tightly linked in agencies such as the NIH. Medical research was largely done by physician–scientists who also treated patients. That changed with the explosion of molecular biology in the 1970s. Clinical and basic research started to separate, and biomedical research emerged as a discipline in its own right, with its own training … Science and innovation have become too complex for any nostalgic return to the physician–scientist on their own as the motor of health research. Reinventing that culture is therefore the focus of the CTSCs [CTSCs are centers supported by NCATS] in the form of larger, multidisciplinary groups, including both basic scientists and clinicians, but also bioinformaticians, statisticians, engineers and industry experts. Zerhouni says he expects them to be breeding grounds for a new corps of researchers who will effectively stand on the bridge and help others across.

This issue was a major focus of a 2000 roundtable on clinical research as well.

4. Conducting academic research in the “style” of industry research. The NYT article highlights research-focused nonprofits that are “intensely goal-directed and collaborative; they see the creation of new cures as a process that needs to be managed; and they bring a sense of urgency to the task.” The Nature article mentions that CTSCs (the same NCATS-supported centers discussed above) will evaluate scientists “with business techniques, such as milestones and the ability to work in multidisciplinary groups, rather than by their publications alone.” The focus on collaboration and setting specific goals seems conceptually distinct from a focus on the preclinical phases of research, though I’ve generally seen the two side by side in discussions of translational science.

5. Supporting and improving clinical trials. Clinical trials (category F from my previous post on different phases of scientific research) are generally the most expensive part of developing new medical technologies, and they are traditionally paid for mostly by industry. NCATS reports (page 10) working to improve their cost-effectiveness and usefulness in a variety of ways, including improving data sharing and recruitment of participants: “investigators work together on data sharing, multisite trial regulatory hurdles, patient recruitment, communication and other functional areas of research to enhance the efficiency and quality of clinical and translational research … the University of California Research eXchange (UC ReX) Data Explorer is a secure, online system that enables cross-institution queries of clinical aggregate data from 12 million de-identified patient records derived from patient care activities.”

The recent creation of NCATS

The National Center for Advancing Translational Sciences (NCATS) was established in December 2011, making it “the newest of 27 Institutes and Centers (ICs) at the National Institutes of Health (NIH).” Its annual budget is in the range of $600 million (page 4). Going over its 2012-2013 annual report, I note quite a broad variety of activities, seemingly including all five of the categories described above (note that it spends over $400 million per year (page 5) on clinical research centers, which I believe are the same as the centers referred to under #3 and #4 from the previous section). NCATS also appears to engage in attempting to improve policy (e.g., regulation and intellectual property law – see page 22). It appears to pay special attention to rare diseases (pages 13-16), though the reasons for this are not obvious to me.

It appears to me that the creation of NCATS was met with some negative reaction from the scientific community, as evidenced by three posts (1, 2, 3) by chemist Derek Lowe. The negative reaction appears to be based partly on a perceived vagueness of mission and partly on fears of diverting funding from other science.

Most discussion I’ve seen of the “valley of death” and need for translational science pre-dates the creation of NCATS. It is unclear to what extent the creation of NCATS has addressed the relevant gaps.

I should also note that there are longer-running NIH mechanisms for supporting translational science, such as SBIR and STTR grants for “domestic small businesses [that] engage in R&D that has a strong potential for technology commercialization.”

Why has pharmaceutical productivity been declining in recent years?

Advocates of translational science often point to the seeming paradox of declining pharmaceutical productivity despite an ever-growing world of academic research (example). It appears that the decline in productivity has been real, and concerning (though there is also preliminary data that the situation may be changing). However, the decline has multiple possible explanations. The most useful-seeming paper I’ve seen on this topic is Scannell et al. 2012, and I highly recommend it to those interested in the subject. A brief summary:

  • Over the past 60 years, “R&D efficiency, measured simply in terms of the number of new drugs brought to market by the global biotechnology and pharmaceutical industries per billion US dollars of R&D spending, has declined fairly steadily.” The authors call this “Eroom’s law” (Moore’s Law reversed).
  • The decline has occurred despite major improvements in efficiency on many fronts, from better understanding of biology to more efficient methods for screening large numbers of potential drugs. The authors are skeptical that there is any easy fix, noting that many potential fixes have been explored. They believe the magnitude and consistency of the decline in productivity “indicates that powerful forces have outweighed scientific, technical and managerial improvements over the past 60 years, and/or that some of the improvements have been less ‘improving’ than commonly thought.”
  • One of the major explanations the authors offer is the “better-than-the-Beatles problem”: each potential new drug has to compete with the best drugs developed to date in order to justify its development. It has to compete in clinical trials (making the trials challenging and expensive), and it has to compete for patients (making it hard to recoup revenue). The authors list some classes of drugs that “could have been blockbusters” 15 years ago, but today are not worth the costs and risks of development because there are existing drugs that are probably nearly as good.
  • The authors also hypothesize that drug development has transitioned to a fundamentally different new kind of approach, and that this approach – while superficially seeming clearly superior – may actually be inferior. In the past, drug development consisted largely of testing a relatively small number of potential drugs in animals (and humans), and observing results via trial-and-error. Today, there are more attempts to logically segment the process: for example, it is common to first identify a biological “target” via academic research, then look for compounds that do an outstanding job binding to the target in a lab environment, and only then to move on to animal/human trials. The authors believe that the old process may in fact have been more efficient (their arguments are somewhat complex and I do not summarize them here). It’s worth noting that if true, this hypothesis calls for a different approach to drug development, but does not necessarily call for “translational science” as defined above.
  • Many of the other explanations offered by the authors have to do with increasingly cautious regulation, which is likely responsible for longer, more expensive, more challenging clinical trials. From my limited readings on the history of biomedical research, it seems to me that getting drugs tested and approved used to be much easier than it is today, and that many key experiments were highly speculative and dangerous; such experiments would have been much more difficult to carry out with today’s regulation and social norms.

If the authors were right, it wouldn’t necessarily mean translational science isn’t valuable. It does seem true that academic biology has gotten far more complex, and translational science may be crucial in taking advantage of improved basic science and thereby improving pharmaceutical productivity. But I believe it is far from clear that translational challenges are the source of the productivity decline we’ve seen.