Update on GiveWell’s web traffic / money moved: Q2 2014

In addition to evaluating other charities, GiveWell publishes substantial evaluations of its own work, from the quality of its research to its impact on donations. We publish quarterly updates on two key metrics: (a) donations to top charities and (b) web traffic.

The table and chart below present basic information about our growth in money moved and web traffic in the first half of 2014 (note 1).

Money moved: first two quarters

Growth in money moved, as measured by donations from donors giving less than $5,000 per year, slowed in the second quarter of 2014 compared with the first quarter, and was substantially weaker than growth in the first two quarters of 2013.

The total amount of money we move is driven by a relatively small number of large donors. These donors tend to give in December, and we don’t think we have accurate ways of predicting future large gifts (note 2). We therefore show growth among small donors, the portion of our money moved about which we think we have meaningful information at this point in the year.

Web traffic through July 2014

We show web analytics data from two sources: Clicky and Google Analytics. The data on visitors to our website differs between the two sources. We do not know the cause of the discrepancy (though a volunteer with a relevant technical background examined the data for us in an attempt to find it). The full data set is available at this spreadsheet. (Note on how we count unique visitors.)

Traffic from AdWords decreased in the first two quarters because, in early 2014, we removed ads on searches that we determined were not driving high-quality traffic to our site (i.e., searches with very high bounce rates and very low pages per visit).

Data in the chart below is an average of Clicky and Google Analytics data, except for those months for which we only have data (or reliable data) from one source (see full data spreadsheet for details).
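
To make the combination rule concrete, here is a minimal sketch in Python (illustrative only, not GiveWell’s actual tooling; the function name and sample figures are hypothetical): average the two sources for months where both report usable data, and fall back to whichever source is available otherwise.

```python
from statistics import mean

def combined_visitors(clicky, google_analytics):
    """Average Clicky and Google Analytics monthly visitor counts,
    falling back to a single source for months where only one
    reports (reliable) data. Each argument maps month -> count,
    with None standing in for missing or unreliable data."""
    months = sorted(set(clicky) | set(google_analytics))
    combined = {}
    for month in months:
        values = [v for v in (clicky.get(month), google_analytics.get(month))
                  if v is not None]
        if values:
            combined[month] = mean(values)
    return combined

# Hypothetical figures, for illustration only.
clicky = {"2014-05": 80_000, "2014-06": None, "2014-07": 90_000}
ga = {"2014-05": 95_000, "2014-06": 88_000, "2014-07": 92_000}
print(combined_visitors(clicky, ga))
# {'2014-05': 87500, '2014-06': 88000, '2014-07': 91000}
```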

Slowing growth?

The above indicates that our growth slowed significantly in 2014 relative to last year (and previous years). It is possible that the numbers above are affected by (a) the fact that growth in the second quarter of 2013 was particularly strong due to a series of media mentions (as we previously noted) or (b) differences in the way that our recommended charities track donations (we would guess that this could explain a difference of a few hundred donors). Our guess is that both of these factors contribute but do not fully explain the slower growth.


Note 1: Since our 2012 annual metrics report we have shifted to a reporting year that starts on February 1, rather than January 1, in order to better capture year-on-year growth in the peak giving months of December and January. Therefore metrics for the “first two quarters” reported here are for February through July.

Note 2: In total, GiveWell donors have directed $2.41 million to our top charities this year, compared with $1.46 million at this point in 2013. For the reason described above, we don’t find this number to be particularly meaningful at this time of year.

Note 3: We count unique visitors over a period as the sum of monthly unique visitors. In other words, if the same person visits the site multiple times in a calendar month, they are counted once. If they visit in multiple months, they are counted once per month.

Google Analytics provides ‘unique visitors by traffic source’ while Clicky provides only ‘visitors by traffic source.’ For that reason, we primarily use Google Analytics data in the calculations of ‘unique visitors ex-AdWords’ for both the Clicky and Google Analytics rows of the table. See the full data spreadsheet, sheets Data and Summary, for details.
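
As a concrete restatement of the counting rule in Note 3, here is a minimal sketch (illustrative only; the visitor identifiers and months are hypothetical, and this says nothing about how Clicky or Google Analytics compute their own figures):

```python
from collections import defaultdict

def unique_visitors_over_period(visits):
    """Count unique visitors over a period as the sum of monthly
    unique visitors: repeat visits within a calendar month count
    once, and visits in different months count once per month.
    `visits` is an iterable of (visitor_id, month) pairs."""
    visitors_by_month = defaultdict(set)
    for visitor_id, month in visits:
        visitors_by_month[month].add(visitor_id)
    return sum(len(ids) for ids in visitors_by_month.values())

# One person visiting twice in March and once in April counts twice in total.
print(unique_visitors_over_period([
    ("abc123", "2014-03"),
    ("abc123", "2014-03"),
    ("abc123", "2014-04"),
]))  # 2
```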

 

Thoughts on the End of Hewlett’s Nonprofit Marketplace Initiative

Note: we sent a pre-publication draft of this post to multiple people who had been involved in the Hewlett program discussed here. A response from the Hewlett Foundation is available in the comments of this post; a response from Jacob Harold is available on the GuideStar blog.

Last April, the Chronicle of Philanthropy covered the decision by the William and Flora Hewlett Foundation to end its Nonprofit Marketplace Initiative, which in 2008 was the source of GiveWell’s first grant from a foundation, and has continued to be a source of substantial support for GiveWell’s operations in the years since. The Hewlett Foundation has been unusually transparent about the thinking behind its decision, and we have unusual context on the program as one of its grantees, so we find it worthwhile to reflect on this episode – how we perceived the Nonprofit Marketplace Initiative, its strengths and weaknesses, and the decision to end it.

The Nonprofit Marketplace Initiative aimed to improve the giving of individual donors. Hewlett states, “This Initiative’s goal was that by 2015, ten percent of individual philanthropic donations in the US (or $20 billion), would be influenced by meaningful, high-quality information about nonprofit organizations’ performance.” Grantees included GiveWell, GuideStar, Charity Navigator, Philanthropedia and Great Nonprofits.

In short:

  • We believe that Hewlett’s Nonprofit Marketplace Initiative was a strong use of philanthropic funds. The program is reported to have spent a total of $12 million over 8 years, and we think its impact on GiveWell alone will likely ultimately influence enough donations to easily justify that expenditure.
  • We believe that ending this program may have been the right decision. With that said, we disagree with the specific reasoning Hewlett has given, for the same reason that we disagreed with its strategic plan while the program was running. We believe that Hewlett’s goal of influencing 10% of donors was unrealistic and unnecessary, at least over the time frame in question. We believe the disagreement may reflect a broader difference in how we see the yardstick by which a philanthropic program ought to be evaluated. 
  • We are very positive on how Hewlett ended the program. Great care was taken to end it in a way that gave grantees ample advance notice and aimed to avoid disruptive transitions. We also applaud Hewlett’s decision to publish its reasoning in ending the program and invite a public discussion, and we broadly feel that Hewlett is delivering on its stated intent to become a highly transparent grantmaker.

Our experience with the program

In 2008, Bill Meehan introduced us to Jacob Harold, who was then the Program Officer for Hewlett’s Nonprofit Marketplace Initiative program. Jacob met with us several times, getting to know us and the project. Late in 2008, we were invited to submit a proposal and were awarded a one-year, $100,000 grant. This grant was crucial for us. At the time, we had little to no name recognition, a major mistake on our record, and uncertainty about whether we’d be able to raise enough to continue operating. We were in the midst of a change of direction, after disappointing results from our first attempt at high-intensity outreach. We had determined that we needed to take a longer view and focus on research quality for the time being – and it was thanks to the support of Hewlett, among others, that we felt it was possible to do so. We benefited both from Hewlett’s financial support (which helped answer crucial questions about whether we’d be able to fund our plans at the time) and from Hewlett’s brand (being able to say we were a Hewlett grantee substantially improved our credibility and appeal in the eyes of many, something Hewlett was cognizant of).

Over the years, we continued to meet periodically with Jacob and to periodically submit grant proposals. For the most part, Hewlett continued to fund us at the level of $100,000 per year (there was one year where the support temporarily dropped to $60,000). As our audience and budget grew, this support became a smaller part of our revenue and became less crucial to us, but it remained quite valuable. Hewlett’s support reduced the amount of time we had to spend fundraising and worrying about sustainability, and increased the amount of time spent on core activities.

In addition to supporting us financially, Hewlett sought to integrate our work into its own vision for the “nonprofit marketplace.” Jacob encouraged us to attend convenings with other groups working on helping individual donors give effectively, such as Charity Navigator, GuideStar, Philanthropedia and Great Nonprofits (and we generally did so). He also discussed his vision for how impact would be achieved, and particularly emphasized the importance of working with portals and aggregators (such as GuideStar, where he now serves as CEO) that could pull together information from many different kinds of resources. He encouraged us to build an API in order to make aggregation easier, and saw aggregation as a more promising path than building our own website, brand and audience.

We disagreed with him on some of these points. We felt that his vision was overly specific, was too focused on reaching the “average” donor, and under-emphasized the promise of different organizations targeting different audiences in different ways. When the Hewlett-funded Money for Good study came out, we publicly disagreed with the common interpretation and argued that the most promising path for nonprofit evaluation groups is to target passionate niche audiences rather than focusing on the unrealistic (as both we and Money for Good saw it) goal of influencing 10%+ of all U.S. giving.

However, we never found Jacob or anyone else at Hewlett to be pushing Hewlett’s vision on us hard enough to cause problems. We certainly weighed Jacob’s encouragement when attending convenings and working on a partnership with GuideStar, but we were comfortable with the cost-benefit tradeoffs involved in these activities and didn’t undertake them solely to please a funder. We particularly valued some of the opportunities to get to know other organizations in our space. We didn’t build an API, and Hewlett didn’t pressure us to do so (its support continued).

All in all, our general feeling was that Hewlett was accomplishing substantial good via its relatively reliable, unrestricted funding, even though we disagreed with its strategy.

Hewlett’s reasoning for ending the program, and our take on it

In a response to the Chronicle of Philanthropy, Larry Kramer (Hewlett’s current President) wrote:

We launched NMI in 2006 with the objective of influencing 10% of individual donors to be more evidence-based in their giving, a goal we sought to achieve by making high-quality information available about nonprofit performance. Based on independent research and evaluation, we concluded we were not going to meet that goal. And because we are committed to being transparent about our work – both successes and failures – we openly shared our reasons for ending the initiative in a video and blog post on our web site.

Hewlett also states that staff transitions provided a good opportunity to reflect systematically on the initiative: between late 2012 and early 2013, Larry Kramer replaced Paul Brest as President, Fay Twersky became the first Director of the newly formed Effective Philanthropy Group, and Lindsay Louie replaced Jacob Harold in a slightly different program officer role.

We believe that ending this program may have been the right decision. With that said, we disagree with the specific reasoning Hewlett has given, for the same reason that we disagreed with its strategic plan while the program was running. We believe that the goal of influencing 10% of donors was unrealistic and unnecessary, at least over the time frame in question. We believe that this is a case in which a commitment to specific quantitative targets, and a specific strategy for getting there, was premature and did not make the program better.

Despite this, we believe that Hewlett succeeded in choosing an important problem to work on and in finding and funding promising groups working on the problem, and that it played a real role in the development of at least one organization (ours) that is poised to influence far more dollars than Hewlett spent on the program. For this reason, we think it would be reasonable to consider the program a success, though not necessarily something that should have been continued.

In short, we feel this program was an instance of good and successful philanthropy, and that it may indeed have been time to end it, but we disagree with the way the program framed and evaluated itself and the way Hewlett justified the end of the program.

How Hewlett ended the program

Hewlett took great care to end the program in a way that would not be overly disruptive for grantees. We were notified well in advance of the public announcement about the program’s end; we were able to ask questions and receive helpful answers; and our two-year grant was renewed as an “exit grant.” We were told that other grantees had been treated similarly. By clearly communicating its intent to end the program and committing “exit funding,” Hewlett ensured that we would have ample time to adjust for the loss of this revenue.

We also applaud Hewlett’s decision to publish its reasoning in ending the program and invite a public discussion.

A note on Hewlett’s transparency

Shortly after taking over as President of the Hewlett Foundation, Larry Kramer expressed his desire to further improve Hewlett’s transparency, and we think there has indeed been substantial progress. The public discussion of the end of the Nonprofit Marketplace Initiative represents some of this progress. In addition:

  • Hewlett’s relatively new blog is frequently updated and has given us a window into the day-to-day work and thoughts of its staff.
  • Hewlett recently held conference calls with open Q&A for grantees.

As a result, we believe Hewlett has become one of the easiest foundations to learn about and get a feel for from the outside. We think this is quite a positive development, and may write more in the future about what we’ve learned from examining Hewlett’s output.

Key takeaways

Hewlett’s vision of good philanthropy, at least in this case, seems to have involved setting extraordinarily ambitious and specific goals, laying out a plan to get there, and closing the program if the goals aren’t reached. By this measure, the Nonprofit Marketplace Initiative apparently failed (though Hewlett followed its principles by closing a program falling short of its goals).

Our vision for good philanthropy is that it finds problems worth working on (in terms of importance, tractability and uncrowdedness) and supports strong organizations to work on them, while ensuring that any “active” funding (restrictions, advice, requests of grantees) creates more value than it detracts. We think that specific quantitative goals are sometimes called for, but are more appropriate in domains where the background data is stronger and the course is easier to chart (as with our top charities). By our measure, we think the Nonprofit Marketplace Initiative was at least reasonably successful.

Recognizing this difference in the way we think about good philanthropy will help us to better understand Hewlett’s decisions going forward, and will give us a disagreement to reflect on as we move forward with our vision. We’re glad to have examined Hewlett’s thinking on this matter, and see the chance to do so as a benefit of Hewlett’s improved commitment to transparency.

A note on the role of Hewlett’s funding in our budget:

Because this post discusses Hewlett’s work in an evaluative manner, we think it’s worth being clear about the support we receive so that people may take into account how this may influence our content.

Hewlett has provided generous support to GiveWell since 2008. We hope that it will continue doing so even after the end of our current grant, depending on how our work and Hewlett’s evolve (our work on GiveWell Labs seems to us to be relevant to Hewlett’s work on encouraging transparency among major funders). We are currently projecting both expenses and revenues of over $1.5 million per year, and Hewlett’s support has historically been around $100,000 per year.

Our ongoing review of ICCIDD

The International Council for the Control of Iodine Deficiency Disorders Global Network (ICCIDD) advocates for and assists programs that fortify salt with iodine. Our preliminary work (writeup forthcoming) suggests that even moderate iodine deficiency can lead to impaired cognitive development.

ICCIDD tracks iodine deficiency around the world and encourages countries with iodine deficient populations to pass laws requiring iodization for all salt produced in and imported to the country. ICCIDD also provides – and helps countries find – general support and assistance for their iodization programs.

In February, we wrote that we were considering ICCIDD for a 2014 GiveWell top charity recommendation. We’ve now spent a considerable amount of time talking to and analyzing ICCIDD. This post shares what we’ve learned so far and what questions we’re planning to focus on throughout the rest of our investigation. (For more detail, see our detailed interim review.)

ICCIDD has successfully completed the first phase of our investigation process and we view it as a contender for a recommendation this year. We now plan (a) to make a $100,000 grant to ICCIDD (as part of our “top charity participation grants,” funded by Good Ventures) and (b) to continue our analysis to determine whether or not we should recommend ICCIDD to donors at the end of the year.

Reasons we prioritized ICCIDD

We prioritized ICCIDD because of our impression that salt iodization is backed by strong evidence of effectiveness and is highly cost-effective, and that ICCIDD has room for more funding.

The evidence of effectiveness for salt iodization is not fully straightforward – we plan to publish an intervention report with details before the end of the year – but multiple randomized controlled trials imply that reducing iodine deficiency in children leads to moderate (~3-4 points) gains in IQ.

We have yet to find well-documented assessments of the cost of iodization, but the estimates we have seen most commonly put it at approximately $0.10 per person reached.

Although iodization rates have increased dramatically over the past 20 years, significant deficiency still exists. ICCIDD publishes a scorecard showing countries’ iodine status; many fall significantly below the benchmark of 100 µg of iodine per liter of urine.

Questions we hope to answer in our ongoing analysis

What would have happened to iodization programs in ICCIDD’s absence?

Because ICCIDD is an advocacy/technical assistance organization (it does not directly implement iodization programs but advocates that others do so), it is difficult to assess its impact.

ICCIDD has provided us with several examples of countries in which it believes it played an essential role (some of which we discuss briefly in our interim review page), but we have not yet investigated these cases sufficiently to form a confident view about what role ICCIDD played and how crucial its contributions were to the program.

What role does ICCIDD play relative to other organizations that work on iodization?

A number of organizations support government and private-sector salt iodization programs, most notably UNICEF, the Global Alliance for Improved Nutrition (GAIN), and the Micronutrient Initiative.

We hope to better understand the roles each organization plays so that we can formulate a view about where donated funds are likely to have the greatest impact. (We’re considering the possibility that funds donated to any of them should be thought of as “supporting the international effort to support iodization,” and that the important question is assessing the combined costs and impacts of all four organizations.)

We are also considering GAIN for a 2014 GiveWell recommendation. We do not expect our decision about GAIN to affect the likelihood of ICCIDD receiving a recommendation.

Program monitoring

Surveys to assess iodine consumption and status are completed more than once a decade in most countries, and are usually conducted by country governments or UNICEF. We have yet to analyze these surveys carefully enough to know whether or not they provide a reliable assessment of the track record of iodization programs: i.e., do iodization programs lead to a reduction in iodine deficiency?

Room for more funding

We have seen strong evidence that ICCIDD is funding constrained. It told us that its staff members have, over the past few years, consistently submitted requests for funds significantly larger than it has been able to allocate. Additionally, ICCIDD lost what had been its largest funder in 2012. It has also shared with us an overall budget requesting significantly more funding than it has received in the past.

Nevertheless, we have two major questions about room for more funding:

  1. Given iodization’s cost-effectiveness and track record, why haven’t others closed the funding gap? We have been told that the lack of funds may be due to “donor fatigue” (i.e., donors have supported iodization in the past and iodized a large proportion of the countries in need, so they no longer view it as a priority), but we have yet to investigate this question sufficiently to feel comfortable with our understanding.
  2. Will ICCIDD’s future activities be as cost-effective as past attempts to increase iodization rates? One possible explanation for the lack of donor funds is that the countries that remain iodine deficient are particularly problematic. Were this true, it might be the case that donors are acting rationally because future efforts to iodize could be significantly more costly than past efforts.

Note that the Gates Foundation previously made a $40 million grant to support universal salt iodization (USI) in 16 countries over seven years. That grant ends in March 2015, and no extension has yet been scheduled.

Partnership with The Pew Charitable Trusts

Throughout the post, “we” refers to GiveWell and Good Ventures, who work as partners on GiveWell Labs.

We have agreed to a major partnership with The Pew Charitable Trusts as part of our work on criminal justice reform. Good Ventures will provide $3 million to support and expand the work of Pew’s public safety performance project (PSPP), which aims “to advance data-driven, fiscally sound policies and practices in the criminal and juvenile justice systems that protect public safety, hold offenders accountable, and control corrections costs” through technical assistance to states, research and public education, and promotion of nontraditional alliances and collaboration around smart criminal justice policies.

We came into contact with Pew through our investigation on criminal justice reform. Our impression is that PSPP has been intensively involved in the criminal justice reform packages that have passed in over two dozen states since 2007. PSPP now seeks more funding to work in additional states, help states to cement existing reforms, explore the potential for reform at the federal level, and continue pursuing research and public education and engaging with nontraditional allies of reform.

In discussions with Pew, we have been impressed with the knowledge and thoughtfulness both of the PSPP team and of The Pew Charitable Trusts as a whole. It appears to us that Pew has worked in a substantial number of policy areas, often with concrete goals and concrete stated results over several-year time frames, and that Pew has a good deal of general capacity for assessing the opportunities in a policy space and developing a relatively systematic strategy for working within it. (This does not mean that we see eye to eye with Pew on all matters. We believe it sets policy priorities using a different value system from ours; for example, we have stronger interest in foreign aid and other issues related to developing-world poverty reduction.) More information on Pew as a whole will be forthcoming, including notes from a day-long visit in November and a potential historical case study on its work in another area. Our current writeup includes an assessment of the track record of PSPP specifically.

We see this partnership as an important step on multiple fronts:

  • Criminal justice reform is a current focus area for us, and PSPP appears to be one of the most prominent and effective organizations working toward change on this front. Funding and following its work represents an opportunity for both impact and learning.
  • We are also interested in developing a relationship with Pew as a whole; we believe this relationship will be a valuable resource as we continue to explore policy-oriented philanthropy. Based on conversations with Pew representatives, we see supporting PSPP as one of the best ways to support Pew as a whole.
  • Finally, the process of establishing this partnership has itself been a valuable learning opportunity. With PSPP’s help, we have conducted a brief review of PSPP’s track record, which was our first attempt to assess the track record of a U.S.-policy-focused organization and taught us a fair amount about the criminal justice reform space. We have also dealt with new challenges around how to balance our goal of transparency with the goal of having maximal impact; when working on policy, there can be particular tension between these, and we have established an agreement regarding public discussion of PSPP that may serve as a guide to future grant agreements. Note that we have agreed to a review process for public updates that is likely to be time-consuming for both us and Pew, and accordingly we have agreed to limit the frequency with which we publish updates on the project.

Our full writeup has further discussion of PSPP, its track record, our cost-effectiveness estimate, and the case for (and details of) this collaboration.

Writeup on our partnership with PSPP
Note that we believe PSPP has room to productively use more than the $3 million Good Ventures will be providing. Donors interested in contributing to PSPP should contact us.

The moral value of the far future

A popular idea in the effective altruism community is the idea that most of the people we can help (with our giving, our work, etc.) are people who haven’t been born yet. By working to lower global catastrophic risks, speed economic development and technological innovation, and generally improve people’s resources, capabilities, and values, we may have an impact that (even if small today) reverberates for generations to come, helping more people in the future than we can hope to help in the present.

This belief is sometimes coupled with a belief that the most important goal of an altruist should be to reduce “existential risk”: the risk of an extreme catastrophe that causes complete human extinction (as, for example, a sufficiently bad pandemic – or extreme unexpected developments related to climate change – could theoretically do), and thus curtails large numbers of future generations.

We are often asked about our views on these topics, and this post attempts to lay them out. There is not complete internal consensus on these matters, so I speak for myself, though most staff members would accept most of what I write here. In brief:

  • I broadly accept the idea that the bulk of our impact may come from effects on future generations, and this view causes me to be more interested in scientific research funding, global catastrophic risk mitigation, and other causes outside of aid to the developing-world poor. (If not for this view, I would likely favor the latter and would likely be far more interested in animal welfare as well.) However, I place only limited weight on the specific argument given by Nick Bostrom in Astronomical Waste – that the potential future population is so massive as to clearly (in a probabilistic framework) dwarf all present-day considerations. More
  • I reject the idea that placing high value on the far future – no matter how high the value – makes it clear that one should focus on reducing the risks of catastrophes such as extreme climate change, pandemics, misuse of advanced artificial intelligence, etc. Even one who fully accepts the conclusions of “Astronomical Waste” has good reason to consider focusing on shorter-term, more tangible, higher-certainty opportunities to do good – including donating to GiveWell’s current top charities and reaping the associated flow-through effects. More
  • I consider “global catastrophic risk reduction” to be a promising area for a philanthropist. As discussed previously, we are investigating this area actively. More

Those interested in related materials may wish to look at two transcripts of recorded conversations I had on these topics: a conversation on flow-through effects with Carl Shulman, Robert Wiblin, Paul Christiano, and Nick Beckstead and a conversation on existential risk with Eliezer Yudkowsky and Luke Muehlhauser.

The importance of the far future

As discussed previously, I believe that the general state of the world has improved dramatically over the past several hundred years. It seems reasonable to state that the people who made contributions (large or small) to this improvement have made a major difference to the lives of people living today, and that when all future generations are taken into account, their impact on generations following them could easily dwarf their impact in their own time.

I believe it is reasonable to expect this basic dynamic to continue, and I believe that there remains huge room for further improvement (possibly dwarfing the improvements we’ve seen to date). I place some probability on global upside possibilities including breakthrough technology, space colonization, and widespread improvements in interconnectedness, empathy and altruism. Even if these don’t pan out, there remains a great deal of room for further reduction in poverty and in other causes of suffering.

In Astronomical Waste, Nick Bostrom makes a more extreme and more specific claim: that the number of human lives possible under space colonization is so great that the mere possibility of a hugely populated future, when considered in an “expected value” framework, dwarfs all other moral considerations. I see no obvious analytical flaw in this claim, and give it some weight. However, because the argument relies heavily on specific predictions about a distant future, seemingly (as far as I can tell) backed by little other than speculation, I do not consider it “robust,” and so I do not consider it rational to let it play an overwhelming role in my belief system and actions. (More on my epistemology and method for handling non-robust arguments containing massive quantities here.) In addition, if I did fully accept the reasoning of “Astronomical Waste” and evaluate all actions by their far future consequences, it isn’t clear what implications this would have. As discussed below, given our uncertainty about the specifics of the far future and our reasons to believe that doing good in the present day can have substantial impacts on the future as well, it seems possible that “seeing a large amount of value in future generations” and “seeing an overwhelming amount of value in future generations” lead to similar consequences for our actions.

Catastrophic risk reduction vs. doing tangible good
Many people have cited “Astronomical Waste” to me as evidence that the greatest opportunities for doing good are in the form of reducing the risks of catastrophes such as extreme climate change, pandemics, problematic developments related to artificial intelligence, etc. Indeed, “Astronomical Waste” seems to argue something like this:

For standard utilitarians, priority number one, two, three and four should consequently be to reduce existential risk. The utilitarian imperative “Maximize expected aggregate utility!” can be simplified to the maxim “Minimize existential risk!”.

I have always found this inference flawed, and in my recent discussion with Eliezer Yudkowsky and Luke Muehlhauser, it was argued to me that the “Astronomical Waste” essay never meant to make this inference in the first place. The author’s definition of existential risk includes anything that stops humanity far short of realizing its full potential – including, presumably, stagnation in economic and technological progress leading to a long-lived but limited civilization. Under that definition, “Minimize existential risk!” would seem to potentially include any contribution to general human empowerment.

I have often been challenged to explain how one could possibly reconcile (a) caring a great deal about the far future with (b) donating to one of GiveWell’s top charities. My general response is that in the face of sufficient uncertainty about one’s options, and lack of conviction that there are good (in the sense of high expected value) opportunities to make an enormous difference, it is rational to try to make a smaller but robustly positive difference, whether or not one can trace a specific causal pathway from doing this small amount of good to making a large impact on the far future. A few brief arguments in support of this position:

  • I believe that the track record of “taking robustly strong opportunities to do ‘something good’” is far better than the track record of “taking actions whose value is contingent on high-uncertainty arguments about where the highest utility lies, and/or arguments about what is likely to happen in the far future.” This is true even when one evaluates track record only in terms of seeming impact on the far future. The developments that seem most positive in retrospect – from large ones like the development of the steam engine to small ones like the many economic contributions that facilitated strong overall growth – seem to have been driven by the former approach, and I’m not aware of many examples in which the latter approach has yielded great benefits.
  • I see some sense in which the world’s overall civilizational ecosystem seems to have done a better job optimizing for the far future than any of the world’s individual minds. It’s often the case that people acting on relatively short-term, tangible considerations (especially when they did so with creativity, integrity, transparency, consensuality, and pursuit of gain via value creation rather than value transfer) have done good in ways they themselves wouldn’t have been able to foresee. If this is correct, it seems to imply that one should be focused on “playing one’s role as well as possible” – on finding opportunities to “beat the broad market” (to do more good than people with similar goals would be able to) rather than pouring one’s resources into the areas that non-robust estimates have indicated as most important to the far future.
  • The process of trying to accomplish tangible good can lead to a great deal of learning and unexpected positive developments, more so (in my view) than the process of putting resources into a low-feedback endeavor based on one’s current best-guess theory. In my conversation with Luke and Eliezer, the two of them hypothesized that the greatest positive benefit of supporting GiveWell’s top charities may have been to raise the profile, influence, and learning abilities of GiveWell. If this were true, I don’t believe it would be an inexplicable stroke of luck for donors to top charities; rather, it would be the sort of development (facilitating feedback loops that lead to learning, organizational development, growing influence, etc.) that is often associated with “doing something well” as opposed to “doing the most worthwhile thing poorly.”
  • I see multiple reasons to believe that contributing to general human empowerment mitigates global catastrophic risks. I laid some of these out in a blog post and discussed them further in my conversation with Luke and Eliezer.

For one who accepts these considerations, it seems to me that:

  • It is not clear whether placing enormous value on the far future ought to change one’s actions from what they would be if one simply placed large value on the far future. In both cases, attempts to reduce global catastrophic risks and otherwise plan for far-off events must be weighed against attempts to do tangible good, and the question of which has more potential to shape the far future will often be a difficult one to answer.
  • If one sees few robustly good opportunities to “make a huge difference to the far future,” the best approach to making a positive far-future difference may be “make a small but robustly positive difference to the present.”
  • One ought to be interested in “unusual, outstanding opportunities to do good” even if they don’t have a clear connection to improving the far future.

With that said:

  • This line of reasoning is not the only or overwhelming consideration in our current top charity recommendations. As discussed in the previous section, we place some weight on the importance of the far future but believe it would be irrational to let our beliefs about it take on excessive weight in our decision-making. The possibility that arguments about the importance of the far future are simply mistaken, and that the best way to do good is to focus on the present, carries weight.
  • I also do not claim that the above reasoning should push all those interested in the far future into nearer-term, higher-certainty actions. People who are well-positioned to take on low-probability, high-upside projects aiming to make a huge difference – especially when their projects are robustly worthwhile and especially when their projects represent promising novel ideas – should do so. People who have formed the deep understanding necessary to evaluate such projects well should not take us to be claiming that their convictions are irrational given what they know (though we do believe some people form irrationally confident convictions based on speculative arguments). As GiveWell has matured, we’ve become (in my view) much better-positioned to take on such low-probability, high-upside projects; hence our launch of GiveWell Labs and our current investigations on global catastrophic risks. The better-informed we become, the more willing we will be to go out on a limb.

Global catastrophic risk reduction as a promising area for philanthropy
I see global catastrophic risk reduction as a promising area for philanthropy, for many of the reasons laid out in a previous post:

  • It is a good conceptual fit for philanthropy, which is seemingly better suited than other approaches to working toward diffused benefits over long time horizons.
  • Many global catastrophic risks appear to get little attention from philanthropy.
  • I place some (though not overwhelming) weight on the argument that the implications of a catastrophe for the far future could be sufficiently catastrophic and long-lasting that even a small mitigation could have huge value.

I believe that declaring global catastrophic risk reduction to be the clearly most important cause to work on, on the basis of what we know today, would not be warranted. A broad variety of other causes could be superior under reasonable assumptions. Scientific research funding may be far more important to the far future (especially if global catastrophic risks turn out to be relatively minor, or science turns out to be a key lever in mitigating them). Helping low-income people (including via our top charities) could be the better area to work in if our views regarding the far future are fundamentally flawed, or if opportunities to substantially mitigate global catastrophic risks turn out to be highly limited. Working toward better public policy could also have major implications for both the present and the future, and having knowledge of this area could be an important tool no matter what causes we end up working on. More generally, by exploring multiple promising areas, we create better opportunities for “unknown unknown” positive developments, and the discovery of outstanding giving opportunities that are difficult to imagine given our current knowledge. (We also will become more broadly informed, something we believe will be very helpful in pitching funders on the best giving opportunities we can find – whatever those turn out to be.)

Potential global catastrophic risk focus areas

Throughout the post, “we” refers to GiveWell and Good Ventures, who work as partners on GiveWell Labs. This post draws substantially on our recent updates on our investigation of policy-oriented philanthropy, including using much of the same language.

As part of our work on GiveWell Labs, we’ve been exploring the possibility of getting involved in efforts to ameliorate potential global catastrophic risks (GCRs), by which we mean risks that could be bad enough to change the very long-term trajectory of humanity in a less favorable direction (e.g. ranging from a dramatic slowdown in the improvement of global standards of living to the end of industrial civilization or human extinction). Examples of such risks could include a large asteroid striking earth, worse-than-expected consequences of climate change, or a threat from a novel technology, such as an engineered pathogen.

In our annual plan for 2014, we set a stretch goal of making substantial commitments to causes within global catastrophic risks by the end of this calendar year. We are still hoping to decide whether to make commitments in this area, and if so which causes to commit to, on that schedule. At this point, we’ve done at least some investigation of most of what we perceive as the best candidates for more philanthropic involvement in this category, and we think it is a good time to start laying out how we’re likely to choose between them (though we have a fair amount of investigative work still to do). This post lays out our current thinking on the GCRs we find most worth working on for GiveWell Labs.

Why global catastrophic risks?

We believe that there are a couple of features of global catastrophic risks that make them a conceptually good fit for a global humanitarian philanthropist to focus on. These map reasonably well to two of our criteria for choosing causes, though GCRs generally seem to perform relatively poorly on the third:

  • Importance. By definition, if a global catastrophe were to occur, the impact would be devastating. However, most natural GCRs appear to be quite unlikely, making the annual expected mortality from natural GCRs low (e.g., perhaps in the hundreds or thousands; more on the distinction between natural and anthropogenic GCRs below). The potential importance of GCRs comes both from novel technological threats, which could be much more likely to cause devastating impacts, and from considering the very long-term impacts of a low-probability catastrophe: depending on the moral weight one assigns to potential future generations, the expected harm of (even very unlikely) GCRs may be quite high relative to other problems.
  • Crowdedness. Because GCRs are generally perceived to have a very low probability, many other social agents that are normally devoted to protecting against risks (e.g. insurance companies, governments in wealthy countries) appear not to pay them much attention. This should not necessarily be surprising, since much of the benefit of averting GCRs seems to accrue to future generations, which cannot hold contemporary institutions accountable, and to the extent benefits accrue to present generations, they are distributed very widely, with no clear concentrated constituency that has an incentive to prioritize them. The possibility that a long time horizon may be required to justify investment in averting GCRs also seems to make them a good conceptual fit for philanthropy, which, as GiveWell board member Rob Reich has argued, is unusually institutionally suited to long time horizons. This makes it all the more notable that, with the key exception of climate change, most potential global catastrophic risks seem to receive little or no philanthropic attention (though some receive very significant government support). The overall lack of social attention to GCRs is not dispositive, but it suggests that if GCRs are genuinely worthy of concern, a new philanthropist aiming to address them may encounter some low-hanging fruit.
  • Tractability. The very low frequencies of GCRs suggest that tractability is likely to be a challenge. Humanity has little experience dealing with such threats, and it may be important to get them right the first time, which seems likely to be difficult. A philanthropist would likely struggle to know whether they were making a difference in reducing risks.

Our tentative conclusion on GCRs as a whole is that the balance of strong performance on the importance and crowdedness criteria outweighs low expected tractability, but we are open to revising that view on the basis of deeper explorations of particularly promising-seeming GCRs.

What we’ve done to investigate GCRs
We have published shallow investigations on both GCRs in general and a variety of specific (potential) GCRs.

We also have an investigation forthcoming on potential risks from artificial intelligence, and we commissioned former GiveWell employee Nick Beckstead to do a shallow investigation of efforts to improve disaster shelters to increase the likelihood of recovery from a global catastrophe. We are still hoping to conduct shallow investigations of nanotechnology, synthetic biology governance (aimed more at ecological threats than biosecurity), and the field of emerging technology governance, though we may not do so before prioritizing causes within GCRs.

Beyond the shallow level, we have done a deeper investigation on geoengineering research and continued our investigation of biosecurity through a number of additional conversations.

Our investigations have been far from comprehensive; we’ve prioritized causes we’ve had some reason to think were particularly promising, often because we suspected a lack of interest from other philanthropists relative to the causes’ humanitarian importance, or because we encountered a specific idea from someone in our network.

We have also made attempts to have conversations with people who think broadly and comparatively about global catastrophic risks. As far as we can tell, most such people tend to be connected to the effective altruist community (to which we have strong ties and which tends to take a strong interest in GCRs). Many of our conversations with such people have been informal, but public notes are available from our conversations with Carl Shulman, a research associate at the Future of Humanity Institute, and Seth Baum, executive director of the Global Catastrophic Risk Institute.

General patterns in what we find promising
The following two general observations are major inputs into our thinking:

“Natural” GCRs appear to be less harmful in expectation.

After a number of shallow investigations, we’ve tentatively concluded that “natural” (i.e. not human-caused) GCRs seem to present smaller threats than “anthropogenic” (i.e. human-caused) GCRs. The specific examples we’ve examined and a general argument point in the same direction.

The general argument for being more worried about anthropogenic GCRs is as follows. The human species is fairly old (Homo sapiens sapiens is believed to have evolved several hundred thousand years ago), giving us a priori reason to believe that we do not face high background extinction risk: if we had a random 10% chance of going extinct every 10,000 years, we would have been unlikely to have survived this long (0.9^30 = ~4%). Note that anthropic bias can make this kind of reasoning suspect, but this reasoning also seems to map well to available data about different potential GCRs, as discussed below (i.e., we do not observe natural risks that appear likely to cause human extinction). By contrast with “natural” risks, anthropogenic risks present us with potentially unprecedented situations, for which history cannot serve as much of a guide. Atomic weapons and biotechnology are only decades old, and some of the most dangerous technologies may be those that don’t yet exist. With that said, some “natural” risks could present us with somewhat unprecedented situations, due to the modern world’s historically high level of interconnectedness and reliance on particular infrastructure.
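
The parenthetical arithmetic can be checked directly. The sketch below uses 300,000 years as one reading of “several hundred thousand years,” which yields the 30 ten-thousand-year periods in the calculation above:

```python
# If each 10,000-year stretch carried an independent 10% extinction risk,
# the chance of surviving all 30 stretches of a ~300,000-year history
# would be small.
survival_per_period = 0.9
periods = 300_000 // 10_000            # 30 ten-thousand-year stretches
print(survival_per_period ** periods)  # ~0.042, i.e. roughly 4%
```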

On the specifics of various “natural” GCRs:

  • Near earth asteroids. A 2010 U.S. National Research Council report estimates that the background annual probabilities of an impact as large as the one that is believed to have caused the extinction of the dinosaurs and a “possible global catastrophe” are 1/100 million and 1/700,000 respectively (PDF, page 19). NASA reports that it has tracked 93% of the near earth asteroids large enough to cause a “possible global catastrophe” and all of the ones as large as the one believed to have caused the extinction of the dinosaurs (and none of them are on track to hit Earth in the next few centuries), suggesting a residual possibility of a “possible global catastrophe” of ~1/100,000 during the next century (and likely lower). There may be a comparable remaining risk from comets—Vaclav Smil claims that “probabilities of the Earth’s catastrophic encounter with a comet are likely less than 0.001% during the next 50 years,” which would be about the same as the remaining asteroid risk—but our understanding is that comets are much harder to detect. As a result of the attention from NASA and the B612 Foundation, this cause also appears more “crowded” than others, though seemingly more tractable as well. (The arithmetic behind the ~1/100,000 figure is sketched after this list.)
  • Large volcanic eruptions. Estimates of the frequency of volcanic eruptions large enough to count as global catastrophic risks differ by several orders of magnitude, but our current understanding is that volcanic eruptions large enough to cause major crop failures are likely to occur no more frequently than 1/10,000 years, and perhaps significantly less frequently (suggesting a <1% chance of such an eruption in the next century). Large volcanic eruptions may be much more of a cause for concern than asteroid strikes, but this cause performs relatively poorly on tractability, since our ability to predict eruptions is limited, and we are not currently capable of preventing an eruption.
  • Antibiotic resistance. Microbes are currently evolving to be resistant to antibiotics faster than new antibiotics are being developed, posing a growing public health threat. However, antibiotic resistance is unlikely to represent a threat to civilization, since humanity survived without antibiotics until ~1940, including during the period when most gains against infectious diseases were made. We also expect other actors to work to address antibiotic resistance as it continues to become a more pressing public health issue. (More at our writeup.)
  • Geomagnetic storms. The major threat from geomagnetic storms is to potentially imperil some large-scale power infrastructure, but the risks are not well understood. A consultant who has contributed to many of the published reports on the topic contends that a worst-case, 1-in-200-year storm could result in a “years-long global blackout,” but other sources show less concern (e.g. modeling the impact of a ~1-in-200-year storm as a risk of a blackout for ~10% of the U.S. population for somewhere between 2 weeks and 2 years).
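
As referenced in the asteroid item above, the arithmetic behind the ~1/100,000-per-century residual figure can be reconstructed from the numbers quoted in that item (a rough sketch; it assumes, per NASA’s reporting, that the tracked ~93% of relevant asteroids contribute no residual risk over the period in question):

```python
# All inputs are the figures quoted in the asteroid bullet above.
annual_prob = 1 / 700_000                    # NRC background rate for a "possible global catastrophe"
century_prob = 1 - (1 - annual_prob) ** 100  # chance of at least one such impact in a century
fraction_untracked = 1 - 0.93                # NASA reports ~93% of relevant asteroids tracked
residual = century_prob * fraction_untracked
print(f"{century_prob:.6f}")  # ~0.000143  (about 1 in 7,000 per century)
print(f"{residual:.7f}")      # ~0.0000100 (about 1 in 100,000 per century)
```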

The only GCRs that receive large amounts of philanthropic attention are nuclear security and climate change.

We do not have precise figures aggregated across causes, but our impression is that climate change is an area in which hundreds of millions of dollars a year are spent by U.S. philanthropic funders, while philanthropic funding addressing nuclear security appears to be in the tens of millions.

We don’t know of philanthropic funding for any of the other GCRs exceeding the single digit millions of dollars per year.

Leading focus area contenders

The leading contenders described below are among the most apparently dangerous and potentially unprecedented GCRs (seemingly – to us – more worrisome than the “natural” GCRs listed above, though such a comparison is necessarily a judgment call). At the same time, all appear to have limited “crowdedness,” at least in terms of philanthropic attention, unlike nuclear security (and unlike most of the climate change space, though one of the contenders described below relates to climate change). They are discussed in the order I would pick between them if I had to pick today, though we have not decided how many we expect to commit to by the end of the year, and other GiveWell staff may disagree. Though these are the GCRs I would choose to work on if I were picking today, we don’t have high confidence that they represent the correct set. There are a number of questions (discussed below) that we hope to address before reaching a conclusion at the end of the year.

Biosecurity

By biosecurity, we mean the constellation of issues around pandemics, bioterrorism, biological weapons, and biotechnology research that could be used to inflict great harm (“dual use research”). Our understanding is that natural pandemics (especially flu pandemics) likely present the greatest current threat, but that the development of novel biotechnology could lead to greater risks over the medium or long term. We see this GCR as having a strong case for “importance” because it seems to combine relatively credible, likely, current threats with more speculative potential longer-term threats in a fairly coherent program area. The space receives significant attention from the U.S. government (with ~$5 billion in funding in 2012) but little from foundations: the Skoll Global Threats Fund is the only U.S. foundation we know to be engaging in this area currently, at a relatively low level, though the Sloan Foundation also used to have a program in this area. (We believe the distinction between government and philanthropic funding is at least potentially meaningful, as the two types of actors have different incentives and constraints; in particular, philanthropic funding could potentially influence a much larger amount of government funding.) Although we are not sure of the activities that would be best for a philanthropist to support, many people we spoke with argued that current preparedness is subpar and that there is significant room for a new philanthropic funder.

Although we have had a number of additional conversations since the completion of our shallow investigation, we continue to regard the question of what a philanthropist should fund within this broad issue as an open one. We expect to address it with a deeper investigation and a declared interest in funding.

Geoengineering research and governance

We see a twofold case for the importance of work on geoengineering research and governance.

Although solar geoengineering is in the news periodically, research on the science or governance appears to receive relatively little dedicated funding: our rough survey found about $10 million/year in identifiable support from around the world (mostly from government sources), and we are not aware of any institutional philanthropic commitment in the area (though Bill Gates personally supports some research in the area).

Our conversations have led us to believe that there is significant scientific interest in conducting geoengineering research and that funding is an obstacle, but, as with biosecurity, we do not have a very detailed sense of what we might fund. We are mindful of the concern that further geoengineering research could undermine support for emissions reductions, but we regard that outcome as relatively unlikely, and we also find it plausible that further research could contribute significantly to governance efforts.

We expect to address the question of what a philanthropist could support in this area with a deeper investigation and a declared interest in funding. Note that we don’t envision ourselves as trying to encourage geoengineering, but rather as trying to gain better information and governance structures for it, which could make the actual use more or less likely (and given the high potential risks of both climate change and geoengineering, we could imagine that shifting the probabilities in either direction – depending on what comes of more exploratory work – could do great good).

Potential risks from artificial intelligence

We are earlier in this investigation than in investigations of the above two causes, and have not yet produced a writeup. There is internal disagreement about how likely this cause is to end up as a priority; I don’t feel highly confident that it should be above some of the other contenders not discussed in depth here.

In brief, it appears possible that the coming decades will see substantial progress in artificial intelligence, potentially even to the point where machines come to outperform humans in many or nearly all intellectual domains, though it is difficult or impossible to make confident forecasts in this area. Such a scenario could carry great potential benefits, but could carry significant dangers (e.g. technological disemployment, accidents, crime, extremely powerful autonomous agents) as well. The majority of academic artificial intelligence researchers seem not to see the rapid development of powerful autonomous agents as a substantial risk, but to believe that there are some potential risks worth preparing for now (such as accidents in crucial systems or AI-enabled crime; see slides 20-22). However, some people, including the Machine Intelligence Research Institute and computer scientist Stuart Russell, feel that there are important things that should be done today to substantially improve the social outcomes associated with the rapid development of powerful artificial intelligence.

In general, my inclination would be to defer to the preponderance of expert opinion, but I think this area could potentially be promising for philanthropy partly because I have not seen a rigorous public assessment by credible AI researchers to support the (seemingly predominant) lack of concern over risks from the rapid development of powerful autonomous agents. Since this topic seems to be drawing increasing attention from some highly credentialed people, supporting such a public assessment seems like it could be valuable, even if the conclusion is that most researchers are right to not be concerned. The fact that a substantial portion of mainstream AI researchers also seem to think that more traditional risks from AI progress (e.g. accidents, crime) are worth addressing in the near term does increase my interest in the area, though not by much, since I don’t see those issues as GCRs, whereas the rapid development of powerful autonomous agents could conceivably be one. Should we decide to pursue this area further, I would guess that it would be at a lower level of funding than the other potential priority areas described above.

Note from Holden: I currently see this cause as more promising than Alexander does, to a fairly substantial degree. I agree that there are reasons, including the preponderance of expert opinion, to think that there is little preparatory work worth doing today; however, I see the stakes as large enough to justify work in this area even at a relatively low probability of having impact. I would like to see reasonably well-resourced, full-time efforts – with substantial input from mainstream computer scientists – to think about what preparations could be done for major developments in artificial intelligence, and my perception is that efforts fitting this description do not exist currently. We are currently working on trying to understand whether the seeming lack of activity comes from a place of “justified confidence that action is not needed now” or of “lack of action despite a reasonable possibility that action would be helpful now.” My current guess is that the latter is the case, and if so I hope to make this cause a priority.

We will be writing more on this topic in the future.

Why these three risks stand out

Generally speaking, the causes highlighted above (geoengineering, biosecurity and potentially (pending more investigation) artificial intelligence) seem to us to have:

  • Greater potential for the most extreme direct harms (extreme enough to make a substantial change to the long-term trajectory of civilization likely) relative to other risks we’ve looked at, with the exception of nuclear weapons (an area that we perceive as more “crowded” than these three).
  • Very difficult to quantify, but potentially reasonably high (1%+), risk of such extreme harm in the next 50-100 years.
  • Very little philanthropic attention.

Our guess is that most other candidate risks would, upon sufficient investigation, appear less worth working on than at least one of our top candidates – due to presenting less potential for harm, less tractability, or more crowdedness, while being roughly comparable on other dimensions. That said, (a) the specific assessment of artificial intelligence is still in progress and we don’t have internal agreement on it, as discussed above; and (b) we have low confidence in our working assessment, and plan both to do more investigation and to seek out more critical viewpoints on our current priorities.

Topics for further investigation

While I currently see the three potential GCRs discussed above as the leading contenders for GCR focus areas, there are a number of questions we would like to answer before committing.

Our shallow investigations have generated a number of follow-up questions that we would like to resolve before committing to causes:

  • Our current understanding is that major volcanic eruptions are currently neither predictable nor preventable, making this cause apparently rather intractable. To what extent could further research help remedy these shortcomings, and are there other ways a philanthropist could help address the risk from a large volcanic eruption?
  • How do risks from comets compare to the remaining risks from untracked near earth asteroids? Our understanding is that these risks are likely to be an order of magnitude or two lower than volcanic eruption risks that would cause similar harm, but we aren’t sure how they compare in tractability. What could be done about potential risks from comets?
  • How credible are existing estimates of the potential harm of geomagnetic storms? In particular, how do experts assess the risks to the power grid from a rare geomagnetic event? How prepared are power companies for geomagnetic storms?
  • Are there any important gaps in current funding for efforts to improve nuclear security?

In addition, we are still hoping to conduct shallow investigations of nanotechnology, synthetic biology governance (aimed more at ecological threats than biosecurity), and the field of emerging technology governance as a whole, which we think could potentially be competitive with some of the risks described as potential focus areas.