The GiveWell Blog

Open Philanthropy Project update: Global catastrophic risks


This post lays out our progress, since last year, on identifying potential focus areas for our work on global catastrophic risks.

Summary
Note: this section is similar to the introduction of our previous post on U.S. policy. The overall approach of our work has evolved similarly in the two areas.

Last year, we set a “stretch goal” for the Open Philanthropy Project:

There are two types of causes – global catastrophic risks and US policy issues – that we now feel generally familiar with (particularly with the methods of investigation). We also believe it is important for us to pick some causes for serious commitments (multiple years, substantial funding) as soon as feasible, so that we can start to get experienced with the process of building cause-specific capacity and finding substantial numbers of giving opportunities. As such, our top goal for 2014 is a stretch goal (substantial probability we will fail to hit it): making substantial commitments to causes within these two categories. We aren’t sure yet how many causes this will involve; it will depend partly on our ability to find suitable hires. We also haven’t fully formalized the notion of a “substantial commitment to cause X,” but it will likely involve having at least one staff member spending a substantial part of their time on cause X, planning to do so for multiple years, and being ready to commit $5-30 million per year in funding.

Since then:

  • Our thinking on how, and how much, to “commit” to causes has evolved. Rather than commit major time and funding up front to a small number of causes, we are going with a longer list of prioritized causes, and we’re looking for a good combination of “high-priority cause” with “strong specific giving and/or hiring opportunity.”
  • With that said, we feel that we’ve fulfilled the spirit of the above goal, about a month behind the date we had set. We’ve done a large number of shallow- and medium-depth cause investigations, and we’re now transferring the bulk of our energy from these sorts of investigations to seeking out hires and grants in the causes we’ve prioritized.
  • Our new goal is to be in the late stages of making at least one “big bet” – a major grant ($5+ million) or full-time hire – in the next six months. We think there is a moderate likelihood that we will hit this goal; if we do not, we will narrow our focus to a smaller number of causes in order to raise our odds.
  • Our highest priority is to make a full-time hire to work on biosecurity. As a second priority, we are spending significant time on various aspects of geoengineering, geomagnetic storms, risks from artificial intelligence, and some issues that cut across different global catastrophic risks. A more extensive summary of our priorities and reasoning is available as a Google sheet.
  • We have recently been prioritizing investigation over public writeups, and there are many shallow- and medium-depth investigations we have completed but not written up. We are experimenting with different processes for writing up completed investigations – in particular, trying to assign more of the work to more junior staff – so our public writeups may continue to lag behind our private investigations for much of the next few months.

Below, we go into more detail on our progress since our June update and on our plans going forward.

Progress since our June update
Biosecurity. We put significantly more work into understanding the fairly large and complex biosecurity space, which includes efforts to prevent or mitigate the harm from natural pandemics, accidental release of dangerous natural pathogens, currently existing biological weapons, and accidental or purposeful release of synthetic pathogens in the future. We believe there are significant philanthropic opportunities here. We are currently strongly considering one grant and may consider others, though we believe this space is complex enough that the best way to approach it would be with specialized staff.

Artificial intelligence. We began an investigation of risks from potential unintended consequences of advances in artificial intelligence. We hoped to hear the perspectives of mainstream computer scientists, artificial intelligence experts, and machine learning experts regarding arguments like those advanced in the recent book Superintelligence. We temporarily paused this investigation on learning that the Future of Life Institute was planning a conference on this topic; Howie Lempel and Jacob Steinhardt attended the conference on our behalf. We see the conference as a major update:

  • An open letter following the conference makes it fairly clear, to us, that a wide variety of people with relevant expertise see artificial intelligence as a technology whose potential for great benefits may come along with real risks on which meaningful preparatory research can and should be done.
  • This was followed by a $10 million commitment from Elon Musk to fund such research.
  • We see this cause as highly important and worthy of investment. It remains unclear to us how to think about its “crowdedness,” and we plan to coordinate closely with the Future of Life Institute to follow what gets funded and what gaps remain.

Geoengineering. We continued to investigate the cause of governance of and research into geoengineering, and are currently strongly considering a grant in this space.

Geomagnetic storms. We began an investigation (by consultant David Roodman, who previously investigated labor mobility and the mortality-fertility connection) into the conflicting claims we’ve seen about the threat posed by geomagnetic storms. This investigation is still in progress. Depending on its outcome, we may become interested in funding research into electrical grid robustness.

Other risks. We looked further into philanthropic possibilities for reducing risks from nuclear weapons, completed a shallow investigation on risks from atomically precise manufacturing, and did a small amount of investigation on general food security (a cross-cutting issue, since several different global catastrophic risks could disrupt global agriculture).

We have not yet made the results of any of the above investigations public, though we plan to. As mentioned earlier in this post, we have been prioritizing investigation over public writeups, and we are experimenting with different processes for writing up completed investigations – in particular, trying to assign more of the work to more junior staff.

As with U.S. policy, we have noted significant variation in the extent to which different issues are suitable for specialized staff. We feel that biosecurity would be best handled by specialized staff. The other areas we’re considering – with the possible exception of geoengineering – seem better suited to a “broad” model in which we scan multiple areas at once, looking for the most outstanding grant opportunities.

Plans
While there are more cause investigations we could do, at this point we think it’s appropriate to shift our priorities in the direction of granting out significant funds in the causes we’ve already identified as promising. At the same time, we’re trying to give ourselves the flexibility to look across multiple possible causes, and only make a “big bet” (a full-time hire or major grant) where we feel the opportunity is outstanding. As such, we’ve created a relatively long prioritized list of causes, with goals for each, and our six-month goal is to be in the late stages of making a “big bet” in at least one area. We may continue to make smaller grants, with relatively light investigation, when we see reasonably strong opportunities, but this is not our main goal.

We’ve ranked biosecurity as our top priority, for the following reasons.

Suitability for a full-time hire. Biosecurity stands out along several dimensions that make it an appealing but also particularly complex target for philanthropy:

  • Governments spend a large amount on biosecurity preparations, but many opportunities to improve preparedness remain, and there is little philanthropic spending in the field. This suggests an opportunity for philanthropy to leverage public money, but it also increases the complexity of the cause.
  • Some interventions may increase our preparedness for both near-term risks from natural pandemics and larger, longer-term risks related to the misuse or abuse of emerging synthetic biology technology. Efforts to reduce long-run risks may be more sustainable if they simultaneously produce verifiable short-run benefits, but such efforts also risk losing sight of their long-run mission. Comparing the expected impact of interventions focused on different time horizons also presents a challenge and is one reason that hiring a specialist may be particularly valuable.
  • Biosecurity presents opportunities to intervene in many venues. Preparations include global, regional, national, and local components, and a biosecurity strategy may target or fund governments, intergovernmental organizations, NGOs, for-profits, or other entities.

Overall, we feel biosecurity is the best-suited (of the causes we have ranked relatively highly) to a specialized hire, and hiring is a top priority of ours.

Importance, tractability, crowdedness. We see this area as the most threatening risk on the list, with the possible exception of artificial intelligence, in terms of probability of a massive global disruption to civilization, and we are fairly convinced that there are real opportunities to improve preparedness.

We find it difficult to predict whether the additional attention brought to the cause by the Ebola outbreak in West Africa will lead to major changes in available funding. We plan to monitor this situation and expect the most important effects to be on the relative crowdedness of different interventions within the cause. Our current view is that it would be a surprise if most of the promising opportunities to increase preparation were funded by other actors in the near future.

Our next few priorities are a set of risks that we see as (a) posing substantial threats of massive global disruptions to civilization in the next century; (b) presenting a strong possibility of useful, not-already-funded preparatory work in the near future; (c) not being a good fit for extremely intensive or full-time investigation at this time, either because we have some key open questions remaining or because we aren’t aware of a large enough space of giving opportunities. Specifically:

  • We believe geoengineering research and governance is a promising philanthropic space. Because it is a relatively thin space (not many researchers or organizations currently devoted to it), a specialized hire in this area may need to very actively field-build and generate interest from potential grantees who are not currently seeking (additional) funding to work on geoengineering; we remain uncertain of how wise or efficient such a strategy would be at this time. We might make a specialized hire if we found an outstanding fit, but might also simply continue to monitor the space and capitalize on giving opportunities that arise.
  • We believe that research on risks of unintended consequences from the development of artificial intelligence is a promising philanthropic space. Here again, the field is relatively thin; in addition, we are unsure what sorts of giving opportunities will remain in the wake of Elon Musk’s $10 million commitment. We are monitoring this space and communicating closely with the Future of Life Institute.
  • We plan to finish our investigation of risks from geomagnetic storms, after which point we might pursue the idea of funding research on electrical grid robustness. We don’t think we would fund other work in this area before learning more about the amount of damage that could be done by a severe storm.
  • Now that we have formed a broad view of the most threatening global catastrophic risks, we are interested in giving opportunities that could “cut across risks,” addressing multiple risks at once – for example, improving food security (which we have looked into a bit; we have preliminarily found a lack of consensus on promising projects), forecasting future risks, researching ways to increase society’s general resilience to shocks, or improving general mechanisms for governance of emerging technologies. We are currently assessing some such opportunities and will continue to be open to more.

Below these priorities, we list risks where (a) we believe a massive global disruption to civilization is highly unlikely to occur in the next century; or (b) we see less room for useful preparatory work that is not already being done.

An additional goal for the next several months is to write up the more recent work we’ve done, most of which is not yet public.

Public summary of our global catastrophic risk priorities

Comments

  • Martin Randall on March 11, 2015 at 6:36 pm said:

    What does “somewhat conjunctive” mean (in the public summary of risk priorities)?

  • Holden on March 12, 2015 at 5:07 pm said:

    It’s flagging that the envisioned scenario requires more than one fairly specific event to happen in tandem. All else equal, more conjunctive scenarios (i.e., scenarios that involve multiple specific events happening) should be considered less likely than less conjunctive scenarios (i.e., scenarios that involve fewer specific events happening), and for the highest-uncertainty cases, this is most of what we have to go on for judging relative likelihoods.
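    As a rough illustration (assuming independence for simplicity): a scenario that requires two independent events, each judged 10% likely, has a combined probability of 0.1 × 0.1 = 1%, whereas a scenario that hinges on only one such event stays at 10%.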
