Note: this post aims to help a particular subset of our audience understand the assumptions behind our work on global catastrophic risks.
One focus area for the Open Philanthropy Project is reducing global catastrophic risks (such as from pandemics, potential risks from advanced artificial intelligence, geoengineering, and geomagnetic storms). A major reason the Open Philanthropy Project is interested in global catastrophic risks is that a sufficiently severe catastrophe could change the long-term trajectory of civilization in an unfavorable direction (potentially including human extinction if the catastrophe is particularly severe and our response is inadequate).
One possible perspective on such risks—which I associate with the Future of Humanity Institute, the Machine Intelligence Research Institute, and some people in the effective altruism community who are interested in the very long-term future—is that (a) the moral value of the very long-term future overwhelms other moral considerations; and (b) given any catastrophe short of an outright extinction event, humanity would eventually recover, leaving its long-term prospects relatively unchanged. On this view, seeking to prevent potential outright extinction events has overwhelmingly greater significance for humanity’s ultimate future than seeking to prevent less severe global catastrophes.
In contrast, the Open Philanthropy Project’s work on global catastrophic risks focuses on both potential outright extinction events and global catastrophes that, while not threatening direct extinction, could have deaths amounting to a significant fraction of the world’s population or cause global disruptions far outside the range of historical experience. This post explains why I believe this approach is appropriate even when accepting (a) from the previous paragraph, i.e., when assigning overwhelming moral importance to the question of whether civilization eventually realizes a substantial fraction of its long-run potential. While it focuses on my own views, these views are broadly shared by several others who focus on global catastrophic risk reduction at the Open Philanthropy Project, and have informed the approach we’re taking.
In brief:
- Civilization’s progress over the last few centuries—in scientific, technological, and social domains—has no historical parallel.
- With the possible exception of advanced artificial intelligence, for every potential global catastrophic risk I am aware of, I believe that the probability of an outright extinction event is much smaller than the probability of other global catastrophes. My understanding is that most people who favor focusing work almost entirely on outright extinction events would agree with this.
- If a global catastrophe occurs, I believe there is some (highly uncertain) probability that civilization would not fully recover (though I would also guess that recovery is significantly more likely than not). This seems possible to me for the general reason that the mechanisms of civilizational progress are not well understood and there is essentially no historical precedent for events severe enough to kill a substantial fraction of the world’s population. I also think there are more specific reasons to believe that an extreme catastrophe could degrade the culture and institutions necessary for scientific and social progress, and/or upset a relatively favorable geopolitical situation. This could result in increased and extended exposure to other global catastrophic risks, an advanced civilization with a flawed realization of human values, failure to realize other “global upside possibilities,” and/or other issues.
- Consequently, for almost every potential global catastrophic risk I am aware of, I believe that the total risk (in terms of failing to reach a substantial fraction of humanity’s long-run potential) from events that could kill a substantial fraction of the world’s population is at least in the same ballpark as the total risk to the future of humanity from potential outright extinction events. (A rough illustrative sketch of this comparison appears below.)
Therefore, when it comes to risks such as pandemics, nuclear weapons, geoengineering, or geomagnetic storms, there is no clear case for focusing on preventing potential outright extinction events to the exclusion of preventing less severe global catastrophes. This argument seems most debatable in the case of potential risks from advanced artificial intelligence, and we plan to discuss that case further in the future.
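To make the structure of the last two bullet points concrete, here is a minimal back-of-the-envelope sketch in Python. All of the probabilities are hypothetical placeholders, not estimates from this post; the point is only that if non-extinction catastrophes are substantially more likely than outright extinction events, even a modest chance of failing to fully recover can make their contribution to long-run risk comparable.

```python
# Illustrative only: these probabilities are hypothetical placeholders, not
# estimates from this post. What matters is the structure of the comparison,
# not the specific numbers.

p_extinction_event = 0.001   # chance of an outright extinction event (hypothetical)
p_major_catastrophe = 0.02   # chance of a catastrophe killing a large share of humanity (hypothetical)
p_no_full_recovery = 0.10    # chance civilization never fully recovers from such a catastrophe (hypothetical)

# Long-run risk contributed by each pathway, treating "failure to realize a
# substantial fraction of humanity's long-run potential" as the outcome of interest.
risk_from_extinction = p_extinction_event
risk_from_catastrophe = p_major_catastrophe * p_no_full_recovery

print(f"Risk via outright extinction:     {risk_from_extinction:.4f}")
print(f"Risk via unrecovered catastrophe: {risk_from_catastrophe:.4f}")
# With numbers in these (hypothetical) ranges, the two contributions land in
# the same ballpark, which is the structural point of the argument above.
```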