I previously discussed our view that in general, further economic development and general human empowerment are likely to be substantially net positive, and are likely to lead to improvement on many dimensions in unexpected ways. In my view, the most worrying counterpoint to this view is the possibility of global catastrophic risks. Broadly speaking, while increasing interconnectedness and power over our environment seem to have many good consequences, these things may also put us at greater risk for a major catastrophe – one that affects the entire world (or a large portion of it) and threatens to reverse, halt, or substantially slow the ongoing global progress in living standards.
This post lists the most worrying global catastrophic risks that I’m aware of, and briefly discusses the role that further technological and economic development could play in exacerbating – or mitigating – them. A future post will discuss how I think about the overall contribution of economic/technological development to exacerbating/mitigating global catastrophic risks in general (including risks that aren’t salient today). The purpose of this post is to (a) continue fleshing out the broad view that further economic development and general human empowerment are likely to be substantially net positive, which is one of the deep value judgments and worldview characteristics underlying our approach to giving recommendations; (b) catalogue some possible candidates for philanthropic focus areas (under the theory that major global catastrophic risks are potentially promising areas for philanthropy to address).
In general, I’m trying to list factors that could do not just large damage, but the kind of damage that could create an unprecedented global challenge.
1. More powerful technology – particularly in areas such as nuclear weapons, biological weapons, and artificial intelligence – may make wars, terrorist acts, and accidents more dangerous. Further technological progress is likely to lead to technology with far more potential to do damage. Somewhat offsetting this, technological and economic progress may also lead to improved security measures and lower risks of war and terrorism.
2. A natural pandemic may cause unprecedented damage, perhaps assisted by the development of resistance to today’s common antibiotics. On this front I see technological and economic development as mostly risk-reducing, via the development of better surveillance systems, better antibiotics, better systems for predicting/understanding/responding to pandemics, etc.
3. Climate change may lead to a major humanitarian crisis (such as unprecedented numbers of refugees due to sea level rise) or to other unanticipated consequences. Economic development may speed this danger by increasing the global rate of CO2 emissions; economic and technological development may mitigate this danger via the development of better energy sources (as well as energy storage and grid systems and other technology for more efficiently using energy), as well as greater wealth leading to more interest in – and perceived ability to afford – emissions reduction.
4. Technological and economic progress could slow or stop due to a failure to keep innovating at a sufficient rate. Gradual growth in living standards has been the norm for a long time, and a prolonged stagnation could cause unanticipated problems (e.g., values could change significantly if people don’t perceive living standards as continuing to rise).
5. Global economic growth could become bottlenecked by a scarcity of a particular resource (the most commonly mentioned concern along these lines is “peak oil,” but I have also heard concerns about supplies of food and of water for irrigation). Technological and economic progress could worsen this risk by speeding our consumption of a key resource, or could mitigate it by leading to the development of better technologies for finding and extracting resources and/or effective alternatives to such resources.
6. An asteroid, supervolcano or solar flare could cause unprecedented damage. Here I largely see economic and technological progress as risk-reducing factors, as they may give us better tools for predicting, preventing and/or mitigating damage from such natural disasters.
7. An oppressive government may gain power over a substantial part of the world. Technological progress could worsen this risk by improving the tools of such a government to wage war and monitor and control citizens; technological and economic progress could mitigate this risk by strengthening others’ abilities to defend themselves.
I should note that I perceive the odds of complete human extinction from any of the above factors, over the next hundred years or so, to be quite low. #1 would require the development of weapons with destructive potential far in excess of anything that exists today, plus the deployment of such weapons either by superpowers (which seems unlikely if they hold the potential for destroying the human race) or by rogue states/individuals (which seems unlikely since rogue states/individuals don’t have much recent track record of successfully obtaining and deploying the world’s most powerful weapons). #2 would require a disease to emerge with a historically unusual combination of propensity-to-kill and propensity-to-spread. And in either case, the odds of killing all people – taking into account the protected refuges that many governments likely have in place and the substantial number of people who live in remote areas – seem substantially less than the odds of killing many people. We have looked into #3 and parts of #6 to some degree, and currently believe that there are no particularly likely-seeming scenarios with risk of human extinction.
In addition to the risks above, I see a number of major global upside possibilities that further development could bring about:
- Massive reduction or elimination of poverty.
- Massive improvements in quality of life for the non-poor.
- Improved intelligence, wisdom, and propensity for making good decisions across society.
- Increased interconnectedness, empathy and altruism.
- Space colonization or other developments leading to lowered potential consequences of global catastrophic risks.
I feel that humanity’s future may end up being massively better than its past, and unexpected new developments (particularly technological innovation) may move us toward such a future with surprising speed. Quantifying just how much better such a future would be does not strike me as a very useful exercise, but very broadly, it’s easy for me to imagine a possible future that is at least as desirable as human extinction is undesirable. In other words, if I somehow knew that economic and technological development were equally likely to lead to human extinction or to a brighter long-term future, it’s easy for me to imagine that I could still prefer such development to stagnation.
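To make the trade-off in the paragraph above a bit more concrete, here is a rough expected-value sketch (the symbols and the even odds are illustrative assumptions, not figures from the post). Taking the status quo as the zero point, suppose a much brighter long-term future has value $V_{\text{bright}} > 0$ and extinction has value $V_{\text{ext}} < 0$. Then

$$\tfrac{1}{2}\,V_{\text{bright}} + \tfrac{1}{2}\,V_{\text{ext}} \;\ge\; 0 \quad \text{whenever} \quad V_{\text{bright}} \ge \lvert V_{\text{ext}} \rvert,$$

so under those assumed 50/50 odds, development would still come out (weakly) ahead of stagnation, which is treated here as simply staying at the zero point.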
I see technological and economic development as essential to raising the odds of reaching a much brighter long-term future, and I see such a future as being much less vulnerable to global catastrophic risks than today’s world. I believe that any discussion of technological/economic development and global catastrophic risks (and the role of technological/economic development in such risks) is incomplete if it leaves out this consideration.
A future post will discuss how I think about the overall contribution of economic/technological development to our odds of having a very bright, as opposed to very problematic, future. For now, I’d appreciate comments on any major, broad far-future considerations this post has neglected.
Comments
Holden, have you considered a significant extension in human lifespan (not just in developing countries) as an upside possibility? I know that some of what Peter Thiel is funding is related to that.
For risks #1 and #7, a couple of different framings of “is economic/technological progress good or bad for this risk?” are as follows:
1. Will a higher rate of growth make the world adapt to the new technology more quickly or more slowly? Under a higher rate of growth, the world may adapt to new challenges more quickly than it otherwise would, since more empowered people can, in general, meet new challenges more quickly than less empowered people can. This factor could mean less total danger from the risky tech.
2. How is the *rate* of technological progress related to the world’s *level* of risk-preparedness at any given level of development? On one hand, people may be less prepared if technology arrives more quickly. On the other, some people have made arguments which suggest that a higher *rate* of progress results in a more open, democratic, moral society at any given *level* of development. (Arguments in this book http://www.amazon.com/Moral-Consequences-Economic-Growth/dp/1400095719 support this view, though the issues are complicated and I haven’t reviewed this book yet.)
If you answer both questions favorably, it would strengthen the case that a higher rate of economic growth is good with respect to the global catastrophic risks in question. Perhaps these are the answers we’d arrive at if we keep thinking about the issue.
To clarify, I see the above comments as an advance on
> Further technological progress is likely to lead to technology with far more potential to do damage. Somewhat offsetting this, technological and economic progress may also lead to improved security measures and lower risks of war and terrorism.
because it may be more feasible to answer the questions I posed than it is to compare the effects mentioned here.
I think it’s important to stress the conceptual and normative difference between global catastrophic risks and existential risks. You do this to some degree, but I still get the impression that your analysis could benefit from greater clarity on this point. For example, you present global upside possibilities as the flip side of global catastrophic risks; but while your analysis of such risks highlights that most do not constitute major existential threats, you conclude by saying that a realized upside possibility may be as good, relative to the status quo, as an extinction event would be bad. In the future, you may want to discuss these two types of risks – catastrophic and existential – separately, perhaps even in separate posts.
Related to other items on your list (most obviously #1 and #7, also #3 and #5 and possibly others) is the risk of global war on the scale of, say, WWI or WWII. People doing straight-line extrapolation of economic trends as of, say, 1910 might be off (warning: what follows is both vague and guesswork) by half a century in terms of their predictions of when certain benchmarks would be hit.
I am far from a singularity believer, but one of the inventors of the singularity in science fiction, Vernor Vinge, makes what I think is a relevant point in his book Rainbow’s End.
The problem with the advance of potentially dangerous technology is that the amount of resources needed to do damage might become progressively smaller, so eventually things that would previously have required a superpower require only a small group or even a single person. I think genetic engineering of pathogens might be one avenue where this could happen.
This is a problem because it is much easier to see where a threat is coming from when the infrastructure being used is the size of a country rather than a single person.
However, an important lesson we learn from many attempts by individuals or groups to attack people today is that a lot of them are really not very good at it (see, e.g., the real but basically unworkable plans of the people whose discovery triggered the current ban on flying with liquids). So combined with the inherent uncertainty of something like a biological agent, I don’t imagine that complete human extinction by such a method would be very likely.
Basically what I am saying is that I agree with the premise that somehow attempting to stem the tide of technological change a) wouldn’t work and b) would prevent huge positive changes, especially because the regulation would be based on what is known, not what may be discovered.
Exactly agree — we need to look at issues broadly, globally, futuristically. I may have missed this, it may be so obvious to everyone, but a personal concern I have is hacking — stated specifically and not just framed as part of “technology,” as I believe the implications are different and solutions require a different approach. It seems to me that there are a number of skirmishes going on in this realm, and that there is potential for enormous harm to be done, especially with regard to risk #1. Have we been hearing for so long about potential harm to our power grids, nuclear in particular, that we have become inured to it? Is this problem immediate or chronic? Are the implications largely limited in scope, or much broader than I am imagining? In addition, besides the most ominous concerns like penetration of nuclear plants, what would hacking of our financial system do? Are we studying these sufficiently? Can we bring the hackers over to the “good” side? In addition to everything else mentioned, this is the kind of thing that makes me lie awake at night.
Things may be gently globally progressing, or not. But if you live in a place or population with one or more of these big risks, the very idea of this kind of progress (the “Whiggish” view, as the Economist calls it, or Pinker’s point, to others) will do you no good. To the contrary, it may hurt you if it makes the world relaxed or complacent. Further, calculating low probabilities of risk at the global level will tend to mask higher probabilities for particular places and populations. Better to remain agnostic on the global view, keep the skeptics on board, and focus on actions needed now and in the future.
Thanks for the comments, all.
Rohit, I agree that a significant increase in human lifespan should be considered an upside possibility.
Nick, this post focuses on the role of economic/technological progress in global catastrophic risks, with a frame of “does progress make things more dangerous relative to no progress?” I agree that in some ways the more important question is “Is faster progress preferable to slower progress if we take some rate of progress as given?” A future post will focus on that frame.
Pablo, the point I was trying to make was that (a) I don’t see specific high-probability risks of human extinction in the next 100 years but that (b) even if I did, I could imagine that such risks could be outweighed (for purposes of answering “Is economic/technological progress desirable?”) by upside possibilities.
Colin, agreed.
Tom, I largely agree. I also think it is worth noting how long the lag seems to have generally been between the invention of a powerful weapon and its availability to malicious individuals (nuclear weapons and many other military weapons have been available to governments and not individuals for a long time).
Dawn, good point – I think the risk of hacking or computer glitches becoming more dangerous as the world becomes more connected and digitized belongs on the list. I think further development can worsen this risk by increasing the degree of complexity and interconnectivity, but may also lower it via redundancies and response plans.
Jonathan, our values are global, so to the extent we’re interested in the risks to particular populations, it’s because addressing such risks would have outsized “good accomplished per dollar spent.” We are interested in such opportunities, but they weren’t the focus of this post.
Holden, can you (and Rohit) please elaborate on the extension of lifespan as an “upside possibility”? Do you mean that longer life is intrinsically better? Also, I am not fully satisfied with what you are saying about response times with regard to my hacking concern. Seems like a 13 year old could pretty much take the place down now if he or she wanted to. Institutions have a natural disadvantage in the cyber-security race because knowledge is outdated pretty much as soon as it is rendered usable.
Dawn, I do think that a longer healthy lifespan (as opposed to extending life merely by adding on low-quality, poor-health years) is an intrinsic improvement. I believe the vast majority of people would prefer longer healthy lifespans if given the choice.
I disagree with your statement that “a 13 year old could pretty much take the place down now if he or she wanted to.” If this were true, I would expect this to have happened by now; I don’t think it’s the case that our systems are relatively stable only because no technically competent people want to disrupt them. Overall, it seems to me that viruses and spam have become less of a problem, not more of one, over the last decade, which provides some evidence against the claim that “institutions have a natural disadvantage,” but my original response to you didn’t presume that institutions or hackers have an advantage. Rather, it stressed the potential value of “redundancies and response plans” in lowering the consequences (as opposed to the probability) of a major disruption.
Regarding your point in the previous (not this) post that “It may be true that we would be safer from global catastrophic risks if we had never had any economic/technological development, but a faster rate seems safer than a slower rate from here” — this seems contrary to what Nick Bostrom observes about GCRs arising from technology being the most important ones. It seems fair to say that we today would *not* be safer, but that most of us wouldn’t exist in the first place because population would be lower. The relevant question, however, is probably less whether we are safer than whether the ultimate number of human lives (or life-years) until some terminal disaster happens is increased or decreased.
On the answer to *this* question, I am personally agnostic. What I would suggest is what Bostrom has called “differential technological development,” or trying to promote technologies likely to lead to good effects.
In any case, here at GiveWell you are donating to charities aimed directly at very poor people, and therefore relying on first-order effects of giving in determining your choices. So it seems like even with a simpler view of the effects of technology than I have, it is not affecting your decisions much. Is this correct? Do you believe that incorporating such effects would be productive?
Sorry, that should actually read, “It seems fair to say that we today *would* be safer if we existed, but that most of us actually wouldn’t exist because population would be lower.” Whoops!
Alexander, I will be fleshing out my “faster vs. slower” claim in a future post. Part of the reason we’re starting to write about these issues is that we are broadening our research to look beyond direct aid.
I notice you missed “overpopulation” in your list. I mean, pandemics and climate change would definitely be less of a threat with lower population densities and lower overall population, respectively. But why focus on the symptoms and ignore the underlying cause, which is the unavoidable mathematical fact that the carrying capacity of the planet cannot grow indefinitely?
What has kept us out of a Malthusian scenario so far has been technological progress, so you are also underestimating the lethality of catastrophes #4 and #5. Also, keep in mind that “better energy sources” cost money and energy to develop, so a depletion-triggered economic collapse could squeeze us from both directions.
Alternative energy sources are such a tiny fraction of the market right now that if you genuinely believe they will supplant fossil fuels quickly enough that we shouldn’t be worried, you must believe that it is going to grow faster than the internet did in the 90’s. If this is what you believe, the most good you can do is to invest your money in alternative energy, multiply it by orders of magnitude over the next decade, and then give _that_ to charity.
I’m surprised by the apparent lack of attention given to global economic collapse or GEC. By this I mean not a recession, depression or Great Depression, but the utter failure of the entire global economic system. We are still in the throes of the 2008 “financial” crisis, which, on more than one occasion, has threatened GEC. I don’t even see this mentioned.
To me, GEC is as lethal as anything else, though not as quick. It’s in our face, right there, unlike an asteroid or volcano. It’s almost as uncontrollable as the weather, being under the sway not only of the entire human species but of everything else on this planet (at least with nuclear war, somebody has to press a button somewhere). It certainly isn’t very easy to “solve,” and it’s arguably the most unstable catalyst for global catastrophe that exists. Imagine a damaged planet-killing bomb that could, all by itself, blow up at any moment, hopping around the countryside like a gazelle while the person chasing it, in a frightening attempt to disarm it, is a hapless Don Knotts on methamphetamine. Why isn’t GEC discussed?
Julie, good point – I think the risk of major disruptions to the global economic system belongs on the list. As with the risk raised by Dawn in an earlier comment, I think further development can worsen this risk by increasing the degree of complexity and interconnectivity, but may also lower it via improved prevention and response plans.