The GiveWell Blog

The moral value of the far future

A popular idea in the effective altruism community is that most of the people we can help (with our giving, our work, etc.) are people who haven’t been born yet. By working to lower global catastrophic risks, speed economic development and technological innovation, and generally improve people’s resources, capabilities, and values, we may have an impact that (even if small today) reverberates for generations to come, helping more people in the future than we can hope to help in the present.

This belief is sometimes coupled with a belief that the most important goal of an altruist should be to reduce “existential risk”: the risk of an extreme catastrophe that causes complete human extinction (as, for example, a sufficiently bad pandemic – or extreme unexpected developments related to climate change – could theoretically do), and thus curtails large numbers of future generations.

We are often asked about our views on these topics, and this post attempts to lay them out. There is not complete internal consensus on these matters, so I speak for myself, though most staff members would accept most of what I write here. In brief:

  • I broadly accept the idea that the bulk of our impact may come from effects on future generations, and this view causes me to be more interested in scientific research funding, global catastrophic risk mitigation, and other causes outside of aid to the developing-world poor. (If not for this view, I would likely favor aid to the developing-world poor and would likely be far more interested in animal welfare as well.) However, I place only limited weight on the specific argument given by Nick Bostrom in Astronomical Waste – that the potential future population is so massive as to clearly (in a probabilistic framework) dwarf all present-day considerations.
  • I reject the idea that placing high value on the far future – no matter how high the value – makes it clear that one should focus on reducing the risks of catastrophes such as extreme climate change, pandemics, misuse of advanced artificial intelligence, etc. Even one who fully accepts the conclusions of “Astronomical Waste” has good reason to consider focusing on shorter-term, more tangible, higher-certainty opportunities to do good – including donating to GiveWell’s current top charities and reaping the associated flow-through effects.
  • I consider “global catastrophic risk reduction” to be a promising area for a philanthropist. As discussed previously, we are investigating this area actively.

Those interested in related materials may wish to look at two transcripts of recorded conversations I had on these topics: a conversation on flow-through effects with Carl Shulman, Robert Wiblin, Paul Christiano, and Nick Beckstead and a conversation on existential risk with Eliezer Yudkowsky and Luke Muehlhauser.

The importance of the far future

As discussed previously, I believe that the general state of the world has improved dramatically over the past several hundred years. It seems reasonable to state that the people who made contributions (large or small) to this improvement have made a major difference to the lives of people living today, and that when all future generations are taken into account, their impact on generations following them could easily dwarf their impact in their own time.

I believe it is reasonable to expect this basic dynamic to continue, and I believe that there remains huge room for further improvement (possibly dwarfing the improvements we’ve seen to date). I place some probability on global upside possibilities including breakthrough technology, space colonization, and widespread improvements in interconnectedness, empathy and altruism. Even if these don’t pan out, there remains a great deal of room for further reduction in poverty and in other causes of suffering.

In Astronomical Waste, Nick Bostrom makes a more extreme and more specific claim: that the number of human lives possible under space colonization is so great that the mere possibility of a hugely populated future, when considered in an “expected value” framework, dwarfs all other moral considerations. I see no obvious analytical flaw in this claim, and give it some weight. However, because the argument relies heavily on specific predictions about a distant future, backed, as far as I can tell, by little other than speculation, I do not consider it “robust,” and so I do not consider it rational to let it play an overwhelming role in my belief system and actions. (More on my epistemology and method for handling non-robust arguments containing massive quantities here.) In addition, if I did fully accept the reasoning of “Astronomical Waste” and evaluate all actions by their far future consequences, it isn’t clear what implications this would have. As discussed below, given our uncertainty about the specifics of the far future and our reasons to believe that doing good in the present day can have substantial impacts on the future as well, it seems possible that “seeing a large amount of value in future generations” and “seeing an overwhelming amount of value in future generations” lead to similar consequences for our actions.

Catastrophic risk reduction vs. doing tangible good

Many people have cited “Astronomical Waste” to me as evidence that the greatest opportunities for doing good are in the form of reducing the risks of catastrophes such as extreme climate change, pandemics, problematic developments related to artificial intelligence, etc. Indeed, “Astronomical Waste” seems to argue something like this:

For standard utilitarians, priority number one, two, three and four should consequently be to reduce existential risk. The utilitarian imperative “Maximize expected aggregate utility!” can be simplified to the maxim “Minimize existential risk!”.

I have always found this inference flawed, and in my recent discussion with Eliezer Yudkowsky and Luke Muehlhauser, it was argued to me that the “Astronomical Waste” essay never meant to make this inference in the first place. The author’s definition of existential risk includes anything that stops humanity far short of realizing its full potential – including, presumably, stagnation in economic and technological progress leading to a long-lived but limited civilization. Under that definition, “Minimize existential risk!” would seem to potentially include any contribution to general human empowerment.

I have often been challenged to explain how one could possibly reconcile (a) caring a great deal about the far future with (b) donating to one of GiveWell’s top charities. My general response is that in the face of sufficient uncertainty about one’s options, and lack of conviction that there are good (in the sense of high expected value) opportunities to make an enormous difference, it is rational to try to make a smaller but robustly positive difference, whether or not one can trace a specific causal pathway from doing this small amount of good to making a large impact on the far future. A few brief arguments in support of this position:

  • I believe that the track record of “taking robustly strong opportunities to do ‘something good'” is far better than the track record of “taking actions whose value is contingent on high-uncertainty arguments about where the highest utility lies, and/or arguments about what is likely to happen in the far future.” This is true even when one evaluates track record only in terms of seeming impact on the far future. The developments that seem most positive in retrospect – from large ones like the development of the steam engine to small ones like the many economic contributions that facilitated strong overall growth – seem to have been driven by the former approach, and I’m not aware of many examples in which the latter approach has yielded great benefits.
  • I see some sense in which the world’s overall civilizational ecosystem seems to have done a better job optimizing for the far future than any of the world’s individual minds. It’s often the case that people acting on relatively short-term, tangible considerations (especially when they did so with creativity, integrity, transparency, consensuality, and pursuit of gain via value creation rather than value transfer) have done good in ways they themselves wouldn’t have been able to foresee. If this is correct, it seems to imply that one should be focused on “playing one’s role as well as possible” – on finding opportunities to “beat the broad market” (to do more good than people with similar goals would be able to) rather than pouring one’s resources into the areas that non-robust estimates have indicated as most important to the far future.
  • The process of trying to accomplish tangible good can lead to a great deal of learning and unexpected positive developments, more so (in my view) than the process of putting resources into a low-feedback endeavor based on one’s current best-guess theory. In my conversation with Luke and Eliezer, the two of them hypothesized that the greatest positive benefit of supporting GiveWell’s top charities may have been to raise the profile, influence, and learning abilities of GiveWell. If this were true, I don’t believe it would be an inexplicable stroke of luck for donors to top charities; rather, it would be the sort of development (facilitating feedback loops that lead to learning, organizational development, growing influence, etc.) that is often associated with “doing something well” as opposed to “doing the most worthwhile thing poorly.”
  • I see multiple reasons to believe that contributing to general human empowerment mitigates global catastrophic risks. I laid some of these out in a blog post and discussed them further in my conversation with Luke and Eliezer.

For one who accepts these considerations, it seems to me that:

  • It is not clear whether placing enormous value on the far future ought to change one’s actions from what they would be if one simply placed large value on the far future. In both cases, attempts to reduce global catastrophic risks and otherwise plan for far-off events must be weighed against attempts to do tangible good, and the question of which has more potential to shape the far future will often be a difficult one to answer.
  • If one sees few robustly good opportunities to “make a huge difference to the far future,” the best approach to making a positive far-future difference may be “make a small but robustly positive difference to the present.”
  • One ought to be interested in “unusual, outstanding opportunities to do good” even if they don’t have a clear connection to improving the far future.

With that said:

  • This line of reasoning is not the only or overwhelming consideration in our current top charity recommendations. As discussed in the previous section, we place some weight on the importance of the far future but believe it would be irrational to let our beliefs about it take on excessive weight in our decision-making. The possibility that arguments about the importance of the far future are simply mistaken, and that the best way to do good is to focus on the present, carries weight.
  • I also do not claim that the above reasoning should push all those interested in the far future into nearer-term, higher-certainty actions. People who are well-positioned to take on low-probability, high-upside projects aiming to make a huge difference – especially when their projects are robustly worthwhile and especially when their projects represent promising novel ideas – should do so. People who have formed the deep understanding necessary to evaluate such projects well should not take us to be claiming that their convictions are irrational given what they know (though we do believe some people form irrationally confident convictions based on speculative arguments). As GiveWell has matured, we’ve become (in my view) much better-positioned to take on such low-probability, high-upside projects; hence our launch of GiveWell Labs and our current investigations on global catastrophic risks. The better-informed we become, the more willing we will be to go out on a limb.

Global catastrophic risk reduction as a promising area for philanthropy

I see global catastrophic risk reduction as a promising area for philanthropy, for many of the reasons laid out in a previous post:

  • It is a good conceptual fit for philanthropy, which is seemingly better suited than other approaches to working toward diffused benefits over long time horizons.
  • Many global catastrophic risks appear to get little attention from philanthropy.
  • I place some (though not overwhelming) weight on the argument that the implications of a catastrophe for the far future could be sufficiently catastrophic and long-lasting that even a small mitigation could have huge value.

I believe that declaring global catastrophic risk reduction to be the clearly most important cause to work on, on the basis of what we know today, would not be warranted. A broad variety of other causes could be superior under reasonable assumptions. Scientific research funding may be far more important to the far future (especially if global catastrophic risks turn out to be relatively minor, or science turns out to be a key lever in mitigating them). Helping low-income people (including via our top charities) could be the better area to work in if our views regarding the far future are fundamentally flawed, or if opportunities to substantially mitigate global catastrophic risks turn out to be highly limited. Working toward better public policy could also have major implications for both the present and the future, and having knowledge of this area could be an important tool no matter what causes we end up working on. More generally, by exploring multiple promising areas, we create better opportunities for “unknown unknown” positive developments, and the discovery of outstanding giving opportunities that are difficult to imagine given our current knowledge. (We also will become more broadly informed, something we believe will be very helpful in pitching funders on the best giving opportunities we can find – whatever those turn out to be.)


  • Jacy Anthis on July 3, 2014 at 1:57 pm said:

    I’d like to add my view on one specific point, although I agree “most staff members would accept most of” the points outlined above:

    “(If not for this view, I would likely favor the latter and would likely be far more interested in animal welfare as well.)”

    It’s important to distinguish different animal welfare considerations here. From our discussion of this post, I take it that by “animal welfare” you mean “the non-proxy goal of short-term improvements in non-human animal welfare.” There are at least two other ideas included in the concept of “animal welfare”:

    (i) Given the likelihood of vast numbers of non-human sentient beings in the far future (factory farming, terraforming and wild animals, sentient machines, simulations, experimentation, etc.), it seems the welfare of far-future non-human sentient beings could be vastly important just as the welfare of far-future humans could be vastly important. This expected impact is an important consideration.

    (ii) “Animal welfare” can also refer to interventions like vegan leafleting, policy work on factory farming, research on non-human sentience, and fighting for legal personhood for other sentient beings. I expect the flow-through effects from interventions such as these to be at least as great as the flow-through effects of fighting global poverty, especially for the expected impact on (i) through changes in social values and norms.

    I also find these effects exceptionally compelling because it seems unlikely that they could be negative, whereas I can easily imagine scenarios where technological progress and economic growth would be harmful to future individuals.

    Overall, I think it’s important not to dismiss non-human animal welfare for a lack of flow-through effects because that only applies to a specific subset of non-human animal welfare considerations.

  • Rajat Dhameja on July 3, 2014 at 1:58 pm said:

    It was great to read this article’s emphasis on the value of the far future. Not many organizations are visionary to this extent, and many would agree that such a cause should be included in their portfolio of charitable causes. For a moment, I wish to play devil’s advocate by raising a question: “Just as individuals invest in a portfolio of stocks to minimize risk and maximize gain in the long run, how should the same individuals invest their resources (money, time, hours) in charitable organizations that differ vastly from one another? In other words, what would this portfolio look like?”

    Contributions to local charities and established causes are tangible; the results are available quicker, and the difference is visible now or in the near future. In contrast, the far future’s results may take decades to appear, or may only be seen by future generations. If this is so, how can people be motivated to direct resources to such causes?

    Without a doubt, this remains a challenge, as most people are looking for returns (social good and fulfillment) in the short term (a year) or the long term (decades, or their children’s lifetimes). But it takes vision to focus on gains that will be seen generations away.

    Thanks for sharing this article.

    Rajat Dhameja, MD, MHA

  • Robin Hanson on July 3, 2014 at 2:25 pm said:

    This post describes attempts to help the future as speculative and non-robust in contrast to helping people today. But it doesn’t at all address the very robust strategy of simply saving resources for use in the future. That may not be the best strategy, but surely one can’t complain about its robustness.

  • Pablo Stafforini on July 3, 2014 at 6:25 pm said:

    “it doesn’t at all address the very robust strategy of simply saving resources for use in the future. That may not be the best strategy, but surely one can’t complain about its robustness.”

    I agree, and Paul Christiano makes the same point in a recent blog post.

  • Brian Tomasik on July 4, 2014 at 5:15 am said:

    I agree that the maxim “Minimize existential risk!” is kind of trivial if “existential risk” is defined as “bad futures.” And I would not agree with Bostrom’s claim if “existential risk” were replaced by “extinction risk,” because I think the scope for changing future trajectories conditional on human survival is much bigger than the scope for changing whether human survival happens.

    That said, I think Bostrom makes an important point: Namely, that all that matters for our actions is how they affect the quality of the future, not the speed with which the future arrives. Speed only matters insofar as it influences quality. This is different from many ordinary world views, according to which faster (civilian) technology is obviously beneficial. Holden and others seem to maintain that faster civilian technology is generally better even for the quality of the far future, but it’s important to realize that this is a more controversial claim among EAs and requires more argument than the relatively clear argument that more civilian technology is better for the quality of the next few decades.

    I also agree with Jacy’s point about animal charities.

  • R said:

    I feel silly spouting a couple of intuitions in what looks like a long, carefully thought-out discussion I haven’t read all of. But that doesn’t stop me. 🙂

    One thing that really makes an impression on me is that I doubt that the folks who arguably changed the world a couple hundred years ago, the Hookes and Watts and so on, really had even a sketch of their work’s long-term effects in mind. You can solve problems in front of you, you can try to advance general values (scholars can learn stuff, policy people can argue for good governance), but history just seems too unpredictable to *plan* specific results (like colonization of space) you want your work to have across such long timeframes.

    I also suspect it’s easy to underestimate the benefits for long-term progress (and shorter-term development) of regular folks being healthy, happy, and able to do more with their lives. I can’t present great evidence or arguments for this, but as a feeble gesture toward them: progress has often been partly a function of economics, and economies need regular folks to function, too.

  • Laura S. on July 24, 2014 at 5:48 am said:

    @ Jacy Anthis

    I’m really happy to see that you put this into consideration. It is not only about surviving in the long term – I do not think we have some collective conscience that works towards this end. It is more about the quality of life, be it our own or that of other sentient beings. Without taking our responsibility for the habitat into consideration, we cannot develop our altruism further and proclaim ourselves the true protectors of this planet’s future.

    We should also consider possible dangers that stem from our own technological advancements. Monsanto, genetic manipulation, social engineering, or simply becoming addicted to the internet and its virtual relations can pose a great threat to mankind as well, and thus are worth further research.

  • Holden on July 24, 2014 at 9:15 pm said:

    Thanks for the comments, all.

    Jacy, I agree conceptually with your comment, but I think that empowering humans has more paths for “flowing through” to other effects than changing attitudes toward animal suffering. (And as you point out, changing attitudes is a particular subset of animal-welfare-focused interventions; prospects for flow-through effects seem weaker still on interventions that seek to reduce suffering without changing attitudes.)

    Rajat, I don’t think I understand your question.

    Robin and Pablo, I think the lack of robustness for the proposed intervention comes from the difficulty of putting conditions in place now that will still be relevant – and cause money to be well-spent toward its intended ends – far in the future. I’d find it very hard to lay out conditions for how money is to be spent 100+ years in the future that I’d be confident would reflect my actual values. (As a side note, the presentations I’ve seen of this idea emphasize how much an investment grows over some long time period: for example, a 2% real interest rate over ~350 years would turn $1000 into $1,000,000 (in real terms). But this doesn’t mean that $1,000,000 in real value is created; much of the interest could reflect compensation for time preference rather than value created. In addition, I believe that direct good accomplished “compounds” as well; one could think of bednets as investments in human capital, for example. Thus, it isn’t clear to me that saving over very long time periods represents an unusually cost-effective way to help people.)
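    (For what it’s worth, the compounding arithmetic in that side note checks out; a minimal sketch, using only the illustrative 2% rate and ~350-year horizon from the example:)

```python
# Compound growth of $1,000 at a 2% real annual interest rate over 350 years.
# These are the illustrative figures from the example above, not a forecast.
principal = 1_000
rate = 0.02
years = 350

future_value = principal * (1 + rate) ** years
print(f"${future_value:,.0f}")  # on the order of $1,000,000 in real terms
```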

    Brian, I agree (note that I don’t think Bostrom’s essay conclusively establishes this point; but it does give an interesting argument for it).

    Will, we use Linode.

    R, I agree with you, and my post tried to express similar ideas; this sort of intuition plays a major role in my belief that short-term, tangible giving opportunities are serious contenders for the best giving opportunities, regardless of whether we can trace compelling paths from the short-term impacts to impacts on the far future.
