The GiveWell Blog

Singularity Summit


Among those who follow GiveWell, there is some interest in the Singularity Institute for Artificial Intelligence and its mission of lowering the risks associated with the creation of artificial intelligence that “[leaves] human abilities far behind.” We have been asked several times to share our views on its work and on the value of donating to it.

My only knowledge of this issue, as of now, comes from reading Less Wrong and Overcoming Bias and speaking with the Institute’s President, Michael Vassar. I’m interested in this community partly because it has a lot of people (including Mr. Vassar) who think critically and analytically about how to accomplish as much good as possible, considering all options and putting positive impact above other considerations; in that sense their values overlap strongly with ours.

At this point

  • I believe that there are enormous risks and upsides associated with artificial intelligence. Managing these deserves serious discussion, and it’s a shame that many laugh off such discussion.
  • I do not feel that the Institute is a strong opportunity for a donor to accomplish good. I sketched my reasons in this comment and will eventually lay out my thoughts more thoroughly.
  • I do intend to learn more about this area and am open to changing my mind in either direction.

Consistent with the last point, I will be attending the Singularity Summit this year and encourage others interested in this topic to consider doing the same.

Comments

  • Nick Beckstead on June 29, 2010 at 4:26 pm said:

    I’ll be glad to hear more about your views here. I have a lot of questions about existential risks and I’d been meaning to ask you about them.

  • anonynamja on July 7, 2010 at 5:16 pm said:

    Holden, FAI is a problem not amenable to GiveWell’s standard methodology – even defining the right questions to ask (the specifics of #4-6) would be a huge leap forward toward the goal.

  • Holden on July 8, 2010 at 4:55 pm said:

    Anonynamja, agreed. Our “standard methodology” is very much a work in progress, and we hope to expand the sort of work we do over time.

  • Paul Crowley on August 16, 2010 at 1:40 pm said:

    Leaving aside the specifics of SIAI’s approach, would you agree that for those who value total utility, it’s hard to see how even the most efficient charity that directly benefits those alive today could begin to match the efficiency of a charity that manages even a relatively small reduction in existential risk?

  • Robert Daoust on August 16, 2010 at 5:41 pm said:

    Well, I value total utility, to a certain extent, but I still prefer a charity that benefits those alive today, because clear and present benefits are actual, while risks are only ‘risks’. I am all in favor of activities against existential risks, but I think they can only be ‘marginal’ in an economy like ours. However, it would be a good thing, I guess, if those activities were more explicitly incorporated in charities against our present woes, because solving those woes is probably the best way to prevent future problems.

  • Ian Turner on August 17, 2010 at 8:33 am said:

    Paul: Only if you can be confident that the charity’s work is actually effective. What if in attempting to reduce existential risk, the charity ended up bringing about the end of life?

  • Jonah S. on August 17, 2010 at 11:04 pm said:

    Robert: You might benefit from reading the collection of posts linked at the Less Wrong wiki under the article titled “Shut up and multiply”:

    http://wiki.lesswrong.com/wiki/Shut_up_and_multiply

    In particular, see the post titled “Circular Altruism.”

    Paul: I agree with Ian.

  • Paul Crowley on August 18, 2010 at 3:02 am said:

    Jonah: If one accepts “shut up and multiply”, one doesn’t have to be *confident* that the donation will reduce the risk; one just has to believe that the expected change in risk is negative and non-negligible, using “expected” in the technical sense of probability theory.

  • Jonah S. on August 18, 2010 at 4:36 am said:

    Paul: You are right.

    But my impression is that, as a practical matter, one has to exercise special vigilance to make sure that an organization is making a positive difference even with respect to a well-considered expected value calculation.

    Some of my thinking on this point comes across in my post titled “Existential Risk and Public Relations.” But there’s more to say about this matter. Explaining my position will take some more posts.

  • Robert Daoust on August 18, 2010 at 12:44 pm said:

    Jonah, I had read ‘Circular altruism’ already, along with many posts at the Less Wrong wiki, and that’s why I chose to comment on Holden’s post here. I’m not as familiar as I’d like with Less Wrong discussions, but my feeling is that rationality remains unreliable in the current state of our civilization, especially when such an intractably huge question as our present and future suffering is concerned.

    More specifically, both my feeling and my rational analysis make me suspicious of preoccupations with future suffering when clear and present excessive sufferings are so neglected, and that’s why I suggest that research dealing with friendly intelligence or existential risks might become involved right now in the resolution of our present problems…

  • Jonah S. on August 18, 2010 at 1:26 pm said:

    Robert,

    Interesting to know that you had already read those articles. I personally do not find your views compelling but wish you well.

  • Holden on August 27, 2010 at 7:21 am said:

    Paul, I agree with your statement if taken literally. However, in addition to the fact that I (and many others) am not a strict utilitarian in my values, I feel that people often jump too quickly from this argument to “Therefore existential risk charities are the best ones to support.” To me there is an enormous gulf between these ideas. I am not convinced that any of the ideas I’ve heard for directly addressing existential risks (and I have discussed many) have any non-negligible positive expected impact on survival. By “non-negligible” I mean “both above the positive expected impact of everyday good works, which improve the total resources and human capital at our disposal, and above the threshold where taking action would not constitute falling prey to Pascal’s Mugging.”

    I think I am somewhere in between Paul and Robert on future/speculative vs. present/tangible impact. I definitely value future generations and think that projects with a substantial expected impact on survival (which would include, for example, reducing the odds of a major planet-wide catastrophe by 1%) are worth a lot of investment. But I also think there are limits to how well our rationality applies to thinking about such things. In particular, I am bothered by how non-contingent most of the arguments I hear for focusing on existential risk sound (that is, they sound as though they’d be equally compelling regardless of almost any empirical facts about our world and times, as well as the specific proposals and personnel on the table). I think that projects that expect to see some short- or medium-term tangible impact are inherently more accountable, more prone to reality checks and meaningful “bets” on the underlying beliefs, and more likely to succeed.

    Most of the supporters of existential risk charities I’ve engaged with do not seem, to me, to be giving enough weight to these considerations. On the other hand, most non-supporters seem overly dismissive to me.

  • Nick Beckstead on August 30, 2010 at 7:58 am said:

    I share some of your worries about non-contingency and Pascal’s Mugging.

    On the Pascal’s Mugging point, I think it is important to remember that much of the low chance of making a difference on existential risk turns on the fact that we’re dealing with a collective action problem. Like voting, fighting in a just and important war, or not producing a lot of CO2, the odds are that your personal action will not be decisive. However, when your action is part of a large collective and the collective’s action does well on average, it is often reasonable to cooperate. (Parfit makes related points in the section called “Five Mistakes in Moral Mathematics” in Reasons and Persons.)

    As an example, take voting. In some swing states, the odds that your vote is decisive in a presidential election are in the ballpark of 1 in 10 million (http://onlinelibrary.wiley.com/doi/10.1111/j.1465-7295.2010.00272.x/abstract). If there were an important election going on, it would be unreasonable to argue that voting was irrational on the grounds that it is like a case of Pascal’s Mugging. Perhaps you have better things to do with your time, but it wouldn’t be a mugging. (Perhaps you wouldn’t have better things to do: if the decision would significantly affect the lives of 10 million people, which it easily could in an important election, a few minutes to vote might not be a bad idea.)

    In my view, decreasing the chance of existential risk by 1% would be more important than having a presidential election turn out well (provided the election itself didn’t also significantly reduce existential risk); counting only people who already exist, such a decrease would save 70 million expected lives. By parallel reasoning, if some effort we could make would have a 1 in 10 million chance of decreasing existential risk by 1%, paying for the opportunity wouldn’t count as getting mugged. Here, though, one would only be decreasing the chance of existential risk by 1 in a billion. This suggests, but does not show, that the probabilities can be quite small before a low-probability, high-stakes bet becomes a mugging.
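
    To make the arithmetic explicit, here is a minimal sketch in Python (the 7-billion figure for people alive today is an assumption implied by the 70-million number, and the variable names are illustrative):

        # Expected-value arithmetic for the voting and risk-reduction examples.
        world_population = 7e9     # people alive today (assumed)

        # Voting: a ~1 in 10 million chance of deciding an outcome that
        # significantly affects 10 million people.
        p_decisive_vote = 1e-7
        people_affected = 1e7
        print(p_decisive_vote * people_affected)    # 1.0 expected people affected

        # Existential risk: a 1% reduction, counting only people alive today.
        risk_reduction = 0.01
        print(risk_reduction * world_population)    # 7e7, i.e. 70 million expected lives

        # An effort with a 1 in 10 million chance of achieving that 1% reduction.
        p_effort_works = 1e-7
        print(p_effort_works * risk_reduction)      # 1e-9, i.e. 1 in a billion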

    The argument does not show that folks who worry about existential risk are not being mugged, because the analogy is not perfect: in the case of voting, many other people are likely to cooperate (though your personal decision may have little to do with that), while in the case of existential risk, few people are likely to cooperate (though here there seems to be more potential for influence). If this difference is deemed to be relevant, then existential risk folks are possibly being mugged. If it is not, then they are not being mugged (though other things may have higher expected return).

    Let’s say the number of potential cooperators is not relevant in this way (provided the chance of success remains constant). If we could show that it is possible to decrease the probability of existential risk by more than 1 in a billion, that would show that we aren’t really dealing with a mugging. I can’t show this, but I do find it plausible. A lifetime of effective donations really could, I believe, decrease the odds of existential risk by 1 in a billion. If you buy all of that, we aren’t dealing with a mugging.

  • Holden on August 31, 2010 at 3:20 am said:

    Nick, I don’t automatically associate low probabilities with Pascal’s Mugging. For example, I would pay to avert a one-in-a-billion chance of a billion untimely deaths, if I had high confidence in the quantification (in expectation, that is one death averted). What I am inclined to call a “Pascal’s Mugging” is a situation in which I have no sense of an event’s probability, other than that it seems very unlikely (it fits in the category of “story someone told me with no serious empirical grounding”), yet the event would hypothetically involve a huge cost/benefit, leading to the implication that I should invest significantly. In this case it feels like I simply don’t have the mental precision required to assign a probability low enough to recommend against investment, yet investment would be foolish. That is how I feel at this point about investing in the various anti-existential-risk projects I have discussed.

  • Nick Beckstead on August 31, 2010 at 7:37 am said:

    I guess I misunderstood what you said here: “By ‘non-negligible’ I mean ‘both above the positive expected impact of everyday good works, which improve the total resources and human capital at our disposal, and above the threshold where taking action would not constitute falling prey to Pascal’s Mugging.’”

    At any rate, the concern about murky probabilities might be misplaced. Sometimes, I won’t stand behind a point-valued probability estimate, but I’ll be confident that the probability is at least a certain amount. Example: I don’t know how likely it is that the copper market will do really well in 30 years, but I’d say the odds are at least 1 in a thousand. If this is what I think, then just as I’m willing to bet at 1:1000 odds on some event of “known” probability, I should also be willing to bet at 1:1000 odds on the claim that the copper market will be doing really well in 30 years. Similarly, even if you can’t tell what the odds are for various existential risks, I think you can say that many of them have odds above certain low thresholds (1 in a million, 1 in a thousand, etc.). And if, given that those are the odds, we can reduce the probabilities, even by a small amount, I don’t think the mugging analogy will work out. [Brian Weatherson, a Rutgers philosopher, defends a view of this kind about probability. You can see some details of it in the first few pages of the paper “Keynes, Uncertainty, and Interest Rates”, available here: http://brian.weatherson.org/papers.shtml, under “Major Published Works”.]
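
    As a minimal sketch of that betting logic (the function and the numbers are illustrative, not from the discussion above):

        # A bet at 1:1000 odds has non-negative expected profit whenever the
        # event's probability is at least ~1/1001, so a confident *lower bound*
        # of 1/1000 is enough to act on, even without a point estimate.
        def expected_profit(p_event, stake=1.0, payout_odds=1000):
            """Expected profit of staking `stake` at payout_odds:1 on the event."""
            return p_event * payout_odds * stake - (1 - p_event) * stake

        print(expected_profit(1e-3))   # 0.001: non-negative at the stated lower bound
        print(expected_profit(1e-4))   # ~-0.90: negative below it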

    I think it is important to remember here that ignoring the existential risks under discussion is also a choice; if the probabilities are higher than you think, it could be a big mistake.

  • Holden said:

    Nick, what I mean by ‘Pascal’s mugging’ is still narrower than what you’re saying. It isn’t just a reference to low and fuzzy probabilities.

    What I’m saying is that in this case, I find the arguments I’ve seen for existential risk reduction charities very uncompelling, and I can’t comfortably place something like P(SIAI will be successful) above any particular threshold. (Not even 10^-(3^^^^3).) The only reason I would place it above that threshold is out of a feeling that my brain pretty much isn’t capable of assigning that low a probability to any vaguely defined hypothesis.

    Given this state of mind, it seems that it would be a mistake for me to invest, for the same reason that it’s a mistake to give in to a Pascal’s Mugging. If you have a different state of mind and feel confident that, say, P(SIAI will be successful)>0.01%, different conclusions follow.

  • Florent Berthet on January 6, 2012 at 4:53 am said:

    Holden, it’s been more than a year since your last comment. Do you still hold the same position regarding the SIAI?

  • Holden on January 6, 2012 at 12:55 pm said:

    Florent, I still endorse all three of the bullet points in the above post. I have done more investigation of SIAI; you can see more of my thoughts & reasoning at

    In the first interview, SIAI implied (in my view) that it did not have room for more funding. Representatives have since implied that it does in fact have room for more funding, but haven’t indicated an interest in spelling this out for me (more).
