GiveWell is hiring Summer Research Analysts for summer 2016

We’re looking for people who will be entering their final year of undergrad or graduate school in 2016 to work at GiveWell in the summer of 2016 as Summer Research Analysts, with the possibility of joining GiveWell full time after graduation.

More information and how to apply can be found here:

If you follow GiveWell and want to help us out, please share this post with anyone whom you think might be a good fit for the role.

Differential technological development: some early thinking

Note: this post aims to help a particular subset of our audience understand the assumptions behind our work on science philanthropy and global catastrophic risks. Throughout, “we” refers to positions taken by the Open Philanthropy Project as an entity rather than to a consensus of all staff.

Two priorities for the Open Philanthropy Project are our work on science philanthropy and global catastrophic risks. These interests are related because—in addition to greatly advancing civilization’s wealth and prosperity—advances in certain areas of science and technology may be key to exacerbating or addressing what we believe are the largest global catastrophic risks. (For detail on the idea that advances in technology could be a driver, see “‘Natural’ GCRs appear to be less harmful in expectation” in this post.) For example, nuclear engineering created the possibility of nuclear war, but also provided a source of energy that does not depend on fossil fuels, making it a potential tool in the fight against climate change. Similarly, future advances in bioengineering, genetic engineering, geoengineering, computer science (including artificial intelligence), nanotechnology, neuroscience, and robotics could have the potential to affect the long-term future of humanity in both positive and negative ways.

Therefore, we’ve been considering the possible consequences of advancing the pace of development of various individual areas of science and technology in order to have more informed opinions about which might be especially promising to speed up and which might create additional risks if accelerated. Following Nick Bostrom, we call this topic “differential technological development.” We believe that our views on this topic will inform our priorities in scientific research, and to a lesser extent, global catastrophic risks. We believe our ability to predict and plan for future factors such as these is highly limited, and we generally favor a default presumption that economic and technological development is positive, but we also think it’s worth putting some effort into understanding the interplay between scientific progress and global catastrophic risks in case any considerations seem strong enough to influence our priorities.

The first question our investigation of differential technological development looked into was the effect of speeding progress toward advanced AI on global catastrophic risk. This post gives our initial take on that question. One idea we sometimes hear is that it would be harmful to speed up the development of artificial intelligence because not enough work has been done to ensure that when very advanced artificial intelligence is created, it will be safe. This problem, it is argued, would be even worse if progress in the field accelerated. However, very advanced artificial intelligence could be a useful tool for overcoming other potential global catastrophic risks. If it comes sooner—and the world manages to avoid the risks that it poses directly—the world will spend less time at risk from these other factors.

Curious about how to compare these two factors, I tried looking at a simple model of the implications of a survey of participants at a 2008 conference on global catastrophic risk organized by the Future of Humanity Institute at Oxford University. I found that speeding up advanced artificial intelligence—according to my simple interpretation of these survey results—could easily result in reduced net exposure to the most extreme global catastrophic risks (e.g., those that could cause human extinction), and that what one believes on this topic is highly sensitive to some very difficult-to-estimate parameters (so that other estimates of those parameters could yield the opposite conclusion). This conclusion seems to be in tension with the view that speeding up artificial intelligence research would increase risk of human extinction on net, so I decided to write up this finding, both to get reactions and to illustrate the general kind of work we’re doing to think through the issue of differential technological development.

Below, I:

  • Describe our simplified model of the consequences of speeding up the development of advanced AI on the risk of human extinction using a survey of participants at a 2008 conference on global catastrophic risk organized by the Future of Humanity Institute at Oxford University.
  • Explain why, in this model, the effect of faster progress on artificial intelligence on the risk of human extinction is very unclear.
  • Describe several of the model’s many limitations, illustrating the challenges involved with this kind of analysis.

We are working on developing a broader understanding of this set of issues, as they apply to the areas of science and technology described above, and as they relate to the global catastrophic risks we focus on.

A simple model of the consequences of faster advanced AI development

“Artificial intelligence is risky if we are not sufficiently prepared” and “Artificial intelligence will reduce other risks if developed safely” are probably the two clearest considerations related to the impact of faster progress toward advanced artificial intelligence on global catastrophic risks, and how much weight to put on them depends on a number of highly uncertain questions, including:

  1. How large other global catastrophic risks are in comparison with risks from advanced artificial intelligence
  2. How much advanced artificial intelligence would affect the other important risks
  3. How much (if any) less prepared we’d be for advanced artificial intelligence if the pace of progress in artificial intelligence were faster

To illustrate what might be involved in reasoning about this kind of problem, I made a simplified model using a survey of participants at a 2008 conference on global catastrophic risk organized by the Future of Humanity Institute at Oxford University (where I worked before joining GiveWell). We do not know the details regarding the specific participants, but would guess that participation in this survey selects for unusually high levels of concern about global catastrophic risk.

The model makes the following assumptions:

  1. It uses the conference’s estimates of the probability of extinction from various risks.
  2. It assumes the risk from possible extinction events other than advanced AI is evenly distributed from 2015 to 2100.
  3. It assumes that if advanced AI is developed safely, the remaining years of risks from other factors will become negligible.
  4. It focuses exclusively on extinction risk, which we believe is the main type of risk that the people most concerned about advanced AI have in mind.

The model outputs how much the extinction risk from advanced AI would have to increase (as a fraction of its initial value), as a result of arriving X years sooner, in order to offset the risk reduction from having X fewer years exposed to other risks. The model only considers what happens between 2015 and 2100.

The median participant in FHI’s conference estimated that the probability of human extinction by 2100 was 19%, with 5 percentage points of that risk coming from advanced artificial intelligence. In my model, those numbers yield the following table:

Number of years advanced AI is sped up                 1        5        10
Percentage point decrease in other risks               0.16%    0.82%    1.65%
Fractional decrease in other risks                     1.18%    5.88%    11.76%
Fractional increase in risk from advanced AI
  required to offset decrease in other risk            3.29%    16.47%   32.94%


For example, this table says that if progress toward advanced AI were sped up by 5 years, it would reduce other risks by about 6% of their initial value (about 0.8 percentage points). Because the probability of extinction from advanced AI (5 percentage points) is lower than the probability of extinction from other causes (14 percentage points), offsetting those 0.8 percentage points requires a proportionally larger increase in AI risk: more than 16% of its initial value. The figures for 5 and 10 years are simply multiples of the figures for one year.
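The table's figures follow from simple arithmetic on the survey numbers. Here is a minimal sketch of that arithmetic (my own reconstruction, not code from the original analysis):

```python
# Survey figures: 19% total extinction risk by 2100, 5 points from advanced AI.
AI_RISK = 0.05
OTHER_RISK = 0.19 - 0.05   # 14 percentage points from all other causes
YEARS = 2100 - 2015        # other risks assumed evenly spread over 85 years

def speedup_effects(x):
    """Model's implications of speeding up advanced AI by x years."""
    pp_decrease = OTHER_RISK / YEARS * x      # absolute drop in other risks
    frac_decrease = pp_decrease / OTHER_RISK  # same drop, as a fraction of other risks
    offset_needed = pp_decrease / AI_RISK     # fractional AI-risk increase that offsets it
    return pp_decrease, frac_decrease, offset_needed

for x in (1, 5, 10):
    pp, frac, offset = speedup_effects(x)
    print(f"{x:>2} yr: other risks fall {pp:.2%} absolute "
          f"({frac:.2%} of their value); offsetting rise in AI risk: {offset:.2%}")
```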

The model doesn’t say anything about how large the increase in risk from advanced AI would actually be if advanced AI were sped up by a given number of years, largely because I couldn’t think of any simple and reasonable way to model that. The model allows readers to check the implications of their own views on this topic.

One possible example for some context: if one believed that 20 additional years of research on risks from advanced AI would be needed in order to cut risk from advanced AI in half (from 5% to 2.5%), this would imply an average of 0.125 additional percentage points of risk from advanced AI per year of speedup. Since each year of speedup is estimated at ~0.164 percentage points of risk reduction from other causes, this would imply that speedup is safety-improving for the average year. (The picture becomes substantially more complicated if one does not evenly divide the relevant probabilities across years.)
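That worked example can be checked the same way (the 20-year figure is the post's illustrative assumption, not a survey result):

```python
# Illustrative assumption: 20 extra years of preparation would halve AI risk.
ai_risk = 0.05
risk_cut = ai_risk / 2                          # 2.5 percentage points removed by preparation
prep_years = 20
extra_ai_risk_per_year = risk_cut / prep_years  # 0.125 points of AI risk per year of speedup
other_risk_saved_per_year = (0.19 - 0.05) / 85  # ~0.165 points of other risk per year

# Under these numbers, a marginal year of speedup reduces net extinction risk.
print(extra_ai_risk_per_year < other_risk_saved_per_year)  # True
```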

Overall, under the assumptions of this model, the net effect of faster progress toward advanced AI on total global catastrophic risk seems unclear.

To be a bit more general and illustrate the impacts of alternative assumptions, here’s a table that looks at the relevant figures under different assumptions about the ratio of risk from advanced AI to all other risks. These are the proportional increases in risk from advanced AI required to offset X years of AI progress if you accept the above model and assume that the ratio of (risk from advanced AI:all other risks) is as specified. For example, the table says that if you think the ratio is 0.3:1, and you otherwise accept the model described above, then you should think that one year of faster progress toward advanced AI would have neutral consequences for the probability of human extinction before 2100 if faster progress proportionally increased the risk of advanced AI by about 4%.

Ratio of risk from advanced AI      Number of years development of
to all other risks                  advanced AI is sped up
                                    1          5          10
0.1                                 11.76%     58.82%     117.65%
0.3                                 3.92%      19.61%     39.22%
1                                   1.18%      5.88%      11.76%
3                                   0.39%      1.96%      3.92%
10                                  0.12%      0.59%      1.18%
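This second table reduces to a one-line formula: with risk ratio r (advanced AI : all other risks) and other risks spread evenly over 85 years, the offsetting fractional increase for x years of speedup is (x / 85) / r. A sketch of that generalization (my own, under the model's even-spread assumption):

```python
YEARS = 2100 - 2015  # horizon over which other risks are assumed evenly spread

def offset_fraction(ratio, x):
    """Fractional increase in AI risk needed to offset x years of speedup,
    given the ratio of AI risk to all other risks."""
    return (x / YEARS) / ratio

# Reproduce the table rows for each assumed ratio.
for ratio in (0.1, 0.3, 1, 3, 10):
    row = "  ".join(f"{offset_fraction(ratio, x):>8.2%}" for x in (1, 5, 10))
    print(f"{ratio:>4}: {row}")
```

Readers who hold a different ratio or horizon can substitute their own values into the same formula.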


Limitations to the model

This model is extremely simplified, and limitations include the following:

  1. Risks from future technology are not likely to be evenly spread across the century. They seem small right now, and presumably will be largest during some future time when technology has advanced. They might decline after that as prevention/response mechanisms improve. And each might occur before or after advanced artificial intelligence, in which case the effect of additional progress toward advanced AI on each risk could be much greater or much smaller. For example, if atomically precise manufacturing would by default be developed in 2060 and advanced AI would by default be developed in 2050, then additional progress toward advanced AI would not have any effect on risks from atomically precise manufacturing.
  2. It’s extremely hard to say how much additional preparation for advanced AI would reduce risks from advanced AI. It’s possible that years of preparation work will have increasing or diminishing returns, which could make later years more or less productive relative to earlier years. It’s possible preparation efforts will be more effective in years when AI development is further along. It’s possible that advanced AI is sufficiently distant, and preparation sufficiently subject to diminishing returns, that additional progress toward advanced AI relative to the status quo wouldn’t diminish the level of preparation appreciably.
  3. Artificial intelligence may not fully eliminate other risks to civilization, and how much it reduces them might vary by risk. For example, nuclear and other advanced weapons could still be used with great destructive potential, while biological weapons might pose much smaller risks.
  4. The same action (e.g. publishing a series of important papers) might speed up progress toward advanced AI by a significantly greater amount of time if advanced AI is coming in relatively soon (e.g. 20 years), in comparison with potential scenarios where the arrival of advanced AI is much farther in the future (e.g. more than 100 years). Pushing forward progress toward advanced AI could have importantly different consequences in terms of reducing other risks or adequacy of preparations for advanced AI in scenarios where advanced AI is coming relatively soon in comparison with scenarios where advanced AI is coming relatively late. For example, if advanced AI is coming much later, I would guess that (i) it’s more likely we’ll be adequately prepared when advanced AI arrives, and (ii) there will already be other advanced technologies that pose risks to civilization when advanced AI arrives.
  5. I have little confidence in the conference’s estimates of the probability of various catastrophic events. Readers can try their own estimates in our model if they prefer, or refer to the second table above for a rough sense of what their views would imply.
  6. The conference’s 5% estimate for risk from advanced AI presumably includes some assumptions about when advanced AI is likely to arrive and how much preparation is likely to be in place before then. I haven’t modeled this; I’ve merely laid out the implications of various assumptions about the impact of marginal speedup on marginal risk increase.
  7. The model does not include positive/negative outcomes other than extinction, such as global catastrophes or other positive/negative events with lasting consequences.
  8. There are other important considerations related to the pace of progress in artificial intelligence that I did not consider. For example, the rate of progress—and where the progress happens—could affect:
    1. What country or organization obtains advanced capabilities first, which could positively or negatively affect overall risk. It could also affect the attitudes with which advanced AI is developed; for example, in today’s relatively stable and peaceful geopolitical environment, people seem particularly likely to prioritize safety before deploying advanced AI, relative to a possible world where geopolitics is more unstable and the incentives favor deploying advanced AI as soon as possible.
    2. How advanced available hardware and robotics is when advanced artificial intelligence is developed, which may affect “hardware overhang” and how powerful advanced artificial intelligence is at early stages of its development.
    3. How long it takes to move from it being possible to create advanced artificial intelligence to adequate safety measures being implemented (which could potentially be longer or shorter if advanced AI develops more quickly).

We are working on developing a broader—though still relatively simple—picture of how large various global catastrophic risks are, how those risks depend on progress in different areas of science and technology, and how progress in different areas of science and technology interact with each other.


Thanks to the following people for reviewing a draft of this post and providing thoughtful feedback (this of course does not mean they agree with the post or are responsible for its content): Stuart Armstrong, Nick Bostrom, Paul Christiano, Owen Cotton-Barratt, Daniel Dewey, Eric Drexler, Holden Karnofsky, Luke Muehlhauser, Toby Ord, Anders Sandberg, and Carl Shulman.

Open Philanthropy Project update

This post gives an overall update on progress and plans for the Open Philanthropy Project. Our last update was about six months ago, and the primary goals it laid out were six-month goals.


The overall theme is that we are putting most of our effort into capacity building (recruiting, trial hires, onboarding new hires). This is in contrast to six months ago, when most of our effort went into selecting focus areas. Six months from now, we hope to be putting most of our effort into recommending grants and putting out public content. (Specifically, we hope that our efforts within the “U.S. policy” and “global catastrophic risks” categories will fit this description. We expect it to take longer to choose focus areas within scientific research.)


Donating to help with the Syrian refugee crisis

We’ve received a number of questions about where to donate to help with the Syrian refugee crisis. Millions of Syrians have fled the war in their country, risking their lives. Images of the immense suffering accompanying this journey have captured headlines in recent weeks.

Although GiveWell hasn’t focused its work on disaster relief organizations, we have in the past recommended support to Doctors Without Borders (MSF), which we feel has distinguished itself as a relief organization with above-average transparency. Previous general advice from GiveWell on giving to help with disaster relief can be found here.

According to its website, MSF is operating six medical facilities in northern Syria and is directly supporting over 100 health posts and field hospitals across the country, including in besieged areas. MSF has also provided health services to Syrian refugees in Iraq, Jordan, and Lebanon, as well as Roszke, Hungary. MSF generally asks that donors make unrestricted donations to enable the organization to allocate its resources where needs are the greatest, a position we support.

We have not looked into other organizations that are fundraising to support their work on this crisis.

Journalists report on deworming program supported by Deworm the World Initiative in Kenya

Last year, Jacob Kushner, a journalist living in Kenya, reported on his observations from villages in which GiveDirectly had distributed some of its earliest cash transfers. This year, we funded him to report on the National School-Based Deworming Programme in Kenya, a program supported by Deworm the World.

Mr. Kushner’s article follows. His colleague, Anthony Langat, observed the program and interviewed some stakeholders; his interview notes are posted here.

In addition to Mr. Kushner’s article:

  • We summarize our takeaways here.
  • Evidence Action, which runs Deworm the World, responds to the article here.

School-based deworming: A parent’s perspective

In Kenya, some parents fear potential minor side effects of deworming, and a few may even oppose it for religious reasons. Others say they just want to be better informed.

By Jacob Kushner and Anthony Langat

Since 2009, Evidence Action’s Deworm the World Initiative has provided technical assistance to the highly praised, government-run National School-Based Deworming Programme in Kenya. Last year the program dewormed 6.4 million students in 16,000 schools. The Deworm the World Initiative encourages a multi-faceted strategy for informing parents about the program: radio announcements, community meetings, and training teachers to encourage students to let their parents know about the program in advance of each ‘deworming day.’

But ensuring that each and every parent is made aware of it beforehand is impossible. Many families in rural Kenya lack radios, and communication between schools and certain families can be limited. And it’s safe to assume that children don’t always dutifully relay every classroom announcement to their parents.


Incoming Program Officer: Lewis Bollard

We’re excited to announce that Lewis Bollard has accepted our offer to join the Open Philanthropy Project as a Program Officer, leading our work on treatment of animals in industrial agriculture.

Lewis currently works as Policy Advisor & International Liaison to the CEO at The Humane Society of the United States (HSUS). Prior to that, he was a litigation fellow at HSUS, a law student at Yale, and an associate consultant at Bain & Company.

Our process, and our bet on Lewis

We hired Lewis using a broadly similar process and criteria to what we described previously. (Because we have recently laid out the basics of that process, this post is much shorter.)

In Lewis’s case, the “mini-trial” we did as part of the interview process focused on exploring some disagreements we had about the relative value of (a) targeting corporations in the hopes of improving farm conditions vs. (b) targeting the broader public. Lewis created writeups laying out his thinking, and we had extensive discussions that resulted in some updating on both sides. We will be writing more about our strategy for this cause in the future.

Lewis initially came to us via a referral from Howie Lempel, who co-led an animal law reading group with him when they were both in law school, and was then recommended by several other people we spoke to in the field. Based on the referrals, interviews and the “mini-trial,” we see the following strengths:

  • We’ve been extremely impressed by his thinking and communication style. We see him as a very strong generalist.
  • We believe Lewis has a passion for, and strong knowledge of, the cause he will be focusing on.
  • We believe that our core values with respect to this cause are highly aligned. In particular, our primary goal is to reduce suffering of animals as much as possible, and we believe this will sometimes mean pushing for incremental improvements in how animals are treated rather than focusing exclusively on reducing meat production/consumption.

Our reservations come from the fact that Lewis is relatively early in his career:

  • He has only 3 years of work experience, and accordingly does not have a very informative track record as a funder or advocate.
  • While he has some strong relationships in the field, he is not as well-connected as a more senior candidate would likely be.
  • We believe he will have a steep learning curve in order to get up to speed on philanthropy, and, secondarily, on some parts of the field he has been less exposed to.

We are betting that Lewis’s strong generalist qualities will allow him to quickly develop the relationships and expertise he needs to recommend outstanding grants. We recognize that this is a risky proposition. We are comfortable with the risk, partly because we feel that this cause (treatment of animals in industrial agriculture) has relatively few organizations working on it, and the need for pre-existing expertise and connections is not as great as it is for our criminal justice reform Program Officer.

Because Lewis is a New Zealand citizen, our offer and his acceptance are conditional on our ability to secure a work visa for him. We expect that to be completed in a few weeks, and we’re anticipating that he will start in October.