The GiveWell Blog

The lack of controversy over well-targeted aid

There are a number of high-profile public debates about the value of overseas aid (example). These debates generally have intelligent people and arguments on both sides, and they rightly give many people the sense that “Does aid work?” is a complex question with no simple answer.

However, we believe these debates are sometimes misinterpreted, causing unnecessary confusion and concern. Specifically, people sometimes ask questions along the lines of: “Since many well-informed people believe that aid does more harm than good, should I believe that GiveWell’s top charities are even helping?”

We believe that the most prominent people known as “aid critics” do not give significant arguments against the sorts of activities our top charities focus on, particularly with respect to health interventions. Instead, their critiques tend to focus on the harms of government-to-government aid, particularly when it is not effectively targeting those most in need and not effectively focusing on interventions with strong track records.

While we seek out and acknowledge the possible downsides of our top charities’ work (example), we don’t see a serious case to be made that the harms outweigh the benefits. Going through each potential harm and discussing how it relates to our top charities could make for a lengthy writeup (note that we address many potential harms in our research FAQ); this post has the simpler goal of discussing the people best known as “aid critics” and establishing that they provide few (if any) arguments against the sort of work our top charities carry out.

We focus on the three people we believe are best known as aid critics: Bill Easterly, Angus Deaton and Dambisa Moyo.

Why a new study of the Mariel boatlift has not changed our views on the benefits of immigration

As a consultant for the Open Philanthropy Project last year, I reviewed the research on whether immigration reduces employment or earnings for workers in receiving countries. I concluded that for natives the harm, if any, is small.

Last month the prominent immigration researcher George Borjas posted a challenge to a seminal study in my review. His new paper contends that the Mariel boatlift, which brought some 60,000 Cuban refugees to Miami in 1980, did profoundly affect the labor market there, depressing wages for low-education men (ones with less than a high school education) by 10–30%.

Borjas’s work is especially significant because it seems to upend a study of the boatlift published by David Card 25 years ago, which found little impact of all that immigration on workers in Miami. Interestingly, Borjas, who emphasizes the harm of Cuban immigration, is himself a Cuban émigré.

I probed this dispute, replicating and checking the results in the dueling papers. I ultimately found little cause to change my views. The main reasons:

  • Of the two Census Bureau data sets that Borjas relies on, the one with larger samples shows smaller impacts.
  • According to that data set, wages for women, which Borjas excludes, rose, if anything, after immigration spikes (especially after a second one in 1994–95).
  • I see no sharp breaks from long-term trends of the sort that could be confidently attributed to the 1980 immigration surge. The Borjas analysis appears correct that wages for low-education Miami men were lower on average in 1981–83 than in 1977–79, with the drop being larger than in most other US cities. But the data argue more for a steady long-term decline than for sudden drops after immigration surges. The Borjas analysis tends to obscure this distinction by aggregating or smoothing data over several years.
  • The original study by David Card is one of 17 covered in my review, including three others exploiting natural experiments in mass migration. None of the studies is as compelling as a randomized trial, but the overall picture—of at most modest harm from substantial immigration—does not change if the Card study is removed.

Details follow.

Charities we’d like to see

We wish we had more top charities, and as we look to the future we expect (and hope) that we will need more recommended charities in order to productively use all the donations that GiveWell-influenced donors are making. One of our major activities is trying to expand our top charities list – both by investigating charities that already exist, and by supporting activities (from new nonprofits to studies) that could eventually result in a larger set of evidence-backed programs and a larger set of top charities.

This post discusses types of charities that we would be excited to learn more about if they existed. We would also consider providing support to individuals trying to create the types of organizations described below. In a similar spirit to a request for startups, we’re sharing this list in the hopes that it might help us find out about such charities – or might help us find and support people looking to create them.

In brief, we would be excited to see:

  • Charities that implement GiveWell’s priority programs: vitamin A supplementation, immunizations, conditional cash transfers, micronutrient fortification, or even bednets and deworming (since our top charities that focus on the latter two have limited room for more funding). More
  • Charities implementing potential priority programs that are particularly challenging, particularly those revolving around (a) treatment of treatable conditions in a hospital or clinic setting; (b) behavior change for improving health. We see several hurdles to successfully focusing on such programs, but would be excited to see charities that overcome such hurdles. More
  • Charities that collect or generate information and data relevant to our recommendations. Currently, we recommend charities based partly on the data they themselves collect and share. But we could potentially recommend an organization that does not, itself, collect and share strong monitoring data, if we had independent data showing its activities’ effectiveness. More

GiveWell is hiring Summer Research Analysts for summer 2016

We’re looking for people who will be entering their final year of undergrad or graduate school in 2016 to work at GiveWell in the summer of 2016 as Summer Research Analysts, with the possibility of joining GiveWell full time after graduation.

More information and how to apply can be found here: https://www.givewell.org/about/jobs/summer-research-analyst

If you follow GiveWell and want to help us out, please share this post with anyone you think might be a good fit for the role.

Differential technological development: Some early thinking

Note: this post aims to help a particular subset of our audience understand the assumptions behind our work on science philanthropy and global catastrophic risks. Throughout, “we” refers to positions taken by the Open Philanthropy Project as an entity rather than to a consensus of all staff.

Two priorities for the Open Philanthropy Project are our work on science philanthropy and global catastrophic risks. These interests are related because—in addition to greatly advancing civilization’s wealth and prosperity—advances in certain areas of science and technology may be key to exacerbating or addressing what we believe are the largest global catastrophic risks. (For detail on the idea that advances in technology could be a driver, see “‘Natural’ GCRs appear to be less harmful in expectation” in this post.) For example, nuclear engineering created the possibility of nuclear war, but also provided a source of energy that does not depend on fossil fuels, making it a potential tool in the fight against climate change. Similarly, future advances in bioengineering, genetic engineering, geoengineering, computer science (including artificial intelligence), nanotechnology, neuroscience, and robotics could have the potential to affect the long-term future of humanity in both positive and negative ways.

Therefore, we’ve been considering the possible consequences of advancing the pace of development of various individual areas of science and technology in order to have more informed opinions about which might be especially promising to speed up and which might create additional risks if accelerated. Following Nick Bostrom, we call this topic “differential technological development.” We believe that our views on this topic will inform our priorities in scientific research and, to a lesser extent, global catastrophic risks. We believe our ability to predict and plan for future factors such as these is highly limited, and we generally favor a default presumption that economic and technological development is positive. But we also think it’s worth putting some effort into understanding the interplay between scientific progress and global catastrophic risks, in case any considerations seem strong enough to influence our priorities.

The first question our investigation of differential technological development looked into was the effect of speeding progress toward advanced AI on global catastrophic risk. This post gives our initial take on that question. One idea we sometimes hear is that it would be harmful to speed up the development of artificial intelligence because not enough work has been done to ensure that when very advanced artificial intelligence is created, it will be safe. This problem, it is argued, would be even worse if progress in the field accelerated. However, very advanced artificial intelligence could be a useful tool for overcoming other potential global catastrophic risks. If it comes sooner—and the world manages to avoid the risks that it poses directly—the world will spend less time at risk from these other factors.

Curious about how to compare these two factors, I looked at a simple model based on a survey of participants at a 2008 conference on global catastrophic risk organized by the Future of Humanity Institute at Oxford University. Under my simple interpretation of these survey results, speeding up advanced artificial intelligence could easily result in reduced net exposure to the most extreme global catastrophic risks (e.g., those that could cause human extinction). However, the conclusion is highly sensitive to some very difficult-to-estimate parameters, so other estimates of those parameters could yield the opposite result. Because this finding seems to be in tension with the view that speeding up artificial intelligence research would increase the risk of human extinction on net, I decided to write it up, both to get reactions and to illustrate the general kind of work we’re doing to think through the issue of differential technological development.

Below, I:

  • Describe our simplified model of the consequences of speeding up the development of advanced AI for the risk of human extinction, based on the survey described above.
  • Explain why, in this model, the effect of faster artificial intelligence progress on the risk of human extinction is very unclear.
  • Describe several of the model’s many limitations, illustrating the challenges involved with this kind of analysis.
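The basic tradeoff in the model above can be illustrated with a toy calculation. This is not the actual model from the post, and all parameter values below are made up for illustration; the only structural assumptions are that each year before advanced AI arrives carries some independent annual extinction risk from other sources, that the AI transition itself poses a one-time extinction risk, and that a safe transition eliminates the other risks:

```python
def extinction_prob(p_ai_catastrophe, annual_other_risk, years_until_ai):
    """Total extinction probability in a toy model:
    - each year before AI arrives carries an independent annual_other_risk,
    - the AI transition itself poses a one-time p_ai_catastrophe,
    - after a safe AI transition, other risks drop to zero.
    """
    p_survive_other_risks = (1 - annual_other_risk) ** years_until_ai
    return 1 - p_survive_other_risks * (1 - p_ai_catastrophe)

# Hypothetical parameters (made up for illustration): 5% chance the AI
# transition causes extinction, 0.1% annual extinction risk from other sources.
later = extinction_prob(0.05, 0.001, years_until_ai=100)
sooner = extinction_prob(0.05, 0.001, years_until_ai=50)
print(f"AI in 100 years: total risk {later:.3f}")
print(f"AI in 50 years:  total risk {sooner:.3f}")  # lower, in this toy model
```

With these made-up numbers, earlier AI arrival yields lower total risk, because the shorter exposure window to other risks outweighs nothing (the one-time AI risk is the same either way). But small changes to the parameters, or dropping the assumption that a safe transition eliminates other risks, can flip the comparison, which is the sensitivity the post describes.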

We are working on developing a broader understanding of this set of issues, as they apply to the areas of science and technology described above, and as they relate to the global catastrophic risks we focus on.

Open Philanthropy Project update

This post gives an overall update on progress and plans for the Open Philanthropy Project. Our last update was about six months ago, and the primary goals it laid out were six-month goals.

Summary:

The overall theme is that we are putting most of our effort into capacity building (recruiting, trial hires, onboarding new hires). This is in contrast to six months ago, when most of our effort went into selecting focus areas. Six months from now, we hope to be putting most of our effort into recommending grants and putting out public content. (Specifically, we hope that our efforts within the “U.S. policy” and “global catastrophic risks” categories will fit this description. We expect it to take longer to choose focus areas within scientific research.)