The GiveWell Blog

Worth watching

The Brookings Institution is hosting a conference this week called “What works in development?”, including an interesting paper by Simon Johnson (International Monetary Fund) and Peter Boone (London School of Economics) titled “Do Health Interventions Work? Which and in What Sense?”

Johnson and Boone review the existing literature and conclude that there is very little knowledge about the most effective methods for reducing child mortality in the developing world, and that without improved knowledge, aid organizations may fail to reduce child mortality as much as they hope.

Knowledge is limited in the following ways:

  • We know what works clinically, but know far less about the most effective ways to fully implement an intervention.

    This rings true to me and makes me think of bednets. We know that insecticide-treated bednets, when used properly and consistently, prevent deaths from malaria (at least in the short term), but we don’t know the most effective way to ensure proper use, a critical component of the intervention. Evidence for the effectiveness of bednets comes from aid experience in very specific contexts (e.g., the way in which the nets are distributed, the education level of those who receive them, etc.), which means that distributing bednets may not be as effective when implemented in a context different from that of the initial evaluations.

  • When we have multiple, proven interventions, we generally don’t know which to implement where or how they’d work as a package.

    Keeping the point above in mind, there’s strong evidence that both insecticide-treated bednets and artemisinin-based therapies reduce child mortality when implemented properly. However, little is known about how they work together (as a package) or which situations are best suited to one or the other. (It doesn’t make sense to implement both everywhere because a) the more a treatment is used, the more quickly resistant strains of malaria are likely to develop, and b) the cost of implementing both everywhere will obviously exceed that of an approach that implements only what is necessary.)

  • Evaluations of interventions’ effectiveness often stop at measuring reductions in disease incidence rather than total mortality.

    Often, evaluations focus on an intervention’s effect on disease incidence (e.g., the reduction in cases of diarrhea caused by building improved water and sanitation infrastructure). This is a problem because many of the causes of death in the developing world are interrelated – i.e., one problem increases the likelihood of death from another. UNICEF estimates that malnutrition is a contributing factor in 50% of child deaths (from malaria, diarrhea, etc.), and the WHO finds that measles contributes to deaths from pneumonia and diarrhea. Because of these interrelationships, evaluations that only assess an intervention’s effect on disease incidence may not accurately identify the effect on mortality.

    This problem is illustrated in a recent paper cited by Johnson and Boone, which finds that while water and sanitation projects reduce the incidence of diarrhea, they have a minimal impact on child mortality. Johnson and Boone hypothesize that:

    It seems plausible that the much wider coverage of water and sanitation today, along with the advent of vaccines and treatments for the main causes of death from infectious disease, mean that further improvements in water and sanitation are no longer necessary or very significant to eliminate remaining deaths (pg 21).

Johnson and Boone have their own view on the best approach: targeting parental knowledge rather than distribution of materials. They observe that:

  • Many interventions are extremely inexpensive (e.g., oral rehydration therapy costs $0.10 per packet and malaria treatment costs $0.50 per dose – pg 17), and are not beyond the means of many people in the developing world.
  • There is good evidence that parents are not very knowledgeable about health (pg 25), and that parents’ education is highly correlated with child mortality (pg 23).

It’s a plausible hypothesis, but could easily be flawed, as Johnson and Boone point out themselves. For example, it’s possible that the observed correlation between parental education and child health is a simple consequence of the fact that more educated parents also tend to be wealthier, and that wealth is in fact the primary factor here.

Knowing that their hypothesis could be right or wrong, Johnson and Boone have set out to test it. Working with Effective Intervention, a UK-based charity, they’re planning to implement a series of randomized controlled trials of comprehensive aid programs focusing on a) educating parents and b) providing access to necessary health products. In some areas, they’ll also include education for children as part of the intervention, planning to follow the children for at least 10 years after the completion of the trial. Eventually, they plan to run trials in 600 villages in Africa and India covering 500,000 children. They say it will take three years to produce the first findings.

This is the first we’ve heard of Effective Intervention, but they are taking exactly the approach we identify with most: starting with a systematic review of what we do know, pinpointing what it is we want to know next, and then focusing on producing that knowledge rather than on scaling up a program with unknown effectiveness. We’re looking forward to their results.

Good vs. better

I recently read Better by Atul Gawande, and found myself particularly struck – and reminded of our own situation – by his analysis of hospital care.

According to Dr. Gawande, conventional wisdom has long held that the vast majority of hospitals provide top-notch care and only a small fraction treat their patients incompetently. The implication is that the most important thing a patient can do is weed out the incompetent hospitals, and that worrying about “average” vs. “exceptional” isn’t worth the trouble. But a systematic study found otherwise.

The Cystic Fibrosis Foundation had long monitored the well-being of CF patients at hospitals around the country. When they evaluated the data, they found that quality of care and patient outcomes, such as life expectancy and quality of life, varied more than they had expected. A relatively small group of hospitals provided low-quality care, but another group provided top-notch care. Their patients lived longer and lived better. (The foundation also found that the vast majority of hospitals fell in the giant, indistinguishable middle, providing average care that was neither incompetent nor excellent.)

The differences in impact weren’t just academic. The evaluation found that average life expectancy for someone diagnosed with CF was 30 years, but at top hospitals life expectancy was significantly longer, averaging 46 years. While alive, patients at average hospitals had lower quality of life than those without CF: they had lower lung function and consequently couldn’t participate in a host of normal activities. At top hospitals, patients’ lung function was equivalent to that of people without the illness. The differences were real and they were stunning.

Recognizing the significance of the results, the Cystic Fibrosis Foundation did the only thing they could: they made all the information public. Patients should know which hospitals provide the best care, shouldn’t they? Doctors should know which methods work best, shouldn’t they?

Critics feared that demand for care at the best hospitals would outstrip the hospitals’ ability to provide it, but instead something else happened. Care improved across the board as hospitals’ staff met and implemented the practices of those at the top.

The reported conventional wisdom on hospitals reminds me of the conventional wisdom we constantly hear expressed about charities: that the vast majority of them do great work, that it’s important to weed out the “frauds,” but that distinguishing among “legitimate” charities amounts to nitpicking. Our instinct, however, is that charities are like hospitals, companies, and any other set of highly complex organizations that vary in their people and their approaches to difficult problems. Our instinct is that in charity, as in most other things, the difference between “best” and “average” is at least as important as the difference between “legitimate” and “illegitimate” – and that starting to examine the differences, publicize them, and push for the best may lead to improvement across the board.

Emergency assistance for donors

In the wake of the cyclone in Myanmar, donors need help.

Google “Myanmar” and you’ll see a huge list of organizations advertising for donations. I don’t know whether they’re coordinating on the ground, but they’re certainly competing when it comes to raising money – and donors, including myself, have virtually nothing to go on in picking one.

Today’s conference call, hosted by Arabella Advisors, had so much interest that it ran out of phone lines (there were hundreds of people listening in). During the Q&A session, they announced that “The single most common question that’s coming in is ‘Tell us who to give money to.’” Their primary answer was to point to an InterAction.org list of “vetted” charities – a whopping 46 “recommended” charities, including practically every big name, alphabetically arranged, with small blurbs provided by each charity. (The vetting standards themselves are familiar for their emphasis on accounting and governance; these are important things, but there is absolutely no mention of, for example, charities’ track record in past disasters.) It’s a familiar sight: a generic, non-judgmental set of standards has been used to try to avoid the worst, not to help find the best.

Arabella also brought up the option of consulting “community foundations and other organizations you already trust.”

While I appreciated much of the content of the call, their way of handling this question sounded to me like “Donors, you’re on your own.” I’m guessing the question was so popular precisely because donors don’t have “already trusted organizations”; they don’t know where to give. That’s certainly the case for our donors, who have been emailing me for advice and even using words like “helpless” and “desperate” to describe how they feel – wanting to help, deluged with appeals, and entirely without means to answer the simple question: “Where should I donate?”

Right now, I believe that donors need emergency help. I don’t mean this the way that fundraisers sometimes mean it, i.e., as a plea to help donors feel better about themselves by providing emotional reassurance about their donations. I mean that we need to help well-meaning people help others: by recognizing that they don’t have a pre-existing wealth of knowledge about Myanmar or a pre-existing commitment to and knowledge of the best aid organizations, that they simply want to help in the best way possible – and, therefore, by giving them substantive, well-supported, specific recommendations for where to give.

We’re looking into whether we’ll be able to provide such recommendations in a relevant time frame. In the meantime, I haven’t found any philanthropic experts giving donors the help they need.

Cyclone relief: Recommendation and questions

I had a typical reaction to the disaster in Myanmar: wanting to do something. I have spent very little time looking into the area of disaster relief, so after a bit of Googling and discussion with Elie, I gave to Population Services International for two reasons:

  • PSI was the winner of our “saving lives” cause for 2007; we are extremely impressed with the organization as a whole, particularly its commitment to thorough self-monitoring. We don’t know much about their relief operations, but I would bet on PSI over any other international relief organization I know of just in terms of the extent to which it “runs a tight ship” with solid monitoring and oversight that allows accountability from the field to the top.
  • PSI has a major and long-established presence in Myanmar; I believe (based mostly on this article) that having a pre-existing presence is important, particularly in a situation like this where the idiosyncrasies of the area, and particularly of its government, seem important. I’m most comfortable with an organization that is used to getting work done in this political and cultural environment.

This is an informal, personal recommendation; it is backed not by an in-depth research project, but by the quick heuristics above.

This also got me thinking, though, about the more general cause of “disaster relief.” We looked into this cause back in 2006 (when we were still a part-time group of volunteers) and found very little. We aren’t aware of any organizations that are exclusively committed to disaster relief; rather, it seems to us that most relief efforts come from large humanitarian organizations, such as PSI, the Red Cross, World Vision, CARE, Direct Relief International, etc., that spend most of their time and money on direct, day-to-day (not disaster-related) aid. This makes sense, since it means emergency aid efforts can draw on an already-on-the-ground presence.

However, it isn’t necessarily the case that the best “day-to-day” relief organization is the best disaster relief organization. The former may be best accomplished through meticulously planned long-term projects that rely on proven techniques to get the maximal dollar-for-dollar impact; by contrast, I would guess that a disaster presents problems that are unusually simple to solve (people who need basic supplies, but who don’t necessarily suffer from a host of interrelated physical, economic, and cultural obstacles), and that speed and efficiency are more important. I’d be very interested in a compiled summary of disaster relief efforts over the last 10 or so years – which organizations were first, and most instrumental, in each relief effort. It seems feasible that such a summary could be created by polling affected governments and citizens, but I’ve never seen one.

I also wonder whether there are cost-effective “disaster preparedness” measures that can aid particularly vulnerable areas in advance. I was shocked at the death toll from this particular disaster, and I wonder whether a similar storm in the U.S. could have been nearly as devastating. It’s possible that disaster preparedness comes mostly from widespread economic prosperity, and that nonprofits are ill-equipped to bring about the kinds of drastic changes that would be needed to improve preparedness (and/or that the areas least equipped for disasters also have other, more important problems). But it also seems possible to me that constructing some extra shelters – or equipping communication infrastructure to provide effective early warnings – could save lives far more effectively than focusing only on after-the-fact interventions.

Looking into these questions, as with just about any area of philanthropy I can think of, would take significant time and resources. I’m not sure whether we’ll get to do it anytime in the near future. But it seems likely to me that the costs of such investigation would be more than justified. When disaster strikes, a lot of people reach straight for their wallets, and give without having time to think about their different options. But the thinking could be done, centrally, in advance – imagine what a difference that would make.

Why do scholarships disappoint?

We’ve wondered why scholarship programs don’t have a stronger impact on academic achievement, and have guessed that it’s because disadvantaged children are so far behind by age 5 that they need special schools, with a special approach, if they’re to have any hope of catching up.

The quote below, from an article in the Washington Monthly (h/t Kevin Carey), offers another possibility: the private schools students with vouchers attend may be little better than the public schools they leave. This is a report on the Milwaukee voucher project, not the New York programs we’ve focused on, but it makes me wonder if New York private schools could be as troubled.

In 2005, a team of reporters from the Milwaukee Journal Sentinel visited all but a handful of the private choice schools, and found that “the voucher schools feel, and look, surprisingly like schools in the Milwaukee Public Schools district. Both … are struggling in the same battle to educate low-income, minority students.” The Journal Sentinel also reported that the absence of oversight from the much-derided government bureaucracy had led to a significant waste of public funds, and even outright fraud. At least ten of the 125 private schools in the voucher program “appeared to lack the ability, resources, knowledge, or will to offer children even a mediocre education.” Most of those schools were led by individuals who had negligible experience and had no resources other than state payments.

Why nitpick?

In response to Elie’s recent series of posts on malnutrition, John J. comments:

“What specific nutrients are they missing…etc etc etc?”

Here’s a question for you: What possible use is this question to Givewell? Do you really need to get into such miniscule depth with regard to poor people who can’t afford enough food and who are malnourished as a result?

One might accuse you of wasting the time of non profit workers with such picky detail. Serously, I’m not just being cranky here–what was the point of these questions? It seems to me that common sense is enough here: people don’t get enough of the rights kinds of foods to eat, and helping them get enough is…helpful. Really, what was your point? And, additionally, what was the point of this post at all?

First of all, we don’t believe that food aid is necessarily helpful: we’ve seen plausible arguments that it can do more damage (by undermining local farmers’ business) than good. (See this critique from Philanthropy Action, co-maintained by Board member Tim Ogden, as well as this story on CARE’s decision to withdraw from the US government’s food aid program.) Broad enough outcomes data could mitigate this concern, as could clear information on the local food market in the region in question; without it, we’d still bet that food aid is a good thing on average, but could easily be wrong.

But the reason we ask such specific questions isn’t primarily to determine whether aid helps; it’s to find the aid that will help as much as possible (in ways that fit our philosophical goals). Like any donor, we choose between literally thousands of charities; a core idea of GiveWell is that under these circumstances, it doesn’t make sense to settle for “some” good accomplished.

Given the variety of different approaches to malnutrition, we expect that different charities vary wildly in both:

  • What kind of life change they’re bringing about. For example, vitamin A deficiency may significantly increase the likelihood of death before age 5, while deficiencies in iron and iodine may lead to anemia and reduced IQ. These are fundamentally different benefits that can’t be reduced to the same terms – and that different donors will value differently. In order to understand our options, we need the details of what sorts of malnutrition are being addressed, and where.
  • How many lives they’re changing (i.e., cost-effectiveness). We find it possible that some charities are simply carrying out better-conceived and better-executed programs than others – which means more people helped for the same funds. And even if different malnutrition programs turned out to be roughly comparable to each other, we’d still want to know how they compare to all the other health interventions out there, from hospitals and health centers to condoms and bednets. (A rough sketch of this kind of comparison follows this list.)
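
To make that concrete, here is a minimal sketch of the arithmetic a donor would need the missing data to do. The program names and all of the figures below are entirely hypothetical, invented only for illustration – they do not come from the Johnson and Boone paper or from any real charity.

    # A crude "cost per unit of life change" comparison.
    # All names and numbers are hypothetical placeholders, not real data.
    programs = {
        # name: (annual cost in dollars, deaths averted, deficiency cases averted)
        "Vitamin A supplementation (hypothetical)": (1_000_000, 400, 0),
        "Iron/iodine fortification (hypothetical)": (1_000_000, 0, 50_000),
    }

    for name, (cost, deaths, cases) in programs.items():
        if deaths:
            print(f"{name}: ~${cost / deaths:,.0f} per death averted")
        if cases:
            print(f"{name}: ~${cost / cases:,.2f} per deficiency case averted")

Even this toy version shows why the details matter: the two outcomes are in different units, so no single “cost-effectiveness” number falls out without a value judgment about how to weigh a death averted against anemia and lost IQ.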

These issues don’t matter very much if the only line you draw is between “donation was squandered” and “donation helped people.” But if you want to help people as much as you can, the lack of public answers to our questions is a real problem.