The GiveWell Blog

Some thoughts on the yellow fever vaccine

There’s news today that the Yellow Fever Initiative is facing a budget shortfall and may be unable to purchase needed vaccines in the near future (h/t Christine Gorman):

Emergency supplies of yellow fever vaccines are set to run out next year, and there is no funding to continue immunisation campaigns after that, World Health Organisation experts said on Tuesday.

The mosquito-borne yellow fever virus infects 206,000 people a year and kills 52,000, mainly in tropical regions of Africa and the Americas.

Recent outbreaks in Brazil, Central African Republic and elsewhere have drawn down the 6 million doses of yellow fever vaccine reserved for emergency response, and a $186 million shortfall has left the WHO unable to vaccinate high-risk people in Ghana and Nigeria as it had planned.

“For 2011, the Yellow Fever Initiative has no funding for either the emergency stockpile or the continued roll-out of preventive campaigns,” one WHO official told a news briefing in Geneva.

“As we look beyond 2009, we already see serious funding constraints,” Dr. William Perea, the WHO’s epidemic readiness and intervention coordinator, said in a statement after a two-day meeting of U.N. and aid groups.

Is there a real possibility of the program stopping because of lack of funds?

Is this the type of funding gap that will eventually be filled by donors (governments or the Gates Foundation)? It seems like donors have a good deal of time before 2011 to give more money. Or, alternatively, can the WHO reallocate funds from a program that has adequate funds to the Yellow Fever Initiative, which does not?

The history of yellow fever in Africa may shed some light on this:

Between the 1940s and 1960s, widespread mass vaccination campaigns in some African countries had resulted in the almost-complete disappearance of yellow fever. However, as immunization campaigns waned, a generation of people grew up with no immunity to the disease, and by the 1990s the number of annual cases had risen to an estimated 200 000 per year, with 30 000 deaths, and urban outbreaks were starting to occur.

Yellow fever had returned as a major scourge and, as urbanization progresses across Africa, the threat of a major epidemic looms ever larger. WHO estimates, for example, that this highly transmissible disease could infect around one third of the urban population, or up to 4.5 million people, in Lagos, Nigeria alone.

We’re interested in learning about programs that stopped because they just couldn’t raise enough money. Is that what happened with yellow fever? Are there other examples of this happening?

How can an individual donor support immunization programs?

I don’t know much about the Yellow Fever Initiative. How does it compare to GAVI or VillageReach (both on our list of top contenders to be a recommended charity in our upcoming report) as a means for donors to support expanded immunization programs, a proven, cost-effective method for improving health and saving lives in the developing world?

In 2007, GAVI supported the Yellow Fever Initiative with a grant of close to 60 million dollars. Is this grant subject to the same reporting and evaluation requirements as GAVI grants made through its “regular” channels (which include funding for yellow fever vaccines)?

There’s little information about the Yellow Fever Initiative online (its main page is here).

What can the developed world teach the developing world?

When we aim for something more ambitious than transferring our wealth to those in need, we’re often implicitly assuming that we have superior knowledge, compared to the people we’re trying to help. This seems to me to be the sort of thinking underlying this comment: “how does handing out cash build community, solve macro problems, provide a base for effective activism?”

One thing I believe the developed world can teach the developing world is facts about medicine. For example, many people in developing-world communities do not know as much as we do about how HIV/AIDS is transmitted, how diarrhea is contracted, and what to do about it. We can share and promote facts about these diseases that do not depend on local politics, customs, etc. (for example, wearing a condom drastically reduces the risk of transmitting HIV). So far, so good.

What else do we feel confident that we can teach the developing world?

Do we have superior knowledge of how to run a business? Within their political, cultural and economic environment?

Do we have superior knowledge of how to build a healthy civil society? Of how to run their community?

Before we insist on “teaching” others about these things, we have to ask why we think we have things to teach. I’m not convinced.

Pitfalls of the overhead ratio?

Good Intentions are not Enough gives some stunning examples of how charity can go wrong, and specifically points at the widespread emphasis on “low overhead” (which we have repeatedly criticized) as a culprit.

It’s worth noting that the literal “administrative expenses” metric is often less harmful than the broader notion of “overhead.” For example, many evaluation and planning expenses can be, and are, classified as program expenses. However, the distinction is not always as clear to donors (and even to charities) as it is to accountants, so things that “feel like” overhead may be under-invested in even when they don’t affect the numbers on the Form 990.

Why not just give out cash?

Aid Watch raises an interesting question: why should nonprofits provide medical treatment, education, or anything else other than cash handouts to those in the greatest need?

I can only think of two reasons, both noted in the Aid Watch post.

Reason 1: perhaps charities can make better decisions on behalf of disadvantaged people than those people can make for themselves. It’s certainly possible – disadvantaged people may often be poorly informed or educated. Still, if this is the primary justification for a charity’s activities, doesn’t it seem like the burden of proof should be on the charity?

A standard evaluation of a program compares “participating in the program” to “not participating in the program.” Isn’t that too low a bar? Year Up spends about $20,000 for every person receiving employment services. Wouldn’t an ideal test of Year Up be to hold a lottery in which “winners” receive Year Up services and “losers” receive $20,000 each?

(As a side note, such an evaluation ought to see less attrition – if cash payouts are dependent on participation – and face fewer ethical objections as well.)

Reason 2: cash transfers would become a huge target for cheating. (I include the concern that “this approach puts women and children at a disadvantage, while men take and spend the cash” in this broad category – i.e., money failing to get to the people it is intended to benefit.) This is also a valid concern, but it does not apply only to cash transfers.

PeopleAid gives rickshaws to people in the developing world. Who in the area wouldn’t like to pick up and sell a free rickshaw? The same concern applies, in varying degrees, to a host of other goods provided by charities, including drugs, bednets, cellphones, spectacles, fertilizer, and credit. (Who wouldn’t want a loan with a below-market, donor-subsidized interest rate that could then be re-loaned for a profit?)

The more a charity’s goods/services are transferable, the more of a concern it becomes that anyone who can get their hands on one will want to … which, in turn, may mean that the people who benefit most from the charity’s services could be the ones with the greatest power, rather than the greatest need. If a charity is giving out transferable items – including loans – for free or at donor-subsidized prices, it’s essential to know how they control who has access to them.

The promise of cash transfers

These two concerns noted, giving out cash to low-income people does strike me as a potentially promising approach. If it could be done effectively and on a large scale – giving many disadvantaged people the funds to meet their own needs – then many other humanitarian organizations could get more of their funding from their clients, and less from their donors. Which would you bet on to get water to people in Kenya: an organization funded by wealthy Americans (motivated by guilt and the wish to display generosity, among other things), or an organization funded by Kenyan customers (motivated by a need for water)?

Why do cash handouts seem to be so rare in the charity world? Perhaps it’s because extensive experience and study have shown this approach to be inferior to others. Or perhaps it has more to do with the fact that giving out cash fundamentally puts the people, rather than the charity, in control.

Followup on Fryer/Dobbie study of “Harlem miracle”

I recently posted about a new, intriguing study on the Harlem Children’s Zone. It’s now been a little over a week since David Brooks’s op-ed brought the study some major attention, and I’ve been keeping up with the reaction of other blogs. Here’s a summary:

Methodology: unusually strong

I haven’t seen any major complaints about the study’s methodology (aside from a couple of authors who appear to have raised possible concerns without having fully read the study – concerns that I don’t believe apply to it). The Social Science Statistics Blog noted it as “a nice example of careful comparisons in a non-experimental situation providing useful knowledge.”

Many studies in this area – particularly those put out by charities – have major and glaring methodological flaws/alternative hypotheses (example). We feel that this one doesn’t, which is part of what makes it so unusual and interesting.

Significance: possibly oversold

David Brooks came under a lot of criticism for his optimistic presentation of the study, stating “We may have found a remedy for the achievement gap.” Thoughts on Education Policy gives a particularly thorough overview of reasons to be cautious, including questions about whether improved test scores really point to improved opportunities and about whether this result can be replicated (“Each school has an inordinate number of things that make it unique — the Promise Academy more so than most”).

Its “What should we learn from the Promise Academy?” series (begun today) looks interesting; it is elaborating on the latter point by highlighting all the different ways in which this school is unusual.

We feel that these concerns are valid, and expressed similar concerns ourselves (here and here). However, given the weak results from past rigorous studies of education, we still feel that the results of this study bear special attention (and possible replication attempts).

Teaching to the test?

Aaron Pallas’s post on Gotham Schools raises the most interesting and worrying concern that I’ve seen.

In the HCZ Annual Report for the 2007-08 school year submitted to the State Education Department, data are presented on not just the state ELA and math assessments, but also the Iowa Test of Basic Skills. Those eighth-graders who kicked ass on the state math test? They didn’t do so well on the low-stakes Iowa Tests. Curiously, only 2 of the 77 eighth-graders were absent on the ITBS reading test day in June, 2008, but 20 of these 77 were absent for the ITBS math test. For the 57 students who did take the ITBS math test, HCZ reported an average Normal Curve Equivalent (NCE) score of 41, which failed to meet the school’s objective of an average NCE of 50 for a cohort of students who have completed at least two consecutive years at HCZ Promise Academy. In fact, this same cohort had a slightly higher average NCE of 42 in June, 2007. [Note that the study shows a huge improvement on the high-stakes test over the same time period, 2007-2008.]

Normal Curve Equivalents (NCE’s) range from 1 to 99, and are scaled to have a mean of 50 and a standard deviation of 21.06. An NCE of 41 corresponds to roughly the 33rd percentile of the reference distribution, which for the ITBS would likely be a national sample of on-grade test-takers. Scoring at the 33rd percentile is no great success story.
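The NCE-to-percentile conversion in the quote is easy to check. A minimal sketch, assuming (as Mr. Pallas does) a normal reference distribution with the standard NCE scaling of mean 50 and standard deviation 21.06:

```python
from statistics import NormalDist

# NCE scores are scaled so the national reference distribution is
# (approximately) normal with mean 50 and standard deviation 21.06.
NCE_MEAN = 50.0
NCE_SD = 21.06

def nce_to_percentile(nce: float) -> float:
    """Convert a Normal Curve Equivalent score to a national percentile."""
    return 100 * NormalDist(mu=NCE_MEAN, sigma=NCE_SD).cdf(nce)

print(round(nce_to_percentile(41)))  # ~33, matching the "33rd percentile" above
```

An NCE of 41 is about 0.43 standard deviations below the mean, which indeed lands at roughly the 33rd percentile of the reference group.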

One possible interpretation is that cheating occurred on the higher-stakes tests, but this seems unlikely since performance was similarly strong on lower-stakes practice tests (specifics here). Another possible interpretation is that Harlem Children’s Zone teachers focused so narrowly on the high-stakes tests that they did not teach transferable skills (as Mr. Pallas implies).

We haven’t worried much about the “teaching to the test” issue to date, if only because so few interventions have shown any impact on test scores; at the very least, raising achievement test scores doesn’t appear to be easy. But this is a major concern.

Another possible interpretation is that stronger students were absent on the day of the low-stakes test, for some irrelevant reason – or that Mr. Pallas is simply misinterpreting something (I’ve only read, not vetted, his critique).

Bottom line

We know that the Fryer/Dobbie study shows an unusually encouraging result with unusual rigor. We don’t know whether it’s found a replicable way to improve important skills for disadvantaged children.

We feel that the best response to success, in an area such as this one, is not to immediately celebrate and pour in funding; it’s to investigate further.

“Did it happen?” and “did it work?”

You donate some money to a charity in the hopes that it will (a) carry out a project that (b) improves people’s lives. In order to feel confident in your donation, you should feel confident in both of these.

In most areas of charity, we feel that people overfocus on “did it happen?” relative to “did it work?” People often worry about charities’ stealing their money, swallowing it up in overhead, etc., while assuming that if the charity ultimately uses the funds as it says it will, the result will be good. Yet improving lives is more complicated than charities generally make it sound (see this recent post of ours). This partial list of failed programs is made up entirely of programs that appear to have been carried out quite competently, and simply didn’t improve the lives of clients.

In international aid, the relative importance of “did it happen?” grows for a couple of reasons:

  • International charities work far away and often in many different countries at once. It often isn’t feasible for their main stakeholders (Board members, major donors, etc.) to check that projects are being carried out.
  • International charities are working within foreign political systems, cultures, etc. Materials can be stolen or misappropriated en route. Locals can take advantage of their superior knowledge and “game the system.”
  • Many of the activities international charities carry out are proven to work (though many are not). Using insecticide-treated nets will reduce risk of malaria (more); an appropriate drug regimen will cure tuberculosis (more); vaccinations will prevent deadly diseases (writeup forthcoming). These claims have been proven and are essentially not subject to debate. This is not the case in the developed world – most of the programs charities work on have not been shown to improve outcome measures of health, standard of living, etc. (See, for example, this guest blog post.)

“Did it happen?” is a question that can largely be answered by informal, qualitative spot-checks. That’s why we would like to see more and better qualitative evidence. By contrast, to know whether a program worked, you need to somehow compare what happened to clients with what would have happened without the program – something that is often hard to have confidence in without formal outcomes tracking and evaluation.

Therefore, we believe that the role of site visits, qualitative evidence, spot-checks, etc. is likely more important in international giving than in domestic giving. In international aid, delivering proven programs (particularly medical ones) is a large part of the battle. In the U.S., most reputable charities are probably doing what they say they’re doing; the question is whether what they’re doing is effective.