The GiveWell Blog

Can the Green Revolution be repeated in Africa?

In his Annual Letter, Bill Gates describes the “Green Revolution”:

Almost every country that has become wealthy started with a huge increase in farming productivity. Chart 4 shows the increase in output per acre for various grains, including wheat, corn, and rice, in the United States, India, China, and Africa since 1961. This dramatic increase in output—more than three times—is often called the Green Revolution.

The Green Revolution, and the Rockefeller Foundation’s role in the research that enabled it, is frequently cited as one of philanthropy’s great success stories(1) and a huge contributor to enormous reductions in poverty.(2) Yet as Gates continues, “Africa jumps out as the only case where this [revolution] has not taken place.” Why?

To Gates, the answer appears to come down to insufficient investment:

African countries have widely varying climate conditions, and there hasn’t been the same investment in creating the seeds that fit those conditions. Because agriculture is an essential part of economic growth for most African countries, we are working with others to fund a “Green Revolution for Africa” and other areas that could benefit from this kind of investment.

Gates gives the impression that bringing the Green Revolution to Africa is mostly a matter of repeating what’s worked elsewhere, which would make it an excellent fit for our priorities. But from the rough analysis we’ve done, it appears that there has already been at least as much effort to bring about a Green Revolution in Africa as elsewhere, and that the obstacles there are specific and significant.

CGIAR funding

The Consultative Group on International Agricultural Research appears to have been the main vehicle for philanthropic funding of relevant research.(3) Using data from its website, we put together the chart below showing how much has been spent at its centers in (sub-Saharan) Africa as opposed to its other centers. The proportion was consistently between 25% and 30% from 1972 to 2003:(4)

For context, sub-Saharan Africa accounted for 20-30% of the world’s extremely poor in 2003, up from 14-19% in 1990.(5) It certainly seems from this rough cut that funding for relevant research in Africa was in line with funding for relevant research in the rest of the world.

Norman Borlaug

Norman Borlaug is often credited with (and won the 1970 Nobel Peace Prize for) a leading role in the research that made the Green Revolution possible.(6) The transcript of a 2006 Center for Global Development event implies the following about his relative efforts in different areas:

  • He started working in Mexico in 1944 (page 4); “By the late 1950s the cooperative program had made such a contribution to Mexico’s food production that … Borlaug had succeeded in working himself out of a job” (page 3).
  • He entered India in 1967, and within 10 years India had gone from threats of famine to self-sufficiency (page 3; see also the account of the Green Revolution given by the Library of Congress’s Country Studies/Area Handbook Series).
  • He has been working on bringing similar benefits to Africa since 1985 (pages 6-7) – far longer than he worked in either Mexico or India.

Comparable efforts; disappointing results

There are many factors that may make a Green Revolution in Africa difficult or impossible to bring about, including:(7)

  • Agricultural prices have fallen drastically (in real terms) since the original Green Revolution. The benefits of increased crop production may therefore not be as great.
  • Africa’s environment, with its high disease burden and difficult climate, presents special – and possibly greater – challenges compared to other environments.
  • Much of Africa has low population density and extremely weak infrastructure (including railroads, irrigation and electricity).
  • African governments are not providing the sorts of subsidies that Asian governments used to encourage agricultural output.

None of this means that bringing the Green Revolution to Africa is necessarily impossible, or that aiming funding at this goal is necessarily futile. But it’s important to recognize that this goal is a formidable challenge for which no strong precedent exists.

The prospect of an African Green Revolution is extremely appealing, but we don’t feel that this sort of investment can ultimately be counted as “proven and scalable,” and we don’t feel it’s as well-suited for individual donors as many health interventions (which are both proven and likely repeatable, and have simply not been funded enough to reach full coverage – more on this in a future post).


(1) See, for example:

(2) “The green revolution, which accelerated growth from the 1960s, beginning in India and Indonesia, was a major factor reducing poverty in Asia, as documented by numerous studies (see, for example, Rosegrant and Hazell 2000; Timmer 2002; Lipton 2004; Datt and Ravallion 1998a, 1998b). ” From “Agriculture, Rural Development, and Pro-poor growth” (World Bank 2005) pg 15.

(3) Based on a reading of the Rockefeller Foundation’s history (note 1 gives two examples of the Rockefeller Foundation’s being credited with a primary role in the Green Revolution).

(4) Data, sources, and calculations available here (XLS).

(5) Based on the proportion of people living on $1/day or less and $2/day or less, as reported on page 60 of the World Bank’s 2007 Global Economic Prospects report.

(6) See Borlaug’s Nobel Peace Prize bio and the opening statements/summary of the Center for Global Development’s event on “The Prospects of Bringing a Green Revolution to Africa”.

(7) Sources:

Measurement is not as common as it should be. Why?

The idea that there should be more measurement appears to be one of the points of widest agreement in the literature on aid. But we believe that agreement in principle is unlikely to mean much until donors (both large and small) act on it. It isn’t enough to request better information; we need to reserve our funds for those who produce it.

This post has two sections. First we give a sample of quotes from a broad set of people and institutions, showing how widespread the call for better measurement is. We then discuss why agreement isn’t enough.

Widespread calls for more and better measurement

From what we’ve seen, the wish for more and better measurement is a near-universal theme in discussions of how to improve international aid. Below is a sample of relevant quotes.

Abhijit Banerjee, Director of the Poverty Action Lab (which used to employ one of our Board members), puts better evaluation at the heart of his 2007 book, Making Aid Work:

The reason [past success stories such as the eradication of smallpox] succeeded, I suspect, is that they started with a project that was narrowly defined and well founded. They were convinced it worked, they could convince others, and they could demonstrate and measure success. Contrast this with the current practice in development aid; as we have seen, what goes for best practice is often not particularly well founded. (pages 22-23)

William Easterly argues for the importance and centrality of evaluation in The White Man’s Burden: Why the West’s Efforts to Aid the Rest Have Done So Much Ill and So Little Good:

[S]ome equally plausible interventions work and others don’t. Aid agencies must be constantly experimenting and searching for interventions that work, verifying what works with scientific evaluation. For learning to take place, there must be information. The aid agencies must carefully track the impact of their projects on poor people using the best scientific tools available, and using outside evaluators to avoid the self-interest of project managers. (page 374)

Think of the great potential for good if aid agencies probed and experimented their way toward effective interventions … Think of the positive feedback loop that could get started as success was rewarded with more resources and expanded further. (page 383)

Jeffrey Sachs, former director of the United Nations Millennium Project (and known for disagreeing with Easterly on many issues – a partial set of debates is available here), calls for more evaluation in The End of Poverty: Economic Possibilities for Our Time:

Much clearer targets of what is to be achieved must accompany a major increase of spending. Every [Millennium Development Goal]-based poverty reduction strategy should be supported by quantitative benchmarks tailored to national conditions, needs, and data availability … Right from the start, the … poverty reduction strategy should prepare to have the investments monitored and evaluated. Budgets and mechanisms for monitoring and evaluation should be essential parts of the strategies. (pages 278-9)

The Center for Global Development created a working group (with support from major foundations) specifically to examine why “very few programs benefit from studies that could determine whether or not they actually made a difference.” Its report provides further argument for why more evaluation is necessary:

Rigorous studies of conditional cash transfer programs, job training, and nutrition interventions in a few countries have guided policymakers to adopt more effective approaches, encouraged the introduction of such programs to other places, and protected large-scale programs from unjustified cuts. By contrast, a dearth of rigorous studies on teacher training, student retention, health financing approaches, methods for effectively conveying public health messages, microfinance programs, and many other important programs leave decisionmakers with good intentions and ideas, but little real evidence of how to effectively spend resources to reach worthy goals. (page 2)

The concern is not limited to researchers and think tanks: it’s one of the primary elements of the Paris Declaration on Aid Effectiveness, which emerged from a major meeting of “development officials and ministers from ninety one countries, twenty six donor organizations and partner countries, representatives of civil society organizations and the private sector.” The declaration states that aid recipients should “Endeavour to establish results-oriented reporting and assessment frameworks that monitor progress against key dimensions of the national and sector development strategies” (page 8), and that donor governments should “Link country programming and resources to results” (page 8). One of its 12 key indicators of progress is an increase in the “Number of countries with transparent and monitorable performance assessment frameworks” (page 10).

For our part, we strongly believe in the importance of measurement, particularly for public charities soliciting individual donations. Some of our reasoning echoes the arguments given above – that helping people is hard, that intuition is a poor guide to what works, and that measurement is necessary for improvement. We also feel the argument is even stronger for the particular area we’re focused on: helping people who have a few thousand dollars and a few hours, but ultimately know very little about the organizations they’re funding and the people they’re trying to help. Formal measurement is necessary for individual donors to hold charities accountable.

Why agreement doesn’t translate to action

The quotes above share a commitment not just to the general usefulness of measurement, but to the idea that there should be more of it than there is. This raises the question taken up by Lant Pritchett in “It pays to be ignorant: a simple political economy of rigorous program evaluation” (PDF): why is quality evaluation so rare, or as he puts it, “How can [the] combination of brilliant well-meaning people and ignorant organization be a stable equilibrium?”

Pritchett argues (pages 33-34) that the scarcity of evaluation can’t be explained by factors such as expense, practical difficulty, or ethical concerns. He conjectures, instead, that the lack of evaluation is due to strategic behavior by those closest to the programs. These “advocates” tend to be strongly committed to the programs they work on, to the point where their behavior is guided more by trying to get more funding for these programs than by trying to get new information on whether they work. According to Pritchett’s model, when donors do not demand rigorous evaluation, “advocates may choose ignorance over public knowledge of true program efficacy … even if it means they too must operate somewhat in the dark” (page 7).

In our view, the most compelling support for Pritchett’s model is his claim that

a huge number of evaluations are started and very few are finished, written up, and publicized. This evaluation attrition is too large to be consistently “bad planning” and is more likely strategic behavior.

This claim matches extremely well with our own observations of charities’ materials. The grant applications we’ve received (available online) frequently give reasonable-sounding plans for future evaluation, but rarely include results from past evaluations.

We believe that foundations tend to focus on innovation as opposed to past results, while individual donors currently don’t have the information/ability/interest to hold agencies accountable in any way. We know less about the incentives and practices of large aid agencies (governments, the World Bank, etc.) but find it possible that they are driven more by politics and appearances than by humanitarian results.

In other words, funders are not forcing evaluation to happen, and until they do, there’s little reason to expect improved/increased measurement. Agreement isn’t enough – if we want better information on what works, we need to commit with our dollars, not just our beliefs. As an individual donor, you have a key role – perhaps the key role – to play.

Review of The Life You Can Save, by Peter Singer

The Life You Can Save went on sale in the U.S. on Monday. First, disclosures: the book prominently features GiveWell, a portion of the book’s proceeds are being donated to GiveWell, and I was sent an advance copy. I have strong incentives to encourage people to read and buy the book.

So let me start with a reason not to read it: it will make you uncomfortable. It certainly made me uncomfortable. It started by asking me a simple question – would I sacrifice time and money to save a stranger’s life? If so, why don’t I give more of my income to charity? – and pounded away relentlessly, tearing apart every excuse I had until I was left with “I’m really selfish.”

I’ve appreciated many books for making me feel scared, or angry, or sad. Now there’s one to make me feel personally guilty. (How’s that for a blurb?)

Of course, the goal of the book isn’t to make people feel guilty; it’s to get them to give a lot (even if not as much as they, strictly speaking, could). And unlike the IRS, Prof. Singer doesn’t see supporting the local museum as equivalent to saving children’s lives. He’s specifically advocating more giving to developing-world aid, a goal we strongly agree with (as our research agenda demonstrates). You could think of this book as an End of Poverty on a personal rather than global scale – instead of arguing that the international community has the power to end poverty, it argues that you have the power (and thus the responsibility) to save a life.

But can a donation really save a life?

As with The End of Poverty, the moral argument depends on factual questions, and meets some skepticism from William Easterly, who argues – partly from GiveWell’s experience trying to find great charities – that saving a life is not as simple as it’s often made to sound.

There is merit to this. We’ve put a lot of effort by now into finding charities you can be confident in, and we still consider it an open question whether a $1,000 donation really translates into a saved life. We estimate that it can in PSI’s case, but there is plenty of room for uncertainty.

For example, to me the biggest questions about PSI are: (1) Is it getting its subsidized life-saving materials (mostly condoms and insecticide-treated nets) to people who need them, rather than to people who don’t? (2) Are these people consistently and correctly using the materials? One of the reasons I really like PSI is that it seems very concerned with these two questions and attempts to collect data specifically on them; the data it makes available imply success. On the other hand, a lot of monitoring and evaluation isn’t getting done (see the research scorecard, which, to its credit, PSI makes public), and none of it appears to be externally audited. How reliable are these data? How representative is the information we have?

And that’s PSI, our current top-recommended charity. Even if $1,000 can save a life, your $1,000 won’t unless it gets used well. There’s no charity that makes me even 90% confident this is happening, and with the “average” charity I’d bet that it isn’t.

We can do more – not just give more

The bottom line, however, is that I don’t think these concerns mean Prof. Singer’s challenge can be dismissed. For one thing, even if 90% of PSI’s activities accomplish nothing and the other 10% are in line with our impressions, that’s still $10,000 per life saved – enough for the moral argument to remain very relevant, in my opinion. Based on the limited information we have, it appears that donating to our recommended charities is likely saving lives at a relatively good rate. It might be more uncertain and probabilistic than pulling a drowning child out of the water, but it’s still a compelling value for your money.
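For the record, the arithmetic behind that $10,000 figure is nothing more than rescaling the nominal cost-per-life estimate by the fraction of spending assumed to accomplish anything; both inputs here are the illustrative numbers from the paragraph above, not precise estimates:

$$\text{effective cost per life saved} \;=\; \frac{\text{nominal cost per life saved}}{\text{fraction of spending that works}} \;=\; \frac{\$1{,}000}{0.10} \;=\; \$10{,}000$$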

And the other issue is that there are more charities out there to be examined, and more improvement to be had from holding them accountable. As Prof. Easterly acknowledges, there are many proven life-saving programs. There may not be infinite room to expand these programs; these programs may not be able to end poverty by themselves; but they can absorb at least a few million more dollars. And that does mean that nearly all of us could be doing more to save (or change) lives than we are.

It’s just that “doing more” has to mean more than “giving more.” Picking your charity – and doing your part in holding it accountable – is at least as important as giving generously. We’re trying to make this task easier for time-strapped donors: if you put credence in our analysis, it can mean simply basing your giving on our recommendations (informally, or formally via GiveWell Advance Donation).

Bottom line

Unlike many “give more” advocates who focus only on dollars spent, Prof. Singer recognizes the challenge of translating generosity into results (hence his interest in GiveWell, as well as J-PAL, which we’re big fans of). His book challenges you to give more and give better. Neither of these is easy … nor is reading The Life You Can Save. But they’re worth it, because even for an individual donor, saving a life is within reach.

The Center for High Impact Philanthropy

We’re very excited about the Center for High Impact Philanthropy, which recently released two reports: one focusing on increasing equality of opportunity in the United States and one on combating malaria in the developing world. (H/t Alanna Shaikh)

We’ve read the full-length reports (available via email request to the Center) and we’re excited because:

  • Their reports focus on cost-effectiveness in human terms. Many global health reports focus heavily on the DALY (disability-adjusted life year) metric, which can make it hard for donors to see the human impact of their donations.
  • Their reports deal directly with issues of evidence of effectiveness, aiming to cite rigorous evidence where it exists.
  • They recommend specific charities and programs so that donors can act on the Center’s findings.

We’ve only briefly looked at these two reports, but based on what we’ve seen, we’re looking forward to their future publications and recommend that interested donors check out their work.

Preview report

Now linked from the front page of GiveWell.net is a preview of our 2008-2009 report. The main content of the report so far is a review of the Carter Center (discussed in previous blog posts here and here) as well as information on the track records of the programs it runs and the diseases it targets.

There is much more on the way, including the continuation of my series of overviews of general issues in developing-world aid (earlier entries here and here). For now, though, the review of the Carter Center (and accompanying materials) will give a strong sense of our basic structure, approach, and criteria, which have changed significantly since our 2007-2008 report. We are eager for feedback.
