The GiveWell Blog

Village Phone: Another great story under the microscope

Ever since I heard about the Grameen Foundation’s Village Phone program, I’ve been optimistic. The program involves helping people in remote villages run pay-for-use cellphone services: they get their cellphone, and a loan to buy it, via Grameen, then charge other villagers to use it. It’s an approach to fighting poverty that’s (a) relatively new; (b) built around a product that has only recently become available but seems clearly useful to anyone doing business in remote areas; (c) built on a “franchise model” in which people in the villages take a stake in the product.

It was near the top of my “Probably helping people, even though we don’t yet have systematic evaluation” list. Now Chris Blattman points to a discouraging evaluation that found “absolutely no impact of the phones on trading activity or availability of goods in local markets” and very small (non-significant) impacts on profits and measures of well-being (school enrollment, consumption of meat, etc.).

This bottom-line result does not, by itself, mean the program “doesn’t work.” It could work very differently in different contexts (discussed below), and there are some possible issues with the paper (which is very recent, and is not a randomized controlled trial). But one thing I like about the study is that it doesn’t just discuss impact – it examines many aspects of the program, and exposes assumptions that may otherwise have gone unquestioned.

Assumption 1: the phones are in high demand and operators easily cover their costs. In fact, usage of the phones was around 4 hours a month, or 8 minutes a day (pg 19). As a result, profits from the phones were not enough to keep up with loan payments (pg 19). Grameen reportedly has responded by changing loan and franchise terms (pg 30). Tuvugane (pg 5), a less sophisticated phone product that was already common in the villages, may have been good enough for most purposes.

Assumption 2: farmers who use the phones benefit from better pricing power. Even though farmers with access to the phones became much more likely to arrange their own transport to market, there was no apparent effect on the prices they received for their goods, possibly due to established relationships with buyers (pg 16).

Assumption 3: if someone chooses to become a cellphone operator, they’re going to benefit from it. In fact, there was a very strange pattern in the businesses of people who became phone operators. Their hours worked rose significantly both for their new phone business and for their already-existing businesses, but their profits and wages paid did not rise (pgs 17-18). A possible explanation is that operators wanted to be available for cellphone users and so stayed at their workplaces longer, but that the extra hours didn’t translate into extra profits. In any case, it’s a pattern that doesn’t seem encouraging, and seems to deserve further investigation.

Bottom line: a product that was supposed to be helpful and in high demand arguably ended up as a bad investment for the franchise operators. This doesn’t mean it shouldn’t have been tried, or that it shouldn’t be tried in the future. But it points to the importance of testing assumptions empirically, rather than scaling up a program as widely as possible based on an appealing story.

Helping farmers is harder than you’ve heard

Imagine that a charity is able to teach a farmer some basic, useful things about farming (like “crop rotation, dip irrigation and the planting of trees that enrich over worked soil” or “disease-resistant cassava replication, distribution and sale; crop diversification; soil conservation; and expanding market opportunities”). Such simple knowledge could last the farmer forever and be far more useful – especially for the cost – than cash or loans. It’s an often-sold story, and an appealing one.

What charities don’t tell you about “improved farming techniques and technology” is just how long the aid world has been trying to spread them, and how much it has struggled. The basic challenges:

Can agriculture programs reach enough farmers? The right farmers?

A 2006 World Bank paper examines the long history of “agricultural extension” programs and is frank about their problems. For traditional programs, it states that

The cost of reaching large, geographically dispersed and remote smallholder farmers is high, particularly given high levels of illiteracy, limited access to mass media, and high transport costs. Farming systems often entail several crops, livestock, and even within given geographical area, there are variations in soil, elevation, microclimate and farmers’ capabilities and access to resources. With such a large and diversified clientele, only a small fraction of farmers can be served directly (face-to-face) by extension, and agents tend to focus on the larger, better resourced and more innovative farmers. This reduces the potential for farmer-to-farmer diffusion. (Emphasis ours)

The “Training & Visit” model attempted to address these issues through a strong, clear set of hierarchies and responsibilities (see pgs 11-14), but its substantially higher costs – coupled with the fact that, as with previous programs, impact was hard to see – led to its essentially universal abandonment (see pgs 14-15 and pgs 22-23).

When World Vision or Save the Children speaks of spreading improved practices, is it using a “T&V” style intensive-but-costly approach, or a lighter touch that could fail to reach enough (and the right) farmers? It isn’t clear.

Do charities even know what to teach and what to change?

Another general problem cited by the World Bank paper is that “Weak accountability (linked to the inability to attribute impact) is reflected in low-quality and repetitive advice given to farmers, and in diminished effort to interact with farmers, and to learn from their experience.” (Emphasis ours.) In other words, those giving advice may not actually be giving the right advice.

It is hard to find honest and thorough descriptions of how such projects have actually played out in the past, but a couple of striking failure stories should make it clear just how badly outsiders can misjudge what farmers need to learn:

  • The DrumNet program in Kenya succeeded in its aim of transitioning farmers from growing “local crops” (i.e., crops for local/personal consumption) to growing “export crops” (i.e., crops to be sold on the export market). However, a year after the project evaluation was completed, the firm that had been buying the “export crops” stopped doing so due to European regulations, leading to “the collapse of Drumnet as farmers were forced to undersell to middlemen, leaving sometimes a harvest of unsellable crops and thus defaulting on their loans.” (Details in this paper published on the Poverty Action Lab site (PDF).)
  • A development program in Lesotho aimed to help local people with crop and livestock management, and to build roads so they could access markets. However, few of the people in the region were farmers, and conditions were not good for farming. Harsh weather destroyed pilot crop projects, and the roads allowed in competitors who drove the existing local farmers out of business. (From pgs 193-4 of White Man’s Burden)

These aren’t cases of minor missteps – they’re cases where those giving aid did not perceive essential and fundamental aspects of the local economy. That doesn’t mean they were incompetent – it means that understanding a local economy well enough to give truly useful advice may not be easy.

The long and murky history of agricultural assistance

Agricultural programs in Africa have struggled to produce tangible results, both at the micro level (little evidence about how programs have gone) and at the macro level (disappointing progress in Africa-wide crop yields over time).

A variety of approaches have been tried, including the “holistic” approach of simultaneously addressing health, transportation, credit, and agricultural knowledge. This approach was referred to as “Integrated Rural Development” in the 1970s and 1980s and appears to be acknowledged as a failure, although the basic idea behind it may be making a comeback in the “holistic” approach of the Millennium Villages Project and other large charities.

Details at our writeup on agriculture-focused aid.

Bottom line for donors: agricultural technology is not like medicine

Agriculture aid is often presented as a matter of extending the reach of proven technologies and methods. However, the track record of such programs is simply nothing like that of health programs, which are often backed by multiple highly rigorous studies and large-scale, demonstrable successes.

We feel that the burden of proof on agriculture programs is high, but outcomes tracking of any kind is extremely rare. The evaluations that are available tend to raise many concerns about whether results are “cherry-picked” and whether they even point to improved lives.

We recommend that donors be extremely wary of charities working heavily in this area, no matter how good their intentions. We have not identified any that we can have confidence in.

6 myths about microfinance charity that donors can do without

Is microfinance a good bet for a donor? We feel the answer is complicated, and that the many extreme exaggerations of microfinance’s impact get in the way of making an informed decision.

This post summarizes the differences between the stories you’ve probably heard and the reality according to available evidence.

Myth #1: the way microfinance charities help is by giving people loans to expand businesses. Success stories like Andrea’s, Lucas’s and Sophia’s are representative.

Reality: there isn’t much reliable information on how people are using loans, but the evidence there is suggests that “microloans” are often used for consumption purposes: food, visits to the doctor, etc. This isn’t a bad thing – the poorest people in the world face considerable financial uncertainty, and loans can empower them to manage their own lives.

So, however, can savings, which some scholars feel are more beneficial for the poor than loans. Funding institutions to help people save may not have the same sex appeal as “lending your money to help people grow their businesses,” but it might do more good.


Myth #2: the best way to support microfinance is to lend your money to specific individuals.
Reality: Choosing your own borrowers is not really possible or desirable. The recent debate over Kiva.org (summarized by GiveWell Board member Tim Ogden) makes clear that even when your donation is “officially” matched to a borrower, you’re really funding an institution. And as we discuss immediately below, this is likely a good thing.

Myth #3: a gift to a microfinance charity gets lent out again and again, making its impact essentially infinite.
Reality: Many of the most important challenges of microfinance (such as developing effective outreach, creating incentives for repayment, and helping people to save as well as borrow) involve significant institutional expenses. (See our discussion with David Roodman as well as any microfinance charity’s budget.) Update 5/2010: also see our rough estimate of the overall “cost-effectiveness” of microfinance, concluding that it is hard to argue that microfinance donations in general are more cost-effective than donations to top health programs.

Myth #4: microfinance has been shown to reduce poverty.
Reality: many studies on the impact of microfinance have been done, but most have serious and widely recognized flaws. The few – and recent – stronger studies show mixed effects. The most encouraging effects are for programs that don’t fit the traditional “lend to expand a business” story.

Details at our post on evidence of impact for microfinance charities.

Myth #5: a high repayment rate means that things are going well and clients are benefiting from loans.
Reality: the repayment rate can be both technically and conceptually misleading. See our post on why the repayment rate may not mean what you think it means.

Myth #6: microfinance works because of (a) the innovative “group lending” method; (b) targeting of women, who use loans more productively than men; (c) targeting of the poorest of the poor, who benefit most from loans.
Reality: all three of these claims are often repeated but (as far as we can tell) never backed up. The strongest available evidence is limited, but undermines all three claims.

Bottom line: should you give to a microfinance charity?

We feel that the marketing of microfinance is exaggerated, excessive, and full of unsupported myths – to a degree unusual even in the world of fundraising.

Once you put these myths out of your head, the fact remains that microfinance institutions are often working with people in extreme poverty and empowering them to better manage their own financial lives. The fact remains that high numbers of clients for a product that costs them money (in interest) – while not necessarily demonstrating positive impact – suggest that MFIs are offering something clients want. All in all, this is more than most charitable causes can say for themselves.

We feel that global health is a better area for a donor overall, especially because we have identified outstanding charities in global health that have far more to recommend them than any microfinance charity we’ve seen to date. We continue to search for an outstanding microfinance charity (through methods including our ongoing grant application). Make sure you’re signed up for updates (or following our blog or Twitter) and you’ll know if and when we find one.

Good news can create new challenges for donors

I was glad to read of a new $110 million initiative for insecticide-treated bednet distribution, which we consider one of the better-established ways to spend money to improve lives.

But what does this mean for you if you’ve been giving to a malaria charity? Do independent bednet distributions now run the risk of being redundant with the new one? Has USAID provided enough funding that your donation is less needed than before?

Unfortunately, we have no way of answering these questions. While there are some attempts to coordinate government aid, we know of no one asking questions like “How much total room is there for funding distribution of bednets? How can we make sure that all the malaria organizations are on the same page? How can we track the extent to which individual donations are still needed?”

If donors focused on how to have real impact (as opposed to, say, fictions about where “their” money goes), such a question would be extremely important to them.

Agriculture charity evaluation: Incomes boosted are not the same as lives changed

What’s wrong with this “evidence of impact” for high-profile charities?

Among other possible problems, two major issues jump out:

1. No context on what “normal” variation in incomes looks like for poor farmers. Some years have more favorable weather – and local economic situations – than others. Enough that one year’s income or crop yield could be double another’s? 4x? 20x?

Unfortunately, one of the better pieces of “evidence” that jumps to mind is a 75-year-old novel, The Good Earth, whose farmer protagonist is comfortable one year and has literally zero income the next, for no other reason than the weather. If a given year’s yield were close enough to zero, the next year could show a huge increase (2x, 4x, 20x or more) simply by returning to normal.

I have seen little information on the local year-to-year volatility that poor farmers can experience, but I imagine that it (a) varies greatly from region to region and (b) could easily involve incomes falling and jumping by enormous amounts.

None of the above reports provide any context on this question, beyond qualitative statements about how favorable the rains were in each year examined. None of them employ any sort of “comparison group” of farmers (aside from one vague reference to “farms not using improved seeds and fertilizers” in the Malawi Millennium Village). Ultimately, none accomplish one of the most basic goals of an evaluation: giving a sense of how likely the “gains” they describe are to have arisen by pure chance.
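To make the “pure chance” concern concrete, here is a minimal simulation sketch – my illustration, not anything drawn from the reports above. It assumes a made-up level of year-to-year income volatility and asks how often a farmer’s income would at least double from one year to the next with no program effect at all.

    # Rough sketch (hypothetical numbers, not from any report above): how often does
    # income at least "double" purely by chance when year-to-year volatility is high?
    import random

    random.seed(0)
    SIMULATIONS = 100_000
    VOLATILITY = 0.6  # hypothetical standard deviation of log-income from year to year

    doublings = 0
    for _ in range(SIMULATIONS):
        # Two consecutive years of income for one farmer: no intervention, pure noise.
        year1 = random.lognormvariate(0, VOLATILITY)
        year2 = random.lognormvariate(0, VOLATILITY)
        if year2 >= 2 * year1:
            doublings += 1

    print(f"Share of farmer-years with an income 'doubling' by chance alone: "
          f"{doublings / SIMULATIONS:.1%}")

Whatever share this toy exercise spits out says nothing about the actual programs; the point is simply that without a sense of normal volatility, and without a comparison group, a large reported “gain” is hard to interpret.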

With larger sample sizes, we might be able to use country-level volatility for context. But that brings me to the next problem.

2. We have no assurance that the described gains are representative, as opposed to “cherry-picked.”

All of the above organizations have reputations for consistent and thorough monitoring and evaluation, yet in all cases, we find ourselves looking at evidence of “impact” from only a tiny subset of their projects.

Some ways to produce more compelling evidence of impact

  1. Be clear about what is being measured and what is being published, and when. It seems to us that in this area, charity evaluation lags far behind clinical trials, which are consistently registered before they are complete so that people can track their progress. (The Poverty Action Lab is similarly transparent with its own ongoing projects.)
  2. Larger sample sizes; more context; use of comparison groups. Discussed above.
  3. Look for more sustained improvements in people’s lives. One measure I find superior to straight “income” or “crop yields” is asset accumulation. A jump in income could be temporary; if someone upgrades their roof or sanitation, it’s likely that at least they expect the gain to be a real and lasting one. The Village Enterprise Fund’s evaluation is one of the better charity evaluations I’ve seen in the area of economic empowerment, partly because it focuses on standard of living rather than a simple measure of income.

*It’s possible that the yields mentioned are for “clusters” of villages rather than individual villages; there are only 12 clusters. However, the source documents available for Sauri and Koraro appear to be at the village rather than the cluster level, and the details of how the measurements were made are unclear.

Are charities helping? We don’t know

In a recent debate, David Hunter’s article on the nonprofit sector has taken heat for its assertion that “While nonprofits work incredibly hard, with passion and dedication, and often in incredibly difficult circumstances to solve society’s most intractable problems, there is virtually no credible evidence that most nonprofit organizations actually produce any social value.”

We agree with the claim for the sectors we’ve examined, which we believe are similar to the sectors Mr. Hunter has examined: particularly thorny areas such as charities working to improve education and international charities addressing extreme poverty overseas. These are problems on which experts have struggled for decades to make any progress, and while we don’t necessarily agree that most charities are failing to produce value, we agree that most charities cannot produce any credible evidence that they are. This is different from the claim that Sean Stannard-Stockton attributes to Mr. Hunter (“most nonprofits and the social sector as a whole is not currently producing social value”), but it still means that it’s very hard for a donor to give with confidence.

The information we have

Our belief is based on two years of looking for this evidence; we’ve published the full details of our findings online, and you can see our summary of international charities (only 19 out of 320 examined publish any impact-related evaluation reports) and U.S. equality of opportunity charities (only 6 of 83 examined provide credible impact-related reports, and 2 of these show negative or no impact).

In addition, in a guest post on the GiveWell Blog, David Anderson of the Coalition for Evidence-Based Policy estimates that 75% of rigorous evaluations show weak effects, no effects, or negative effects.

More information needed

On the other hand, we also see merit in the criticism that Mr. Hunter doesn’t support his own claim with evidence. We would like more clarity on which sectors Mr. Hunter has examined and is referring to, and information on where he has looked for evidence and what he has found.

In addition, we feel that examples of failing/harmful programs, such as “Well intentioned but ineptly run mentoring programs where failed matches reinforce in youngsters a sense of their low worth and poor prospects” (and the other items on the list on page 2 of the article) should be clearly referenced to summaries of evidence.

The truth is that we cannot have a very informed debate about how much value nonprofits create because we have so little evidence of any kind. Some people adamantly believe that nonprofits create enormous value; others are skeptical that they create any; and there is very little to go on, at least in the sectors under discussion.

Nonprofits that do have credible evidence of their social impact

The good news for donors is that they need not be in the dark if they give to the right charity. Our top-rated charities do produce credible evidence of their social impact. We encourage individual donors to fund and expand these charities until and unless others follow suit.