Imagine that a charity is able to teach a farmer some basic, useful things about farming (like “crop rotation, drip irrigation and the planting of trees that enrich overworked soil” or “disease-resistant cassava replication, distribution and sale; crop diversification; soil conservation; and expanding market opportunities”). Such simple knowledge could last the farmer forever and be far more useful – especially for the cost – than cash or loans. It’s an often-sold story, and an appealing one.
What charities don’t tell you about “improved farming techniques and technology” is just how long the aid world has been trying to spread them, and how much it has struggled. The basic challenges:
Can agriculture programs reach enough farmers? The right farmers?
A 2006 World Bank paper examines the long history of “agricultural extension” programs and is frank about their problems. For traditional programs, it states that
The cost of reaching large, geographically dispersed and remote smallholder farmers is high, particularly given high levels of illiteracy, limited access to mass media, and high transport costs. Farming systems often entail several crops, livestock, and even within [a] given geographical area, there are variations in soil, elevation, microclimate and farmers’ capabilities and access to resources. With such a large and diversified clientele, only a small fraction of farmers can be served directly (face-to-face) by extension, and agents tend to focus on the larger, better resourced and more innovative farmers. This reduces the potential for farmer-to-farmer diffusion. (Emphasis ours)
The “Training & Visit” model attempted to address these issues through a strong, clear set of hierarchies and responsibilities (see pgs 11-14), but its substantially higher costs – coupled with the fact that, as with previous programs, impact was hard to see – led to its essentially universal abandonment (see pgs 14-15 and pgs 22-23).
When World Vision or Save the Children speaks of spreading improved practices, is it using a “T&V” style intensive-but-costly approach, or a lighter touch that could fail to reach enough (and the right) farmers? It isn’t clear.
Do charities even know what to teach and what to change?
Another general problem cited by the World Bank paper is that “Weak accountability (linked to the inability to attribute impact) is reflected in low-quality and repetitive advice given to farmers, and in diminished effort to interact with farmers, and to learn from their experience.” (Emphasis ours.) In other words, those giving advice may not actually be giving the right advice.
It is hard to find honest and thorough descriptions of how such projects have actually played out in the past, but a couple of striking failure stories should make it clear just how badly outsiders can misjudge what farmers need to learn:
- The DrumNet program in Kenya successfully transitioned farmers from growing “local crops” (i.e., crops for local/personal consumption) to growing “export crops” (i.e., crops to be sold on the export market). However, a year after the project evaluation was completed, the firm that had been buying the “export crops” stopped doing so due to European regulations, leading to “the collapse of Drumnet as farmers were forced to undersell to middlemen, leaving sometimes a harvest of unsellable crops and thus defaulting on their loans.” (Details at this paper published on the Poverty Action Lab site (PDF).)
- A development program in Lesotho aimed to help local people with crop and livestock management, as well as building roads so they could access markets. However, few of the people in the region were farmers, and conditions were not good for farming. Harsh weather destroyed pilot crop projects, and the roads allowed in competitors who drove the existing local farmers out of business. (From pgs 193-4 of White Man’s Burden)
These aren’t cases of minor missteps – they’re cases where those giving aid did not perceive essential and fundamental aspects of the local economy. That doesn’t mean they were incompetent – it means that understanding a local economy well enough to give truly useful advice may not be easy.
The long and murky history of agricultural assistance
Agricultural programs in Africa have struggled to produce tangible results, both at the micro level (little evidence about how programs have gone) and at the macro level (disappointing progress in Africa-wide crop yields over time).
A variety of approaches have been tried, including the “holistic” approach of simultaneously addressing health, transportation, credit, and agricultural knowledge. This approach was referred to as “Integrated Rural Development” in the 1970s and 1980s and appears to be acknowledged as a failure, although the basic idea behind it may be making a comeback in the “holistic” approach of the Millennium Villages Project and other large charities.
Details at our writeup on agriculture-focused aid.
Bottom line for donors: agricultural technology is not like medicine
Agriculture aid is often presented as a matter of extending the reach of proven technologies and methods. However, the track record of such programs is simply nothing like that of health programs, which often have track records including multiple highly rigorous studies and large-scale, demonstrable successes.
We feel that the burden of proof on agriculture programs is high, but outcomes tracking of any kind is extremely rare. The evaluations that are available tend to raise many concerns about whether results are “cherry-picked” and whether results even point to improved lives.
We recommend that donors be extremely wary of charities working heavily in this area, no matter how good their intentions. We have not identified any that we can have confidence in.
Your points are well taken, but when you say you haven’t identified any charities in which you have confidence, one needs to know how many you’ve thoroughly evaluated. Without real analysis, your effectively turning potential donors entirely away from the ag. sector is unfounded and unprofessional.
Givewell has evaluated a total of 388 charities. You can have a look at the list here. Given the amount of research they have put into the problem, it’s now your turn to suggest more specifically what they may have missed.
To say that Givewell has “evaluated” 388 charities is simply not true. What Givewell has done is look at their websites to see if they have published a technical impact report on the site. If they haven’t, they don’t go further (and this is the case for the vast majority of the 388 charities on their site). That is not an “evaluation”; that is just a Givewell search criterion. I understand this is their process, and appreciate their focus on true impact, but they need to be clear and just say that the organization does not make impact data available on its website and therefore does not meet Givewell’s criteria for further “evaluation”.
In fact, having just done a quick review of the charities on their site with the label “economic empowerment”, the ONLY charity that had any kind of “evaluation” was KIVA, which is in micro credit, not agriculture. Hard to draw a lot of conclusions about the entire field without more data being available. A lack of publicly available impact data does not necessarily correlate to no impact. It just means there is a lack of publicly available data (which could be for a whole variety of very legitimate reasons, but I suppose that is another topic).
I do believe that there is a strong (though not 100%) relationship between which charities have published evaluations and which charities have completed evaluations. I also believe it is appropriate, and even important, to say we have “evaluated” nearly 400 charities. More at our recent discussion of this topic.
I feel it is destructive to demand that an evaluator (i.e., GiveWell) either reach a conclusive stance on a charity or mark it as “not reviewed.” The reason is that coming to a conclusive stance is near-impossible in practice – charities make it near-impossible by not sharing information. It’s only the outstanding charities that are even possible to assess. If we only counted these outstanding charities among the “reviewed” (and issued ratings only to them), our website would present a seemingly arbitrary (though in fact outstanding) list of 20 charities, while offering no comment on the most popular but least transparent charities such as UNICEF and Smile Train. This would give entirely the wrong message.
In short, we feel that handing out negative reviews for opaque charities is appropriate; handing out reviews only for the charities transparent enough to give meaningful information about their impact would be unfair and unhelpful.
Finally, a point of clarification. The general statements we have been making about the dearth of evaluation within the “economic empowerment” sector are not based only on the “website scan” currently published on our website. They are also based on the more in-depth process we are in the middle of, but which so far has turned up very similar results to the website scan. We have a lower standard of support for our blog than for our website, so for now we are making these general statements on the blog with the understanding that we will later be substantiating them fully on our main website.
Note: these comments were originally posted on 12/1/2009, but I just reposted them because they were erroneously wiped out (technical issue).