The GiveWell Blog

Cost-effectiveness estimates: Inside the sausage factory

We’ve long had mixed feelings about cost-effectiveness estimates of charitable programs, i.e., attempts to figure out “how much good is accomplished per dollar donated.”

The advantages of these estimates are obvious. If you can calculate that program A can help many more people – with the same funds, and in the same terms – than program B, that creates a strong case (arguably even a moral imperative) for funding program A over program B. The problem is that by the time you get the impact of two different programs into comparable “per-dollar” terms, you’ve often made so many approximations, simplifications and assumptions that a comparison isn’t much more meaningful than a roll of the dice. In such cases, we believe there are almost always better ways to decide between charities.

This post focuses on the drawbacks of cost-effectiveness estimates. I’m going to go through the details of what we know about one of the best-known, most often-cited cost-effectiveness figures there is: the cost per disability-adjusted life-year (DALY) for deworming schoolchildren. The DALY is probably the single most widely cited and accepted “standardized” measure of social impact within the unusually quantifiable area of health.

Note that various versions of this figure:

  • Occupy the “top spot” in the Disease Control Priorities Report’s chart of “Cost-effectiveness of Interventions Related to Low-Burden Diseases” (see page 42 of the full report). (I’ll refer to this report as “DCP” for the rest of this post.)
  • Are featured in a policy briefcase by the Poverty Action Lab (which we are fans of), calling deworming a “best buy for education and health.”
  • Appear to be the primary factor in the decision by Giving What We Can
    (a group that promotes both more generous and more intelligent giving) to designate deworming-related interventions as its top priority (see the conclusion of its report on neglected tropical diseases), and charities focused on these interventions as its two top-tier charities.

I don’t feel that all the above uses of this figure are necessarily inappropriate (details in the conclusion of this post). But I do think they make it worth inspecting the figure closely and being aware of the following issues.

  1. The estimate is likely based on successful, thoroughly observed programs and may not be representative of what one would expect from an “average” deworming program.
  2. The estimate appears to rely on an assumption of continued successful treatment over time, an assumption which could easily be problematic in certain cases.
  3. A major input into the estimate is the prevalence of worm infections. In general, prevalence data is itself the product of yet more estimations and approximations.
  4. Many factors in cost-effectiveness, positive and negative, appear to be ignored in the estimate simply because they cannot be quantified.
  5. Different estimates of the same program’s cost-effectiveness appear to strongly contradict each other.

Details follow.

Issue 1: the estimate is likely based on successful, thoroughly observed programs.

The Poverty Action Lab estimate of $5 per DALY is based on a 2003 study by Miguel and Kremer of a randomized controlled trial in Kenya. As the subject of an unusually rigorous evaluation, this program likely had an unusual amount of scrutiny throughout (and may also have been picked in the first place partly for its likelihood of succeeding). In addition, this program was carried out by a partnership between the Kenyan government and a nonprofit, ICS (pg 165), that has figured prominently in numerous past evaluations (for example, see this 2003 review of rigorous studies on education interventions).

In this sense, it seems reasonable to view its results as “high-end/optimistic” rather than “representative of what one would expect on average from a large-scale government rollout.”

Note also that the program included a significant educational component (pg 169). The quality of hygiene education, in particular, might be much higher in a closely supervised experiment than in a large-scale rollout.

It is less clear whether the same issue applies to the DCP estimate, because the details and sources for the estimate are not disclosed (see box on page 476). However,

  • The other studies referenced throughout the chapter appear to be additional “micro-level” evaluations – i.e., carefully controlled and studied programs – as opposed to large-scale government-operated programs.
  • The DCP’s cost-effectiveness estimate for combination deworming (the program most closely resembling the program discussed in Miguel & Kremer) is very close to the Miguel & Kremer estimate of $5 per DALY. (There is some ambiguity on this point – more on this under Issue 5 below.)

Issue 2: the estimate appears to rely on an assumption of continued successful treatment over time, an assumption which could easily be problematic in certain cases.

Miguel & Kremer states:

single-dose oral therapies can kill the worms, reducing … infections by 99 percent … Reinfection is rapid, however, with worm burden often returning to eighty percent or more of its original level within a year … and hence geohelminth drugs must be taken every six months and schistosomiasis drugs must be taken annually. (pg 161)

Miguel & Kremer emphasizes the importance of externalities (i.e., the fact that eliminating some infections slows the overall transmission rate) in cost-effectiveness (pg 204), and it therefore seems important to ask whether the “$5 per DALY” estimate is made (a) assuming that periodic treatment will be sustained over time or (b) assuming that it won’t be.

Miguel & Kremer doesn’t explicitly spell out the answer, but it seems fairly clear that (a) is in fact the assumption. The study states that the program averted 649 DALYs (pg 204) over two years (pg 165), of which 99% could be attributed to aversion of moderate-to-heavy schistosomiasis infections (pg 204). Such infections have a disability weight of 0.006 per year, so this is presumably equivalent to averting over 100,000 years ((649*99%)/0.006) of schistosomiasis infection – even though well under 10,000 children were even loosely connected to the project (counting control groups and pupils at nearby schools that were not part of the program – see pg 167). Even if a higher-than-standard disability weight was used, it seems fairly clear that many years of “averted infection” were assumed per child.
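As a rough check on this arithmetic, here is a back-of-the-envelope sketch using only the figures cited above (the 10,000-pupil figure is the deliberately generous upper bound from the discussion above, not a number from the paper):

```python
# Back-of-the-envelope check of the figures cited above.
dalys_averted = 649        # total DALYs averted over two years (M&K pg 204)
schisto_share = 0.99       # share attributed to schistosomiasis (pg 204)
disability_weight = 0.006  # DALYs per person-year of moderate-to-heavy infection

person_years = dalys_averted * schisto_share / disability_weight
print(f"Implied person-years of infection averted: {person_years:,.0f}")
# => 107,085 person-years

pupils = 10_000            # generous upper bound on children involved (pg 167)
print(f"Implied years of averted infection per child: {person_years / pupils:.1f}")
# => about 10.7 years per child – far more than the two-year study period,
# which only makes sense if continued treatment is assumed
```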

In my view, this is the right assumption to make in creating the cost-effectiveness estimate … as long as the estimate is used appropriately, i.e., as an estimate of how cost-effective a deworming program would be if carried out in a near-ideal way, including a sustained commitment over time.

However, it must be noted that sustaining a program over time is far from a given, especially for organizations hoping for substantial and increasing government buy-in over time. As we will discuss in a future post, one of the major deworming organizations appears to have aimed to pass its activities to the government, with unclear/possibly mixed results. And as we have discussed before, there are vivid examples of excellent, demonstrably effective projects failing to achieve sustainability in the past.

Does the DCP’s version of the estimate make a similar assumption? Again, we do not have the details of the estimate, but the DCP chapter – like the Miguel & Kremer paper – stresses the importance of “Regular chemotherapy at regular intervals” (pg 472).

One more concern along these lines: even if a program is sustained over time, there may be “diminished efficacy with frequent and repeated use … possibly because of anthelmintic resistance” (pg 472).

Extrapolation from a short-term trial to long-term effects is probably necessary to produce an estimate, but it further increases the uncertainty.

Issue 3: cost-effectiveness appears to rely on disease incidence/prevalence data that itself is the product of yet more estimations and approximations.

The Miguel & Kremer study took place in an area with extremely high rates of infection: 80% prevalence of schistosomiasis (where schistosomiasis treatment was applied), and 40-80% prevalence of three other infections (see pg 168). The DCP emphasizes the importance of carrying out the intervention in high-prevalence areas (for example, see the box on page 476). Presumably, for maximum cost-effectiveness the program should be carried out in the highest-prevalence areas available.

The problem is that prevalence data may not be easy to come by. The Global Burden of Disease report describes using a variety of elaborate methods to estimate prevalence, using “environmental data derived from satellite remote sensing” as well as mathematical modeling (see pg 80). Though I don’t have a source for this statement, I recall either a conversation or a paper making a fairly strong case that data on neglected tropical diseases is particularly spotty and unreliable, likely because it is harder to measure morbidity than mortality (the latter can be collected from death records; the former requires more involved examinations and/or judgment calls and/or estimates).

Issue 4: many factors in cost-effectiveness appear to be ignored in the estimate simply because they cannot be quantified.

Both positive and negative factors have likely been ignored in the estimate, including:

  • Possible negative health effects of the deworming drugs themselves (DCP pg 479). (Negative impact on cost-effectiveness)
  • Possible development of resistance to the drugs, and thus diminishing efficacy, over time (mentioned above). (Negative impact on cost-effectiveness)
  • Possible interactions between worm infections and other diseases including HIV/AIDS (DCP pg 479), which may increase the cost-effectiveness of deworming. (Positive impact on cost-effectiveness)
  • The question of whether improving some people’s health leads them to contribute back to their families, communities, etc. and improve others’ lives. This question applies to any health intervention, but not necessarily to the same degree, since different programs affect different types of people. From what I’ve seen, there is very little available basis for making any sort of estimate of such differences.

Issue 5: different estimates of the same program’s cost-effectiveness appear to strongly contradict each other.

The DCP’s summary of cost-effectiveness alone (box on pg 476) creates considerable confusion:

the cost per DALY averted is estimated at US $3.41 for STH infections [the type of infection treated with albendazole] … The estimate of cost per DALY is higher for schistosomiasis relative to STH infections because of higher drug costs and lower disability weights … the cost per DALY averted ranges from US$3.36 to US$6.92. However, in combination, treatment with both albendazole and PZQ proves to be extremely cost-effective, in the range of US$8 to US$19 per DALY averted.

The language seems to strongly imply that the combination program is more cost-effective than treating schistosomiasis alone, but the numbers given imply the opposite. Our guess is that the numbers were inadvertently switched. For someone taking the numbers literally, the expected “cost-effectiveness” of a donation could be off by a factor of 2-5, depending on this question of copy editing.

Comparing this statement with the Miguel & Kremer study adds more confusion. The DCP estimates albendazole-only treatment at $3.41 per DALY, which appears to be better than (or at least at the better end of the range for) the combination program. However, Miguel & Kremer estimates that albendazole-only treatment is far less effective than the combination program, at $280 per DALY (pg 204).
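To see the size of these discrepancies at a glance, here is a small sketch lining up the figures quoted above (all dollar amounts are as quoted from the DCP box on pg 476 and Miguel & Kremer pg 204; the ratio calculation is my own):

```python
# Per-DALY cost estimates for the same programs, from two sources.
dcp = {"albendazole-only (STH)": (3.41, 3.41),
       "schistosomiasis-only":   (3.36, 6.92),
       "combination":            (8.00, 19.00)}   # DCP box, pg 476
mk  = {"albendazole-only (STH)": (280.00, 280.00),
       "combination":            (5.00, 5.00)}    # Miguel & Kremer, pg 204

for program in ("albendazole-only (STH)", "combination"):
    (d_lo, d_hi), (m_lo, m_hi) = dcp[program], mk[program]
    ratio = max(d_hi, m_hi) / min(d_lo, m_lo)
    print(f"{program}: DCP ${d_lo:.2f}-{d_hi:.2f} vs. M&K ${m_lo:.2f}-{m_hi:.2f}"
          f" (disagreement of up to {ratio:.0f}x)")
# albendazole-only: $3.41 vs. $280 – a roughly 80x disagreement
# combination: $8-$19 vs. $5 – up to a roughly 4x disagreement
```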

Perhaps the DCP envisions albendazole treatment carried out in a different way or in a different type of environment. But given that the Miguel & Kremer study appears to be examining a fairly suitable environment for albendazole-only treatment (see above comments about high infection prevalence and strong program execution), this would indicate that cost-effectiveness is extremely sensitive to subtle changes in the environment or execution.

Bottom line

There is a lot of uncertainty in this estimate, and this uncertainty isn’t necessarily “symmetrical.” Estimates of different programs’ cost-effectiveness, in fact, could be colored by very different degrees of optimistic assumptions.

Despite all of the above issues, I don’t find the cost-effectiveness estimate discussed here to be meaningless or useless.

Researchers’ best guesses put the cost-effectiveness of deworming in the same ballpark as that of other high-priority interventions such as vaccines, tuberculosis treatment, etc. (I do note that many of these appear to have more robust evidence bases behind their cost-effectiveness – for example, estimated effects of large-scale government programs are sometimes available, giving an extra degree of context.)

I think it is appropriate to say that available evidence suggests that deworming can be as cost-effective as any other health intervention.

I think it is appropriate to call deworming a “best buy,” as the Poverty Action Lab does.

I do not think it is appropriate to conclude that deworming is more cost-effective than vaccinations, tuberculosis treatment, etc. I think it is especially inappropriate to conclude that deworming is several times more cost-effective than vaccinations, tuberculosis treatment, etc.

Most of all, I do not think it is appropriate to expect results in line with this estimate just because you donate to a deworming charity. I believe cost-effectiveness estimates usually represent “what you can achieve if the program goes well” more than they represent “what a program will achieve on average.”

In my view, the greatest factor behind the realized cost-effectiveness of a program is the specifics of who carries it out and how.

Thoughts from my visits to Small Enterprise Foundation (South Africa) and VillageReach (Mozambique), part III

Continued from Part I and Part II, these are my thoughts from my recent visit to two of our top charities in Africa.

Some of what I saw and discussed prompted me to rethink our frameworks for evaluating certain kinds of programs:

  • Vaccinations. We’ve taken the “vaccination coverage rate” as a reasonable proxy for lives changed, since the evidence base for vaccines is so strong. But of course, “vaccination coverage rate” describes how many children received vaccines, not how many received functional and correctly administered vaccines. I was somewhat concerned that VillageReach staff found several vaccines in refrigerators that had “gone bad,” and I was glad to hear that VillageReach is considering adding an indicator to its information system to track how often this happens. The strong macro-level track record of vaccines (causing major drops in mortality at the country level, not just in carefully controlled trials) is some comfort here.
  • Microfinance. We’ve been concerned about the possibility that clients are taking out loans against their own best interests, and have largely pictured “coercive” versions of this problem: loan officers pressuring clients to borrow more than they should, clients getting themselves into debt cycles, etc. A very interesting anecdote from a staffer raised a more subtle version of this concern: clients may be losing money on their loans without knowing it. The anecdote given was about a particular woman who was literally selling goods for the same price she had bought them for, making the problem obvious. It could, however, be a much more subtle problem for other clients – given high interest rates, potentially transportation costs, etc., it could take quite a bit of calculation and careful accounting even to know whether the business one is running with a loan is in fact operating at a profit or a loss (see the sketch after this list). (And since many families may have several sources of income, a loss might not be noticed if accounting isn’t careful.)
  • Cash transfers. Our position has been that cash transfers can be assumed to be doing some good if they are successfully targeted to poor people in an area, something that may be difficult. It struck me that in certain areas (such as the village I saw with VillageReach), poverty targeting may not be much of a challenge at all (since everyone anywhere near the area is extremely poor); on the other hand, in these kinds of areas gifts of cash or livestock may be of very limited use (note the missionaries’ claim that village people receiving pensions for military service were largely spending them on alcohol).
  • Social business. I was impressed that I constantly saw Vidagas canisters throughout my trip – in hotels, stores, even the missionaries’ truck. (Vidagas is a “social business” started by VillageReach; it delivers gas, and was started in order to address the challenge of consistently powering refrigerators to keep vaccines at the appropriate temperature.) Our position on social business has been that such a business should not be considered a success until it has demonstrated either an actual profit (not just sales covering unit costs) or demonstrable social impact along the lines of what we look for from nonprofits. On reflection, I think that in certain cases there is room for more middle ground here. There are certain areas where the mere fact of selling something for a non-trivial price would seem to indicate a certain success in filling a need, even if not all costs are covered. Of course, it all depends on the area – subsidized sales may make a lot of sense where infrastructure and access to markets are poor, but in urban areas they could serve simply to “crowd out” private supply and/or enrich middlemen.

    I don’t regret our skepticism of social business to date. It has always been more important to us to avoid “false positives” (i.e., recommendations of organizations that are not impactful) than to avoid “false negatives” (i.e., failures to recommend organizations that are impactful). And I have not seen any “social enterprise investment” fund put together the case I’d need to see, even using the “middle ground” roughly sketched out above. But I do want to keep thinking about how to recognize the good social businesses may be accomplishing without being overly credulous.
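On the microfinance point above, here is the kind of accounting I have in mind – a purely hypothetical sketch, with invented numbers rather than data from any actual client – showing how a loan-funded resale business can look profitable while actually losing money:

```python
# All numbers below are invented for illustration only.
loan = 1000.0               # amount borrowed for one loan cycle
interest = 0.30 * loan      # assumed effective interest over the cycle
inventory_cost = 1000.0     # goods bought with the loan
revenue = 1250.0            # received from reselling the goods
transport = 80.0            # travel to the wholesale market and back

apparent_profit = revenue - inventory_cost
true_profit = revenue - inventory_cost - interest - transport
print(f"Apparent profit (ignoring interest and transport): {apparent_profit:+.0f}")  # +250
print(f"True profit: {true_profit:+.0f}")                                            # -130
```

A client tracking only buying and selling prices would see the +250 and might never notice the loss.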

These visits very much made the activities of our top charities feel more “real” to me.

To this point, the work we’ve done on international aid has felt very abstract. That’s not a reason not to act/give based on it, but in many ways the situations we’re analyzing are so different from what I see every day that it can be hard to believe that the charities are helping real people in the way our analysis suggests they are.

Much of what I saw on the trip was, in fact, consistent with what I expected. To a large degree, it made the research “come to life.” I saw people and areas that really are at a level of poverty that I’ve never seen in the U.S.; I talked to staff about the details of what they’re doing, and to some degree saw them doing it; and I felt, very tangibly, how the work they’re doing can make a difference.

(As an aside, I’ve had the opposite experience with site visits to U.S. charities. I’m not sure why. The U.S. visits were definitely more “staged” while the international visits had a lot of wandering and improvisation; in addition, the U.S. charities tend to address less tangible problems, and it was often hard to connect the charities’ theories of their own value-added with what I was seeing.)

It was frustrating to say “no” to kids rubbing their stomachs and asking for money, and to see so many people who seem like they could benefit greatly from things that are pretty basic – though not necessarily easy to deliver. The bottom line is that while I’ve pushed to make my actions consistent with my beliefs, my beliefs about the importance of international aid carry a little more emotional weight now, and I feel more emotionally motivated to give and to give well. I would recommend a similar trip to anyone who intellectually accepts the importance of international aid, but is having trouble getting behind it emotionally.

Thoughts from my visits to Small Enterprise Foundation (South Africa) and VillageReach (Mozambique), part II

Continued from Part I, these are my thoughts from my recent visit to two of our top charities in Africa.

Diverting skilled labor looks like a real concern.

The COO of SEF stressed that one of SEF’s biggest challenges is human resources (i.e., continually finding good people to staff it). I can easily see how this would be. As I mentioned in Part I, I found that the nonprofits I visited were employing capable, impressive people with a combination of local background and well-above-average educational credentials and command of English.

On one hand, seeing these staffers made me feel good about the organizations we were recommending. At the same time, it highlighted one of the most universal and hardest-to-evaluate concerns we have about nonprofit work: diversion of skilled labor from other potentially productive pursuits.

Adding to this concern was a general impression I got (reinforced by Leah from VillageReach) that nonprofit jobs are among the best-paying and most prestigious jobs for African locals. It looks like we have a situation where:

  • Many of the people hired by nonprofits could also be potentially very helpful to their communities if they were doing for-profit work.
  • They work instead for nonprofits, partly because nonprofits are out-bidding the for-profits for their services.
  • Within a for-profit framework, there is often (not always and never perfectly) a connection between the value of a job and the salary, which creates an (imperfect) tendency for talented people to end up in roles where they can do more good.
  • I have no sense of how (or whether) nonprofits are attempting to calibrate salaries and value, and I fear that they could be “overpaying for” (and thus misusing) local talent simply because they want the best people available and they have the donor-supplied funds to get them.

More on this idea in a future post. Though we have no great methods for quantifying the losses from “diversion of labor,” we do believe that this concern reinforces the importance of demanding that nonprofits be accomplishing as much good as possible and not merely some good.

Getting basic info about people’s standard of living seemed fairly straightforward.

I understand that estimating people’s incomes can be a very complex endeavor, but in the areas I visited, it seemed possible to get a sense very quickly for how “poor” one area was relative to another. I asked basic questions at the village level: where the nearest water source was, who was responsible for maintaining it, where the nearest school was, what the school fees were, etc. I walked around and observed how many of the dwellings were made of mud vs. concrete. And when talking to individual clients, I asked straightforward questions like “Do you have a TV?”, “Do you have electricity?”, “What do you eat?” and “When was the last time you had a fever and what did you do about it?” Answers were fairly consistent in a given area, but varied dramatically across charities (more below).

Throughout our investigations into international aid, I’ve been frustrated by the fact that most charities seem either unable or unwilling to produce data on clients’ standards of living. Because I don’t tend to trust stylized stories, and I haven’t had what I consider credible data on standards of living, I’ve constantly felt very unclear on who is being helped and how. I now find it less likely that this problem stems from prohibitive costs of data collection; I find it more likely that it stems from (a) the fact that donors rarely (if ever) ask for data on clients’ standards of living; (b) the possibility that some charities may not want to reveal that their clients are anyone but the “poorest of the poor” (even when their clients are still quite poor).

The three areas I visited were very different in terms of standards of living.

  • Small Enterprise Foundation (SEF) clients: I visited two villages, one in the Microcredit program (SEF’s original program) and one in the Tšhomišano Credit Program (targeted more directly at the poorer people in a village). In both villages, at least half the buildings I saw were made of concrete, and everyone I spoke to reported convenient access to running water, electricity, a fairly well-stocked local market, and public transportation to larger cities. Living spaces appeared fairly cramped (they were larger than in the other areas I visited, but when I asked who slept where it quickly became clear that there wasn’t much space per person); clients reported eating meat “only when they could afford it.”
  • VillageReach clients: infrastructure was much, much worse in these areas. The town of Macomia, where we spent the night, had no running water and no electricity except for generators; it took hours to reach (in a truck) from Pemba, which I believe was the closest area with reliable electricity and running water. The one village we visited took over an hour (of alert driving on very bad roads) to reach from Macomia, and the only concrete structures I saw there were the health center, a closed shop, and the school. I was told that other nearby villages were even harder to reach (in some cases impossible in a truck) and that access to water was a major problem. In terms of both standard of living and life opportunities, these areas appeared fundamentally worse than SEF areas.
  • Soweto: I took a quick tour through a poor area of Soweto (urban). It was generally filthy (literally strewn with trash) and extremely crowded, with tiny steel shacks next to each other. It seemed to me like a much more unpleasant place to live than either of the other two areas, although on the flip side, people in Soweto appeared to have access to public transportation, electricity, good schools, etc. as they were very close to much wealthier residences.

One of the reasons Small Enterprise Foundation stood out to us is that it appears more diligent about targeting the poor than other organizations. Even so, its clients – while poor – appear to be substantially better off (in fundamental infrastructure-related ways, not ways that can be attributed to program effects) than VillageReach’s clients. This doesn’t make me less supportive of SEF (it’s largely consistent with my existing suspicion that microfinance clients are rarely if ever the poorest of the poor), but it’s an important thing to keep in mind that I feel better informed about now than before.

Are you looking to help people in the worst situation, and with the most basic needs, possible? Or are you interested in helping people who are better off to begin with, in the hopes that a little assistance might go a longer way with them? To me there’s no clear right answer, but it’s a decision donors are likely making constantly without knowing it.

More thoughts coming in Part III.

Thoughts from my visits to Small Enterprise Foundation (South Africa) and VillageReach (Mozambique), part I

I previously posted “raw data” (pictures, audio, notes) from my recent visit to two of our top charities in Africa. The next few posts will give my thoughts from the trip.

First, a note on representativeness. I was only in Africa for two weeks; I was a complete outsider; I certainly don’t think that anything I saw “proves” anything about the programs or areas I was looking at. In many cases what I saw (and what I discussed with staff) prompted me to discuss and think harder about issues I’d already thought about a little. So as I share thoughts from the trip, think of these as thoughts that were partly inspired by what I saw and discussed, not as “things I’ve learned.”

In fact, I was hesitant to visit the field too early because I was afraid that I would form a vivid picture of how things work based on what I saw, and that it would be difficult to imagine how differently things could work in other settings (and even on other days). From this point on, I am definitely going to have a little trouble thinking about microfinance without picturing what I saw at Small Enterprise Foundation (SEF), for example. I think that when dealing with multinational charities that work in a huge variety of settings, it is best to get most of our information by reading the observations and analysis of others.

With that said, here are some thoughts.

I was impressed with the staff of the two nonprofits I visited.

In many of my conversations with nonprofit staff, I feel like I’m being sold a story, people are telling me what they think I want to hear, etc., which makes me instinctively somewhat distrustful. I can honestly say that I felt none of this during my interactions with SEF and VR staff, and that includes the lower-level staff. They were straightforward with me about challenges and concerns. Most acknowledged that there are reasons to worry about whether they’re being effective, and did not seem interested in downplaying concerns or exaggerating successes. And most seemed to me to be quite intelligent, knowledgeable, and reasonable about the work they were doing.

These two organizations had already been identified as outstanding before I visited, and I would have ranked them among the very best organizations in terms of “straightforward, no-nonsense interactions with staff” even before I went (the other nonprofits I’d put in this category are Against Malaria Foundation, Population Services International, and Stop Tuberculosis Partnership). However, it’s possible that people who are working in the field in program roles tend to be better (more direct) communicators with GiveWell than people in fundraising roles, and I’m very curious as to what impression I would have come away with if I had done a similar visit to a charity we have a lower opinion of.

Getting pictures, audio and video was not a problem.

People I spoke to never objected to being recorded and were usually (with some exceptions) happy to have their pictures taken, sometimes even insisting on it. Children particularly enjoyed being photographed (for example, see this video of me taking pictures as well as the photos from my trip to the village).

This surprised me somewhat (arguably it shouldn’t have) only because I feel like I’ve seen relatively little use of multimedia to monitor, evaluate, and report on programs. For example, I’ve been told many times that I “have to see a program in action” to be sold on its effectiveness; yet now I wonder why the charities that feel this way aren’t posting large amounts of real-time, unedited footage to give America-based donors as much of the experience as possible.

Charities do often produce heavily edited videos and photos, but we see little value in such productions as evidence (or as monitoring/evaluation tools) because it is so difficult to distinguish observation from editorial.

We’ve written before that we see a lot of potential value in “qualitative evidence” that is presented systematically and transparently, but we rarely see this happening.

Connecting with clients – culturally and even linguistically – appeared to be a fairly significant challenge, and the way I believe most nonprofits deal with it points to the importance of systematic monitoring and evaluation.

Any American charity working in the developing world ultimately has to connect people (donors and clients) who speak different languages and come from very different cultures. If the charity is even of moderate size (i.e., working in more than a few villages, as even the relatively small charities I visited do), it also has to manage operations beyond what upper management can observe directly. It seems to me that the usual approach to this challenge is to have several degrees of separation between upper management and the people doing work in the field.

  • The CEO of the Small Enterprise Foundation does not speak the local languages, and the COO (originally from Croatia) says he has learned to understand quite a bit but still cannot speak them. The lowest-level staff, development facilitators (similar to “loan officers”), tend to have similar backgrounds to the clients and to speak the local languages well, but this of course means that their background is very different from that of upper management and donors (note that the development facilitator I spoke with has very limited English). In in-between roles, there are some employees who are better able to “bridge the gap” (such as the staffer who translated for me on day two). Given this situation, it isn’t surprising that SEF’s management process is heavily dependent on systematic collection, auditing and analysis of key metrics (as I discussed with the COO).
  • VillageReach doesn’t employ as many people, but it also has people in major roles who are American and need help from translators to communicate with local staff (during my visit, we had a translator traveling with us partly to help with communications between Leah, from the Seattle office, and Durao, a local VillageReach employee).
  • I did see one instance of an alternate approach: literally sending Americans to live among clients, learn their languages, etc. This was the approach taken by the missionaries who helped us out when VillageReach’s vehicle broke down in Mozambique. However, my impression (which they confirmed) is that this approach (which I imagine presents problems and challenges of its own) is relatively unusual even among missionaries and is essentially unheard of among nonprofits focused on humanitarian aid.

None of the above observations should come as a surprise, but to me they highlight the importance of formal, systematic monitoring/auditing/evaluation. We often focus on the benefits of monitoring/evaluation for donors, but in situations like those described above it also seems like they are essential for conducting any kind of meaningful organizational management. I have trouble seeing how an organization that conducts no formal data collection and auditing can even run a program of any meaningful size. I would certainly be curious to see how operations work within some of the charities we have found to be less data-oriented.

I think it’s also important to note how difficult translation and communication can be. For example, during my first day with SEF, my communication with clients required several steps: I would ask a question in English, SEF’s CEO would rephrase it so that the development facilitator could understand it, the development facilitator would ask the question in the local language and relay the answer back in English, and finally the CEO had to rephrase the English again so I could follow it. On my second day, I was with someone who was very strong in both languages, but you can hear how much work he put into translating my fairly short and basic-seeming questions. He explained that, in addition to culture-based difficulties with translation, he was being very careful with wording because clients very much want to tell donors what they think the donors want to hear.

We have long felt that survey data is most useful for extremely concrete, factual questions: “What did you eat yesterday?” is more useful than “What do you normally eat?” is more useful than “Did this program help you?” is more useful than “How much did this program help you?” More on this idea at this post on Philanthropy Action (co-maintained by GiveWell Board member Tim Ogden).

More thoughts from the trip coming in Part II.

Nothing wrong with selfish giving – just don’t call it philanthropy

Tactical Philanthropy has an interesting discussion of “non-optimized giving”: New Philanthropy Capital CEO Martin Brookes “confesses” to “wasting charitable funds” on a cause he doesn’t believe is the best, and Sean responds that “Under your logic, we should all feel guilty about all of our giving that does not go to the single best charity in the world … Be proud of yourself, Martin. You’re a great philanthropist.”

I don’t think it’s wrong to make gifts that aren’t “optimized for pure social impact.” Personally, I’ve made “gifts” with many motivations: because friends asked, because I wanted to support a resource I personally benefit from, etc. I’ve stopped giving to my alma mater (which I suspect has all the funding it can productively use) and I’ve never made a gift just to “tell myself a nice story,” but in both cases I can understand why one would.

Giving money for selfish reasons, in and of itself, seems no more wrong than unnecessary personal consumption (entertainment, restaurants, etc.), which I and everyone else I know does plenty of. The point at which it becomes a problem, to me, is when you “count it” toward your charitable/philanthropic giving for the year.

My personal approach is to designate a certain percentage of my annual income for pure altruistic giving (most recently to the Stop Tuberculosis Partnership). When a friend asks me to give to a charity they’re “running for,” I give a small amount and think of it in the same bucket as holiday gifts – it doesn’t affect the size of my annual altruistic gift.

I believe that the world’s wealthy should make gifts that are aimed at nothing but making the world a better place for others. We should challenge ourselves to make these gifts as big as possible. We should not tell ourselves that we are philanthropists while making no gifts that are really aimed at making the world better.

But this philosophy doesn’t forbid you from spending your money in ways that make you feel good. It just asks that you don’t let those expenditures lower the amount you give toward really helping others.

Pictures/audio/notes from my visits to Small Enterprise Foundation (South Africa) and VillageReach (Mozambique)

Between 2/10 and 2/23, I visited two of our recommended charities: The Small Enterprise Foundation (our top-rated microfinance organization and one of two winners of our recent Economic Empowerment Grant – note that our review is not yet available but has been drafted and will be published shortly) and VillageReach (our current top-rated charity overall).

(Note that the posts I authored on our self-review and plan, which ran while I was away, were all written and scheduled before I left.)

This trip was my first time in Africa; it was also an opportunity to have more in-depth conversations with the staff of these two charities (going beyond our usual key questions about cost-effectiveness, evidence of impact and room for more funding). In future posts, I’ll be sharing some thoughts I came away with.

For now, we’ve posted as much as possible of the “raw data” from my trip: pictures, video, and audio. I recorded most interviews with clients and staff and took pictures of most of what I saw. We’ve also included summary notes of each “episode” with the pictures/video/audio. You can see it all at this link:

Notes and multimedia from Holden’s 2/2010 visit to Small Enterprise Foundation and VillageReach