The GiveWell Blog

TakeAction @ GuideStar

We are thrilled to announce a partnership with GuideStar via its new feature, TakeAction @ GuideStar.

TakeAction allows donors to select a cause that interests them and see general information as well as recommended charities, with content provided by GiveWell, Philanthropedia, and Great Nonprofits.

To date, GuideStar has provided only basic information, and only information at the level of “charities,” not “causes.” A donor searching for a particular charity could find financial information and (in some cases) open-participation reviews from Great Nonprofits, but could not find (a) information about the charity’s likely effectiveness and impact or (b) information to help get a basic grounding in a cause and choose the best charities within it.

We are particularly excited about the latter change. We believe strongly in “active giving” (finding the best charity possible in your cause of interest) as opposed to “passive giving” (starting with a particular charity in mind and giving as long as it raises no red flags). We believe that GuideStar’s new content is a major and important step toward helping its users give to the best possible organizations and toward creating healthier incentives in the sector.

TakeAction is new, and we expect that it has a lot of progress yet to make. (We believe that all three of the partner organizations, ourselves included, have a lot of progress of their own to make, hopefully leading to coverage of more causes, among other things.) Today’s launch is only a preliminary step. GuideStar recognizes this and is eager for feedback, so please send it using the “Feedback” links on their various pages.

But for a sector that has been dominated by flawed, superficial metrics and giving focused on donors’ needs, rather than recipients’, this is an enormously encouraging development. One of the oldest, best-established donor resources is partnering with new ventures so that it can make real progress in what it enables donors to do. We hope this is just the beginning.

A good response re: diversion of skilled labor

I previously expressed concerns about diversion of skilled labor: the possibility that nonprofits are outbidding the local private sector for top local talent. John de Wit of the Small Enterprise Foundation emailed us with a response:

On the question of whether we divert skilled labour from other potentially productive pursuits let me try to comment on this by referring to our actual practice:

  • At the moment I can only think of two staff members, both current or past, who were working with any other agency which focussed on poverty alleviation or microcredit or some other development field … Both worked for different NGOs which did water provision. Neither came to us because of better pay. The one came because of being somewhat frustrated at the former employer and we wanted them to do work which was more in line with their personal professional interests. The other came because they were keen to work more on a part-time basis.
  • The vast majority of our employees were unemployed when they came to work for SEF and it is only in the “professional” posts that we have employed people who already had employment. These worked for general commercial companies e.g. a producer of canned goods, a commercial farm, a legal firm etc.
  • In general we are only able to pay at the 25-percentile level versus the general market. i.e. we grade all posts in the organisation using a standard grading system which is employed by commercial firms, non-profits and some government agencies. We then obtain national survey data for what employers are paying at each grade level. We try to pay at least at the 25-percentile level. This means that 75% of employers pay more than us and 25% pay less than us. (Paying at this level is not a good position to be in.) To illustrate this please see the attached file [confidential]
  • Generally when a job candidate is already employed then we offer about 10% more than their current package.
  • Besides from pay and our location another major challenge we have in attracting and retaining good people is that most people would rather work for a large, high profile company than some small “unknown” NGO in a small “unknown” town. The big companies have a very good public image and appear to offer extensive career opportunities.

I asked for clarification on what he meant by “unemployed” (chronic or transitional?) and “professional positions.” He sent me a complete breakdown of what he knew about the previous employment of major staff. I am not sharing the breakdown (due to the presence of individual names and salary information), but here is a summary:

Employment situation prior to joining SEF:

  • Started as DF/admin (not “professional”): 64%
  • Previously employed: 11%
  • Unclear: 8%
  • Employed directly out of school: 8%
  • Volunteer: 5%
  • Chronically unemployed: 4%

The question of “diversion of skilled labor” is one that we haven’t even asked charities about, because we’ve felt it so unlikely that they would have substantive information to address it. SEF’s email suggests otherwise for at least one case.

A proposal for donors interested in causes/charities we haven’t covered

I was talking with a friend of mine recently about how he decides which charities to support, and he said:

I really like the GiveWell approach, but there are two reasons it’s not practical for me to base most of my charitable decisions on it. First, you just haven’t covered a lot of the areas I care about. I want to give to support food banks, but you haven’t covered food banks. Second, a lot of the time, I get requests from friends or solicitations from charities (referred by friends), and I need information on a specific charity — that’s not something GiveWell provides.

These two issues — GiveWell’s lack of breadth in its coverage of both causes and specific charities — are probably the most common objections donors raise when they consider using our research.

Here, I want to make a proposal that I think solves the problem for donors like my friend. If you agree with GiveWell’s philosophy about giving, do the following:

  • First, when a charity (or friend) solicits you to support their cause, list a set of important questions you’d need them to answer to give you confidence that their approach is working. This is the approach GiveWell generally takes. (For example, see our questions for surgery charities, water charities and microlending charities.)

    If you need help creating a list of questions, email us and we’ll send you our thoughts. If you have your own, send them to us so we can publish the questions donors are using and others can build on what’s already been created.

  • If they can answer your questions compellingly, and using specifics and facts rather than generalities and stories, great! Write them a check. (Unfortunately, this result has been unusual in my experience.)
  • If they can’t answer your questions, write a check to a donor-advised fund and tell them that when they can answer your questions, you’ll recommend a grant to them from your account.

Here’s an example of how this would work.

A charity approaches you and asks for a donation. Let’s say it’s a food bank. The charity says, “People are hungry. Giving to us will help provide poor individuals with the food they need to survive. And, our approach is to pick up food that’s going to be thrown out by local stores and restaurants, so your donation is leveraged and will help a lot of people.”

Instead of just writing a check, ask the charity the following (these are just a few questions that come to mind when thinking about this issue):

  1. Is using donations to pick up food the only program you run, or do you run other programs as well? What portion of your overall budget does each program account for?
  2. Who are the people that your food bank serves? What types of food needs do they have? (You may be surprised.)
  3. Is money, specifically, a bottleneck to providing more people more food? (This is part of the room-for-more-funding question that we think is essential to investigate.) That is, it seems plausible that the bottleneck to providing more food is the supply of “leftover food,” not funds.
  4. How much more food can you commit to provide if you receive another $100,000? $1 million?
  5. Is the food you’re providing safe? Healthy? What type of food do you provide? Have you ever needed to discard food because it had spoiled? What rules do you follow to make decisions to discard food? How does your organization’s senior management know that the food delivered is high quality?

The beauty of this approach is that (a) you force yourself to give charitably when asked — you’re not just ignoring charities or friends; (b) you help to create good incentives for charities by only rewarding those that can make a convincing case for strong results; (c) you’ll help us create a repository of questions to ask charities working on different causes; and (d) you’ll still get a tax deduction.

Cost-effectiveness estimates: Inside the sausage factory

We’ve long had mixed feelings about cost-effectiveness estimates of charitable programs, i.e., attempts to figure out “how much good is accomplished per dollar donated.”

The advantages of these estimates are obvious. If you can calculate that program A helps many more people than program B – with the same funds, and in the same terms – that creates a strong case (arguably even a moral imperative) for funding program A over program B. The problem is that by the time you get the impact of two different programs into comparable “per-dollar” terms, you’ve often made so many approximations, simplifications and assumptions that a comparison isn’t much more meaningful than a roll of the dice. In such cases, we believe there are almost always better ways to decide between charities.

This post focuses on the drawbacks of cost-effectiveness estimates. I’m going to go through the details of what we know about one of the best-known, most often-cited cost-effectiveness figures there is: the cost per disability-adjusted life-year (DALY) averted by deworming schoolchildren. The DALY is probably the single most widely cited and accepted “standardized” measure of social impact within the unusually quantifiable area of health.
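
The underlying arithmetic of any such figure is just a ratio; all of the difficulty (and everything this post questions) lies in estimating its two inputs. A minimal sketch, with made-up numbers rather than figures from any study:

```python
# Minimal sketch of the arithmetic behind a "cost per DALY averted" figure.
# The numbers below are invented for illustration and do not come from any study.

def cost_per_daly(total_cost_usd: float, dalys_averted: float) -> float:
    """Dollars spent per disability-adjusted life-year (DALY) averted."""
    return total_cost_usd / dalys_averted

# A hypothetical program spending $50,000 and averting 10,000 DALYs:
print(cost_per_daly(50_000, 10_000))  # 5.0 dollars per DALY
```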

Note that various versions of this figure:

  • Occupy the “top spot” in the Disease Control Priorities Report’s chart of “Cost-effectiveness of Interventions Related to Low-Burden Diseases” (see page 42 of the full report). (I’ll refer to this report as “DCP” for the rest of this post.)
  • Are featured in a policy briefcase by the Poverty Action Lab (which we are fans of), calling deworming a “best buy for education and health.”
  • Appear to be the primary factor in the decision by Giving What We Can (a group that promotes both more generous and more intelligent giving) to designate deworming-related interventions as its top priority (see the conclusion of its report on neglected tropical diseases), and charities focused on these interventions as its two top-tier charities.

I don’t feel that all the above uses of this figure are necessarily inappropriate (details in the conclusion of this post). But I do feel that they make it worth inspecting the figure closely and being aware of the following issues.

  1. The estimate is likely based on successful, thoroughly observed programs and may not be representative of what one would expect from an “average” deworming program.
  2. The estimate appears to rely on an assumption of continued successful treatment over time, an assumption which could easily be problematic in certain cases.
  3. A major input into the estimate is the prevalence of worm infections. In general, prevalence data is itself the product of yet more estimations and approximations.
  4. Many factors in cost-effectiveness, positive and negative, appear to be ignored in the estimate simply because they cannot be quantified.
  5. Different estimates of the same program’s cost-effectiveness appear to strongly contradict each other.

Details follow.

Issue 1: the estimate is likely based on successful, thoroughly observed programs.

The Poverty Action Lab estimate of $5 per DALY is based on a 2003 study by Miguel and Kremer of a randomized controlled trial in Kenya. As the subject of an unusually rigorous evaluation, this program likely had an unusual amount of scrutiny throughout (and may also have been picked in the first place partly for its likelihood of succeeding). In addition, this program was carried out by a partnership between the Kenyan government and a nonprofit, ICS (pg 165), that has figured prominently in numerous past evaluations (for example, see this 2003 review of rigorous studies on education interventions).

In this sense, it seems reasonable to view its results as “high-end/optimistic” rather than “representative of what one would expect on average from a large-scale government rollout.”

Note also that the program included a significant educational component (169). The quality of hygiene education, in particular, might be much higher in a closely supervised experiment than in a large-scale rollout.

It is less clear whether the same issue applies to the DCP estimate, because the details and sources for the estimate are not disclosed (see box on page 476). However,

  • The other studies referenced throughout the chapter appear to be additional “micro-level” evaluations – i.e., carefully controlled and studied programs – as opposed to large-scale government-operated programs.
  • The DCP’s cost-effectiveness estimate for combination deworming (the program most closely resembling the program discussed in Miguel & Kremer) is very close to the Miguel & Kremer estimate of $5 per DALY. (There is some ambiguity on this point – more on this under Issue 5 below.)

Issue 2: the estimate appears to rely on an assumption of continued successful treatment over time, an assumption which could easily be problematic in certain cases.

Miguel & Kremer states:

single-dose oral therapies can kill the worms, reducing … infections by 99 percent … Reinfection is rapid, however, with worm burden often returning to eighty percent or more of its original level within a year … and hence geohelminth drugs must be taken every six months and schistosomiasis drugs must be taken annually. (pg 161)

Miguel & Kremer emphasizes the importance of externalities (i.e., the fact that eliminating some infections slows the overall transmission rate) in cost-effectiveness (pg 204), and it therefore seems important to ask whether the “$5 per DALY” estimate is made (a) assuming that periodic treatment will be sustained over time or (b) assuming that it won’t be.

Miguel & Kremer doesn’t explicitly spell out the answer, but it seems fairly clear that (a) is in fact the assumption. The study states that the program averted 649 DALYs (pg 204) over two years (pg 165), of which 99% could be attributed to aversion of moderate-to-heavy schistosomiasis infections (pg 204). Such infections have a disability weight of 0.006 per year, so this is presumably equivalent to averting over 100,000 years ((649*99%)/0.006) of schistosomiasis infection – even though well under 10,000 children were even loosely included in the project (including control groups and including pupils near but not included in the program – see pg 167). Even if a higher-than-standard disability weight was used, it seems fairly clear that many years of “averted infection” were assumed per child.
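
To make the arithmetic in the preceding paragraph explicit, here is a rough back-of-the-envelope sketch using only the figures quoted above; this is my illustration, not the study’s own derivation.

```python
# Back-of-the-envelope check of the person-years of infection implied by the
# Miguel & Kremer figures quoted above (illustration only).

dalys_averted = 649            # total DALYs the study reports averting over two years
schisto_share = 0.99           # share attributed to moderate-to-heavy schistosomiasis
disability_weight = 0.006      # DALYs per person-year of moderate-to-heavy infection

infection_years_averted = dalys_averted * schisto_share / disability_weight
print(round(infection_years_averted))   # ~107,000 person-years of averted infection

# Well under 10,000 children were even loosely included in the project, so the
# implied years of averted infection per child is at least this:
print(infection_years_averted / 10_000)  # ~10.7 years per child, and more in practice
```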

In my view, this is the right assumption to make in creating the cost-effectiveness estimate … as long as the estimate is used appropriately, i.e., as an estimate of how cost-effective a deworming program would be if carried out in a near-ideal way, including a sustained commitment over time.

However, it must be noted that sustaining a program over time is far from a given, especially for organizations hoping for substantial and increasing government buy-in over time. As we will discuss in a future post, one of the major deworming organizations appears to have aimed to pass its activities to the government, with unclear/possibly mixed results. And as we have discussed before, there are vivid examples of excellent, demonstrably effective projects failing to achieve sustainability in the past.

Does the DCP’s version of the estimate make a similar assumption? Again, we do not have the details of the estimate, but the DCP chapter – like the Miguel & Kremer paper – stresses the importance of “Regular chemotherapy at regular intervals” (pg 472).

One more concern along these lines: even if a program is sustained over time, there may be “diminished efficacy with frequent and repeated use … possibly because of anthelmintic resistance” (pg 472).

Extrapolation from a short-term trial to long-term effects is probably necessary to produce an estimate, but it further increases the uncertainty.

Issue 3: cost-effectiveness appears to rely on disease incidence/prevalence data that itself is the product of yet more estimations and approximations.

The Miguel & Kremer study took place in an area with extremely high rates of infection: 80% prevalence of schistosomiasis (where schistosomiasis treatment was applied), and 40-80% prevalence of three other infections (see pg 168). The DCP emphasizes the importance of carrying out the intervention in high-prevalence areas (for example, see the box on page 476). Presumably, for maximum cost-effectiveness, the program should be carried out in the highest-prevalence areas possible.

The problem is that prevalence data may not be easy to come by. The Global Burden of Disease report describes using a variety of elaborate methods to estimate prevalence, including “environmental data derived from satellite remote sensing” as well as mathematical modeling (see pg 80). Though I don’t have a source for this statement, I recall either a conversation or a paper making a fairly strong case that data on neglected tropical diseases is particularly spotty and unreliable, likely because it is harder to measure morbidity than mortality (the latter can be collected from death records; the former requires more involved examinations and/or judgment calls and/or estimates).

Issue 4: many factors in cost-effectiveness appear to be ignored in the estimate simply because they cannot be quantified.

Both positive and negative factors have likely been ignored in the estimate, including:

  • Possible negative health effects of the deworming drugs themselves (DCP pg 479). (Negative impact on cost-effectiveness)
  • Possible development of resistance to the drugs, and thus diminishing efficacy, over time (mentioned above). (Negative impact on cost-effectiveness)
  • Possible interactions between worm infections and other diseases including HIV/AIDS (DCP pg 479), which may increase the cost-effectiveness of deworming. (Positive impact on cost-effectiveness)
  • The question of whether improving some people’s health leads them to contribute back to their families, communities, etc. and improve others’ lives. This question applies to any health intervention, but not necessarily to the same degree, since different programs affect different types of people. From what I’ve seen, there is very little available basis for estimating such differences.

Issue 5: different estimates of the same program’s cost-effectiveness appear to strongly contradict each other.

The DCP’s summary of cost-effectiveness alone (box on pg 476) raises considerable confusion:

the cost per DALY averted is estimated at US $3.41 for STH infections [the type of infection treated with albendazole] … The estimate of cost per DALY is higher for schistosomiasis relative to STH infections because of higher drug costs and lower disability weights … the cost per DALY averted ranges from US$3.36 to US$6.92. However, in combination, treatment with both albendazole and PZQ proves to be extremely cost-effective, in the range of US$8 to US$19 per DALY averted.

The language seems to strongly imply that the combination program is more effective than treating schistosomiasis alone, but the numbers given imply the opposite. Our guess is actually that the numbers are inadvertently switched. For someone taking the numbers literally, the expected “cost-effectiveness” of a donation could be off by a factor of 2-5, depending on this question of copy editing.

Comparing this statement with the Miguel & Kremer study adds more confusion. The DCP estimates albendazole-only treatment at $3.41 per DALY, which appears to be better than (or at least at the better end of the range for) the combination program. However, Miguel & Kremer estimates that albendazole-only treatment is far less effective than the combination program, at $280 per DALY (pg 204).
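
Collecting the quoted figures side by side makes the tension easier to see. The dollar amounts below are as cited from the DCP box and Miguel & Kremer; the ratios are simply computed from them.

```python
# Cost-per-DALY figures quoted above, in USD per DALY averted (low, high).
estimates = {
    "DCP: STH (albendazole) only":           (3.41, 3.41),
    "DCP: schistosomiasis only":             (3.36, 6.92),
    "DCP: combination (albendazole + PZQ)":  (8.00, 19.00),
    "Miguel & Kremer: albendazole only":     (280.00, 280.00),
    "Miguel & Kremer: combination":          (5.00, 5.00),
}

for label, (low, high) in estimates.items():
    rng = f"${low:.2f}" if low == high else f"${low:.2f}-${high:.2f}"
    print(f"{label}: {rng} per DALY averted")

# If the DCP's schistosomiasis-only and combination figures were inadvertently
# switched, a literal reading would be off by roughly the factor of 2-5 noted above:
print(8.00 / 3.36, 19.00 / 3.36)    # ~2.4 and ~5.7

# And the two albendazole-only estimates differ by a factor of roughly 80:
print(280.00 / 3.41)                # ~82
```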

Perhaps the DCP envisions albendazole treatment carried out in a different way or in a different type of environment. But given that the Miguel & Kremer study appears to be examining a fairly suitable environment for albendazole-only treatment (see above comments about high infection prevalence and strong program execution), this would indicate that cost-effectiveness is extremely sensitive to subtle changes in the environment or execution.

Bottom line

There is a lot of uncertainty in this estimate, and this uncertainty isn’t necessarily “symmetrical.” Estimates of different programs’ cost-effectiveness, in fact, could be colored by very different degrees of optimistic assumptions.

Despite all of the above issues, I don’t find the cost-effectiveness estimate discussed here to be meaningless or useless.

Researchers’ best guesses put the cost-effectiveness of deworming in the same ballpark as that of other high-priority interventions such as vaccines, tuberculosis treatment, etc. (I do note that many of these appear to have more robust evidence bases behind their cost-effectiveness – for example, estimated effects of large-scale government programs are sometimes available, giving an extra degree of context.)

I think it is appropriate to say that available evidence suggests that deworming can be as cost-effective as any other health intervention.

I think it is appropriate to call deworming a “best buy,” as the Poverty Action Lab does.

I do not think it is appropriate to conclude that deworming is more cost-effective than vaccinations, tuberculosis treatment, etc. I think it is especially inappropriate to conclude that deworming is several times more cost-effective than vaccinations, tuberculosis treatment, etc.

Most of all, I do not think it is appropriate to expect results in line with this estimate just because you donate to a deworming charity. I believe cost-effectiveness estimates usually represent “what you can achieve if the program goes well” more than they represent “what a program will achieve on average.”

In my view, the greatest factor behind the realized cost-effectiveness of a program is the specifics of who carries it out and how.

Thoughts from my visits to Small Enterprise Foundation (South Africa) and VillageReach (Mozambique), part III

Continued from Part I and Part II, these are my thoughts from my recent visit to two of our top charities in Africa.

Some of what I saw and discussed prompted me to rethink our frameworks for evaluating certain kinds of programs:

  • Vaccinations. We’ve taken the “vaccination coverage rate” as a reasonable proxy for lives changed, since the evidence base for vaccines is so strong. But of course, “vaccination coverage rate” describes how many children received vaccines, not how many received functional and correctly administered vaccines. I was somewhat concerned that VillageReach staff found several vaccines in refrigerators that had “gone bad,” and I was glad to hear that VillageReach is considering adding an indicator to its information system to track how often this happens. The strong macro-level track record of vaccines (causing major drops in mortality at the country level, not just in carefully controlled trials) is some comfort here.
  • Microfinance. We’ve been concerned about the possibility that clients are taking out loans against their own best interests, and have largely pictured “coercive” versions of this problem: loan officers pressuring clients to borrow more than they should, clients getting themselves into debt cycles, etc. A very interesting anecdote from a staffer raised a more subtle version of this concern: clients may be losing money on their loans without knowing it. The anecdote given was about a particular woman who was literally selling goods for the same price she had bought them for, making the problem obvious. It could, however, be a much more subtle problem for other clients – given high interest rates, potentially transportation costs, etc., it could take quite a bit of calculation and careful accounting even to know whether the business one is running with a loan is in fact operating at a profit or a loss. (And since many families may have several sources of income, a loss might not be noticed if accounting isn’t careful.)
  • Cash transfers. Our position has been that cash transfers can be assumed to be doing some good if they are successfully targeted to poor people in an area, something that may be difficult. It struck me that in certain areas (such as the village I saw with VillageReach), poverty targeting may not be much of a challenge at all (since everyone anywhere near the area is extremely poor); on the other hand, in these kinds of areas gifts of cash or livestock may be of very limited use (note the missionaries’ claim that village people receiving pensions for military service were largely spending them on alcohol).
  • Social business. I was impressed that I constantly saw Vidagas canisters throughout my trip – in hotels, stores, even the missionaries’ truck. (Vidagas is a “social business” started by VillageReach; it delivers gas, and was created to address the challenge of consistently powering refrigerators to keep vaccines at the appropriate temperature.) Our position on social business has been that such a business should not be considered a success until it has demonstrated either an actual profit (not just sales covering unit costs) or demonstrable social impact along the lines of what we look for from nonprofits. On reflection, I think that in certain cases there is room for more middle ground here. There are certain areas where the mere fact of selling something for a non-trivial price would seem to indicate a certain success in filling a need, even if not all costs are covered. Of course, it all depends on the area – subsidized sales may make a lot of sense where infrastructure and access to markets is poor, but in urban areas they could serve simply to “crowd out” private supply and/or enrich middlemen.

    I don’t regret our skepticism of social business to date. It has always been more important to us to avoid “false positives” (i.e., recommendations of organizations that are not impactful) than to avoid “false negatives” (i.e., failures to recommend organizations that are impactful). And I have not seen any “social enterprise investment” fund put together the case I’d need to see, even using the “middle ground” roughly sketched out above. But I do want to keep thinking about how to recognize the good social businesses may be accomplishing without being overly credulous.

These visits very much made the activities of our top charities feel more “real” to me.

To this point, the work we’ve done on international aid has felt very abstract. That’s not a reason not to act/give based on it, but in many ways the situations we’re analyzing are so different from what I see every day that it can be hard to believe that the charities are helping real people in the way our analysis suggests they are.

Much of what I saw on the trip was, in fact, consistent with what I expected. To a large degree, it made the research “come to life.” I saw people and areas that really are at a level of poverty that I’ve never seen in the U.S.; I talked to staff about the details of what they’re doing, and to some degree saw them doing it; and I felt, very tangibly, how the work they’re doing can make a difference.

(As an aside, I’ve had the opposite experience with site visits to U.S. charities. I’m not sure why. The U.S. visits were definitely more “staged” while the international visits had a lot of wandering and improvisation; in addition, the U.S. charities tend to address less tangible problems, and it was often hard to connect the charities’ theories of their own value-added with what I was seeing.)

It was frustrating to say “no” to kids rubbing their stomachs and asking for money, and to see so many people who seem like they could benefit greatly from things that are pretty basic – though not necessarily easy to deliver. The bottom line is that while I’ve pushed to make my actions consistent with my beliefs, my beliefs about the importance of international aid carry a little more emotional weight now, and I feel more emotionally motivated to give and to give well. I would recommend a similar trip to anyone who intellectually accepts the importance of international aid, but is having trouble getting behind it emotionally.

Thoughts from my visits to Small Enterprise Foundation (South Africa) and VillageReach (Mozambique), part II

Continued from Part I, these are my thoughts from my recent visit to two of our top charities in Africa.

Diverting skilled labor looks like a real concern.

The COO of SEF stressed that one of SEF’s biggest challenges is human resources (i.e., continually finding good people to staff it). I can easily see how this would be. As I mentioned in Part I, I found that the nonprofits I visited were employing capable, impressive people with a combination of local background and well-above-average educational credentials and command of English.

On one hand, seeing these staffers made me feel good about the organizations we were recommending. At the same time, it highlighted one of the most universal and hardest to evaluate concerns we have about nonprofit work: diversion of skilled labor from other potentially productive pursuits.

Adding to this concern was a general impression I got (reinforced by Leah from VillageReach) that nonprofit jobs are among the best-paying and most prestigious jobs for African locals. It looks like we have a situation where:

  • Many of the people hired by nonprofits could also be potentially very helpful to their communities if they were doing for-profit work.
  • They work instead for nonprofits, partly because nonprofits are out-bidding the for-profits for their services.
  • Within a for-profit framework, there is often (not always and never perfectly) a connection between the value of a job and the salary, which creates an (imperfect) tendency for talented people to end up in roles where they can do more good.
  • I have no sense of how (or whether) nonprofits are attempting to calibrate salaries and value, and I fear that they could be “overpaying for” (and thus misusing) local talent simply because they want the best people available and they have the donor-supplied funds to get them.

More on this idea in a future post. Though we have no great methods for quantifying the losses from “diversion of labor,” we do believe that this concern reinforces the importance of demanding that nonprofits be accomplishing as much good as possible and not merely some good.

Getting basic info about people’s standard of living seemed fairly straightforward.

I understand that estimating people’s incomes can be a very complex endeavor, but in the areas I visited, it seemed possible to get a sense very quickly for how “poor” one area was relative to another. I asked basic questions at the village level: where the nearest water source was, who was responsible for maintaining it, where the nearest school was, what the school fees were, etc. I walked around and observed how many of the dwellings were made of mud vs. concrete. And when talking to individual clients, I asked straightforward questions like “Do you have a TV?”, “Do you have electricity?”, “What do you eat?” and “When was the last time you had a fever and what did you do about it?” Answers were fairly consistent in a given area, but varied dramatically across charities (more below).

Throughout our investigations into international aid, I’ve been frustrated by the fact that most charities seem either unable or unwilling to produce data on clients’ standards of living. Because I don’t tend to trust stylized stories, and I haven’t had what I consider credible data on standards of living, I’ve constantly felt very unclear on who is being helped and how. I now find it less likely that this problem stems from prohibitive costs of data collection; I find it more likely that it stems from (a) the fact that donors rarely (if ever) ask for data on clients’ standards of living; (b) the possibility that some charities may not want to reveal that their clients are anyone but the “poorest of the poor” (even when their clients are still quite poor).

The three areas I visited were very different in terms of standards of living.

  • Small Enterprise Foundation (SEF) clients: I visited two villages, one in the Microcredit program (SEF’s original program) and one in the Tšhomišano Credit Program (targeted more directly at the poorer people in a village). In both villages, at least half the buildings I saw were made of concrete, and everyone I spoke to reported convenient access to running water, electricity, a fairly well-stocked local market, and public transportation to larger cities. Living spaces appeared fairly cramped (they were larger than in the other areas I visited, but when I asked who slept where it quickly became clear that there wasn’t much space per person); clients reported eating meat “only when they could afford it.”
  • VillageReach clients: infrastructure was much, much worse in these areas. The town of Macomia, where we spent the night, had no running water and no electricity except for generators; it took hours to reach (in a truck) from Pemba, which I believe was the closest area with reliable electricity and running water. The one village we visited took over an hour (of alert driving on very bad roads) to reach from Macomia, and the only concrete structures I saw there were the health center, a closed shop, and the school. I was told that other nearby villages were even harder to reach (in some cases impossible in a truck) and that access to water was a major problem. In terms of both standard of living and life opportunities, these areas appeared fundamentally worse than SEF areas.
  • Soweto: I took a quick tour through a poor area of Soweto (urban). It was generally filthy (literally strewn with trash) and extremely crowded, with tiny steel shacks next to each other. It seemed to me like a much more unpleasant place to live than either of the other two areas, although on the flip side, people in Soweto appeared to have access to public transportation, electricity, good schools, etc. as they were very close to much wealthier residences.

One of the reasons Small Enterprise Foundation stood out to us is that it appears more diligent about targeting the poor than other organizations. Even so, its clients – while poor – appear to be substantially better off (in fundamental infrastructure-related ways, not ways that can be attributed to program effects) than VillageReach’s clients. This doesn’t make me less supportive of SEF (it’s largely consistent with my existing suspicion that microfinance clients are rarely if ever the poorest of the poor), but it’s an important thing to keep in mind that I feel better informed about now than before.

Are you looking to help people in the worst situation, and with the most basic needs, possible? Or are you interested in helping people who are better off to begin with, in the hopes that a little assistance might go a longer way with them? To me there’s no clear right answer, but it’s a decision donors are likely making constantly without knowing it.

More thoughts coming in Part III.