The GiveWell Blog

LAPO: Case study on due diligence by microfinance funders

Updated 10/19/10 to reflect new information, submitted by a Grameen Foundation representative, regarding encouraging developments on LAPO since mid-year. To be clear, we stand by the main message of this post, which is not about LAPO’s current situation but about its funders’ and partners’ behavior over the last several years, prior to the public controversy that occurred in late 2009 and early 2010.

We’ve recently released research aiming to identify microfinance institutions (MFIs) with a strong focus on social impact. We have chosen to focus on finding individual MFIs largely because of our concerns about large microfinance-funding charities – specifically, that their due diligence seems focused on financial performance to the exclusion of social impact – i.e., on scale and revenue rather than effects on borrowers’ lives.

A controversy from earlier this year, over a Nigerian MFI called Lift Above Poverty Organization (LAPO), provides a good example of what we’re concerned about. LAPO has been funded and celebrated by many of the big names in microfinance, yet for years there have been many causes for concern about its social (as opposed to financial) performance. From what we’ve seen, it is not clear that these concerns have been on the radar screen of LAPO’s funders and partners.

LAPO’s funders/partners

LAPO’s funders/partners have included:

Controversy and reaction

In August 2009, MicroRate was the first to hint at concerns about LAPO, stating in a press release: “MicroRate notes that the integrity of the information provided to it by LAPO, as well as LAPO’s financial disclosures since the rating, have come into question. As a result, MicroRate’s rating of LAPO is no longer valid.” (MicroRate 2009)

In December 2009, Planet Rating released a “C+” rating report for LAPO that raised substantial concerns about LAPO’s legal licensing, governance, and data integrity, as well as noting an effective annual interest rate in excess of 100%. (Details below.)

In April 2010, The New York Times published an article citing the Planet Rating report on LAPO’s licensing issues and interest rates, while also noting the expiration of the MicroRate rating. Within weeks of the article’s running, both Kiva and MicroPlace had suspended their relationships with LAPO. (See Kiva’s page on LAPO, which discusses the suspension – Kiva’s loans through LAPO appear to have ended in early May – and MicroPlace’s discussion.) However, the Schwab Foundation award came after the article.

Update 10/19/10: between May 2010 and the present, several encouraging changes at LAPO are reported by a Grameen Foundation representative (via this comment and a followup email on the specifics of dates):

  • May 2010: LAPO “hired Deloitte and Touche to audit its 2010 financials and review the audit that was conducted in 2009.”
  • June 2010: LAPO “received its license from the Nigerian Central Bank and also hired a new Chief Financial Officer, with extensive experience in microfinance management, through the UNDP Africa Management Services Company (AMSCO).”
  • June-September 2010: LAPO “reconstituted its Board of Directors, which now comprises seasoned microfinance, banking and economics professionals from Nigeria and Benin.”
  • October 2010: LAPO “retained the services of consulting firm MicroFinance Transparency (headed by noted expert Chuck Waterfield) to review its interest rates and related policies.”

Below we discuss some of these concerns in detail, and why we feel they bear on the due diligence done by LAPO’s funders/partners.

Forced savings and savings without the appropriate license

These issues were a major focus of the Times article, which stated:

    LAPO, considered the leading microfinance institution in Nigeria, engages in a contentious industry practice sometimes referred to as “forced savings.” Under it, the lender keeps a portion of the loan. Proponents argue that it helps the poor learn to save, while critics call it exploitation since borrowers do not get the entire amount up front but pay interest on the full loan.
    LAPO collected these so-called savings from its borrowers without a legal permit to do so, according to a Planet Rating report. “It was known to everybody that they did not have the right license,” Ms. Javoy said.

It appears to us that LAPO has been putting off getting the appropriate license for several years and that its funders have not held it accountable in this regard (though to be clear, it seems possible to us that this failure did not literally constitute breaking the law – we are not sure based on the information we have).

  • The 2005 MicroRate report stated that LAPO was planning to (and should be planning to) become licensed as a Microfinance Bank: “by law, [Community banks] will have to transform into a Microfinance Bank (‘MFB’) by December 2007 … As yet there is no deadline for the transformation of NGOs. However pressure from the central bank is expected and LAPO will have to transform sooner rather than later … The MFI is fully committed to doing so and plans are in place to convert into a private company by the proposed deadline” (page 3). Thus, at this point LAPO appears to have been targeting December 2007 for obtaining its license.
  • A letter from the Calvert Foundation concerning its investment in LAPO says the license is hoped for by year-end 2009: “we have been working with the Creditor Taskforce to encourage the transformation of LAPO into a depository institution regulated by the Central Bank as soon as possible. LAPO has received initial approval by the Central Bank for their application to transform into a ‘Microfinance Bank.’ Their goal is to secure the banking license by year-end 2009.”
  • The Planet Rating report, in December 2009, is clear that LAPO still did not have the license at that time, and that it planned to get one in January 2010, a plan that Planet Rating did not find realistic (Planet Rating 2009, Pg 7). Planet Rating stated that “LAPO does not have the appropriate legal structure to … disburse credit or collect savings … Although illegal, this has been so far tolerated by the [Central Bank of Nigeria]” (Pg 7).
  • LAPO still apparently did not have the license as of Kiva’s April 2010 update. This is the most recent discussion we can find of this issue.

High interest rates

The other point emphasized in the Times article is the high rates of interest charged by LAPO, which seem to contradict the stated goals of some of its partners:

      Under outside pressure, LAPO announced in 2009 that it was decreasing its monthly interest rate, Planet Rating noted, but at the same time compulsory savings were quietly raised to 20 percent of the loan from 10 percent. So, the effective interest rate for some clients actually leapt to nearly 126 percent annually from 114 percent, the report said. The average for all LAPO clients was nearly 74 percent in interest and fees, the report found.

    Until recently, Microplace, which is part of eBay, was promoting LAPO to individual investors, even though the Web site says the lenders it features have interest rates between 18 and 60 percent, considerably less than what LAPO customers typically pay.

    At Kiva, which promises on its Web site that it “will not partner with an organization that charges exorbitant interest rates,” the interest rate and fees for LAPO was recently advertised as 57 percent, the average rate from 2007. After The Times called to inquire, Kiva changed it to 83 percent.

We don’t have much to add on this point. The Planet Rating report specified an effective annual interest rate of 123.9% (Planet Rating 2009, Pg 6).

We have argued against reading too much into high interest rates, but funders and partners ought to be clear on what these rates are and whether the rates are consistent with their own values. We feel it is very important that anyone funding or partnering with an MFI do the full due diligence required to understand the true effective interest rate, from the beginning of the relationship.
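To make the mechanics concrete, here is a small sketch of why quoted rates can understate what borrowers actually pay. All loan terms below are hypothetical, chosen only for illustration – they are not LAPO’s actual terms. The sketch combines two practices mentioned above: flat interest charged on the full principal, and compulsory savings withheld from the disbursement, and computes the effective annual rate as the internal rate of return on the borrower’s actual cash flows.

```python
# Illustrative only: hypothetical loan terms, not LAPO's actual ones.
# Shows how flat interest plus compulsory ("forced") savings pushes the
# effective annual rate far above the quoted nominal rate.

def effective_annual_rate(principal, flat_monthly_rate, months, forced_savings_pct):
    """Effective annual rate via the internal rate of return (IRR) on the
    borrower's cash flows, found by bisection on the monthly rate."""
    disbursed = principal * (1 - forced_savings_pct)   # borrower receives less up front
    payment = principal / months + principal * flat_monthly_rate  # flat interest on FULL principal
    cash_flows = [disbursed] + [-payment] * months
    cash_flows[-1] += principal * forced_savings_pct   # savings returned at end of term

    def npv(rate):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

    lo, hi = 0.0, 1.0  # npv is negative at 0 and positive at 100%/month
    for _ in range(100):
        mid = (lo + hi) / 2
        if npv(mid) < 0:
            lo = mid
        else:
            hi = mid
    monthly_irr = (lo + hi) / 2
    return (1 + monthly_irr) ** 12 - 1

# Hypothetical terms: 3% flat monthly interest, 8-month loan.
nominal = 0.03 * 12                                    # the "36% a year" a quote might suggest
with_10 = effective_annual_rate(1000, 0.03, 8, 0.10)   # 10% compulsory savings
with_20 = effective_annual_rate(1000, 0.03, 8, 0.20)   # 20% compulsory savings
print(f"nominal {nominal:.0%}, effective: {with_10:.0%} at 10% savings, {with_20:.0%} at 20%")
```

With these invented numbers, the effective rate lands well above the nominal 36%, and raising compulsory savings from 10% to 20% of the loan pushes it above 100% annually – the same direction of effect the Planet Rating report describes.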

The two issues raised by the Times article – concern over LAPO’s license and over its interest rates – are both valid, and both could easily have been identified years before the controversy arose.

Other concerns

We note other concerns about LAPO’s impact not mentioned in the Times article:

  • Integrity of governance and audits may be compromised by family relationships and other issues. The 2009 Planet Rating report states:
    • “Although the Memorandum of Association states that BOD [Board of Directors] members are to be renewed every two years (three years for the chairman), all BOD members have been in the BOD for at least four years” (Pg 4)
    • “One of the [Board of Directors] members is related to the external auditor, creating a risk of lack of transparency. Family relations within the management team create another conflict of interests that have not yet been mitigated by appropriate policies” (Pg 7)
    • “External auditors are not sufficiently independent and do not have enough knowledge on the risks specific to microfinance” (Pg 10)
  • Data is unreliable.
    • “Loan tracking and accounting systems are not integrated and the system is prone to error” (MicroRate 2005, Pg 2)
    • The Planet Rating report stated that information management left room for mistakes and manipulation (Pg 8-9), and that “A sample of six branches by Planet Rating resulted in inconsistencies of up to a 6% difference in the amounts of PAR, arrears and number of clients (as of September 2009)” (Footnote 22). The report warned that “Due to insufficient data reliability, Planet Rating’s opinion on LAPO’s credit risk and credit risk coverage is subject to reserves” (Pg 11).
  • LAPO may lack the tools to assess, and create incentives based on, its social as opposed to financial performance. The Planet Rating report states:

      Group discipline is generally sufficiently ensured. However, for Regular Loans, the evaluation of the borrower’s capacity is not always complete and the actual use of the loan rarely formally monitored. Moreover, LAPO has not defined clear rules for the use of identification papers, which will be necessary to prevent multiple lending as the microfinance market matures and given the multiplication of MFBs … Moreover, the incentive system for Credit Officers mostly relies on their caseload, which creates a risk of excessive disbursements at the expense of portfolio quality. (Page 11)

  • High dropout rates. This is the issue that most worried us when we expressed concern about LAPO in December 2009. We cited its 49% dropout rate; as early as 2005, MicroRate stated, “client attrition remains unacceptably high at around 27%” (MicroRate 2005, Pg 5).

Bottom line

We aren’t sure whether, or to what extent:

  • LAPO’s funders/partners have been largely unaware of, or indifferent to, the concerns raised above (in some cases, possibly due to prioritizing financial over social returns).
  • LAPO’s funders/partners have been aware and concerned, but have had other, positive information on LAPO’s social impact that they have felt outweighs the concerns.
  • LAPO’s funders/partners have been aware and concerned, but have made a strategic decision to prioritize building sustainable, profitable financial institutions over focusing directly on social impact.

We feel there is at least some evidence for the first possibility. Two partnerships were suspended in the immediate wake of the Times article, whose major concerns could easily have been identified years earlier; and the only public record of due diligence we’re aware of, USAID’s discussion from 2007 (see page 5), discusses only financial/“efficiency” indicators, with no mention of concerns like those listed above.

The possibility that social performance is essentially being overlooked seems strong and worrisome enough to us that, for the time being, we are more comfortable with the idea of giving directly to MFIs that are clearly focused on their social performance. We are open to changing this view if and when major microfinance organizations become more open about what factors and concerns they are weighing and how they are conducting their due diligence.


More on charity ratings and GiveWell’s mission

Last week we wrote that we are likely going to stop giving zero-to-three-star ratings to charities. Some praised the decision, while others offered suggestions for how to resolve the problems we listed with ratings (we responded to the suggestions in the thread).

While we went into a lot of detail on the problems with ratings, I’ve realized that we didn’t talk enough about the benefits of ratings – and how those benefits don’t fit with GiveWell’s core audience and mission as we’re thinking about it now.

As we mentioned last week, we aren’t getting rid of our charity evaluations, or recommendations, or rankings of recommendations. We’re still going to disclose which charities we like best, and in what order. We’re still going to say what we think of each charity we’ve looked at, whether extensively or briefly.

What we’re getting rid of is the ability to get a quick, quantified rating of any charity we’ve looked at. In some ways that’s a big thing to lose; it’s arguably Charity Navigator’s greatest strength in attracting broad interest and attention. A lot of the people I’ve talked to about GiveWell aren’t particularly interested in a charity recommendation; they want to know “how good” the charity their friend is running a marathon for is, or the charity that sends them mail. As the Money for Good study puts it, they’re seeking “to validate their donation, not to choose between organizations” (main study, page 40). And sometimes, they also seem to be excited by the possibility of a scandal, i.e., revealing that a particular charity is “bad.”

GiveWell’s mission has never been about serving these people. We seek to drive money to outstanding charities, and in so doing, to change incentives and allow some charities to raise money by doing demonstrably great work (instead of just by telling a great story).

We feel that if all you want from a charity evaluator is to check whether the charity that contacted you is “bad or OK,” you’ve already thrown away most of the opportunity to do as much good as possible with your donation.

For these reasons, our work has always been a poor fit for what a lot of donors want, and there has always been a tension between fulfilling our mission directly (which means focusing obsessively on charities we find promising and ignoring other charities) and doing things that might attract more attention and broaden our audience (such as publishing the star ratings that many want, or “digging up dirt” on big-name charities).

We originally introduced star ratings because we thought they would broaden our reach. The reactions we’ve gotten, particularly regarding how we rate the vast majority of charities that share no substantive information (the toughest aspect of designing an appropriate ratings system), have implied to us that there is actually very little to gain in this way. People who come to us for validation on their existing choice of charity are usually going to see that we don’t find the charity promising. As a result, rather than become interested in the work we do, they’re generally going to be disappointed and even upset.

In many ways, we’re better off making it clear from the beginning that we don’t have what these people are looking for, and focusing exclusively on the donors who do fit what we’re about: coming to their donation decisions with a lack of pre-commitments and an intent to give to the best charity possible.

Why charity ratings don’t work (as of now)

For a little over a year, GiveWell has assigned zero- to three-star ratings to all charities we’ve examined. We’ve done so in response to constant requests from our fans and followers. We’ve been told that people want easily digested, unambiguous “bottom line” information that can help them make a decision in a hurry and with a clean conscience. We understand this argument. But right now, we feel that the costs of ratings outweigh the benefits, and we’re likely on the brink of getting rid of our ratings.

To be clear, we are not going to stop comparing, evaluating, and recommending charities. As we did for our first couple of years of existence, we will rank and promote a number of recommended charities, while sharing the reasons why we do not recommend other charities. What we are going to stop doing is boiling down our view of each charity examined into a single quantifiable data point. We’re going to go back to “bottom lines” that are qualified and sometimes difficult to interpret without reading further (for example, instead of “zero stars,” our bottom line will say something more like “Did not pass heuristics to qualify for further investigation”). We know we’ll be sacrificing the kind of simplicity that appeals to many, and we still think it’s worth it.

In trying to provide star ratings, we’ve run into fundamental questions that we don’t have good answers to:

  • Should we rate charities in an “absolute” sense (based on our confidence that they have positive impact) or in a “relative” sense (based on how they compare to other charities working on similar issues)?
  • How should we deal with charities that we feel do excellent work, but have limited or no room for more funding? Should we rate them above or below charities that do less excellent work but have more definite needs? Should our ratings reflect our opinion of organizations’ work or our opinion of whether undecided donors should give to them?
  • The vast majority of charities share no substantive information on their effectiveness, making it impossible to evaluate their effectiveness. Should such charities receive “no rating” (in which case we would rate very few charities, and may provide incentives for charities with low effectiveness to remain opaque) or our lowest rating (which creates considerable offense and confusion among those who feel we have judged their work ineffective)?

Each of these issues involves an ambiguity in what precisely star ratings mean, and we need ways to resolve the ambiguity in a very clear, easily digested, instantly understood way, or we lose the benefit we were hoping to gain from handing out ratings in the first place. At this point we cannot construct a system that accomplishes this.

We believe that these issues are unavoidable when assessing charities based on their impact. We believe that nobody else has yet run into these problems because nobody else has yet tried to rate charities based on the case for their impact, i.e., their effects on the people and communities they serve.

Problem 1: are ratings “absolute” or “relative to a cause?”

How does Doctors Without Borders rate? The answer depends partly on whether you’re looking at it as a global health organization or as a disaster relief organization. Compared to other global health organizations, its transparency and documented effectiveness do not seem top-notch (though they are better than average). Compared to other disaster relief organizations (based on our preliminary and subject-to-change impressions), it stands out.

An organization may be top-notch compared to other water organizations, while mediocre in terms of proven health impact. Our view of a charter school organization depends on whether we’re comparing it to other education groups or to U.S. equality of opportunity organizations of all kinds. The more one tries to accommodate wishes like fighting a specific disease or attacking a problem in a specific way – i.e., the more one explores and subdivides different causes – the more these difficult questions come up.

We have been rating each organization “relative to” the cause in which it seems to fit most intuitively. However, this is confusing for donors who don’t have strong cause-based preferences and take a broad view of charity as “helping people in general.” (Usually these are the donors who are a particularly good fit for what we provide.) Alternately, we could rate each organization using an “absolute” scale (taking the cause into account), but if we did this we’d rank even relatively mediocre international aid charities above the outstanding Nurse-Family Partnership, and that would create considerable confusion among people who didn’t agree with our (highly debatable) view on international vs. domestic aid.

In the end we don’t feel comfortable rating Nurse-Family Partnership higher than Small Enterprise Foundation … or lower … or the same. They’re too different; your decision on which to give to is going to come down to judgment calls and personal values.

It is possible for ratings systems to deal effectively with “apples and oranges” comparisons. Consumer websites (e.g., Amazon) provide ratings for products in many different categories; consumers generally seem to understand that the ratings capture something like “how the product performs relative to expectations,” and expect to supplement the ratings with their own thoughts about what sort of product and what features they want. However, in this domain I feel that consumers generally have a good feel for what different product categories and features consist of (for example, I know what to expect from a laser vs. inkjet printer, and don’t assume that this issue is captured in the rating). In the charity world, there is often just as little to go on regarding “what can be expected from an education charity?” as there is regarding “which education charity is best of the bunch?” So there is ambiguity regarding the extent to which a rating includes our view of the charity’s general cause.

While this problem isn’t a fatal one for charity ratings, it brings some complexity and confusion that is compounded by the issues below.

Problem 2: do ratings incorporate whether a group has room for more funding?

We’ve argued before that the issue of room for more funding is drastically underappreciated and under-discussed, and it creates major challenges for a ratings system.

The question is how to rate an organization such as Aravind Eye Care System, AMK or (arguably) Nurse-Family Partnership – an organization that we largely think is doing excellent work, but has limited room for more funding. On one hand, we need donors to know that their money may be more needed/productive elsewhere; giving a top-notch organization a top-notch rating does not communicate this. On the other hand, if we were to lower Nurse-Family Partnership’s rating, that would imply to many that we do not have as high an opinion of their work, and may even result in reduced support from existing donors, something we definitely don’t want to see happening.

Then there are organizations which we do not investigate, even though they are promising and pass our initial heuristics, because it comes out early in the process that they have no room for more funding. We therefore have no view of these organizations’ work, one way or the other; we simply know that they are not a good fit for the donors using our recommendations.

The ambiguity here is regarding whether ratings represent our view of an organization’s work or our view of it as a giving opportunity for new donors.

Problem 3: how should we rate the vast majority of charities that share no substantive information?

If a charity doesn’t collect, and share, substantive information on its effectiveness, there is no way of gauging its effectiveness. From what we’ve seen, the vast majority of charities do not both collect and share substantive information on their effectiveness. This gives us two unattractive options:

1. Give ratings only to charities that share enough information to make it possible to gauge their impact. If we did this, we would have a tiny set of rated charities, with all the rest (including some of the largest and least transparent charities such as UNICEF) marked as “Not rated.” Our lowest-rated charities would in fact be among the most transparent and accountable charities; we would effectively be punishing charities for sharing more information; people who wanted to know our view of UNICEF would wrongly conclude that we had none.

2. Give our lowest rating to any charity that shares no substantive information. This is the approach we have taken. This results in the vast majority of our ratings being “zero stars,” something that makes many donors and charities uncomfortable and leads to widespread confusion. Many people think that a “zero star” rating indicates that we have determined that a group is doing bad work, when in fact we simply don’t have the information to determine its effectiveness one way or the other. We have tried to reduce confusion by modeling our ratings on the zero-to-three-star Michelin ratings (where the default is zero stars and even a single star is a positive mark of distinction) rather than on the more common one-to-five-star system (where one star is a mark of shame), but confusion persists.

Bottom line

All of the above issues involve ambiguity in how our ratings should be read. Any of them might be resolvable, on its own, with some creative presentation. However, we have not been able to come up with any system for addressing all of these issues that remains simple and easily digestible, as ratings should be. So we prefer, at least for the time being, to sacrifice simplicity and digestibility in favor of clear communication. Rather than giving out star ratings, we will provide more complex and ambiguous bottom lines that link to our full reviews.

We understand that this is a debatable decision. We wish to identify outstanding charities and drive money to them; we wish to have a reputation for reliability and integrity among thoughtful donors; the goal of giving large numbers of people a bottom-line rating on the charity of their choice is less important to us. We know that other organizations may make the tradeoff differently and don’t feel it is wrong to do so.

Note that we haven’t finalized this decision. We welcome feedback on how to resolve the tensions above with a simple, easily understood ratings system.

Thoughts from Dharavi tour

Last week the staff of GiveWell went on a tour of the Dharavi slum, organized by Reality Tours. Consistent with the tour’s policy, we took no pictures, but here are some thoughts:

  • In some ways (and consistent with our understanding of relevant data), the standard of living seems below anything I’ve seen in the U.S., outside of being literally homeless (and not in a shelter). Many of the residences consist of a single 150-square-foot room, at the top of a narrow ladder, housing an entire family. The paths to the homes we saw are so narrow that we had to walk in single file. These residences (according to our guide) command rent of 1,500-2,000 INR (~$32-$43) per month, with a required deposit of 25,000-30,000 INR (~$545-$655), and do not actually house the poorest people in the slums; the poorest are the factory workers, who live in what seem like health-risk-prone conditions in the slum’s factories (plastic, textiles, etc.).
  • Despite this, the slum is said (again by our guide) to be something of a destination, and not just a last resort.
    • People come from far outside Bombay in order to work in the slum’s factories and send money home. With stable incomes of 100-200 INR ($2.15-$4.30, not adjusted for purchasing power parity) per day plus lodging, these people may be in the category of the global “middle class”.
    • Many of those living in the slums could easily afford to move out, but choose to stay for the community. The guide told us about a friend of his who had become an airline stewardess and still spent most of her time living in Dharavi despite owning a relatively expensive flat; he also told us that many of those living in slums work in call centers (working in a call center is considered a relatively desirable and high-paying job). We ran into one young man who reported having a bachelor’s degree in physics and a job at a call center, and spoke excellent English.
  • Living in Dharavi does seem like a much better situation than that of many people I see living in shelters (or no shelters) on the street. According to the guide, many of the homes are legally protected against demolition (if the government demolishes them it must provide compensation), and receive electricity and water.
  • One of the things we’re very interested in, but have not come across any data on, is what job opportunities look like in different parts of the world, i.e., how much one can hope to make with different qualifications/skills/connections. This question has strong consequences for what sort of education is helpful in different areas. Some notes on our guide’s responses to our queries:
    • The factory jobs in Dharavi are plentiful and require little other than a willingness/ability to do manual labor, which is why many come from outside Bombay for these jobs.
    • Some jobs in textiles (tanning leather; making clothing, paid by the garment) require more skill and pay upwards of double what the unskilled jobs pay.
    • The jobs that many people in this area hope to get are call center jobs, which pay relatively well and require a college degree. Still better-paying are accounting jobs, which require specific university-acquired training.
    • Overall the picture is very different from the picture I got on my trip to Africa, where nonprofit jobs seem to be seen as most promising and few/no options exist for those without the appropriate level of education for these jobs.
  • The guide mentioned that workers clean containers by dipping them in hot water, and have to be careful not to burn themselves. Natalie asked why they don’t wear gloves, and the guide responded (paraphrasing) “They are used to this way of working. You give them gloves and they stop using them after one day.”
  • Near the end of the tour, we visited a kindergarten run by Reality Gives, the sister nonprofit of Reality Tours. The children were participating in a spirited celebration of the Ganesh festival. There were 10 teachers present for 20 children; we were told that this was because we were in the transition from morning to afternoon classes, but even if there had only been half as many teachers present it still would have been far more than I’m accustomed to seeing in a kindergarten.

Nurse-Family Partnership and room for more funding

We are currently updating our review of Nurse-Family Partnership National Service Office (NFP NSO) (one of our top-rated charities). We did our main review of NFP NSO in 2008 and since then we have continued to develop our research process, and in particular our approach to assessing room for more funding, i.e., how much more money a charity can productively use. At this point we feel that NFP NSO has room for more funding only over the long term, and that potential donors should take this into account.

This conclusion is not final, but it seems like an observation worth sharing now. We plan to publish our updated review of NFP NSO, including our final take on its need for individuals’ donations, later this year.

Details: In 2007, NFP NSO launched a campaign to raise money so that NFP NSO could become, over a ten-year period, self-sustaining on the fees it collects from local NFP programs. In 2007, NFP NSO successfully got commitments of approximately $50 million for this purpose, the full amount it sought (see Annual Report 2007 (PDF), page 31, and our phone conversation with NFP NSO (DOC)).

Since then, NFP NSO has revised its cash flow projections, making the projections less optimistic in light of the weak economy. It has shared these cash flow projections for our eyes only. The projections anticipate that donations will be needed for several years to cover the gap between earned revenues (from local NFP programs) and expenses, and that it will take until 2021 to get to the point where earned revenues cover 98% of all expenses.

From these projections, it appears to us that existing commitments can sustain NFP NSO through 2015, at which point the organization will likely need more donations in order to continue operating. It also seems likely to us that any additional donations in the meantime will be essentially “held for a rainy day,” i.e., saved for the point at which they are needed to cover this gap. Because NFP NSO’s goal is to become self-sustaining on earned revenue, it seems unlikely that it would use more donations to directly increase the reach of its program (e.g., through providing its services to local NFP offices for free or reduced prices).

We feel that NFP NSO is an outstanding organization, with a stronger case for its effectiveness than any other organization we know of doing work on U.S. equality of opportunity. Therefore, we very much hope that it raises the funds that are necessary to continue operating, and in plenty of time. However, it seems important to note that its need for more funds – and ability to translate them into more outcomes – is fairly far off, when compared to that of (for example) VillageReach. (Note that we don’t mean to compare NFP NSO to VillageReach in terms of outcomes, or in general. We’re simply contrasting longer-term vs. shorter-term room for more funding.)

Note: NFP NSO reviewed and approved this post prior to publication.

Should I give out cash in Mumbai?

We mentioned before that we were planning a trip to Mumbai, India (also known as Bombay). At this point we have been here for a few weeks. We will be coming back to the U.S. between mid-November and mid-December.

From a GiveWell perspective, one of the things that is very different about being here vs. in the U.S. is that here we are in close proximity to extreme poverty. We have written before that we see promise in giving cash directly to the poor; here, more than in NYC, I could arguably carry out a mini “cash transfer” program on my own. The question is whether I should.

Below I lay out a few possible options. My interest is not in whether these options are better than giving nothing, but whether they are better than reserving the same funds for my annual donation to a GiveWell top-rated charity (last year I gave to Stop TB Partnership).

Option 1: give to the children who chase after me.

I pass people asking for spare change in NYC, but in Mumbai I am chased after by children, which is a very different (and more emotionally difficult) experience. It seems pretty clear that these children are legitimately poor, and I’m tempted to give to them.

However, I think this option is clearly inferior to Option 2 below.

  • These children, poor though they may be, are probably better off – and bringing in more money every day – than the children deep in the slums who are not venturing out to the nicest parts of town to chase after Westerners. (When we walk around in Churchgate, an upscale area, children run after us. When we walked along Juhu beach and ended up in a slum, people just asked us if we were lost, though I’d guess that they are at least as poor as the children we see daily.)
  • There is also an incentive problem: I’d rather minimize the degree to which my gifts turn begging into a profitable operation. It’s possible that parents are keeping their children out of school to beg, or even that the children are essentially “employed” by someone in far less need; I don’t want to contribute to that dynamic.

Option 2: walk deep into the slums and give out cash more or less at random (or to people who “look busy”).

This is the approach apparently favored by Tyler Cowen. It has the advantage that it seems more likely to reach the people most in need, and that it seems less likely to contribute to bad incentives.

I still find myself hesitating to do this, and the primary reason is that cash transfer programs are so rare among nonprofit organizations. (I believe a nonprofit, while not giving out cash “at random,” could still find designs that minimize the negative effect on incentives, such as requiring proof of both low income and employment and using an EITC-like scheme). We have in the past vigorously questioned the fact that nonprofits don’t tend to give out cash, and we think it’s possible that this has more to do with self-serving attitudes toward their own value than with a considered judgment that such programs are not promising. Still, in the end I think it’s more likely that there’s just something I’m missing.

Perhaps the risks of money being used on alcohol and similar purchases are too high. Perhaps the recipient of the cash will incite jealousy or even get robbed (see the comment by Tom Womack on Marginal Revolution’s post on the subject). Perhaps highly unpredictable cash transfers create another kind of bad incentive, encouraging people to focus on trying to manipulate their luck (for example, via superstition).

I’m ready to discuss, but not ready to execute on, an activity that I don’t see being carried out by anyone who clearly knows what they’re doing, has seen the effects up close over years, has seen unexpected consequences and learned how to deal with them, etc.

Option 3: give to local nonprofits.

This option is pretty far from the original idea of handing cash to the poor, but it’s the one that appeals to me most of the three. It seems that there are vast numbers of relatively small nonprofits here, focused on working directly and tangibly with a small group of people rather than on trying to run large-scale bureaucratic operations. Most of the people we’ve met have at least one such nonprofit they recommend, and the recommendations overlap, pointing to several nonprofits that I would bet pretty strongly are spending money responsibly and being as helpful as they know how to be with people they know fairly well. This seems to me to be a pretty reasonable alternative/equivalent to handing out cash.

My biggest concern with these organizations is room for more funding, an issue that has been raised even by the people recommending the organizations. The advantage of an organization’s staying small is that the people running the organization stay very directly connected to their work and its results; the disadvantage is that they aren’t built to scale, and it’s unclear how much good an outsider like myself can really do with an extra one-time donation.

What are your thoughts? If you were in my position, would you take any of these options, or just save the money for the annual gift?