The GiveWell Blog

More on the microfinance “repayment rate”

We are concerned about the way repayment rates are often reported. We’ve written about this issue before, arguing that different delinquency indicators can easily be misleading and pointing to one example we found in which a microfinance institution’s reported repayment rate substantially obscures the portion of its borrowers who have repaid their loans.

Following the links from David Roodman’s recent post about Richard Rosenberg, we found another paper by Mr. Rosenberg that makes all the same points, much better than we did. The paper is Richard Rosenberg, “Measuring microcredit delinquency: ratios can be harmful to your health,” CGAP Occasional Paper No. 3, 1999. Available online here (pdf).

Relevant quotes from Mr. Rosenberg’s paper

The importance of using the “right” delinquency measure:

MFIs use dozens of ratios to measure delinquency. Depending on which of them is being used, a “98 percent recovery rate” could describe a safe portfolio or one on the brink of meltdown. (Pg 1)

The measure we’ve been asking for seems to be equivalent to what he calls the “collection rate.”

Most of the discussion will be devoted to three broad types of delinquency indicators: (a) Collection rates measure amounts actually paid against amounts that have fallen due. (b) Arrears rates measure overdue amounts against total loan amounts. (c) Portfolio at risk rates measure the outstanding balance of loans that are not being paid on time against the outstanding balance of total loans. (Pg 2)
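To make these definitions concrete, here is a minimal sketch (in Python) of the three ratios computed over a toy set of loan records. The field names and figures are our own illustration, not Rosenberg’s, and the arrears-rate denominator is only one of several conventions in use:

    # Minimal sketch of Rosenberg's three indicator types.
    # All field names and figures are hypothetical, for illustration only.
    loans = [
        # amount_due: installments fallen due to date; amount_paid: collected so far;
        # balance: total unpaid balance; late: any payment currently overdue?
        {"amount_due": 100, "amount_paid": 100, "balance": 400, "late": False},
        {"amount_due": 100, "amount_paid": 60,  "balance": 450, "late": True},
        {"amount_due": 50,  "amount_paid": 50,  "balance": 500, "late": False},
    ]

    total_balance = sum(l["balance"] for l in loans)

    # (a) Collection rate: amounts actually paid / amounts that have fallen due.
    collection_rate = (sum(l["amount_paid"] for l in loans)
                       / sum(l["amount_due"] for l in loans))

    # (b) Arrears rate: overdue amounts / total loan amounts
    # (denominator conventions vary; here, total outstanding balance).
    arrears_rate = sum(l["amount_due"] - l["amount_paid"] for l in loans) / total_balance

    # (c) Portfolio at risk: entire unpaid balance of late loans / unpaid balance of all loans.
    par = sum(l["balance"] for l in loans if l["late"]) / total_balance

    print(f"collection rate: {collection_rate:.1%}")  # 84.0%
    print(f"arrears rate:    {arrears_rate:.1%}")     # 3.0%
    print(f"PAR:             {par:.1%}")              # 33.3%

The same toy portfolio yields an 84 percent collection rate, a 3 percent arrears rate, and a 33 percent PAR: exactly the kind of spread Mr. Rosenberg warns about.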

It’s essential to know not only which measure is being used, but precisely how an MFI calculates its version of the measure:

But the reader must be warned that there is no internationally consistent terminology for portfolio quality measures—for instance, what this paper calls a “collection rate” may be called a “recovery rate,” a “repayment rate,” or “loan recuperation” in other settings. No matter what name is used, the important point is that we can’t interpret what a measure is telling us unless we understand precisely the numerator and the denominator of the fraction. (Pg 2)

Mr. Rosenberg describes different tests to which MFIs should subject various delinquency measures to determine which is most appropriate. For GiveWell’s purposes, one of the key tests is the “smoke and mirrors” test:

Can the delinquency measure be made to look better through inappropriate rescheduling or refinancing of loans, or manipulation of accounting policies? This is our smoke and mirrors test. (Pg 3)

The practice of rescheduling and renegotiating loans:

When a borrower runs into repayment problems, an MFI will often renegotiate the loan, either rescheduling it (that is, stretching out its original payment terms) or refinancing it (that is, replacing it—even though the client hasn’t really repaid it—with a new loan to the same client). These practices complicate the process of using a collection rate to estimate an annual loan loss rate. Before exploring those complications and suggesting alternative solutions for dealing with them, the author needs to issue a warning: any reader looking for a perfect solution will be disappointed. The suggested approaches all have drawbacks. It is important to recognize that heavy use of rescheduling or refinancing can cloud the MFI’s ability to judge its loan loss rate. This is one of many reasons why renegotiation of problem loans should be kept to a minimum—some MFIs simply prohibit the practice. (Pg 10)

The strengths of PAR (“portfolio at risk”) as a measure:

The international standard for measuring bank loan delinquency is portfolio at risk (PAR). This measure compares apples with apples. Both the numerator and the denominator of the ratio are outstanding balances. The numerator is the unpaid balance of loans with late payments, while the denominator is the unpaid balance on all loans. The PAR uses the same kind of denominator as an arrears rate, but its numerator captures all the amounts that are placed at increased risk by the delinquency. (Pg 13)

And its weaknesses:

Like many other delinquency measures, the PAR can be distorted by improper handling of renegotiated loans. MFIs sometimes reschedule—that is, amend the terms of—a problem loan, capitalizing unpaid interest and setting a new, longer repayment schedule. Or they may refinance a problem loan, issuing the client a new loan whose proceeds are used to pay off the old one. In both cases the delinquency is eliminated as a legal matter, but the resulting loan is clearly at higher risk than a normal loan. Thus a PAR report must age renegotiated loans separately, and provision such loans more aggressively. If this is not done, the PAR is subject to smoke and mirrors distortion: management can be tempted to give its portfolio an artificial facelift by inappropriate renegotiation. (Pg 16)
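The remedy Mr. Rosenberg describes, counting renegotiated loans as at-risk rather than letting renegotiation wipe the slate clean, is simple to express. A sketch, continuing the hypothetical fields from the example above and adding a hypothetical “renegotiated” flag:

    # Count renegotiated (rescheduled or refinanced) loans as at-risk, so
    # renegotiation cannot give the portfolio an artificial facelift.
    # The "renegotiated" field is hypothetical, for illustration only.
    def par_including_renegotiated(loans):
        at_risk = sum(l["balance"] for l in loans
                      if l["late"] or l.get("renegotiated", False))
        return at_risk / sum(l["balance"] for l in loans)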

PAR can also be misleading in a situation where an MFI is growing rapidly (a key argument of our past posts):

Another potential distortion in PAR measures is worth mentioning. Arguably the PAR denominator should include only loans on which at least one payment has fallen due, so that late loans in the numerator are compared only to loans that have had a chance to be late. Nevertheless, it is customary to use the total outstanding loan balance for the denominator. The distortion involved is usually not large for MFIs, because the period before the first payment is a small fraction of the life of their loans. For instance, for a stable portfolio of loans paid in 16 weekly installments with no grace period, a PAR of 5.0 percent measured with the customary denominator (total outstanding portfolio) would rise only to 5.3 percent using the more precise denominator (excluding loans on which no payment has yet come due). However, if a portfolio is growing very fast, or if there is a grace period or other long interval before the first payment is due, then the customary PAR denominator can seriously understate risk. (Pg 17)
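As a rough illustration of the denominator adjustment (the balances below are our own, chosen only so that the two results line up with the 5.0 and 5.3 percent in the quote):

    # Customary vs. more precise PAR denominators. Toy balances, chosen to
    # reproduce the 5.0% / 5.3% contrast in the quoted passage.
    late_balance = 50.0          # unpaid balance of loans with late payments
    total_outstanding = 1000.0   # customary denominator: all outstanding loans
    not_yet_due = 60.0           # loans on which no payment has yet fallen due

    par_customary = late_balance / total_outstanding                # 5.0%
    par_precise = late_balance / (total_outstanding - not_yet_due)  # ~5.3%

In this toy example only 6 percent of the portfolio has no payment due yet; in a fast-growing portfolio that share is much larger, so the customary measure can understate risk substantially.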

Table 6 on Pg 19 summarizes the strengths and weaknesses of the different measures.

Why is this important?

Given how complicated this all is, we think that MFIs need to be clear and transparent about (a) which measures they use and (b) precisely how they calculate them.

However, this isn’t the case. For example, we aren’t confident that most MFIs normally report rescheduled and renegotiated loans as at-risk in PAR measures.

On the one hand, commenter Ben writes, “Best practice is to treat all loans that have been rescheduled as PAR.” (This is consistent with MixMarket’s glossary, which indicates that “[A PAR measure] also includes loans that have been restructured or rescheduled.”)

On the other hand, “best practice” may not match actual practice:

  • This Kiva document (its “Partnership Application”) is explicit in its definition of PAR 30: “The value of loans outstanding that have one or more repayments past due more than 30 days. This includes the entire unpaid balance of the loan, including both past due and future installments, but not accrued interest or renegotiated loans.” (emphasis mine). Note that, to Kiva’s credit, it explicitly asks for renegotiated loans separately in the application.
  • As Holden recently commented, “At least one MFI has indicated to us that it does not report [renegotiated loans in its PAR measures].”

The definition you read today isn’t necessarily the one that MFIs are using.

What measure do we use and why?

We’ve written before that our preferred measure is what the paper discussed above calls the collection rate. While the collection rate fails to provide an early warning to MFIs that their portfolio is in danger, it is the strongest on Mr. Rosenberg’s “Bottom-line” test because it simply and clearly measures failed repayments. It is therefore less susceptible to obfuscation and manipulation.

For GiveWell’s purposes, we need a delinquency measure that most clearly reports borrowers’ situations. PAR measures provide useful information, but they are better suited to evaluating the risk of an MFI’s portfolio, which, while relevant, is not our key concern.

Haiti earthquake donations

Update: see our official page on Haiti earthquake relief, which consolidates advice from us and a few other sources we have high opinions of.

Reader Brigid writes:

    “I would love to hear any thoughts you have on contributions in light of the crisis in Haiti. My sense is that now of all times is when people give significantly without due diligence into a charity’s impact, and that donors have more illusions than usual (i.e., my gift is going directly to a hurt Haitian). A news piece several months after the event, in which donors are surprised/angry to learn their gifts were not used as they believed they were, seems inevitable.
    Is there any way for an average donor to help the crisis in Haiti right now? Is there a way to “capture” the generosity that these events inspire while still focusing on impact? Specifically to GiveWell: would your team consider focusing quick efforts on analyzing charities that are addressing the crisis in Haiti (i.e., would you shift your mission at this moment)? Or, would you say: despite the current crisis in Haiti, any contribution an individual donor wants to make will still impact more people if, for example, given to fighting tuberculosis through the Stop TB Partnership.”

A few notes:

Some stats on GiveWell’s web traffic and influence on donations

Before we start giving our answers to the questions raised in our self-evaluation, I wanted to share some raw data that we look at to gauge how things are going.

The charts/tables below cover the following:

  • “Money moved,” i.e., donations made to GiveWell-recommended charities due to GiveWell’s research.
  • Website traffic.

This is just a subset of the information we have. We’ll be releasing a more complete set of charts/tables/data shortly.

The table below shows the support each of our recommended charities received in 2009. Update, 1/8/2010: VillageReach sent us an updated file that includes donors through the end of 2009. The updated table is below. Note: VillageReach’s total fell as we discovered that we had erroneously double-counted some funds.

You can view the original table we posted here.

  • “Pledgers” refers to people who made GiveWell Pledges (advance commitments to give based on our research) in 2008, before our recent report was completed, and followed through on these commitments in 2009.
  • “Large gifts” refers to donors who made large gifts, and directly told us (and the charities they were giving to) that GiveWell’s research had been the key factor in where they gave.
  • “Economic empowerment grant” refers to a grant made directly by GiveWell, with funds from a single donor.
  • “Through website” refers to gifts made through the “Donate Now” buttons on GiveWell.net (some through Google and some through Network for Good (NFG)).
  • “Grants” refers to grants made directly by GiveWell, mostly with funds that were restricted by donors for regranting (we also granted just under $15,000 in unrestricted funds).
  • “Direct to charity” refers to donations that VillageReach received, not through GiveWell’s website, and believes it can confidently attribute to GiveWell (because VillageReach is a relatively small organization that does not get many donations from unfamiliar individuals). We are still awaiting data from 12/22/09-12/31/09, so we expect the final version of this number to be higher.

The following two charts show the amount donated and number of donors through the GiveWell site, comparing 2007-2009 (and the beginning of 2010).


Finally, we show monthly web traffic to the GiveWell site and blog. Two notes: (1) we unfortunately lost tracking for much of 2008, which explains the lack of data during that period; (2) we had an immense spike on 12/20/2007 due to media coverage, so we’ve deliberately set the left axis as it is to make the rest of the chart easier to view.

GiveWell’s self-evaluation and plan

Our current top priority is assessing the state of GiveWell: what we’ve accomplished, where we stand, and where we should focus our limited resources next. Over the coming weeks, we’ll be trying to examine ourselves as dispassionately and critically as possible, and sharing our self-review in something close to real time via this blog.

GiveWell’s mission is to find outstanding charities and publish the full details of our analysis to help donors decide where to give. The ultimate goal is to have significant impact on the flow of donations, moving toward a world in which donors reward charities for success in improving lives. The major questions about GiveWell, as I see them, are as follows.

Questions for “customers,” i.e., people considering using GiveWell’s research to decide which charities to support

  • Does GiveWell provide quality research that highlights truly outstanding charities in the areas it has covered?
  • Is it practical for donors to evaluate and use GiveWell’s research in the areas it has covered?
  • Has GiveWell covered enough areas to be useful?

Additional questions for stakeholders, i.e., people considering giving their time, money and other support directly to GiveWell (these include the GiveWell Board and staff)

  • Is GiveWell’s research process “robust,” i.e., can it be continued and maintained without relying on the co-founders?
  • Does GiveWell present its research in a way that is likely to be persuasive and impactful (i.e., is GiveWell succeeding at “packaging” its research)?
  • Does GiveWell reach a lot of potential customers (i.e., is GiveWell succeeding at “marketing” its research)?
  • Is GiveWell a healthy organization with an active Board, staff in appropriate roles, appropriate policies and procedures, etc.?
  • What is GiveWell’s overall impact, particularly in terms of donations influenced? Does it justify the expense of running GiveWell?

For all of these questions, we intend to discuss:

  • The progress we’ve made since November 2008 (when we last laid out a business plan)
  • Where we stand today, relative to where we need/hope to be to consider GiveWell a success
  • What we can do to improve

Our self-review won’t be entirely comparable to the reviews we perform of other organizations. The latter tend to focus on the “end product”: we stay agnostic on the progress other organizations have made and on how they can improve. When evaluating ourselves, it is essential that we examine “intermediate indicators” as well as our ultimate impact, and think critically about the different paths we can take to improve.

Follow up re: Philanthropedia

Philanthropedia has responded to our take on its microfinance report.

We find the response encouragingly straightforward. For the most part, it agrees with the concerns we raised and commits to addressing them. Good Intentions Are Not Enough argues that the response raises more questions than it answers, and we agree in substance, though to us the most important part of the response is its honest recognition of shortcomings and its expressed intent to improve.

A couple of responses on points where there is some disagreement:

Re: incentives. Our original post argued that Philanthropedia, if it became highly influential under its current model, would create bad incentives for experts (allowing them to continue to keep their thoughts under wraps) and charities (encouraging them to win over experts in ways unrelated to improving their social impact).

  • Philanthropedia states, “Even if only 31 experts agreed to publish their bios and stand behind the results, that is exactly 31 more than before.” We disagree that this is a good thing. The experts have not been linked with recommendations (as far as we can tell); all we know is that they shared their votes, and we know the aggregate result. The problem here is that no one is individually accountable for their own recommendations and reasoning. If it became influential, this model would allow experts to enjoy the benefits of transparency (i.e., influence over individual donations) without the accountability, which we feel would have a net negative effect on experts’ incentives to be transparent.
  • Philanthropedia states that it intends to rigorously guard against “gaming of the system.” But the dynamics we are concerned about are less related to “rule-breaking” or outright conflict of interest (for example, a charity pays an expert for a positive rating) than to distortion of ratings in a softer sense. Becoming “popular in a certain crowd” can be accomplished in a lot of ways that have nothing to do with improving impact. That’s why it’s so important that people who recommend charities put as much as possible of their reasoning out in public, where others can ask, “Are these reasons strong enough to explain this person’s support?”

Should donors use Philanthropedia’s current report?

We feel that, in its current form, Philanthropedia’s report amounts to a set of recommendations unlinked to either people or reasoning. That essentially asks donors to trust anonymous people, which we feel is a bad idea and not helpful for impact-focused donors. (And if donors need a “Who’s Who in microfinance,” that information is already available.)

For context: we are generally sympathetic to the “It’s not perfect, but it’s better than nothing” argument. We are acutely aware that donors have very limited resources today, and we ourselves have generally erred on the side of publishing/sharing our research as quickly as possible. Our research process and our own knowledge have enormous room for improvement (and this was more true when we first published research in November 2007). But we have not gone as far as publishing recommendations while in the middle of our process – we wanted to make sure we could clearly present our criteria and the charities considered, so that people could check our process and hold us accountable if they wished to do so.

There are good arguments for Philanthropedia’s sharing/promoting its research even in its current state, and it’s ultimately Philanthropedia that makes that call.

Roundup of recent blogging

Now that giving season has ended, we will be shifting our priorities and slowing down the pace of blog posts. Here’s a quick overview of the highlights from our last few months: