The GiveWell Blog

Should I give out cash in Mumbai?

We mentioned before that we were planning a trip to Mumbai (also known as Bombay), in India. At this point we have been here for a few weeks. We will be coming back to the U.S. between mid-November and mid-December.

From a GiveWell perspective, one of the things that is very different about being here vs. in the U.S. is that here we are in close proximity to extreme poverty. We have written before that we see promise in giving cash directly to the poor; here, more than in NYC, I could arguably carry out a mini “cash transfer” program on my own. The question is whether I should.

Below I lay out a few possible options. My interest is not in whether these options are better than giving nothing, but in whether they are better than reserving the same funds for my annual donation to a GiveWell top-rated charity (last year I gave to the Stop TB Partnership).

Option 1: give to the children who chase after me.

I pass people asking for spare change in NYC, but in Mumbai I am chased after by children, which is a very different (and more emotionally difficult) experience. It seems pretty clear that these children are legitimately poor, and I’m tempted to give to them.

However, I think this option is clearly inferior to Option 2 below.

  • These children, poor though they may be, are probably better off – and bringing in more money every day – than the children deep in the slums who are not venturing out to the nicest parts of town to chase after Westerners. (When we walk around in Churchgate, an upscale area, children run after us. When we walked along Juhu beach and ended up in a slum, people just asked us if we were lost, though I’d guess that they are at least as poor as the children we see daily.)
  • There is also an incentive problem: I’d rather minimize the degree to which my gifts turn begging into a profitable operation. It’s possible that parents are keeping their children out of school to beg, or even that the children are essentially “employed” by someone in far less need; I don’t want to contribute to that dynamic.

Option 2: walk deep into the slums and give out cash more or less at random (or to people who “look busy”)

This is the approach apparently favored by Tyler Cowen. It has the advantage that it seems more likely to reach the people most in need, and that it seems less likely to contribute to bad incentives.

I still find myself hesitating to do this, and the primary reason is that cash transfer programs are so rare among nonprofit organizations. (I believe a nonprofit, while not giving out cash “at random,” could still find designs that minimize the negative effect on incentives, such as requiring proof of both low income and employment and using an EITC-like scheme). We have in the past vigorously questioned the fact that nonprofits don’t tend to give out cash, and we think it’s possible that this has more to do with self-serving attitudes toward their own value than with a considered judgment that such programs are not promising. Still, in the end I think it’s more likely that there’s just something I’m missing.

Perhaps the risks of money being used on alcohol and similar purchases are too high. Perhaps the recipient of the cash will incite jealousy or even get robbed (see the comment by Tom Womack on Marginal Revolution’s post on the subject). Perhaps highly unpredictable cash transfers create another kind of bad incentive, encouraging people to focus on trying to manipulate their luck (for example, via superstition).

I’m ready to discuss, but not ready to execute on, an activity that I don’t see being carried out by anyone who clearly knows what they’re doing, has seen the effects up close over years, has seen unexpected consequences and learned how to deal with them, etc.

Option 3: give to local nonprofits.

This option is pretty far from the original idea of handing cash to the poor, but it’s the one that appeals to me most of the three. It seems that there are vast numbers of relatively small nonprofits here, focused on working directly and tangibly with a small group of people rather than on trying to run large-scale bureaucratic operations. Most of the people we’ve met have at least one such nonprofit they recommend, and the recommendations overlap to produce several nonprofits that I would bet pretty strongly are spending money responsibly and being as helpful as they know how to be with people they know fairly well. This seems to me to be a pretty reasonable alternative/equivalent to handing out cash.

My biggest concern with these organizations is room for more funding, an issue that has been raised even by the people recommending the organizations. The advantage of an organization’s staying small is that the people running the organization stay very directly connected to their work and its results; the disadvantage is that they aren’t built to scale, and it’s unclear how much good an outsider like myself can really do with an extra one-time donation.

What are your thoughts? Would you take any of these options, or just save the money for your annual gift?

New research and recommendations for microfinance

Over the past few months, we’ve been continuing our search for outstanding microfinance organizations (in addition to the one we’ve already identified). Below are the results.

Overview of our process and key questions

In brief, our take on microfinance is that offering credit and other financial services is likely an effective way to improve people’s lives in the developing world. At the same time, providing credit carries with it the risk of causing harm to clients. Donors should therefore be careful in choosing which microfinance institutions (MFIs) to support, focusing in particular on an MFI’s demonstrated focus on (a) effectively providing credit while (b) assessing clients’ well-being and avoiding causing harm.

When we contact an MFI, we ask them a set of questions to evaluate them on these criteria. In particular, we assess:

  • Focus on social impact. The primary issue we ask MFIs about is whether and to what degree they track clients who drop out of the program (i.e., complete a loan cycle and choose not to take out subsequent loans). As we’ve written before, high dropout rates may be a sign that clients are having bad experiences and/or finding that the benefits of loans don’t compensate for the (often high) interest rates. We try to determine an organization’s degree of focus on dropouts by asking about (a) the dropout rate and how it’s calculated, (b) how the dropout rate is used in internal evaluation (e.g., is it used to inform employee compensation? branch-level performance?), and (c) whether the organization performs in-depth surveys that focus on the reasons why borrowers drop out. We believe that MFIs that thoroughly track those who choose to leave the program are most likely to identify and address problems clients have with the MFI’s services.

    We don’t only ask about the dropout rate. Some MFIs take other measures to determine whether they’re causing clients problems – for example, MFIs may attempt to ascertain whether clients are borrowing from multiple MFIs (and thus potentially taking on too much debt), or they may conduct regular surveys of clients’ satisfaction.

  • Interest rates. Borrowers at MFIs pay interest rates that most of us would consider unthinkably high. “Normal” rates tend to be in the 40-100% range (that’s the annualized equivalent in the terms used in the U.S.), and we’ve seen rates as high as 150% annualized. Because the way MFIs report interest rates varies — some require clients to save to effectively create collateral in the event they default; others add fees on the front of loans, which may not be included in the headline rates — we’ve asked all the MFIs we’ve considered to provide us with enough detail to calculate their APR and EIR so that we can provide donors with information about the rates borrowers are paying at each institution. (For a rough sketch of how a monthly rate translates into APR and EIR, see the example after this list.)
  • Room for more funding. As with any organization we look at, we assess whether the institution can effectively utilize additional funds and how those funds will be used. In many cases, we’ve found MFIs that can support continuing operations with revenues and don’t require donations to maintain or expand their operations.
  • Repayment rate and clients’ standard of living. We seek evidence that clients are repaying their loans consistently and that MFIs are generally serving people who have low incomes. Most of the MFIs we’ve contacted can provide reasonable evidence that the people they’re serving are poor and that those who borrow generally repay their loans (note, however, that one of our major criteria for contacting MFIs was that they report collecting evidence on clients’ standards of living, so it isn’t necessarily the case that most MFIs in general meet this criterion).
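
To make the APR/EIR distinction concrete, here is a minimal sketch of the two conversions, assuming a flat monthly rate with no fees or forced savings. This is our own illustration – the function names and the hypothetical 7%-per-month example (loosely matching the Small Enterprise Foundation row below) are ours, not any MFI’s actual methodology; real calculations account for fees and savings requirements, which is why the table’s figures don’t match this simple math exactly.

```python
# Minimal sketch: converting a flat monthly interest rate to APR and EIR.
# Real MFI rate structures (up-front fees, forced savings) make actual
# calculations more involved; this only illustrates the two conversions.

def apr(monthly_rate: float, periods: int = 12) -> float:
    """Nominal annual rate: the periodic rate scaled up without compounding."""
    return monthly_rate * periods

def eir(monthly_rate: float, periods: int = 12) -> float:
    """Effective interest rate: the periodic rate compounded over a year."""
    return (1 + monthly_rate) ** periods - 1

monthly = 0.07  # hypothetical 7% per month
print(f"APR: {apr(monthly):.0%}")  # 84%
print(f"EIR: {eir(monthly):.0%}")  # ~125%; compounding pushes EIR above APR
```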

Results

We chose to contact MFIs listed on Mix Market that we thought would have a good chance of answering our questions well. For more detail on how we chose MFIs to contact and which MFIs we contacted and spoke to, see the page explaining our process for finding microfinance charities. In all, we’ve contacted 43 MFIs; we were able to speak with 18, and 11 provided us with enough information to complete an in-depth review.

The first table below shows each MFI’s answers to our key questions. The asterisks represent the quality of the information we received: *** = high quality information; ** = medium quality; * = low quality. The table also links to our review pages for each MFI in cases where the review is complete and we have permission to publish it. We haven’t yet completed our review of AMK.

Answers to GiveWell questions

| Organization | Focus on dropouts | Interest rates (monthly / APR / EIR) | Repayment rate (collection rate / PAR>30 / write-off) | Clients’ standard of living | Room for more funds |
| --- | --- | --- | --- | --- | --- |
| Small Enterprise Foundation | Excellent | 7% / 84% / 126%*** | 99%*** / 1% / 1% | Very poor** | $1.1m for lending programs |
| Chamroeun | Above average | 4-5% / 51-61% / 65-81%*** | 99%*** / <1% / <1% | Poor*** | $564k for lending and non-lending |
| CUMO | Above average | 13% / 156% / 354%*** | N/A / 3% / 0% | Poor* | Possible for lending programs |
| MicroLoan Foundation | Moderate | 12% / 144-149% / 304-326%** | 98%*** / <1% / 1% | Very poor* | $600k for lending programs |
| ID-Ghana | Limited | Not asked (see note below) | N/A / 4% / 27% | Very poor** | For lending programs |
| AMK | Strong | 3% / 30-37% / 34-45%*** | 97%* / 3% / 0% | Poor** | Likely does not need additional donations |
| DAMEN | Moderate | 3% / 35% / 41%** | N/A / 5% / 2% | Less poor* | $520k for lending programs |
| FMFB | Limited | Insufficient information | N/A / 1% / 1% | Less poor* | $1m for lending and non-lending |
| FINCA Peru | Moderate | 69-80% “effective” annual interest* | N/A / 2% / 1% | Less poor* | Possible for non-lending programs |
| Fundación Paraguaya | Moderate | Insufficient information | N/A / 6% / 3% | Less poor* | Not for lending programs |
| Progresar | Unknown | 10-13% / 128-151% / 237-341%* | N/A / 5% / 2% | We have not seen information on this | $101,000 for lending programs |

Notes:

  • PAR>30 and write-off ratios are not given quality ratings because they are all taken directly from Mix Market, and thus we are not aware of any variation in quality. They are for the most recent year for which data is available (2008 or 2009). They do not describe the current portfolio of any MFI.
  • For more information on what we mean by a “collection rate,” see our blog post, “More on the microfinance repayment rate.” (A rough illustrative calculation of all three repayment figures appears after these notes.)
  • For more information on different methods for calculating interest rates, see our post, “Microfinance interest rates.”
  • For more information on the standard of living information we used for each MFI, see this excel file.
  • We didn’t ask ID-Ghana for information on their interest rates. At the time we reviewed them (late 2009), asking about interest rates was not yet a standard step in our process.
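
For readers who want the gist of the repayment columns without following the links above, here is a rough sketch of how we understand the three figures, using entirely hypothetical numbers. The variable names and one-line definitions are our shorthand for standard MFI metrics; the linked posts give the fuller definitions we actually rely on.

```python
# Hypothetical illustration of the three repayment figures
# (collection rate / PAR>30 / write-off ratio) in the table above.
# Real MFI accounting (rescheduled loans, recoveries, period averaging)
# is more nuanced than this sketch.

payments_due = 1_000_000      # repayments scheduled during the period
payments_received = 990_000   # repayments actually collected
gross_portfolio = 5_000_000   # total outstanding loan balance
overdue_30_plus = 150_000     # balance of loans with payments >30 days late
written_off = 50_000          # balances the MFI no longer expects to collect

collection_rate = payments_received / payments_due   # 99%
par_30 = overdue_30_plus / gross_portfolio           # 3% ("PAR>30")
write_off_ratio = written_off / gross_portfolio      # 1%

print(f"{collection_rate:.0%} / {par_30:.0%} / {write_off_ratio:.0%}")
```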

Based on this information, there are certain MFIs that we think stand out for the purposes of an individual donor seeking a group with a strong focus on social impact.

Bottom line

| Organization | Country | Summary | Rating for microfinance |
| --- | --- | --- | --- |
| Small Enterprise Foundation | South Africa | Strong answers to all questions | Recommended |
| Chamroeun | Cambodia | Strong answers to all questions | Recommended |
| MicroLoan Foundation | Malawi | Strong answers to most questions | Notable |
| ID-Ghana | Ghana | Notable for transparency regarding repayment rate | Notable |
| CUMO | Malawi | Strong answers to most questions | Notable |

Note: AMK appears strong on all factors we investigated (to the extent we investigated them), but informed us that it was recently sold to an equity fund, and it is therefore unclear to us what role donations will play in AMK’s operations in the future. Note that AMK is listed as one of Kiva’s largest partners, and likely “effectively” receives donations through that vehicle (since it charges substantial interest while not paying interest on Kiva loans).

Responses to blog comments

We’ve had lots of thoughtful comments on the blog lately, and we haven’t had a chance to respond because we’ve been in the process of moving to Mumbai. So I wanted to give a heads up that I’ve now had a chance to respond as appropriate to all comments; see “Recent comments” on the left for my responses.

Our advice re: donations for Pakistan flood

We’ve been researching the cause of disaster relief, with the goal of doing a better job than we have in the past of serving the donors who come to us for help in the wake of a crisis. At this point our research is still in progress, but we can offer some basic advice to donors interested in helping as effectively as possible:

  1. Give money; don’t give anything else. This has been one of the strongest and most agreed-upon recommendations of the “smart giving” community in general, and we join the broad consensus. Money enables organizations to provide what’s most needed. By contrast, “in-kind donations” need to be transported overseas; then agencies need to sort what’s useful from what isn’t; finally, they need to deal with non-useful supplies. This can worsen already-formidable logistical challenges, and in the end the costs of transportation and allocation can be greater than the value of the goods themselves.

    For more, see our argument against in-kind donations from earlier this year (including a citation of USAID’s statement that in-kind donations are “most often inappropriate”), Alanna Shaikh’s discussion of in-kind donations on Aid Watch, and Saundra Schimmelpfenig’s 32 posts on the topic.

  2. Don’t give to an organization you’ve never heard of or an organization that calls you on the phone. This is common sense, a matter of being proactive with your giving (seeking to do as much good as possible) rather than reactive (giving to whoever approaches you and thus making yourself an easy potential victim for scams). We think it is especially risky to give over the phone, or in direct response to a mailing.
  3. Consider the following key issues for an organization you’re donating to: (a) transparency and accountability – giving details on how much they seek, how much they’ve raised, how much they’ve spent, plans for any excess funds, and as much detail as possible on how they’ve spent funds and what they’ve done; (b) response capacity – having significant staff on the ground in relevant areas prior to the disaster striking; (c) quality of response – doing competent work that is well-matched to local needs; (d) quality of everyday activities – since your donation may effectively fund non-disaster-relief efforts, we think it’s important that an organization disclose information about what its other activities are and how they are evaluated.
  4. Consider that disaster relief may not be the best use of your donation. We have argued before that disaster relief may be less cost-effective than everyday international aid, especially when the disaster in question is a heavily publicized one (and thus one that may have money pouring in past the point of diminishing returns). Preliminarily, it appears that the Pakistan effort has been much less well-funded than the Haiti effort, but it’s worth keeping an eye on the numbers, and it’s always worth considering giving to an outstanding organization that is helping people in need on a day-to-day basis, without the media coverage that comes with a disaster.

Our recommended organizations

Our key questions for organizations are listed above. Generally, we’ve found that most large, reputable organizations score fairly well on two of our criteria: they are fairly strong on the transparency/accountability front, and they often have existing field presences in at-risk regions. The level of disclosure about non-disaster-relief activities varies widely but is often weak; we have not yet found a good way of determining the quality of aid. With that in mind, the organizations that have stood out to us so far (very much subject to change) are:

  • Population Services International (PSI). PSI is one of our top charities for its everyday work; its level of transparency about its activities and the evaluation of them is outstanding. (See our review for details.) It has been in Pakistan for over 20 years (source).
  • Medecins Sans Frontieres (MSF). We have been impressed with MSF’s past transparency about its limited need for funds, something we haven’t seen in any other organization. Its activity reports give a fairly clear picture of its activities around the world, and we are impressed with its public site publishing field research, something we’ve seen from few other large/diverse international aid organizations (PSI and CARE are others). We find its field news to be more detailed and specific than the press releases of most other organizations (a notable exception is the Red Cross, discussed immediately below).
  • Red Cross. The International Federation of the Red Cross and Red Crescent Societies seems to freely provide the most specifics on exactly how much money it has sought and has spent and exactly what it has done. See its country list for links to all of its many past reports. Donating to the Red Cross (whether the American Red Cross or the Red Cross in another country) may be an “obvious” choice, but we think it is also a very defensible one; the Red Cross probably receives more scrutiny, and pressure to be clear about what it is doing, than anyone else, and because of its size and name recognition it may also be particularly well-positioned to carry out a lot of relief while staying coordinated with the government.

These are only preliminary impressions – much more is coming on the topic, and we may change our conclusions about which organizations are best to give to – but as there is a disaster unfolding now, we thought we’d share what we’re thinking.

High-quality study of Head Start early childhood care program

Early this year, the U.S. Department of Health and Human Services released by far the highest-quality study to date of the Head Start early childhood care program. I’ve had a chance to review this study, and I find the results very interesting.

  • The study’s quality is outstanding, in terms of design and analysis (as well as scale). If I were trying to give an example of a good study that can be held up as a model, this would now be one of the first that would come to mind.
  • The impact observed is generally positive but small, and fades heavily over time.

The study’s quality is outstanding.

This study has almost all the qualities I look for in a meaningful study of a program’s impact:

  • Impact-isolating, selection-bias-avoiding design. Many impact studies fall prey to selection bias, and may end up saying less about the program’s effects than about pre-existing differences between participants and non-participants. This study uses randomization (see pages 2-3) to separate a “treatment group” and “control group” that are essentially equivalent in all measured respects to begin with (see page 2-12), and follows both over time to determine the effects of Head Start itself.
  • Large sample size; long-term followup. The study is an ambitious attempt to get truly representative, long-term data on impact. “The nationally representative study sample, spread over 23 different states, consisted of a total of 84 randomly selected grantees/delegate agencies, 383 randomly selected Head Start centers, and a total of 4667 newly entering children: 2559 3-year-olds and 2108 4-year-olds” (xviii). Children were followed from entry into Head Start at ages 3 and 4 through the end of first grade, a total of 3-4 years (xix). Follow-up will continue through the third grade (xxxviii).
  • Meaningful and clearly described measures. Researchers used a variety of different measures to determine the impact of Head Start on children’s cognitive abilities, social/emotional development, health status, and treatment by parents. These measures are clearly described starting on page 2-15. The vast majority were designed around existing tools that seem (to me) to be focused on collecting factual, reliable information. For example, the “Social skills and positive approaches to learning” dimension assessed children by asking parents whether their child “Makes friends easily,” “Comforts or helps others,” “Accepts friends’ ideas in sharing and playing,” “Enjoys learning,” “Likes to try new things,” and “Shows imagination in work and play” (2-32). While subjective, such a tool seems much more reliable (and less loaded) to me than a less specified question like “Have your child’s social skills improved?”
  • Attempts to avoid and address “publication bias.” We have written before about “publication bias,” the concern that bad news is systematically suppressed in favor of good news. This study contains common-sense measures to reduce such a risk:
    • Public disclosure of many study details before impact-related data was collected. We have known this study was ongoing for a long time; baseline data was released in 2005, giving a good idea of the measures and design being used and making it harder for researchers to “fit the data to the hoped-for conclusions” after collection.
    • Explicit analysis of whether results are reliable in aggregate. This study examined a very large number of measures; it was very likely to find “statistically significant” effects on some purely by chance, just because so many were collected. However, unlike in many other studies we’ve seen, the authors address this issue explicitly, and (in the main body of the paper, not the executive summary) clearly mark the difference between effects that may be an artifact of chance (even though “statistically significant,” some effects of that size were quite likely to appear given the large number of measures examined) and effects that are much less likely to be an artifact of chance. (See page 2-52.) A toy simulation after this list illustrates the problem.

  • Explicit distinction between “confirmatory” analysis (looking at the whole sample; testing the original hypotheses) and “exploratory” analysis (looking at effects on subgroups; looking to generate new hypotheses). Many studies present the apparent impact of a program on “subgroups” of the population (for example, effects on African-Americans or effects on higher-risk families); without hypotheses laid out in advance, it is often unclear just how the different subgroups are defined and to what extent subgroup analysis reflects publication bias rather than real impacts. This paper is explicit that the only effects that should be taken as a true test of the program are the ones applying to the full population; while subgroup analysis is presented, it is explicitly in the interest of generating new ideas to be tested in the future. (See page xvi.)
  • Charts. Showing charts over time often elucidates the shape and nature of effects in a way that raw numbers cannot. See page 4-16 for an example (discussed more below).
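
To illustrate the multiple-comparisons problem flagged in the publication-bias bullet above, here is a toy simulation of our own construction (not drawn from the study): it examines many outcome measures of a program with zero true effect and counts how many cross a significance threshold by chance alone. Under the null hypothesis of no effect, a p-value is uniformly distributed on [0, 1].

```python
# Toy simulation: with many outcome measures and zero true effect, some
# measures will cross a significance threshold purely by chance.
import random

random.seed(0)
n_measures = 40   # arbitrary; stands in for a study with many outcomes
alpha = 0.10      # the loosest threshold in the study's tables ("*")

# Under the null hypothesis, each measure's p-value is uniform on [0, 1].
p_values = [random.random() for _ in range(n_measures)]
false_positives = sum(p < alpha for p in p_values)

print(f"{false_positives} of {n_measures} no-effect measures look significant")
# Expected value is alpha * n_measures = 4 spurious "findings" -- which is
# why the study's explicit handling of the issue (page 2-52) matters.
```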

The least encouraging aspect of the study’s quality is response rates, which are in the 70%-90% range (2-19).

In my experience, it’s very rare for an evaluation of a social program – coming from academia or the nonprofit sector – to have even a few of the above positive qualities.

Some of these qualities can only be achieved for certain kinds of studies (for example, randomization is not always feasible), and/or can only be achieved with massive funding (a sample this large and diverse is out of reach for most). However, for many of the qualities above (particularly those related to publication bias), it seems to me that they could be present in almost any impact study, yet rarely are.

I find it interesting that this exemplary study comes not from a major foundation or nonprofit, but from the U.S. government. Years ago, I speculated that government work is superior in some respects to private philanthropic work; if true, I believe this is largely an indictment of the state of philanthropy.

The impact observed is positive, but small and fading heavily over time.

First off, the study appears meaningful in terms of assessing the effects of Head Start and quality child care. It largely succeeded in separating initially similar (see page 2-12) children such that the “treatment” group had significantly more participation in Head Start (and out-of-home child care overall) than the “control” group (see chart on page xx). The authors write that the “treatment” group ended up with meaningfully better child care, measured in terms of teacher qualifications, teacher-child ratios, and other measures of the care environment (page xxi). (Note that the program only examined the effects of one year of Head Start: as page xx shows, “treatment” 3-year-olds had much more Head Start participation than “control” 3-year-olds, but the next year the two groups had similar participation.)

The impacts themselves are best summarized by the tables on pages 4-10, 4-21, 5-4, 5-8, 6-3, and 6-6. Unlike the executive summary, these tables make clear which impacts are clearly distinguished from randomness (these are the ones in bold) and which are technically “statistically significant” but could just be an artifact of the fact that so many different measures were examined (“*” means “statistically significant at p=0.1”; “**” means “statistically significant at p=0.05”; “***” means “statistically significant at p=0.01”; all “***” effects also appear to be in bold).

The basic picture that emerges from these tables is that

  • Impact appeared encouraging at the end of the first year, i.e., immediately after participation in Head Start. Both 4-year-olds and 3-year-olds saw “bold” impact on many different measures of cognitive skills, as well as on the likelihood of receiving dental care.
  • That said, even at this point, effects on other measures of child health, social/emotional development, and parent behavior were more iffy. And all effects appear small in the context of later child development – for example, see the charts on page 4-16 (similar charts follow each table of impacts).
  • Impact appeared to fade out sharply after a year, and stay “faded out” through the first grade. Very few statistically significant effects of any kind, and fewer “bold” ones, can be seen at any point after the first year in the program. The charts following each table, tracking overall progress over time, make impact appear essentially invisible in context.
  • I don’t think it would be fair to claim that impact “faded out entirely” or that Head Start had “no effects.” Positive impacts far outnumber negative ones, even if these impacts are small and rarely statistically significant. It should also be kept in mind that many of the families who had been lotteried out of Head Start itself had found other sources of early child care (xv); because the study was comparing Head Start to alternative (though apparently inferior, as noted above) care, rather than to no care at all, effects should not necessarily be expected to be huge.

Takeaways

The impact of Head Start shown here is highly disappointing compared to many of its advocates’ hopes and promises. It is much weaker than the impact of projects like the Perry Preschool program and the Carolina Abecedarian program, which have been used in the past to estimate the social returns to early childhood care. It is much weaker than the impact that has been imputed from past lower-quality studies of Head Start. It provides strong evidence for the importance of high-quality studies and the Stainless Steel Law of Evaluation, as well as for “fading impacts” as a potential problem.

I don’t believe any of this makes it appropriate to call Head Start a “failure,” or even to reduce its government funding. As noted above, the small impacts noted were consistently more positive than negative, even several years after the program; it seems clear that Head Start is resulting in improved early childhood care and is accomplishing something positive for children.

I largely feel that anyone disappointed by this study must have an unrealistic picture of just how much a single year in a federal social program is likely to change a person. The U.S. achievement gap is complex and not well understood. From a government funding perspective, I’m happy to see a program at this level of effectiveness continued. When it comes to my giving, I continue to personally prefer developing-world aid, where a single intervention really can make huge, demonstrable, lasting differences in people’s lives (such as literally saving them) for not much money.

Needed from major funders: More great organizations

In the wake of the recent Giving Pledges, we’ve been discussing what advice we’d give a major philanthropist (aside from our usual plea to conduct evaluations and share them publicly).

For the most part, our recommendations and criteria are aimed at individual donors, not major philanthropists. We stress the value of giving to proven, cost-effective, scalable organizations rather than funding experiments, but we don’t feel that this advice applies to major philanthropists – taking risks with small, untested organizations and approaches makes a great deal of sense when you have the time and funds to follow their work closely, hold them accountable, and perform the evaluation that will hopefully show you (and possibly/eventually the world) how things are going. However, we do have some thoughts on the kind of risk that’s worth taking.

One of our biggest frustrations in trying to help individual donors has been the difficulty of finding organizations, as opposed to programs or projects, we can be confident in. As we have discussed in our series on room for more funding, we feel that donors can’t take “restricted gifts” at face value, and that they must ultimately either find an organization they can be confident in as a whole or one with a clear and publicly disclosed agenda for what it would do with more funding. Such organizations have proven very difficult to find.

  • In the area of developing-world aid, we’ve found many organizations with activities so diverse that it’s impossible for us, or for them, to provide any kind of bird’s-eye view of their activities.
  • Meanwhile, we’ve also seen very promising intervention categories that we can’t support simply because we can’t match them to strong, focused organizations. See our past discussion of community-led total sanitation; we have similar issues with salt iodization.
  • In more informal investigations into other causes, we’ve found a multitude of organizations that seem to act as “umbrellas” for a cause, seemingly doing “many things related to the cause” rather than pursuing narrower, targeted agendas. For an example, see our discussion of anti-cancer organizations.
  • For another example, see the organizations listed at Philanthropedia’s report on global warming, which are mostly not focused solely on specific anti-global-warming strategies but are instead extremely broad environmental organizations simultaneously carrying out all manner of global-warming-related activities (forest conservation, political advocacy, research into new energy sources and more), as well as non-global-warming-related activities such as endangered species protection.

Of course, it could make sense for an organization to have varied activities, if there are synergies between them and a clear strategy underlying them. But in all the cases discussed above, that doesn’t appear to be what’s happening. In fact, my impression from the conversations I’ve had with major funders is that most large organizations are essentially loose coalitions of separate offices and projects, some excellent, some poor. Two major funders have stated to me, off the record, that one major international nonprofit does great work in some areas but that they would never endorse a contribution to it. One has stated to me that (paraphrasing) “I don’t think about what organization to fund – it all comes down to which people are good, and people move around a lot.” From scrutinizing nearly any major funder’s list of grants, or from examining the work of the Center for High-Impact Philanthropy at the University of Pennsylvania (which aims to advise larger donors), it seems clear that the typical approach of a major funder is to evaluate projects and people, not organizations.

Unfortunately, this attitude is somewhat self-fulfilling. As long as major funders treat organizations as contractors to carry out their projects of choice, organizations will remain loose coalitions; successful projects will be isolated events. We’ll see none of the gains that come with organization-level culture, knowledge and training built around core competencies. And people giving smaller amounts will have no way to know what they’re really giving to.

We’ve argued before that great organizations are born, not made. Rather than trying to wrench existing organizations into their preferred projects, we’d like to see more major funders trying to “birth” great organizations, so that there’s something left over when they move on.