The GiveWell Blog

Preview: Cause 5

Elie and I have just finished drafting our reviews for Cause 5: help disadvantaged adults become self-supporting. We can’t make them public yet because we need to give our applicants a chance to look them over and point out mistakes (and write any responses they want to write). But here’s a quick story about what we’ve been doing.

First, the moral of the story: deciding where to give is hard. Elie and I have gone through 3-4 completely different approaches before finding one we’re pretty happy with.

First we tried a pretty quantitative approach: look at how many people each finalist placed “sustainably” in a job (i.e., 12-month retention or above), then estimate how many of those people would likely have gotten similar jobs on their own, by slicing and dicing Census data to simulate the target population. The difference is “lives changed,” and lives changed divided by expenses yields “lives changed per dollar,” which can generate a rough ranking. We gave up on this pretty quickly once we realized that many of the differences between our applicants’ populations can’t be captured by the Census at all (differences in motivation, substance abuse history, etc.).
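For concreteness, here’s roughly what that calculation would have looked like – a minimal sketch in Python, with the function name and every number invented for illustration (the real inputs would have been applicants’ 12-month retention data and Census-derived baseline rates):

```python
# A minimal sketch of the abandoned "lives changed per dollar" calculation.
# All figures below are made up for illustration, not applicant data.

def lives_changed_per_dollar(enrollees, sustainable_placements,
                             baseline_employment_rate, annual_expenses):
    # People we'd expect to find similar jobs on their own,
    # estimated from a Census-simulated comparison population.
    counterfactual = enrollees * baseline_employment_rate
    # "Lives changed" = sustainable placements beyond the counterfactual.
    lives_changed = sustainable_placements - counterfactual
    return lives_changed / annual_expenses

# Invented example: 200 enrollees, 80 sustainable placements,
# a 25% baseline rate, and a $1,000,000 budget.
rate = lives_changed_per_dollar(200, 80, 0.25, 1_000_000)
print(rate, 1 / rate)  # 3e-05 lives per dollar, i.e., ~$33,333 per life changed
```

The fatal flaw, of course, is that baseline_employment_rate: no slice of Census data captures motivation or substance abuse history.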

I then had the bright idea of clumping applicants together when their clients appeared similar. The HOPE Program and Catholic Charities both serve severely disadvantaged adults, similar in most of the ways we have data on; Vocational Foundation and Covenant House both serve disconnected (not employed or in school) youth. I created a big writeup putting the pairs side by side, and arguing that HOPE’s results are so much better than Catholic Charities’ (and VFI’s so much better than Covenant’s) as to imply true “program effects.”

I finished it around 8 this morning, at which point I went to sleep and Elie got up, took a look, and called BS. CCCS takes referrals from the government; HOPE works with people who want to work. Over 50% of Covenant House’s clients are homeless; not so VFI’s. You just can’t compare them like this. The fact is that while we know how many people each charity placed in jobs, we have no way of knowing how a comparable population would do without help. We’ve got to go with what makes sense to us.

And what makes sense to us is that it’s really hard for a 3-12 month program to fundamentally change a person. HOPE’s numbers are strong enough (relative to CCCS’s) to make us think it might be happening, but not strong enough to blow us away. In the end, everyone’s numbers are consistent with the hypothesis that employment programs can’t help everyone, or even most people; those who are getting jobs are likely the more motivated ones. That doesn’t mean it’s impossible to help people – they may already have the willingness, but still benefit from picking up specific skills or certifications, or just from help knowing where to look.

So which would you bet on? A program trying to “reform” homeless people at great cost, placing about 30% of them, or a program that finds people who are already willing and able to be a Nurse’s Aide – or Environmental Remediation Technician – and gets them the certification they need? In the end, we bet on the latter. The certification model is simple, cost-effective, and makes sense. If I had to bet my life on whether getting people who want to be Nurse’s Aides certified as Nurse’s Aides is helping them, I’d say yes. If I had to bet my life on a 6-month course turning a person around, I’d need a lot more convincing data.

Right now we think the strongest two applicants are St. Nick’s Community Preservation Corp. and Highbridge Community Life Foundation, which follow exactly this model. Both see the vast majority of their clients take the jobs they’re trained for and hold onto these jobs. Both spend relatively little to accomplish this. Both do a million activities we have next to no information about, and both leave us wondering whether their clients could get similar jobs without help.

We prefer St. Nick’s, very slightly, because of the greater variety of jobs it trains for, some of which have much higher pay. A couple other organizations are still falling into our “recommended” category because they have strong numbers, and models that at least plausibly could be responsible for major life change (the HOPE Program is one of these).

We’re using a combination of intuition (our feeling about certification vs. general training), outcomes (we’re not recommending anyone if they don’t have retention numbers to back up the idea that they’re successfully placing clients), and calculations (a rough look at “cost per person placed sustainably” backs up our intuition that certification programs will be most cost-effective). There’s no one magic formula or metric that we’re hanging this decision on, and we know that we’ve made debatable leaps in judgment. But when I read over what we’ve written, and ask myself, “Holden, would you bet on this? If you were responsible for your donors’ karma, would this be your best shot at keeping them safe from lightning?” my answer is yes.
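To give a sense of the “calculations” piece, here’s a sketch of the rough cost-per-person-placed comparison – the program labels and figures below are made up for illustration, not our applicants’ actual data:

```python
# Hypothetical illustration of the rough "cost per person placed sustainably"
# comparison; these programs and numbers are invented, not applicant data.
programs = {
    "certification model": {"expenses": 500_000, "placed_12mo": 100},
    "intensive 'reform' model": {"expenses": 2_000_000, "placed_12mo": 120},
}

for name, p in programs.items():
    cost = p["expenses"] / p["placed_12mo"]
    print(f"{name}: ${cost:,.0f} per person placed sustainably")

# certification model: $5,000 per person placed sustainably
# intensive 'reform' model: $16,667 per person placed sustainably
```

A gap that size is what backs up our intuition that certification programs will be the most cost-effective; it isn’t a precise estimate of impact.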

That said, I’ll feel a lot better about it once we put it out there and see what others think (should be within a couple weeks). I seriously cannot believe that other foundation people make these kinds of decisions talking to no one but each other. Does that really happen? That’s crazy.

Just say something, anything

Smarter Spending on AIDS: How the Big Funders Can Do Better. When I saw that title, linked here, I quickly opened the link, expecting a report that critically evaluated which strategies work in the fight against HIV/AIDS.

Should we fund condom distribution or programs promoting monogamy? Is ARV distribution enough, or do non-profits need to follow up with clients to make sure they take their medication? What progress has been made on an AIDS vaccine – does that need more funding? Instead, I found a report full of corporate gobbledygook, which endorsed the following best practices: “working with the government; building local capacity; keeping funding flexible; selecting appropriate recipients; making the money move; and collecting and sharing data.”

Seriously? “Selecting appropriate recipients?” “Making the money move?” Does anyone think a paper like this can, will, or should change anyone’s behavior?

This is just the latest example I’ve seen of reports that seem to say nothing at all. By “nothing” I mean one of two things: either 1) the conclusions a paper offers are so general and vague, and backed by such scant evidence and reasoning, that they’re practically useless, or 2) the paper asserts conclusions so obvious that no one could possibly argue with them.

There’s the paper on practices of high-impact nonprofits that’s been floating around the blogosphere; I thought Albert’s post (linked) did a good job pointing out its shortcomings, but I also want to mention that its 6 attempts at “debunking myths” (pg 34-35) seem to come down to saying: “Effective nonprofits can come in all shapes and sizes.” Really? This changes everything!

There’s the Hard Lessons paper many have praised as a breakthrough in foundation self-criticism. Hard lessons taught here include “Allow room for the definition of success to shift and evolve as people learn what is possible and effective, as relationships deepen, and as the work matures”; “Match evaluation tools to their purposes”; and “Cultivate a flexible learning stance” (pg vii). They don’t, though, include any lessons about program design itself.

We often say we’d like to see more self-evaluation in the nonprofit sector. Papers like these are not what we’re referring to.

Everything a body needs

Nothing but Nets had a simple idea: kids in Africa need bed-nets to protect them from malaria-carrying mosquitoes, and there’s already a huge distribution network in place, through the Measles Initiative. Why not use that existing infrastructure to cut the costs a bed-net-only charity would incur, and distribute more nets for fewer dollars? Saving people from malaria and measles – what could be better?

The Global Network for Neglected Tropical Disease Control proposes something similar: organize the handful of organizations distributing medicine to fight the so-called neglected tropical diseases, which include river blindness, hookworm, and elephantiasis. More efficiency means lower costs and ultimately more lives saved.

But, here’s the thing that bugs me. Last year, when I first did research into diarrhea, I learned about something called Oral Rehydration Salts: a 5-cent packet that treats the dehydration that makes diarrhea so deadly. And, that’s not all: condoms cost pennies and prevent HIV/AIDS transmission, a 50-cent dose of antibiotics cures pneumonia, iron pills reduce the incidence of anemia, and Vitamin A pills prevent blindness. And, for the most part, all these conditions affect the same communities: poor, rural areas of Sub-Saharan Africa.

It makes no sense that distributing all these small, cheap items is spread across three or more different organizations. Why isn’t someone distributing everything? It’s great that Nothing but Nets and GNNTDC recognized the opportunity in some cases, but why haven’t we seen anyone giving out the whole goodie bag?

There are certainly a lot of good reasons to run a program focused on providing everything – necessary medicine in addition to health and other poverty-reducing services – for a contained group of people. More about that to come soon. But, if you’re running a distribution program as many organizations do … how can you distribute bed-nets but not ORS? How can you distribute Vitamin A without bringing some bed-nets along? And, why distribute condoms without some good ol’ ivermectin?

Welcome to the Donors’ Liberation Blog

If you think the Donor Power Blog is about respecting donors, perhaps you also think the 1950s were about respecting women.

Jeff (like many people) didn’t enjoy my argument that the goal of giving should be helping people, not feeling good. But it’s not because he disagrees with my “should.” It’s because he disagrees with “the belief that there’s something wrong with donors who don’t think or act like us (the smart, good, aware, evolved, or whatever people).” He hates the concept of educating donors.

In other words, we nonprofit people can criticize and have high expectations for each other – but when it comes to donors, we should never challenge them to be more than what they are. Sweetie, you don’t like math? Don’t worry about it! It’s your brother’s problem! You’re pretty!

It comes down to what you think respect means. Is respect when people treat you like an equal, tell you when they disagree with you, and demand that you be all you can be? Or is it when they’re nice to you, flatter you, and pay for your meal, to maximize their odds of getting what they want from you? Is it when they try to connect with you by opening up, or connect with you by nodding their head, all the while believing that they’re “as different from [you] as a poet is different from an old barn”? (Yes, that is a quote.)

Old barn, you may want to think about proactive giving: writing your own checks, without a relationship to the fundraiser. It’s an unconventional lifestyle, to be sure. You may get some stares. But I like to say that a donor without a fundraiser is like a fish without a bicycle.

Big government vs. the private sector

Did I get your attention, political junkies?

So, a lot of people subscribe to the interesting theory that good works are best left to the private sector, not the public sector.

This idea makes sense in a lot of ways. We can all think of examples of services where private companies are 100x more accountable – and therefore more efficient and effective – than the government. There’s only one tiny chink in this theory’s armor: in the year 2007, the government is 256 times as good at grantmaking as private foundations.

Beyond Philanthropy sums up why this is, while trying to make the opposite argument: “private consultants on foreign development projects cost government agencies $300,000 per year per head in salary and overhead costs. Private philanthropy annual consulting costs per head were only around $100,000 – less by nearly two thirds.” That’s it in a nutshell: the government has higher overhead. That is to say, the government plans, systematically evaluates, and publicly shares its decisions. Foundations don’t.

Check out the studies that have been done of the TRIO programs, the CCDP, and the work of USAID. They are rigorous, intelligent, and honest. They take a hard look at what’s working and what isn’t. They acknowledge their own limitations. They don’t try to throw sand in your eyes like some of the unbelievable puff pieces churned out by the private sector. And more importantly, they’re online. Want to know why the government is funding a Talent Search program? It’ll tell you. Want to know why Gates gave money to the WHO? Too bad. The pattern is thuddingly consistent … whenever Elie and I see a government agency in a charity’s application, our eyes light up because we know we’re about to get real information.

I’m not trying to be a Red here. In fact, I think the private sector could and should be far better than the government at grantmaking. But I know that it isn’t. Why not? Because in today’s language, “government” equals “controversy” and “charity” equals “Smile, give, shut up, and don’t even think about being critical and negative.” So, in an area (doing good) where results are far removed from “customers,” customers demand results from the government – and let foundations and charities get away with murder or whatever it is they’re doing.

FedEx is more accountable than the post office – start missing deliveries and it’ll go out of business, fast – but foundations and charities, today, are far less accountable than the government. Because we let them be. Because we don’t demand more. It doesn’t have to be that way. But don’t talk to me about the superiority of private giving … until and unless we do a better job with it.

Truth please

The most common type of evidence non-profits have offered us in defense of their programs is independently produced evaluations. In my experience, though, these evaluations sometimes read more like propaganda than like objective assessments of a program’s impact. I came across one glaring example of this phenomenon when reviewing the Vocational Foundation’s application for a Clear Fund grant in Cause 5.

This report by Public/Private Ventures is generally quite positive about VFI, but the thing that caught my eye was the following statement on page 8: “Of the 87 percent of enrollees who complete skills training, 78 percent are placed in jobs. The placement rate is well above the New York City average of 39.6 percent for youth employment programs.*” VFI is twice as good as average – that sounds great.

But is it? The problem is that pesky asterisk. Follow it to the bottom of the page and read the tiny type, and you’ll find that the average placement rate for long-term employment and training programs – the ones comparable to VFI’s – is actually 64%. It’s not clear whether this is the percentage of graduates who are placed – in which case VFI is noticeably higher, at 78%, though still nowhere near twice as good – or the percentage of enrollees who are placed, in which case VFI rings in at 68% (87% * 78%), almost exactly average. Either way, it looks like P/PV is highlighting a number that it knows exaggerates the picture, and burying the more accurate story in a footnote.
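To spell the arithmetic out (using only the figures quoted above):

```python
# Checking the P/PV report's numbers from page 8 and its footnote.
completion_rate = 0.87      # enrollees who complete skills training
placement_rate = 0.78       # completers who are placed in jobs
footnote_average = 0.64     # comparable long-term programs (from the footnote)
headline_average = 0.396    # the "New York City average" cited in the body

enrollees_placed = completion_rate * placement_rate
print(f"{enrollees_placed:.0%} of enrollees placed")  # 68% of enrollees placed

# If the 64% counts graduates, VFI's 78% beats it by 14 points;
# if it counts enrollees, VFI's 68% is within 4 points of average.
# Only against the headline 39.6% does VFI look "twice as good."
```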

The rest of the paper goes on to describe VFI’s method – the student selection process, the curriculum, and individualized “case management” – in an attempt to understand why that method is successful, all while assuming that it is. And it makes no attempt to give evidence for particular practices; the premise underlying the entire paper seems to be that VFI is successful, and therefore everything it does must be “what works.”

Of course, we’ve seen some great studies too – ones that work hard to assess whether a program actually has an impact, rather than falling into the trap of simply extolling its virtues.

So, if you’re evaluating, don’t sell. Just give us the facts. And, if you’re reading a report, remember to read with a critical eye.