Elie and I have just finished drafting our reviews for Cause 5: help disadvantaged adults become self-supporting. We can’t make them public yet because we need to give our applicants a chance to look and point out mistakes (and write any responses they want to write). But here’s a quick story about what we’ve been doing.
First, the moral of the story: deciding where to give is hard. Elie and I have gone through three or four completely different approaches before finding one we’re pretty happy with.
First we tried a pretty quantitative approach: look at how many people each finalist placed “sustainably” in a job (i.e., 12-month retention or above), then estimate how many people would likely have gotten similar jobs on their own, by slicing and dicing Census data to simulate the target population. The difference is “lives changed,” and lives changed divided by expenses yields “lives changed per dollar,” which can generate a rough ranking. We gave up on this pretty quickly as we realized that many of the differences between our applicants’ populations can’t be captured in any way by the Census (differences in motivation, substance abuse history, etc.).
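The abandoned calculation amounts to a simple bit of arithmetic, sketched below. All the numbers and the counterfactual rate are hypothetical illustrations, not any applicant’s actual figures.

```python
# Sketch of the abandoned "lives changed per dollar" calculation.
# Every number here is a hypothetical illustration, not real applicant data.

def lives_changed_per_dollar(placed_sustainably, counterfactual_rate,
                             clients_served, expenses):
    """Placements beyond what a comparison group (simulated from Census
    data) would be expected to achieve on its own, per dollar spent."""
    expected_without_help = clients_served * counterfactual_rate
    lives_changed = placed_sustainably - expected_without_help
    return lives_changed / expenses

# Hypothetical charity: 100 clients served, 40 placed with 12-month
# retention, Census slicing suggesting 25% would have found similar jobs
# anyway, on a $500,000 budget -> 15 "lives changed" per $500k.
print(lives_changed_per_dollar(40, 0.25, 100, 500_000))  # 3e-05
```

The fragility is visible right in the second parameter: the whole ranking hinges on `counterfactual_rate`, which is exactly the number the Census can’t supply for populations that differ in motivation or substance abuse history.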
I then had the bright idea of clumping applicants together when their clients appeared similar. The HOPE Program and Catholic Charities both serve severely disadvantaged adults, similar in most of the ways we have data on; Vocational Foundation and Covenant House both serve disconnected (not employed or in school) youth. I created a big writeup putting the pairs side by side, and arguing that HOPE’s results are so much better than Catholic Charities’ (and VFI’s so much better than Covenant’s) as to imply true “program effects.”
I finished it around 8 this morning, at which point I went to sleep and Elie got up, took a look, and called BS. CCCS takes referrals from the government; HOPE is working with people who want to work. Covenant House’s clients are over 50% homeless; VFI’s are not. You just can’t compare them like this. The fact is that while we know how many people each charity placed in jobs, we have no way of knowing how a comparable population would do without help. We’ve got to go with what makes sense to us.
And what makes sense to us is that it’s really hard for a 3-12 month program to fundamentally change a person. HOPE’s numbers are strong enough (relative to CCCS’s) to make us think it might be happening, but not strong enough to blow us away. In the end, everyone’s numbers are consistent with the hypothesis that employment programs can’t help everyone, or even most people; those who are getting jobs are likely the more motivated ones. That doesn’t mean it’s impossible to help people – they might already have the willingness, but benefit from picking up specific skills or certifications, or just from help knowing where to look.
So which would you bet on? A program trying to “reform” homeless people at great cost, placing about 30% of them, or a program that finds people who are already willing and able to be a Nurse’s Aide – or Environmental Remediation Technician – and gets them the certification they need? In the end, we bet on the latter. The certification model is simple, cost-effective, and makes sense. If I had to bet my life on whether getting people who want to be Nurse’s Aides certified as Nurse’s Aides is helping them, I’d say yes. If I had to bet my life on a 6-month course turning a person around, I’d need a lot more convincing data.
Right now we think the strongest two applicants are St. Nick’s Community Preservation Corp. and Highbridge Community Life Foundation, which follow exactly this model. Both see the vast majority of their clients take the jobs they’re trained for and hold onto these jobs. Both spend relatively little to accomplish this. Both do a million activities we have next to no information about, and both leave us wondering whether their clients could get similar jobs without help.
We prefer St. Nick’s, very slightly, because of the greater variety of jobs it trains for, some of which have much higher pay. A couple of other organizations still fall into our “recommended” category because they have strong numbers, and models that could at least plausibly be responsible for major life change (the HOPE Program is one of these).
We’re using a combination of intuition (our feeling about certification vs. general training), outcomes (we’re not recommending anyone if they don’t have retention numbers to back up the idea that they’re successfully placing clients), and calculations (a rough look at “cost per person placed sustainably” backs up our intuition that certification programs will be most cost-effective). There’s no one magic formula or metric that we’re hanging this decision on, and we know that we’ve made debatable leaps in judgment. But when I read over what we’ve written, and ask myself, “Holden, would you bet on this? If you were responsible for your donors’ karma, would this be your best shot at keeping them safe from lightning?” my answer is yes.
That said, I’ll feel a lot better about it once we put it out there and see what others think (should be within a couple weeks). I seriously cannot believe that other foundation people make these kinds of decisions talking to no one but each other. Does that really happen? That’s crazy.