I’m not trying to be a jerk. I honestly want to know.
Most of the self-evaluation we see from charities looks at clients, but doesn’t compare them to (otherwise similar) non-clients. So it’s probably effective at preaching to the choir, but not at winning over skeptics. When we bring up the issues with it, we constantly hear things like “Of course we’d love to do a randomized/longitudinal study, but those are expensive and difficult and the funding isn’t there.”
This is how I imagine an interested charity could evaluate itself rigorously:
- Use a lottery to pick clients. Many/most humanitarian charities don’t have enough money to serve everyone in need. (And if they do have enough, there’s an argument that they don’t need more.) Instead of dealing with this issue by looking for the “most motivated” applicants, or using first-come, first-served (as most do), get everyone interested to sign up, fill out whatever information you need (including contact info), etc. – then roll the dice. (There’s a rough sketch of the mechanics after this list.)
Unethical? I don’t think so. I don’t see how this is any worse than “first-come, first-served.” It could be slightly worse than screening for interest/motivation … but (a) it’s also cheaper; (b) I’m skeptical of how much is to be gained by taking the top 60% of a population vs. randomly selecting from the top 80%; (c) generating knowledge of how to help people has value too.
Cost: seems marginal, cheaper than using an interest/motivation screen. You do have to enter the names into Excel, but for 1000 people, that’s what, 5 hours of data entry?
Staff time: seems marginal.
- Get contact information from everyone. This is actually part of step 1, as described. These days you can get on the Web in a library, homeless people blog, and I’m guessing that even very low-income people can and do check email. Especially if you give them a reason to (see below).
Cost: see above
Staff time: none.
- Follow up with both clients and randomly selected non-clients, offering some incentive to respond. Incentives can vary (often, with clients, providing information can be made a condition of continuing support services). But if push comes to shove, I don’t think a ton of people would turn down $50.
Cost: worst case, sample 100 clients and 100 non-clients per class and pay them $50 each. That’s a decent sample size. Follow up with each cohort at 2, 5, and 10 years; at steady state, that’s three cohorts, or 600 people, followed up with each year = $30k/yr, absolute maximum.
Staff time: you do have to decide how to follow up, but once you’ve done that, it’s a matter of sending emails.
- Check and record the followup responses. If possible, get applicants to provide proof (of employment, of test scores) for their $50. Have them mail it to you, and get temps to audit it and punch it into Excel.
Cost: assuming each response takes 30 minutes to process and data entry costs $10/hr, that’s 300 hours, or $3k/yr.
Staff time: none.
- And remember: Have Fun! Did you think I was going to put something about rigorous statistical analysis here? Forget it. Data can be analyzed by anyone at any time; only you, today, can collect it. When you have the time/funding/inclination, you can produce a report. But in the meantime, just having the data is incredibly valuable. If some wealthy foundation (or a punk like GiveWell) comes asking for results, dump it on them and say “Analyze it yourself.” They’re desperate for things to spend money on; they can handle it.
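To show how trivial the mechanics of the lottery (and of keeping contact info for both groups) could be, here’s a minimal sketch in Python. Everything here is hypothetical – the file names, the column names, the number of slots – and a spreadsheet or a literal hat full of paper slips would do the same job:

```python
import csv
import random

# Hypothetical input: one row per applicant, with whatever info you
# collected at sign-up. Column names here are made up for illustration.
with open("applicants_2024.csv", newline="") as f:
    applicants = list(csv.DictReader(f))  # e.g. columns: name, email, phone

SLOTS = 100  # however many people the program can actually serve

random.seed(2024)           # fixed seed so the draw is reproducible/auditable
random.shuffle(applicants)  # the entire "lottery"
clients = applicants[:SLOTS]      # served by the program
non_clients = applicants[SLOTS:]  # the comparison group -- keep their info!

# Record the assignment. This one file is the backbone of the evaluation.
with open("cohort_2024.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "email", "phone", "group"],
                            extrasaction="ignore")
    writer.writeheader()
    for person in clients:
        writer.writerow({**person, "group": "client"})
    for person in non_clients:
        writer.writerow({**person, "group": "non-client"})
```

The one non-negotiable is the last part: saving the non-clients’ contact info right alongside the clients’. That file is what makes every later comparison possible.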
(Also, I’m not a statistics expert, but it seems to me that if you have data that’s actually randomized like this, analyzing it is a matter of minutes, not hours. Writing up your methodology nicely and footnoting it and getting peer-reviewed and all that is different, but you don’t need to do that if you’re just trying to gauge yourself internally.)
The big key here, to me, is randomization. Trying to make a good study out of a non-randomized sample can get very complicated and problematic indeed. But if you separate people randomly and check up on their school grades or incomes (even if you just use proxies like public assistance), you have a data set that is probably pretty clean and easy to analyze in a meaningful way. And as a charity deciding whom to serve, you’re the only one who can take this step that makes everything else so much easier.
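To give a sense of what “minutes, not hours” might look like, here’s a rough first-pass comparison on a randomized cohort, again in Python. The file and column names are invented, and a publishable analysis would need to worry about attrition and the like – but for an internal gut check, this is roughly the whole job:

```python
import csv
from statistics import mean

from scipy import stats  # third-party; pip install scipy

# Hypothetical follow-up file: the cohort file from the sketch above,
# plus an outcome column recorded at follow-up (say, annual income).
with open("followup_2026.csv", newline="") as f:
    rows = list(csv.DictReader(f))  # columns: name, group, income

clients = [float(r["income"]) for r in rows if r["group"] == "client"]
non_clients = [float(r["income"]) for r in rows if r["group"] == "non-client"]

print(f"client mean income:     ${mean(clients):,.0f}")
print(f"non-client mean income: ${mean(non_clients):,.0f}")

# Because assignment was random, a plain two-sample t-test is a
# reasonable first pass at "is this difference more than chance?"
t, p = stats.ttest_ind(clients, non_clients, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")
```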
I tried to be generous in estimating costs, and came out to ~$35k/yr, almost all of it in incentives to get people to respond. Nothing to sneeze at, but for a $10m+ charity, this doesn’t seem unworkable. (Maybe that’s what I’m wrong about?) And this isn’t $35k per study – this is $35k/yr to follow every cohort at 2, 5, and 10 years. That dataset wouldn’t be “decent”; it would be drool-worthy.
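For the record, here’s the arithmetic behind that figure, with the assumptions from the steps above gathered in one place:

```python
# Back-of-the-envelope steady-state budget; every number is an assumption.
cohort_size = 200           # 100 clients + 100 non-clients per class
followup_years = [2, 5, 10] # follow-up points for each cohort
incentive = 50              # dollars per completed response
process_hours = 0.5         # temp time per response
temp_wage = 10              # dollars per hour for data entry

# Once the program is 10+ years old, each year you follow up with the
# classes from 2, 5, and 10 years ago -- three cohorts at once.
responses = cohort_size * len(followup_years)       # 600 per year
incentives = responses * incentive                  # $30,000
processing = responses * process_hours * temp_wage  # $3,000
print(f"steady-state cost: ${incentives + processing:,.0f}/yr")  # $33,000/yr
```

That comes to $33k/yr; the remaining couple of thousand is slack for the items I called “marginal” above, which is how I get to ~$35k.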
And the benefits just seem so enormous. First and foremost, for the organization itself – unless its directors are divinely inspired, don’t they want to know whether their program is having an impact? Second, as a fundraising tool for the growing set of results-oriented foundations. Finally, just for the sake of knowledge and all that it can accomplish. Other charities can learn from you, and help other people better on their own dime.
The government can learn from you – stop worrying about charity replacing government and instead use charity (and its data) as an argument for expanding it. In debates from Head Start to charter schools to welfare, the research consensus is that we need more research – and the charities on the ground are the ones who are positioned to do it, just by adding a few tweaks onto the programs they’re already conducting.
So what’s holding everyone back? I honestly want to know. I haven’t spent much time around disadvantaged people and I’ve never run a large operation, so I could easily be missing something fundamental. I want to know how difficult and expensive good self-evaluation is, and why. Please share.