About a year ago, we participated in a Giving Carnival on the topic of metrics in philanthropy, and laid out the metrics we were planning to use – along with caveats. Seeing continuing interest in this topic (including a panel called “Metrics Mania”), I’d like to share how my thinking has progressed – specifically, my sense that debating metrics in the abstract is unlikely to go anywhere useful.
As promised, our research tried to divide charities into causes to make “apples to apples” comparisons; but as many predicted (and as we acknowledged would be true to some extent), even after this narrowing it was impossible to truly put different charities in the same terms. Any two given job training programs serve different groups of people; comparing them directly on “people placed in jobs” is a futile endeavor. We looked at health programs’ abilities to “save lives,” but not all lives are the same, and different health programs have wildly different and difficult-to-quantify impacts on non-life-threatening conditions.
This doesn’t mean that metrics are futile or useless. There’s a big difference between being able to demonstrate emotionally relevant results (such as lives saved) and having no evidence other than an unfamiliar development officer’s report of a gut feeling. And there can be enormous differences in the “cost per person” associated with different approaches – enough, when taken in context and accompanied by intuitive judgment calls, to make a big difference in my view of things. For example, it’s typical for employment assistance programs to cost in the neighborhood of $10,000-$20,000 per person served, while we’re ballparking certain developing-world aid programs at around $1,000 per life saved. The story doesn’t end there, but that difference is larger than I would have guessed and too large to ignore.
Bottom line: both the metrics we used and the ways we used them (particularly the weight given to metrics vs. intuition) ended up depending, pretty much entirely, on exactly what decision we were trying to make. We took one approach when all we knew was our general areas of focus; we modified that approach as we learned more about our options; and we frankly got nothing out of the completely abstract discussions of whether metrics should be used “in general.” There are as many metrics as there are charities, and as many necessary debates about metrics as there are giving decisions.
I agree with those who say metrics are important, and with those who say we can’t let them dictate everything (and it seems that nearly everyone who weighs in on “the metrics debate” says both of these things). But I don’t think we can have the really important conversation at that level of abstraction. The conversation we need to have is about specifics. In our case – and that of anyone interested in NYC employment assistance or global health interventions – that means asking: did we use appropriate metrics for the charities we evaluated? Are there other approaches that would have been a better use of our time and resources? Did we compare our applicants as well as we could have with the resources we had?
This is a conversation with a lot of room for important and fruitful disagreement. If you’re interested in the question of metrics, I ask that you consider engaging in it – or, if you don’t find our causes compelling, conducting and publishing your own evaluations – rather than trying to settle the question for the whole nonprofit sector at once.