That last post was a doozy, wasn’t it? Please read it, though, if you get a chance, because this is really important. It’s my first attempt in a while to do what I’ve been saying forever that we can do: put the reasoning behind our decisions in writing, so that you can see the basis for everything we say and everything we decide. I’d really appreciate your thoughts on the following:
- Readability and usability – are you able to use the writeup to understand what I’ve found and what I think?
- Fairness and transparency – is the context, reasoning and evidence for every judgment call I make clear?
- Coverage – I can’t be comprehensive, but am I making intelligent decisions about what lines of research to pursue?
- Quality of analysis – are my conclusions reasonable? Do they make you feel I’ll make good bets on helping people?
It is a very preliminary, rough draft – 2 days from “What’s Head Start?” to “Oog, OK, finished the blog post. My head hurts.” – but the sooner I start getting feedback on these things and improving my approach, the better.
A few other thoughts that have come up in my first stabs at research:
- I’m probably (no promises!) going to be posting more often on the blog, and when Elie joins us full-time on August 3, I’ll be asking him to as well. This time around, knowing I had to write a blog post really focused my thoughts and stopped me from forming impressions that I can’t explain; that’s a good thing, and I want to keep it up. My goal is to create a public document of everything that goes into our decisions – the more often I’m writing down where I stand, the better I’ll force myself to keep track of why I think what I think (and the earlier my progress will be out there to be critiqued).
- If you think it’s best to forget about measurement and spend all our money on “getting things done” … take a nice careful look at that writeup and its sources and please think again. Many of the Head Start studies found strong and immediate positive effects on children’s IQ … effects that were gone two years later. Meanwhile, the St. Pierre study observed positive changes in just about every aspect of its clients’ lives … but comparing them against a control group, the researchers realized that everyone’s lives were improving (maybe a strong economy, maybe just the fact that they were hand-picking people at their worst point) … and their program was having no impact at all. Next time you hear someone say “The smiles on the children’s faces are the only data I need,” think about this. The fact is, as a local service provider, you simply have no way of getting around problems like those above. No amount of direct observation and experience with your clients can tell you whether you’re having a real and lasting impact on their lives. A long, tedious, dry study can. It’s annoying but true.
- I believe that there is a monstrous and unfortunate disconnect between charity and academia. Here’s why.
- Reading academic papers was a whole different universe from reading charities’ self-evaluations (as I did last year). With the latter, I was constantly trying to figure out how the study was run and what kinds of problems might be present; the reports themselves seemed entirely unaware of selection bias issues, drawing conclusions like “Test scores rose over period X, so the program was a success.” The academic papers on Head Start, on the other hand, were crystal clear – much clearer than I ever was in my head – about how each study was designed and what potential problems (or lack thereof, for randomized studies with low attrition) might be involved. Those that didn’t employ randomization apologized up front. It’s like people have been thinking about these issues for years and years, and have formed a whole culture around them … and many nonprofits and their evaluators seem totally oblivious.
- I can’t get freaking decent library access. This is crazy. Sorry, but the New York Public Library is crap – spotty collections, closed stacks, no borrowing research materials. Meanwhile, Columbia won’t let me within 10 feet of the place because I’m not in academia. NYU wants $225 to get in the door and another $600 for borrowing privileges, which we might do, but still … in the world of research libraries, my nonprofit affiliation is worth about as much as my talent for whistling. Why hasn’t this come up a million times, for every foundation trying to research its issues? I gotta say it – probably because they’re not doing that.
- It’s just unbelievable how excited academics seem to get about anything that has public policy implications. Witness the enormous amount of research related to Head Start. And you can tell – every academic paper begins by explaining why it’s important, and for many papers this is the weakest section. The Head Start-related papers are nothing less than gleeful in proclaiming their relevance to the policy debate, and thus to actual current events. Well, that’s great – but what about the question of, you know, what we should do with our own money? I haven’t seen a single paper justify its existence by saying it’s relevant to the question of where foundations, or individuals, should give charitably. Why not???
Look. Academia and nonprofits are a match made in heaven. Nonprofits are always complaining that they have no capacity for evaluation; academics complain that no one cares about their work. Academics have the resources, knowledge, and patience to do ridiculously long and diligent studies; nonprofits, at least potentially, have the real-world significance to make it matter. Guys, I can’t watch you ignore each other like this.