So I’ve been watching the ballgame, and it struck me how much sports announcers have impacted my outlook on charity. I can explain.
The most common form of “evidence” we get from charities goes something like this: “We don’t have the data, but we’re here, every day. We work with the children, personally. We’ve been doing this for decades and we’ve accumulated a lot of knowledge that doesn’t necessarily take the form of statistics.”
Put aside, for a minute, the fact that we get that same story from all 150 charities we’re deciding between (all of which presumably think their activities are most deserving of more funding). There’s another problem with the attitude above, one that occurs to me every time I hear Michael Kay announcing a baseball game. In sports, unlike in charity (and really unlike in most things, which is why I find it an interesting case study), the facts are available – and when you look at them, you realize just how little that “on the ground” experience can be worth.
The fact is that baseball announcers and sportswriters spend their entire lives watching, studying, and thinking about sports. Many of them are former athletes who have played the game themselves. They are respected, they are paid to do what they do, and they are more experienced (i.e., they’ve seen more) than I’ll ever be in my life. And yet so many of them truly know absolutely nothing.
“Jeter’s a whole different player in October,” says Mr. Kay (demonstrably false). “You don’t want young pitchers carrying you in the playoffs.” (Comically false – 3 of the last 5 World Series champions had rookie closers.) I’m not giving any more examples – this post would hit 30,000 words in a heartbeat. But I’m happy to refer you to sources that give 2-3 examples per day of seasoned professionals – who’ve spent their whole lives on this stuff – saying things that are obviously, intuitively, factually, empirically, demonstrably, completely wrong.
It hits me over and over again, and I still haven’t quite gotten used to it. My only explanation is that humans have an incredible ability to ignore what they actually see, in favor of (a) what they expect to see and (b) what they want to see. Now when I talk to an Executive Director or Development Officer whose life consists of running a charity and whose livelihood depends on convincing people that it’s the world’s best way to help people … I don’t know how much these factors cloud their judgment. Maybe not at all, in some (truly amazing, borderline inhuman) cases. But when they assure me that outcomes data isn’t necessary because they’ve been doing this for years, forgive me for having trouble swallowing it: I can’t help but think of Michael Kay, a man who’s done very little with his life but watch the Yankees, and still manages to know nothing about them.
The US government commissioned an evaluation of its Talent Search program, comparing participants’ outcomes to those of similar non-participants – their “evil twins.”
So, if Talent Search participants outperformed their evil twins, Talent Search must be a good thing, right? Not so fast. As page 55 states, Talent Search participants had an 86% graduation rate, while their evil twins were only at 77%. The authors equivocate a bit on this, but to me it’s very clear that you can’t credit the Talent Search program for this difference at all. The program is centered on financial aid and college applications, not academics; to think that it would have any significant effect on graduation rates is a huge stretch.
Fundraising today is all about the pitch; 10 years from now, I hope it will be about the product.
I’m very skeptical of any program that claims great effects with relatively low amounts of intervention, whether it’s a one-time class on condom use or a once-a-week tutoring session. I think about how easy it is for the people I know to sit through some class, walk out full of ideas, and forget them a week later, and I think – if you’re doing anything meaningful for people with this little investment, you must be some sort of sorcerer.