The GiveWell Blog

Test my wings? That would mean less time for flying!

Measuring success at helping people is hard; it has inherent limits; it’s time-consuming; and it’s expensive. But it has to be done.

The first reason is that no matter how much sense an idea makes in your head, translating it to reality is another matter. I’d argue that acceptance of this basic idea is the single reason that we now have medical alternatives to prayer.

The second reason is that there are a LOT of different charities out there for a donor to choose from – and without some sense of what they’ve actually accomplished, a donor has nothing to go on but theories and brochures. To me, that’s not much better than flinging our money randomly around the globe, with anyone who has a good story and a good accountant getting a chance to play. That isn’t a reasonable approach to solving the world’s problems.

The question of how to evaluate a given charity’s activities is a question for another day. It depends on the charity and it’s pretty much never easy. The question for today is, why isn’t it done more thoroughly and more often? And the answer for today (though it’s only one factor) is that funders don’t want to pay for it.

Children’s Aid Society has explicitly told us they’re concerned about funders’ reactions to the amount of their budget that is going to evaluation, “as opposed to” helping people – and that they’ve been unable to execute a major community school evaluation they’ve mapped out because it exceeds the “evaluation budget” designated in their grants. We asked New Visions for Public Schools why they don’t seem to have had the same problem, and they told us that they have, but that they’ve made a priority of fighting for larger evaluation budgets from their own funders. Over and over again, when we ask charities why they haven’t measured things that seem measurable, they’ve responded that the people who fund them don’t want it: their funding is often officially earmarked for non-evaluation purposes, and even when it isn’t, funders are concerned about donors’ wishes to “get the money right to the people who need it.”

You and I are lucky that the people who build our bridges, cars, and airplanes don’t decide they can get more money directly to us if they cut their testing budget. The people served by nonprofits should be so lucky.

Tears of straw

It’s very unlikely – maybe impossible – that the Straw Ratio bothers you as much as it bothers me. So I’m going to try to spill out all the ranting I have left on this topic over the next few posts, and then we’ll be completely done with it, at least until the next time I tell someone about our project and they go “Oh yeah, I’m really into giving intelligently. Like have you seen these websites that show you how much of your dollar REALLY goes to helping people?”, causing me to completely black out with rage and do things I can’t remember days later, like write incoherent blog posts or light Form 990s on fire. (FYI, this happens about 2x per day on average.)

So. My last post focused on one component of those evil, evil administrative expenses: salaries. Another common administrative expense that jumps to mind (although “administrative expense” is subject to some interpretation, as has been pointed out to me) is information technology. This is something I haven’t seen from the inside, but anecdotes about the generally poor state of nonprofit tech abound. Here’s a pretty good example: Today I Cried, an aborted (as of a few days ago) blog by a nonprofit IT director.

“I think I’m done with nonprofit. I’m looking at corporate – the bigger the better.”

“So much energy wasted on bull[bunnies].”

“Do you know what a modem-router is?”

Oy.

But the most relevant post here is Non-Profit Technology vs. For-Profit Technology. First overarching difference? “Money (restricted/unrestricted funding; discount pricing; used/donated technology; Tech Soup; technology grants)”. What do you know. Those dollars that you’ve so diligently been marking “FOR NEEDY CHILDREN ONLY” made it past that greedy tech department after all.

Ever try to do good work with bad technology?

Is that what you want to be funding?

Low pay driving out the best?? How did this happen?!?!

Whoa, no way – young nonprofit sector workers don’t want to stay in the sector because of low pay? Young people don’t aspire to become nonprofit leaders? Well where are we going to get great leaders? We need great leaders at nonprofits if we want them to get great things done, right?

Man, that’s rough. Anyway, my attention span isn’t so hot, so I forget what I was just talking about, but you know what pisses me off? Charities that blow their money on fat salaries for their greedy executives!! Oh, man. Don’t get me started. At least I have a friend in Charity Navigator, which puts the CEO compensation right on that profile page so I can spot crooks like that Curtis R. Welling. Guy makes 275 THOUSAND DOLLARS (as much as a mid-level investment banker!) for running Americares, which barely cracks the top 20 largest nonprofits in the country. I mean, the AVERAGE nonprofit CEO makes ~$150k/yr – who needs that kind of money? That’s, like, entry-level MBA money!

Listen to me. I want my donation going to needy children, not to some jerk in a suit. I want great leaders in the nonprofit sector. I am confused.

Oh, actually, I think I’ve sorted it out. The problem seems to be that pesky Straw Ratio again, and the mentality that goes along with it: that we should evaluate charities by nitpicking their balance sheets and questioning their operating costs, rather than by looking at what they do and whether it works.

Dear Executive Director, please fire your staff

It’s important that everyone involved in a nonprofit’s mission be accountable. We are working to find the organizations that use their resources in the best possible way. You all are hopefully keeping an eye on our work to test the reasoning in our reviews, and to make sure that we are choosing the best organizations for each cause. And watchdogs like our colleague Mr. Straw Man are always checking to make sure that certain line items in the budget don’t get too out of hand.

What about nonprofits’ employees? From my experience starting a national nonprofit and working with a number of others, one difference between the nonprofit and for-profit sectors that has really struck me (and I can only speak for what I’ve seen) is that nonprofits are much worse at internal evaluation of personnel; most of the time it barely happens at all. What does it matter if Bill isn’t as good as he could be at fundraising, or if Jane isn’t as good as she could be at managing a project, so long as they have their hearts in the right place? That’s what matters: people who share a common goal of making the world better through their cause, and who are willing to work toward that goal, regardless of whether they are necessarily the best at what they are doing…

Go ahead, ask a nonprofit that you are considering donating to:

“Hey, when is the last time that you fired someone? Ok, not recently … well, when did you last give someone a negative review, cut a salary as punishment, or put someone on probation?”

In a for-profit enterprise this sort of thing happens constantly, and most would argue that companies are better off for it: it leads to better employees and a better company. But I bet that nonprofit organizations would be shocked if you asked that sort of question alongside the laundry list of questions about program and financial management. It’s the same thing, though: spending money wisely includes making sure that your employees are as good as they can be, and that is only accomplished through honest and fair evaluation … which sometimes means firing people.

So make sure you’re asking this question. And executive directors, have the heart to fire anyone who isn’t the best person for the job.

FAQ: What qualifies us to issue evaluations?

We are not health experts, education experts, or social science experts, and we don’t pretend to be. We are donors, working through – and communicating about – the decisions all donors must work through. We do not see our role as designing, managing, improving, or measuring charitable programs; we see our role as understanding these programs as well as non-experts can, with whatever help and testimony from experts is necessary to do so, and sharing our understanding with other non-experts.

We are statistically literate and analytically strong, and we are good at attacking problems with complex and incomplete information; the people who have worked with us in the context of highly competitive and selective environments will attest to that.

But we are not experts. So why should you trust us?

You shouldn’t. You shouldn’t trust anyone to tell you what the best charities are. To a large extent, that’s why our project exists.

When we were trying to figure out where to donate, we had no trouble finding opinions and recommendations from experts of many kinds – ranging from foundations to famous economists to watchdogs. Of course, practically every charity we talked to had some form of expert endorsement of their own. The qualifications of these experts are impressive in all kinds of different ways.

But these recommendations didn’t help us. When we were able to scrutinize them, we found reasoning that was sometimes superficial and sometimes just plain didn’t make sense – but the most common problem, by far, was that there was simply no reasoning to be found. These kinds of recommendations don’t cut it, regardless of the resumes that back them.

As I’ve written before (recently), there is no way to evaluate charities with perfect – or even very much – certainty, safety, or precision. This much, the experts can agree on (and they do, from the little we’ve seen – much of it nonpublic – on the development of charity metrics). Intuition, judgment calls, and even philosophy are inextricable parts of every giving decision.

That’s why you can’t trust a person’s conclusion without following their reasoning – no matter who they are. And that’s why expertise in any particular area is so much less important than a commitment to true transparency, and thus to dialogue with anyone in the world – from policy professionals to philosophy Ph.D.’s to ordinary people with great ideas – who cares to participate. GiveWell has already demonstrated that commitment, to a degree we haven’t seen anywhere else.

FAQ: What are our criteria?

You can see the criteria we have used so far on our website. Furthermore, all of our reviews should be explicit about why they say everything they say, and where every rating comes from. If you don’t believe they are, we want to hear about it.

There is an understandable desire for universal, “objective” criteria in the nonprofit sector. On one hand, we believe strongly in the value of measuring impact, and in the value of statistical and other analytical tools. Perhaps related to our financial backgrounds, we generally find ourselves much more interested in measurement, metrics, and statistics than the charities we talk to.

On the other hand, the decision of where to donate inherently must be based on both highly incomplete information and on philosophical considerations. A giving decision can always be reasonably criticized and questioned, yet it must always be made. And the few attempts we’ve seen to eliminate all judgment calls, and put all charities in the same terms, have resulted – in our opinion – in absurdities.

The full story of what information we’re going to collect and what metrics we’re aiming to construct is a long one. We are currently writing it up as part of our business plan, and when we’ve expressed ourselves well, we will make it available on our website. For now, our general guiding principles are as follows:

Separate charities into causes. Some goals are too hard to put in the same terms – for example, choosing between improving a New York City public school and providing a clean water source in Africa is something that different donors will feel differently about. It is not realistic, or really even theoretically possible, to collect and process enough information to make everyone agree. Because the choice between these goals has so much philosophy in it, we put the two into different buckets. A “cause” is a set of charities that we feel can reasonably be compared in the same terms – you can see our preliminary list of causes to address here.

Designate the best charities within causes, not between them. Again, the principle is to compare what we can, and let the donor choose where we can’t. It’s not an easy distinction to draw, and we have to balance separating what can’t be compared vs. doing as much meaningful and systematic comparison as possible.

Within causes, design our metrics around the idea of making a significant impact on people’s lives. We will consistently aim to measure success in terms of people, not in terms of income or life-years or anything else. For disease-fighting charities, this means lives saved; for job-creation charities, this means people who go from indigence to self-sufficiency. We are more interested in how many people a charity serves fully and meaningfully than in the theoretical mathematical product of number of people affected times size of effect. Taking this approach also makes our decisions easier for others to understand and form their own opinions on.

Recognize that the metrics we want will not always be attainable. We have already seen how often the information we want is simply not available, as well as how often the interpretation of it is extremely problematic. We are going to have to use judgment and common sense, like any other donor; the difference is that we will promote

TOTAL, EXTREME TRANSPARENCY. We simply can’t stress the importance of this enough. This is what is already unique about GiveWell, and this is the single quality that makes our evaluations more valuable than anyone else’s. Our reviews don’t just say what we concluded; they don’t just laundry-list our resources; they explain every reason that we think what we think. Anyone and everyone has the power to go through our chain of thought, disagree where they disagree, and use our information and hard work to draw their own conclusions. Furthermore, our reviews are open for all the world to see, comment on, criticize, and improve.

There is no easy way to make giving decisions, and there is no giving decision that is ironclad or close to it. This is probably why grantmakers tend not to share what goes into their decisions; it is also precisely why it is so important to do so. With problems involving this much difficulty and judgment, the single most valuable quality of an evaluation is that it be clear. That is our explicit focus, and if we are ever failing in it we want to hear about it.

Note: I am out of time right now, but will answer the third of the current set of questions within the next couple of days.