The GiveWell Blog

Chess in the Schools

The New York Times recently profiled Chess in the Schools:

The Chess-in-the-Schools program has sought to foster analytical skills on the theory that these will help students succeed academically. The group teaches 20,000 children a year and calculates that it has taught 425,000 children since 1986. Children gather to learn the game at the group’s headquarters in Manhattan.

It seems like 20 years and 425,000 children is quite a lot of investment in the “theory that [chess] will help students succeed academically.” The Times feature provides a calming justification for the investment: “Chess helps promote intellectual growth and has been shown to improve academic performance.” Let’s look at the evidence for this claim.

The study we found

An early-1990s study looks at achievement test scores of chess-playing students over two years in District 9 in the Bronx. It observes that (a) the overall average reading score improved among chess players by about 5 percentile points, but didn’t improve among the remaining District 9 students; (b) 15 of 22 second-year participants improved their reading scores by some amount, while only 491 of 1118 non-participants in the district, and only 245 of 655 non-participants with high reading scores, improved.
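
To make the comparison easier to follow, here are the improvement rates implied by the counts above (a quick calculation using only the figures reported in the study):

    # Improvement rates implied by the District 9 study's reported counts.
    groups = {
        "second-year chess participants": (15, 22),
        "all non-participants in the district": (491, 1118),
        "non-participants with high reading scores": (245, 655),
    }
    for name, (improved, total) in groups.items():
        print(f"{name}: {improved}/{total} = {improved / total:.0%} improved")
    # second-year chess participants: 15/22 = 68% improved
    # all non-participants in the district: 491/1118 = 44% improved
    # non-participants with high reading scores: 245/655 = 37% improved

So the headline contrast is roughly 68% of second-year chess players improving versus 44% of non-participants (and 37% of the higher-scoring non-participants), which makes the questions below about how those comparison groups were chosen all the more important.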

This study is riddled with major problems:

  • The numbers the researchers choose to compare seem arbitrary and possibly cherry-picked. Why do they report the “percentage who improved” only for second-year chess players, and not for participants in both years? Why do they compare the second-year students to “high-performing nonparticipants” but not offer the same comparison for all students?
  • The problem of selection bias is unusually obvious here. They’re comparing kids who volunteered to play chess against those who didn’t. Think of the chess club members at your school, and ask yourself whether they would have been just like all the other kids had chess club not been offered. There’s no reason to think these two groups of kids are otherwise similar or would be expected to respond similarly to school (the simulation sketched just after this list shows how self-selection alone can produce an apparent effect).
  • This is a study of somewhere between 22 and 53 students in a single district in the early 1990s. Even if the study were highly rigorous, it would still be a long way from proving that “chess helps promote intellectual growth.”
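
To see why the second bullet above matters so much, here is a minimal simulation with entirely made-up numbers (not data from this study or this program). In it, chess has zero effect on scores, yet the kids who opt in still show larger gains, simply because of who signs up:

    import random

    random.seed(0)

    # Hypothetical district: "aptitude" drives both test-score gains and the
    # decision to join an after-school chess program. The program itself has
    # no effect on scores in this simulation.
    students = [{"aptitude": random.gauss(0, 1)} for _ in range(5000)]
    for s in students:
        s["joins_chess"] = random.random() < (0.15 if s["aptitude"] > 0.5 else 0.05)
        s["score_gain"] = 2.0 * s["aptitude"] + random.gauss(0, 3)  # no chess term

    players = [s["score_gain"] for s in students if s["joins_chess"]]
    others = [s["score_gain"] for s in students if not s["joins_chess"]]

    print(f"average gain, chess players: {sum(players) / len(players):+.2f}")
    print(f"average gain, everyone else: {sum(others) / len(others):+.2f}")
    # The chess group shows a bigger average gain even though chess did nothing;
    # the gap comes entirely from who chose to sign up.

A study that simply compares volunteers to everyone else cannot distinguish this scenario from a real effect.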

The studies we couldn’t find

The Chess-in-the-Schools website states:

In 1991 and 1996, Stuart M. Margulies, Ph.D., a noted educational psychologist, conducted two studies examining the effects of chess on children’s reading scores. The studies demonstrated that students who participated in the chess program showed improved scores on standardized tests. The gains were even greater among children with low or average initial scores. Children who were in the non-chess playing control group showed no gains.

Another study in 1999, measured the impact of chess on the emotional intelligence of fifth graders. The results of the study were striking. The overall success rate in handling real life situations with emotional intelligence was 91.4% for the children who participated in the Chess-in-the-Schools program. In contrast, those who were not involved with the chess program had an average overall success rate of only 64.4%.

We’re guessing that the study we’re looking at is an update of the 1991 study since it references no previous studies and discusses results from 1991 and 1992. We can’t find the other studies anywhere. Chess-in-the-Schools provides neither links nor citations.

Even in the best-case scenario, it’s apparently been at least a decade since the last test of the Chess-in-the-Schools model.

“Chess helps promote intellectual growth and has been shown to improve academic performance?”

In researching charities, one of the more discouraging things we’ve learned is how little support it takes for a statement like “Chess helps promote intellectual growth and has been shown to improve academic performance” to be repeated by charities, donors, and even the media.

As far as we can tell, Chess-in-the-Schools is not a demonstrated success story. It’s just been promoted and scaled up like one.

Perspectives on donor irrationality

Jeanne Panossian left two very interesting comments on our blog discussing donor irrationality, from the point of view of someone running a small charity.

  • On donor illusions: “… It takes extraordinary ethical fortitude to openly tell people how complicated your organization is, normally a donor has made their basic decision in the first 15 to 30 seconds of a conversation …”
  • On the administrative expense ratio: “… While I love to talk all day about [my charity’s impact], I have learned it is not worth my while. Waving a flag and telling people how we pay for our paper clips yields me more funds faster …”

Recommended reading.

My greatest fear about microfinance

How much of microfinance’s popularity in the world of philanthropy comes straight from this story?

I was shocked to discover a woman in the village, borrowing less than a dollar from the money-lender, on the condition that he would have the exclusive right to buy all she produces at the price he decides. This, to me, was a way of recruiting slave labor.

I decided to make a list of the victims of this money-lending “business” in the village next door to our campus.

When my list was done, it had the names of 42 victims who borrowed a total amount of US $27. I offered US $27 from my own pocket to get these victims out of the clutches of those money-lenders. The excitement that was created among the people by this small action got me further involved in it. If I could make so many people so happy with such a tiny amount of money, why not do more of it?

It’s an amazing and moving story. But it’s a story about one giver and 42 beneficiaries in one village.

In 2007, the Grameen Foundation alone saw over $16 million in donations and claimed over 7 million clients served (see its annual report (PDF)). It works in 32 countries on 4 continents. And it’s still putting that $27 front and center.

We know little about microfinance’s actual impact, and much of what we do “know” comes down to myths (myth #6, in particular, seems oddly fitted to the story of the original $27). We’ve seen little general interest in pushing back skeptically on the appealing stories charities tell.

Dr. Yunus’s original loan was interest-free, while today’s microloans charge interest in the 30%/year range. We know that the for-profit participants in microfinance have been participating for reasons other than one great story.

I don’t feel nearly so confident about the philanthropic participants.

The Carter Center

Early in 2009, we were extremely excited about The Carter Center. It seemed so strong that we devoted weeks to understanding it in depth.

As discussed in a blog post we wrote at the time, several of its programs work on extremely promising “neglected tropical disease control” activities, and there’s a truly unusual amount of disclosure from these programs. The Carter Center appeared to be near the top of the heap both for what it’s doing and for how it’s sharing information. To boot, it was directly involved in one of the most-cited global health success stories: the near-eradication of guinea worm.

The Carter Center also has several programs that don’t seem as promising. At first we nearly dismissed or overlooked these programs. But as we dug deeper, we realized that just because a charity emphasizes its best programs doesn’t mean it’s spending most of its funds on them. Oddly, the one piece of information we couldn’t find anywhere on its website was how much of its budget is allocated to each program. The back-of-the-envelope calculations we did surprised us: the heavily documented river blindness program seemed as though it must be tiny, while the agriculture program hadn’t published anything since 2005 but appeared at that time to be taking up around 10% of the total budget.

We got in touch with The Carter Center and asked for a budget breakdown by program. We spoke to a senior representative and followed up with him four times. We even asked a connection of ours, a major Carter Center donor, to request the information. The request kept getting put off, and today we still don’t have this information.

To be honest, at this point we don’t know whether the “flagship” disease-control programs are at the core of the Carter Center’s work or act more as a “hook” for donors while it focuses on things like fellowships for mental health journalism. And we have no sense of what a donor accomplishes by giving it a small gift (a gift that, however it’s officially designated, is likely to end up funding whatever the Carter Center wants it to fund, due to fungibility).

To give a sense of the variety of program type and quality, here’s where we stand on a few select programs:

We wish the Carter Center were as transparent about its budget as it is about (some of) its program activities.

Medicine and philanthropy

David Leonhardt’s excellent piece on health care reminded me of the debates within philanthropy.

For most of human history … [doctors’] treatments consisted of inducing vomiting or diarrhea and, most common of all, bleeding their patients … Yet patients continued to go to doctors, and many continued to put great faith in medicine … There was a strong intuitive logic behind those old treatments; they seemed to be ridding the body of its ills. They made a lot more sense on their face than the abstract theories about germs and viruses that began to appear in the late 19th century … So the victory of those theories would require a struggle. The doctors and scientists who tried to overturn centuries of intuitive wisdom were often met with scorn. Hippocrates himself wrote that a physician’s judgment mattered more than any external measurement, and the practice of medicine was long organized accordingly.

The single most common retort to the GiveWell approach is that the staffers, volunteers, and even donors “know” the programs they fund are working, based on their intuitions.

Then again, many highly intuitive programs have been shown not to work, and there’s good reason to distrust intuition in this area:

Behavioral researchers have come to believe that there is a clear pattern to when intuition works and when it doesn’t. “Intuitive diagnosis is reliable when people have a lot of relevant feedback,” says Daniel Kahneman, a Nobel laureate in economics who recently collaborated on a project about intuition with Klein. People need a great deal of experience, and the feedback from these experiences — whether a treatment is working, say — needs to come quickly and to be clear. “But,” Kahneman adds, “people are very often willing to make intuitive diagnoses even when they’re very likely to be wrong.” When doctors have been asked to estimate the likelihood of a treatment succeeding based on experience, for example, they give wildly divergent answers. Medicine is full of such examples.

Feedback is near-nonexistent in the field of philanthropy.

As Toyota built better cars than its competition for less money, it won new customers. Some rivals matched its successes (as Honda did); some lost market share (as Detroit did). No such dynamic exists in health care. William Lewis, a former director of the McKinsey Global Institute who studies productivity, says that the economic benefits from the various quality movements have been quite large but that they are also largely in the past. Most industries have incorporated Deming’s big ideas and are now making only incremental progress. “However, there is one big exception,” Lewis adds. “You guessed it: health care.”

I can think of another exception.

Evaluating microsavings

We’re excited about the idea of microsavings as opposed to microlending. But it isn’t enough to see that an organization offers microsavings. We need to know:

  1. Are savings services being provided relatively efficiently? How many clients are served per dollar of operating expenses?
  2. Are clients able to access their funds when they need them? We have heard anecdotal concerns about client dissatisfaction with the difficulty or bureaucracy involved in accessing savings. In addition to the proxies for satisfaction discussed in our earlier post, we’d like to see the “turnover” of client accounts: does money go in and out, or sit stagnant?
  3. What are the interest rates/fees on the accounts? Excessive fees would concern us, but so would extremely generous interest rates, which would make the program less like a savings account than like giving out cash.

Some of the questions at our earlier post (regarding profitability, client income levels, and client satisfaction) also apply.
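
As a rough sketch of how the first two questions might be quantified, here is a minimal example; the field names and figures below are hypothetical, not data from any savings provider:

    from dataclasses import dataclass

    @dataclass
    class SavingsProgramYear:
        """One year of (hypothetical) reporting from a microsavings provider."""
        active_clients: int
        operating_expenses_usd: float
        total_deposits_usd: float      # money flowing into client accounts
        total_withdrawals_usd: float   # money flowing back out to clients
        avg_account_balance_usd: float

    def clients_per_dollar(p: SavingsProgramYear) -> float:
        """Question 1: clients served per dollar of operating expenses."""
        return p.active_clients / p.operating_expenses_usd

    def account_turnover(p: SavingsProgramYear) -> float:
        """A proxy for question 2: deposits plus withdrawals relative to balances held.
        Very low turnover could mean accounts sit stagnant or are hard to access."""
        total_balances = p.active_clients * p.avg_account_balance_usd
        return (p.total_deposits_usd + p.total_withdrawals_usd) / total_balances

    # Entirely made-up example figures:
    example = SavingsProgramYear(
        active_clients=40_000,
        operating_expenses_usd=500_000,
        total_deposits_usd=3_000_000,
        total_withdrawals_usd=2_400_000,
        avg_account_balance_usd=35.0,
    )
    print(f"clients per dollar of operating expenses: {clients_per_dollar(example):.3f}")
    print(f"account turnover (flows / balances): {account_turnover(example):.2f}")

Numbers like these only become meaningful in comparison across providers and over time, but they are the kind of figures we’d want organizations to report.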