The GiveWell Blog

Volunteer tutoring program

Via Joanne Jacobs: a large randomized controlled trial found statistically significant effects of a volunteer tutoring program on reading skills.

The effect size (.1-.16 standard deviations on 3 measures; insignificant on one other – see page 13 of the full study) is in the same ballpark as the effect observed in a recent study of vouchers in D.C. (which we discussed here) – yet this was a 24-week intervention, whereas the voucher effect emerged only after 3 years of attending different schools. (Though which one “costs” more is debatable, since the voucher program simply reallocated public funds whereas this one required time and expense outside the standard school system.)

Note that this program reached much younger children (grades 1-3 – page 5) and focused on those with the worst performance – an approach that seems sensible based on how early the achievement gap appears. It also focused exclusively on reading, an approach that appeals to me because – speaking purely from intuition – reading seems like a more universally important skill than the others taught in school.

Though the effect size isn’t huge, it’s an encouraging result.

Positive but underwhelming voucher study

The third-year evaluation of a federally funded school voucher program in D.C. has recently been released (H/T Joanne Jacobs).

We’ve written before that past voucher studies have shown extremely underwhelming (if any) effects, and at first glance this report would seem to be a change in the pattern: “The evaluation found that the OSP improved reading, but not math, achievement overall and for 5 of 10 subgroups of students examined.” But on slightly closer examination, I’m not sure how much there is to be excited about here. A few observations (page numbers refer to the full study, available here):

  • The study found a statistically significant impact on reading performance after year 3 – but no impact on math performance, and no impact on either after years 1 or 2 (page xvii).
  • The impact appears largely to have been confined to students who were less disadvantaged to begin with (see page 36). Students coming from “schools in need of improvement” (i.e., the weakest schools) saw no statistically significant improvement.
  • Even setting these caveats aside, the impact was small, estimated at about .15 standard deviations after 3 years for students who used (not just received) the scholarship. For context, a .15 standard-deviation improvement for a student initially scoring in the 25th percentile would take him/her to roughly the 30th percentile.
  • It strikes me as odd that the estimated effect of using vouchers was so close to the estimated effect of receiving vouchers (.15 vs. .13 standard deviations – see page 36), even though only 41% of recipients consistently used the scholarships and 25% did not use them at all (see page xxiii). The study does not explicitly address the performance of the students who received scholarships but did not use them – if a similar effect showed up there, I’d worry that randomization wasn’t carried out as intended. (A back-of-the-envelope check follows this list.)

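For readers who want to check these numbers, here is a minimal sketch in Python (illustrative arithmetic only). The percentile claim in the third bullet follows directly from the normal distribution, and the receive-vs.-use gap in the fourth can be sanity-checked with the standard Bloom “no-show” adjustment, which divides the effect of receiving (the intent-to-treat estimate) by the usage rate. The .13/.15 estimates and usage figures are from the study; which usage rate to divide by is my assumption, since I’m not sure how the study defines “use.”

    from scipy.stats import norm

    # Percentile claim: a .15 SD gain moves a 25th-percentile student
    # to roughly the 30th percentile.
    z_before = norm.ppf(0.25)   # z-score at the 25th percentile (about -0.67)
    z_after = z_before + 0.15   # add the estimated .15 SD improvement
    print(norm.cdf(z_after))    # ~0.30, i.e. roughly the 30th percentile

    # Bloom adjustment: effect of using = effect of receiving / usage rate.
    itt = 0.13                  # effect of receiving a voucher (page 36)
    print(itt / (1 - 0.25))     # ~0.17 if "use" means used at least once (75%)
    print(itt / 0.41)           # ~0.32 if "use" means used consistently (41%)

If “use” means “used at least once,” the reported .15 is roughly what the .13 implies; if it means consistent use, .15 is far below what the adjustment would predict – which is part of what makes the near-equality puzzling.
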
The study is more encouraging than others I’ve seen about the effects of vouchers, but the picture it gives is still very far from the idea that vouchers (alone) can make a significant dent in the achievement gap.

Clarifying the role of different partners

One major question we’ve struggled to answer is: how do the different NGOs, local governments, and global health partnerships work together to implement a given program?

For example, take mass drug administration of ivermectin to reduce onchocerciasis, one of our favorite programs. In Uganda, the African Programme for Onchocerciasis Control (APOC) works out of the World Health Organization (WHO). The Ugandan Ministry of Health is also involved. So are two nonprofits, The Carter Center and Sight Savers International. What role does each play?

Steven Kasolo, the program officer responsible for Sight Savers’ onchocerciasis program in Uganda, kindly agreed to speak with us on Tuesday morning. A summary (paraphrased highlights, not verbatim) follows. To us, the key points are that:

  • Program implementation is ultimately being done exclusively by (district-level) government officials.
  • APOC, The Carter Center, and Sight Savers International provide funding and in some cases help with reporting, but do not have significant staff on the ground.
  • Funds flow from APOC and the nonprofits to the district-level government officials. The funds are kept in separate accounts and come with separate reporting requirements, but otherwise are largely fungible (according to Steven). From a donor perspective, it doesn’t seem to matter much which of the three donation-accepting organizations (APOC, Sight Savers International, The Carter Center) the funds flow through, given that they’re funding this project.

Elie: What role does SSI [Sight Savers International] play in Uganda’s onchocerciasis program?

Steven: SSI has two roles. One is to help with the evaluation; the other is to actually fund the program. In Uganda, The Carter Center and SSI each contribute 25% of the total budget for onchocerciasis, with TCC and SSI responsible for different districts. The government contributes the remaining 50%. Right now, APOC doesn’t fund Uganda’s programs.

Elie: When APOC was funding Uganda’s programs, how did that work?

Steven: The Ministry of Health [MoH] applied to APOC, and APOC sent funds to the Ministry of Health. The MoH sent money to the districts, and it was all government employees who implemented and documented the program. Government employees sent activity reports and budgets to me. I also made site visits to local villages (where the programs were being implemented). Then I sent reports on to the MoH, and the MoH would send them to APOC.

Elie: Where, specifically, do SSI funds go when you fund an onchocerciasis program?

Steven: We send funds straight to the district. Funds are managed by government employees in the district. These are the same people who receive funds from the MoH. The actual treatments are distributed by unpaid community volunteers, following the CDTI (community-directed treatment with ivermectin) process.

New, promising charity: The Stop TB Partnership

As part of our current work on developing-world aid, we’ve completed a preliminary report on The Stop TB Partnership. We still have more work to do, but want to share what we’ve learned thus far.

KIPP and self-selection

The Knowledge Is Power Program (KIPP) is one of our current recommended charities, but I think that Sara Mosle’s critique in Slate is very much worth keeping in mind.

Mosle writes:

While KIPP does have outreach efforts to broaden its applicant pool, only the most determined parents are likely to respond to … sign KIPP’s demanding contract. This dedication suggests a higher value on education within these families, and thus kids better able or willing to learn. And the weakest students, not surprisingly, get disproportionately winnowed. In KIPP’s schools in the San Francisco Bay Area, for example, the worst-performing kids have dropped out (or been expelled) in greater numbers in the higher grades; the result has been to inflate the schools’ grade-to-grade improvement.

We agree that the superior performance of KIPP’s students can’t be taken fully at face value, because they may not be a truly representative set of disadvantaged students. Our analysis concludes that KIPP most likely is making a difference for the students it serves, despite these concerns (and Mosle thinks so as well).

However, just because KIPP is making a difference for the students it serves doesn’t mean its model can be fully generalized to close the achievement gap. For one thing, it isn’t clear how many teachers of the caliber KIPP aims for can be found. For another, KIPP appears to be aimed at a particular kind of student. I think Mosle’s closing concern is right on target:

But since the biggest debate about KIPP, on both the ideological left and right, is whether or not its methods can work for all disadvantaged children (instead of just a handful of self-selecting families), why wouldn’t it—and its financial, ideological, and media backers—have a strong interest in answering this question once and for all by taking on an entire urban area or even, for that matter, a single neighborhood as, say, Geoffrey Canada has tried to do in Harlem with his Harlem Children’s Zone?

There’s something perversely evasive about KIPP’s opening up just one school in Dallas, one school in Albany, N.Y., one school in Oakland, Calif., one school in Charlotte, N.C., one school in Nashville, Tenn., and so on—as if the program recognizes that its best chance at success is to be the exception rather than the rule in any city where it operates.

I believe anyone pointing to KIPP as “the path to closing the achievement gap” is being far too optimistic, although KIPP is a promising way to improve outcomes for the individuals it serves.

Additional GiveWell materials related to KIPP:

  • The summary of our review of KIPP is available here with a link to our full-length review.
  • Our overview of programs aiming to increase equality of opportunity is available here.
  • We’ve blogged about KIPP here and here.

Mistargeted microfinance?

There are many studies attempting to gauge microloans’ impact on borrowers, but most suffer from the problem of selection bias: by comparing participants and non-participants, they may be picking up other differences between these groups (for example, people who participate in microloan programs may be wealthier to begin with, so a study showing that they have higher incomes does not really show that microloans help).
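
To make the selection problem concrete, here is a toy simulation in Python. Everything in it is hypothetical – the numbers are made up, not drawn from any study – and the point is only that when wealthier people are more likely to borrow, a naive participant/non-participant comparison “finds” a sizable effect even though the loans do nothing.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Baseline wealth varies across households.
    wealth = rng.normal(0.0, 1.0, n)

    # Hypothetical selection mechanism: wealthier households are more
    # likely to take a microloan.
    p_borrow = 1.0 / (1.0 + np.exp(-wealth))
    borrows = rng.random(n) < p_borrow

    # Later income depends on baseline wealth plus noise -- not on the loan.
    true_effect = 0.0
    income = wealth + true_effect * borrows + rng.normal(0.0, 1.0, n)

    # The naive comparison attributes the baseline wealth gap to the loan.
    naive = income[borrows].mean() - income[~borrows].mean()
    print(f"true effect: {true_effect}, naive estimate: {naive:.2f}")

With these made-up numbers the naive estimate comes out around 1.4 – all of it selection. Avoiding exactly this is what the study described below was trying to do.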

One of the better-known attempts to avoid this problem was a 1997 study that took advantage of a “landholding requirement” under which people could borrow only if they held less than half an acre of land. This study is one of the few empirically encouraging studies of microloan programs, as we discuss here, although there was some dispute over whether the landholding requirement was strictly enforced.

New analysis by the Center for Global Development’s David Roodman shows that even though the microfinance program attempted to target people with less land, in fact people with more land borrowed significantly more. I agree with Roodman’s general conclusion:

This is not necessarily bad: serving richer clients with larger loans may make it more economical to reach poorer people in the same villages – an example of cross-subsidization. And someone whose main asset is an acre or two of riceland in rural Bangladesh is hardly rich by western standards – and might even make better use of the credit. Still, the “mistargeting” of landed families contradicts the public image of microfinance in Bangladesh as targeting the poorest, and probably needs to be better understood.

Additional GiveWell materials related to microfinance:

  • Our full review of the evidence for microfinance programs is available here.
  • We first blogged about microfinance here and here.
  • Those blog posts rely heavily on a white paper published by the Grameen Foundation, available here (PDF).