We’ve written before that past voucher studies have shown extremely underwhelming (if any) effects, and at first glance this report would seem to break that pattern: “The evaluation found that the OSP improved reading, but not math, achievement overall and for 5 of 10 subgroups of students examined.” But on closer examination, I’m not sure how much there is to be excited about here. A few observations (page numbers refer to the full study, available here):
- The study found a statistically significant impact on reading performance after year 3 – but no impact on math performance, and no impact on either after years 1 or 2 (xvii).
- The impact appears largely to have been confined to students who were less disadvantaged to begin with (see page 36). Students coming from “schools in need of improvement” (i.e., the weakest schools) saw no statistically significant improvement.
- Even setting all of these caveats aside, the impact was small: about .15 standard deviations after 3 years for students who used (not just received) the scholarship. For context, a .15 standard-deviation improvement would take a student initially scoring at the 25th percentile up to about the 30th percentile.
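The percentile arithmetic above can be checked directly. This is a quick sketch assuming test scores are normally distributed (the standard assumption behind converting standard-deviation effects into percentiles); it uses only the Python standard library, with a bisection-based inverse CDF since `scipy` isn’t needed:

```python
from math import erf, sqrt

def phi(x):
    # Standard normal CDF
    return 0.5 * (1 + erf(x / sqrt(2)))

def inv_phi(p, lo=-10.0, hi=10.0):
    # Inverse of phi via bisection (avoids any non-stdlib dependency)
    for _ in range(100):
        mid = (lo + hi) / 2
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

start_z = inv_phi(0.25)         # z-score at the 25th percentile, about -0.674
new_pct = phi(start_z + 0.15)   # add the .15 SD gain
print(round(new_pct * 100))     # prints 30
```

So a .15 SD gain moves a 25th-percentile student to roughly the 30th percentile, matching the figure in the text.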
- It strikes me as odd that the estimated effect of using vouchers was so close to the estimated effect of receiving vouchers (.15 vs. .13 standard deviations – see page 36), even though only 41% of recipients consistently used the scholarships and 25% did not use them at all (see page xxiii). The study does not explicitly address the performance of the students who received scholarships but did not use them – if a similar effect showed up there, I’d worry that randomization wasn’t carried out as intended.
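One way to make this puzzle concrete is the standard Bloom (instrumental-variables) adjustment, under which the effect on users equals the effect on recipients divided by the takeup rate, assuming non-users get zero effect from the offer. This is my own back-of-envelope framing, not a calculation from the study, and the study’s definition of “use” may differ; but under that assumption, the reported receipt effect of .13 SD implies a user effect well above the reported .15:

```python
def bloom_tot(itt, takeup):
    # Effect on users (treatment-on-treated) = effect on recipients
    # (intent-to-treat) / takeup rate, assuming non-users see zero effect.
    return itt / takeup

itt = 0.13  # effect of receiving a scholarship, in SD units (p. 36)

# With 41% consistent use, the implied effect on users would be ~0.32 SD;
# even counting everyone who ever used a scholarship (75%), it would be ~0.17.
print(round(bloom_tot(itt, 0.41), 2))  # prints 0.32
print(round(bloom_tot(itt, 0.75), 2))  # prints 0.17
```

Either figure is larger than the reported .15, which is why the closeness of the two estimates seems strange unless non-users also saw gains.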
The study is more encouraging than others I’ve seen on the effects of vouchers, but it still falls far short of showing that vouchers (alone) can make a significant dent in the achievement gap.