The National Center for Education Evaluation released a report on the D.C. Opportunity Scholars (voucher) program yesterday.
The short version is that it found few, if any, academic gains for those awarded vouchers, no difference in student satisfaction, but an improvement in parental satisfaction. There are, of course, all sorts of caveats (see here for a more complete analysis of the methodology and findings), but that's essentially what the report says.
As Erin Dillon points out, the mixed findings mean that both supporters and opponents of the voucher program can cite the study to support their arguments.
Liam Julian has an interesting reaction over at Flypaper. He essentially argues that the study is irrelevant, and uses the increase in parental satisfaction as evidence of that fact. He also argues that since the voucher program is not doing any harm, we shouldn't discontinue it.
The report and Julian's reaction both got me thinking. As to the latter point, I'm not sure a voucher opponent wouldn't have an equally valid point in saying "it's not doing any good, so it should be discontinued," nor am I sure the report proves that the program's not doing any harm.
But, on to the larger point of whether parental satisfaction, or anything else, proves that the program has been a success or failure:
This is what I'd like to see: before the start of a program such as this one, proponents and opponents of the plan, along with parties with no rooting interest, should define what outcomes would make the program a failure and what outcomes would make it a success. Then we could compare the findings of the program evaluation to these criteria. The way it's currently done, anyone can define success or failure however they like based on the findings of the evaluation.
More importantly, I wonder what kinds of criteria would be placed on these pre-program lists. In other words, how do we really know that schools are succeeding? "Student achievement" (i.e. test scores) is all the rage these days, so I'm sure that would make the list. I'm guessing that people would also include other types of learning, such as critical thinking, along with behavior, safety, parent and student perceptions, attendance, and a host of other things.
There was, to my knowledge anyway, no such list for this program. So we're stuck with the findings and our post-hoc interpretations of them.
Anyway, the one firm finding in the report seems to be that parents are more satisfied with the new schools in which their children are enrolled than they were with their former schools (even though the kids aren't). That certainly seems to be a good thing, but what, exactly, does it mean? Does it mean that parents are simply happier when they get to choose their child's school than when they don't, or does it mean that parents are perceiving positive things outside the scope of the program evaluation?
I could see interpreting this particular finding any number of ways. One could argue that parents care at least as much about other factors as they do about test scores. One could argue that parental satisfaction is ultimately what matters most since parents are ultimately responsible for their children. One could argue that an increase in parental satisfaction is a given in such a program.
In the end, I'm not sure exactly what an increase in parental satisfaction means. If parents are more satisfied, you'd expect that they might become more involved in the school -- or at least become more cooperative with it. But does an increase in parental satisfaction affect students otherwise? In other words, if I enroll in a school that pleases my mother, does that mean I'll do any better? Student satisfaction didn't increase, so it would seem that parent and student satisfaction might not go hand-in-hand. If parent satisfaction increases but student satisfaction doesn't, will the child's behavior change? I'm not sure what the answer is.