Monday, August 16, 2010

LA's Value-Added Kerfuffle

The LA Times ran an article Sunday that's the first in a series leading up to the release of teacher effectiveness scores, based on a value-added model, for all 3rd-5th grade math/ELA teachers -- a move that has already created quite a stir and will likely continue to cause a ruckus.  In short, the Times got its hands on 6 years of test data and hired an outside stats guru (the RAND Corporation's Richard Buddin) to analyze the data and rank thousands of teachers by their effectiveness at raising test scores, controlling for a number of factors.
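
For readers who want a concrete sense of what "controlling for a number of factors" looks like in practice, here's a minimal sketch of a generic value-added regression -- current score on prior score, a student characteristic, and teacher indicators.  This is an illustration under my own assumptions (hypothetical, simulated data and variable names), not Buddin's actual specification.

```python
# A minimal sketch of a generic value-added regression.
# Hypothetical, simulated data -- NOT the LA Times data or Buddin's model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
teachers = [f"T{i}" for i in range(25)]
df = pd.DataFrame({
    "prior_score": rng.normal(0, 1, n),      # last year's standardized score
    "low_income": rng.integers(0, 2, n),     # an example student covariate
    "teacher": rng.choice(teachers, n),
})
# Simulate current scores: persistence of prior achievement + teacher effect + noise
true_effect = {t: rng.normal(0, 0.2) for t in teachers}
df["score"] = (0.7 * df["prior_score"]
               - 0.1 * df["low_income"]
               + df["teacher"].map(true_effect)
               + rng.normal(0, 0.5, n))

# Teacher dummies absorb each teacher's average contribution, all else equal
model = smf.ols("score ~ prior_score + low_income + C(teacher)", data=df).fit()
print(model.params.filter(like="teacher").sort_values().tail())  # highest estimated effects
```

The point of all the controls is that the teacher coefficients are meant to capture what's left over after prior achievement and student background are accounted for.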

The general tenor of the article suggests that value-added scores are an underutilized resource and that using them more could make huge differences.  The article notes in different ways that teachers are very, very important, that the right ones can make all the difference, and that we should be focusing more on hiring/training better teachers and firing worse ones.  None of which is in any way unusual to read in an article on education these days -- and none of which is completely without merit.  Teachers are important, and we do need to do a better job of ensuring that we have better teachers in our schools -- especially the schools with the most disadvantaged populations.

If you read the article carefully and then the background report on the methodology, however, a few things jump out:

*Teacher quality varies widely within schools -- just as with test scores, there's far more variation within schools than across schools ("Teachers are slightly more effective in high- than in low-API schools, but the gap is small, and the variance across schools is large").  Which means that the highest performing schools don't have all the best teachers and the lowest performing schools don't have all the worst teachers.  Which means that something other than teacher quality is causing schools to be low and high performing.  Which means we should probably focus our attention on more than just teacher quality.

*There's an extremely weak correlation between how the schools fare in the state API rating system and how they fare in a measure of "school effects" that controls for all sorts of factors.  As Buddin writes, "About a fourth of low-API schools have above average school value added relative to other elementary schools in the district.  Similarly, about a fourth of the highest-quartile API schools have below average school effectiveness.  The overall message is that many schools with low achievement levels are producing strong achievement gains and many schools with high achievement levels are producing weak achievement gains for their students."

*I'm not sure exactly how large the teacher effects are, but judging from the information provided, with the exception of a few outliers they don't appear to be earth-shatteringly huge.  The methodology paper says that a student with a teacher one standard deviation above normal would move from the 50th to the 58th percentile in ELA.  If I'm doing my math right (which I might not be -- it's late; see the quick check after this list), that means that 2/3 of teachers, on average, move their students up or down less than one-fifth of a standard deviation each year.  The article mentions a teacher ranked among the top 5% of all elementary school teachers whose students gain, on average, 4 and 5 percentile points in ELA and math in a given year.

*The article mentions a teacher held in high esteem at one of the highest-scoring schools who performs far below average according to the value-added scores.  According to the article, her principal thinks she's a great teacher, as do the kids and parents in her school.  This means that either a.) principals, kids, and parents aren't good judges of teacher quality (at least sometimes), and/or b.) what people define as a good teacher only somewhat overlaps with what teachers can do to boost value-added scores.
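
Since I flagged my late-night arithmetic above, here's a quick back-of-the-envelope check of the 50th-to-58th-percentile figure, assuming normally distributed scores and teacher effects (my assumption, not necessarily the paper's):

```python
# Back-of-the-envelope check of the percentile math (assumes normal distributions).
from scipy.stats import norm

# How many student-level SDs is the move from the 50th to the 58th percentile?
shift_sd = norm.ppf(0.58) - norm.ppf(0.50)
print(round(shift_sd, 2))  # ~0.20, i.e. about one-fifth of a standard deviation

# Share of teachers within one SD of the average teacher effect -- the ones who,
# by this logic, move students less than ~0.2 SD in either direction:
print(round(norm.cdf(1) - norm.cdf(-1), 2))  # ~0.68, i.e. roughly two-thirds
```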

In future articles I'd really like to see a better description/graphic of how large the differences in impact between teachers actually are.  From what I've read so far, it looks like the vast majority of teachers aren't really all that far apart -- especially considering that previous research has found that you can't simply add one year of teacher effects from a great teacher to the next year's effects from another great teacher (e.g. having three straight teachers who each boost scores 10 percentile points on average won't boost your score 30 percentile points).  I'd also like to see more on the stability of these results on a year-to-year basis -- previous research has found one year's value-added scores to be only loosely correlated with the previous year's scores (I think the latest paper on the topic found that it took three years of data to compute a stable score, which makes it hard to use value-added scores for yearly hiring or bonus decisions).
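
To make the stability concern concrete, here's a toy simulation with entirely made-up numbers (not estimates from the LA data): if a single year's measured value-added is a teacher's true effect plus classroom-level noise of comparable size, the year-to-year correlation of the measured scores comes out well below 1 -- which is why multi-year averages are usually recommended.

```python
# Toy illustration of why single-year value-added estimates bounce around.
# Parameters are made up for illustration -- not estimates from the LA data.
import numpy as np

rng = np.random.default_rng(1)
n_teachers = 2000
true_effect = rng.normal(0, 1, n_teachers)   # stable "true" teacher effect
noise_sd = 1.5                               # sampling noise from one class of ~25 kids

year1 = true_effect + rng.normal(0, noise_sd, n_teachers)
year2 = true_effect + rng.normal(0, noise_sd, n_teachers)

# With noise this large, the year-to-year correlation lands around 0.3
print(round(float(np.corrcoef(year1, year2)[0, 1]), 2))
```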

Also, don't forget that value-added scores a.) only represent part of what teachers do and b.) currently only apply to a small fraction of teachers.  If we consider elementary schools to be K-5, only grade 3-5 math/ELA teachers had value-added scores -- the majority of teachers in elementary schools teach either K-2 or something other than math/ELA... so this use of value-added is no magic bullet.

Lastly, I'd like to add a note about the practical application of these findings.  When we do a rigorous statistical analysis of teacher effectiveness, we control for all sorts of things, from previous test scores to the test scores of other kids in the class, and so on.  In short, the goal is to say "everything else equal, teacher A will raise test scores by x points more than teacher B".  But in real life, everything else isn't equal.  So even though the results indicate that the teachers in the worst schools are about as good as the teachers in the best schools, practically that doesn't mean that parents will be (or should be) any more likely to want their kids to attend the worst schools.  Mr. Jones might raise the average kid's score by 20 points, all else being equal, but that doesn't mean your kid's score is going to go up by 20 points in his class.

Keep your eyes on the situation, because I guarantee you there will be lots of exaggerated responses from people on both sides of the issue.  Just remember: value-added scores aren't completely worthless, but they also fall far short of solving all of our problems.

6 comments:

Attorney DC said...

As a former teacher, I'm very wary of value-added scores being used to rate teachers in our public schools. Having taught in several different schools, I realized that many different factors influence a student's academic achievement. For example, if a student suffers a tragedy at home, his or her interest in school may wane and his or her grades may suffer, notwithstanding wonderful teachers. Or, for another example, if a handful of disruptive students are allowed to remain in the classroom despite their behavior, they will negatively affect their classmates' ability to learn.

In addition to the issue of outside variables, I don't believe that value-added methods have been shown to be sufficiently reliable for use in evaluating educators. This is just another attempt to use business models and ideas in an arena that cannot be judged as a business, with "end products" and widgets.

Roger Sweeny said...

Attorney DC,

If schools don't have "end products," if we don't care what those people who attend them turn into, what is the purpose of having schools at all? I suppose the easy answer is "day care." Children are taken care of in a safe and relatively nurturing environment during the day, allowing their parents to get a break or earn money. But most arguments for schools involve students learning things.

If we do care about students learning things, value-added systems are an attempt to measure how much individual teachers have contributed to students' learning. They are, of course, imperfect, but since they are so new, they may well get better with time.

It is often forgotten that most school systems already have a system that ranks teachers and pays some more than others. It pays more to the teacher who has taken more education courses. The question isn't whether value-added systems are good. It's whether they are better than the present system. Perhaps surprisingly, I have never heard an opponent of value-added systems arguing for egalitarianism: that we should pay all teachers the same, or that seniority should be the only determinant of salary.

Attorney DC said...

Roger: My point is that children aren't "widgets" and each teacher is not randomly assigned a set of identical materials. If two teachers are each assigned classes with a similar mix of ability, but one teacher has an emotionally disturbed student who continually interrupts the class and cannot be removed due to IDEA regulations, I'd be surprised if that teacher's students didn't end up learning less than the other teacher's. Should teacher #1 get less pay than teacher #2 in this situation? If anything, I'd argue that teacher #1 should get more pay, because trying to teach a regular education class with a seriously emotionally disturbed student is probably more taxing than teaching the other class.

My point is that simply saying Teacher #1's scores went up X points this year and Teacher #2's scores went up Y points this year doesn't say much about whether Teacher #1 is a better educator than Teacher #2. I'd bet all it would do is make teachers shy away from teaching classes with difficult students, including those with IEPs (indicating learning and/or emotional disabilities), lower-income students, and students with English as a second language.

Roger Sweeny said...

Attorney DC,

I agree--and I would also pay teacher #1 more. An even better solution would be to get the emotionally disturbed student to an environment where (s)he can learn and not keep others from learning. If that means changing IDEA, so be it. Our unions and ed profs should be pushing for change in IDEA, not accepting it and then dismissing value-added out of hand.

Of course, since IDEA won't be changed in time for the next school year, any value-added system would have to take into account the situation you describe.

I find it very sad that we accept the situation of the disruptive child in the classroom.

Anonymous said...

Great discussion -- especially around IDEA. I am all for these kids learning, but not at the expense of others in the classroom... why is this such a sensitive issue? Educate all kids in the appropriate environment. Mainstreaming may not be right for all kids...

On value-added -- it is not new. It has been around a couple of decades, I believe. Tennessee has been using it for quite some time. I just believe it has not been used that much in other states...

Do your research on it...

Attorney DC said...

Roger: Thanks for responding to my comments. The way I see it, the problem with your suggested solution (that the value-added scores take special education students into account) is that the current VAMs do NOT take these things into account. There are so many variables that affect student learning that (especially with a small sample size of 20-30 students per year) the VAMs have little reliability in attributing any increase or decrease in student achievement to the effect of that particular teacher. I believe that studies show that a teacher who posts big score increases in one year has a decent chance of posting much smaller increases the following year (and vice versa). All I can say is that I'm glad I'm not teaching anymore in this environment!