Monday, September 20, 2010

A Primer on the Nashville Incentive Pay Experiment, Part 2

Part 2: What to Look For

The results from the Nashville incentive pay experiment are due to be released tomorrow (see last week's post for background info on the experiment) -- here are a few things to keep an eye out for in the final report:


Stability of scores

One of the known issues with value-added scores is their high variability from year to year.  Researchers were worried about the effects of "statistical noise" and random variation in scores before the start of the experiment.  In practical terms, you'll want to know how many teachers received a bonus all three years versus how many earned a bonus in only one or two of the three years -- and how well teachers who ever earned a bonus did across all three years (e.g. did a teacher earn a bonus for being in the 85th percentile one year but then score in the 40th percentile the other two years?).  In other words, were bonuses going to the same teachers who consistently outperformed their peers, or were they handed out essentially at random each year?
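To make that concrete, here's a toy simulation -- purely illustrative, with nothing drawn from the actual experiment; the 85th-percentile cutoff echoes the example above, and the signal/noise split is an invented assumption -- showing how much bonus churn you'd see if scores were pure noise versus partly a stable teacher effect:

```python
import numpy as np

# Toy simulation (not POINT data): how much year-to-year bonus churn
# would pure noise produce, versus a partly stable teacher effect?
rng = np.random.default_rng(0)
n_teachers, n_years = 300, 3
true_effect = rng.normal(0, 1, n_teachers)        # stable teacher quality

for signal_share in (0.0, 0.5):                   # 0.0 = scores are pure noise
    noise = rng.normal(0, 1, (n_years, n_teachers))
    scores = signal_share * true_effect + (1 - signal_share) * noise
    cutoff = np.percentile(scores, 85, axis=1, keepdims=True)
    won = scores >= cutoff                        # bonus winners each year
    wins = won.sum(axis=0)                        # bonuses per teacher
    print(f"signal share {signal_share}: "
          f"won all 3 years: {(wins == 3).sum()}, "
          f"won exactly 1 year: {(wins == 1).sum()}")
```

Under pure noise almost nobody wins all three years while roughly a third of teachers win exactly once; the more the score reflects a stable teacher effect, the more the same names repeat.  The report's win/repeat-win counts should tell us which world we're in.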


What, if anything, did teachers do differently?

To me, this is the most interesting question.  Did treatment group teachers report putting more effort into teaching because of the incentive?  Did they assign more homework?  Did they crack down more on misbehaving students?  Did they work harder to get certain students out of their class?  Did they focus more on their math class than their other classes (some of the teachers taught multiple subjects)?  Did they attend more PD?  Did they spend more time on test prep?  Or did they not bother to change at all?  The headline numbers are going to get all the attention (i.e. did students of treatment group teachers score higher?), but I think answering this question is far more important -- both because it tells us why treatment teachers did, or didn't, do better and because it gives us valuable insights into how teachers think and how their actions influence student achievement.


Teacher reaction to receipt/non-receipt of bonus

Did teachers who earned a bonus enthusiastically redouble their efforts while those who didn't simply give up and maybe quit?  Or did those who failed to earn a bonus decide to redouble their efforts to make sure they got one the next year while those who earned one became cocky and put their teaching on cruise control?  In terms of the numbers, it will be interesting to see whether there's any divergence between the scores of first-year winners and first-year losers (in terms of bonus receipt) -- does one group go up in the second year and the other down, do both hold steady, or something else?  Of course, if the scores are simply random each year, then we'd expect first-year losers to do better than first-year winners in the 2nd/3rd years because of regression to the mean.
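Regression to the mean is easy to see in a toy simulation (again, all parameters here are assumptions for illustration, not POINT results): select "winners" on a noisy year-one score, and their year-two average falls back toward their underlying ability while the "losers" drift up, with no behavior change at all:

```python
import numpy as np

# Toy illustration of regression to the mean: if each year's score is
# mostly noise around a stable teacher effect, first-year "losers"
# gain on first-year "winners" automatically in year two.
rng = np.random.default_rng(1)
n = 1000
true_effect = rng.normal(0, 1, n)
year1 = true_effect + rng.normal(0, 2, n)   # noisy year-1 score
year2 = true_effect + rng.normal(0, 2, n)   # fresh noise in year 2
winners = year1 >= np.percentile(year1, 85)

print("winners: year 1 mean %.2f -> year 2 mean %.2f"
      % (year1[winners].mean(), year2[winners].mean()))
print("losers:  year 1 mean %.2f -> year 2 mean %.2f"
      % (year1[~winners].mean(), year2[~winners].mean()))
```

So a gap that closes between first-year winners and losers isn't, by itself, evidence that losing a bonus motivated anyone -- the report would need to separate that statistical artifact from a real behavioral response.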


Demographics of winning/losing teachers

The bonuses are not based on a value-added measure that attempts to control for everything and isolate the individual teacher's effect.  The only thing the computed score takes into account is the prior achievement of the students.  So it will be interesting to see whether winning teachers taught in better schools, were more experienced, had smaller classes, used a different curriculum, had lower student transience, or differed on other factors that might influence students' gains on the state math test.


Performance in different types of math classes

Were teachers of advanced algebra classes more or less likely to earn bonuses than teachers of remedial math classes?  Since the formula for determining bonuses is somewhat simplified, it may be a lot easier to get low-performing students to advance x points than high-performing students, especially if there's a ceiling effect.  Or it may be that some courses are better aligned with the content of that year's state test.
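Here's a rough sketch of how a ceiling could produce that pattern -- the 100-point scale, starting scores, and growth numbers are invented for illustration, not taken from the Tennessee test.  Give every student the same true growth, and the capped test still reports smaller gains for the advanced class:

```python
import numpy as np

# Toy ceiling-effect sketch (invented scale, not the actual state test):
# a test capped at 100 can't register the full growth of students who
# start near the top, so their measured gains shrink mechanically.
rng = np.random.default_rng(2)
true_growth = 10                              # identical for everyone
low = rng.normal(40, 5, 500)                  # remedial class pretest
high = rng.normal(92, 5, 500)                 # advanced class pretest
post_low = np.minimum(low + true_growth + rng.normal(0, 3, 500), 100)
post_high = np.minimum(high + true_growth + rng.normal(0, 3, 500), 100)

print("measured gain, remedial class: %.1f points" % (post_low - low).mean())
print("measured gain, advanced class: %.1f points" % (post_high - high).mean())
```

If something like this is at work, bonus rates by course type should show it.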


Improvement of scores

For merit pay systems to transform our schools, we need teachers to improve -- and continue to improve -- while they're eligible for these bonuses.  Why?  Let's say that the treatment teachers, as a group, historically earned scores that ranked them in the 50th percentile each year.  Now let's say that under the incentive they average a score in the 60th percentile each year.  That would appear to be strong evidence that incentives make teachers better (at least as measured by gains on Tennessee's state tests, anyway), but it would offer only limited hope for the future because it would indicate a one-time boost to scores rather than ongoing improvement.  Now suppose another experiment -- on a professional development system of some sort -- yields growth instead of simply a step up: teachers are in the 50th percentile the first year, the 55th the second, the 60th the third, and the 65th the fourth . . . that system would offer more promise for sustained improvement in school success.


Keep your eyes open for some more pre- and post-release analysis of the experiment . . .
