Tuesday, September 28, 2010

Exceptions are Exceptional

Many people noted the article on the turnaround of Brockton HS in yesterday's NY Times.  But I didn't notice anybody critiquing the headline.  The piece is titled "4,100 Students Prove 'Small is Better' Rule Wrong".

Yipes.  Regardless of whether small schools really outperform larger ones* or whether Brockton has really turned around, the headline is absurd.  Exceptions don't disprove rules in social science.  Exceptions are, by definition, exceptional.  Can we please take that into account when we discuss education?

Someone driving home safely while drunk doesn't prove that it's not dangerous to drink and drive.  A sole survivor of a plane crash doesn't prove that plane crashes don't kill.  A child from a troubled neighborhood graduating from Harvard doesn't prove that growing up in a troubled neighborhood doesn't impact one's educational performance.  And one large school excelling doesn't mean that small schools aren't generally better.

update: GothamSchools outdoes the Times yet again by linking to the article with a far more responsible phrase -- "Brockton High School in Massachusetts dispels the myth that only small schools can improve."

*the evidence that small schools actually are better is, to my understanding, weak at best -- but that's not relevant to the point I'm making

Thursday, September 23, 2010

The Primary Purpose of Merit Pay

The most popular opinion of the last few days seems to be that the primary purpose of merit pay is to re-shape the teacher labor force by attracting and retaining better teachers. The notion that performance incentives would motivate teachers to perform better in the classroom has been implicitly or explicitly derided as silly and/or unimportant.

Did I miss something? Maybe I need to do some archival research, but I could've sworn that before the release of the results there weren't many merit pay proponents making this argument. But since learning of the lack of effect on standardized test scores in the Nashville experiment, it seems to be the only one I hear.

After learning of the results, Rick Hess wrote that

The second school of thought, and the one that interests serious people, is the proposition that rethinking teacher pay can help us reshape the profession to make it more attractive to talented candidates, more adept at using specialization, more rewarding for accomplished professionals, and a better fit for the twenty-first century labor force.

and the Washington Post quotes Eric Hanushek saying

The biggest role of incentives has to do with selection of who enters and who stays in teaching - i.e., how incentives change the teaching corps through entrance and exits . . . I have always thought that the effort effects were small relative to the potential for getting different teachers. Their study has nothing to say about this more important issue.

and Tom Kane writes:

the impact of the specific incentive they tested depends on what underlies the differences in teacher effectiveness–effort vs. talent and accumulated skill. I’ve never believed that lack of teacher effort–as opposed to talent and skills–was the primary issue underlying poor student achievement gains. Rather, the primary hope for merit pay is that it will encourage talented teachers to remain in the classroom or to enter teaching.

The Obama administration's official position seems to align with that too.  Here's how the same Washington Post article described their response:

"While this is a good study, it only looked at the narrow question of whether more pay motivates teachers to try harder," said Peter Cunningham, assistant U.S. education secretary for communications and outreach. "What we are trying to do is change the culture of teaching by giving all educators the feedback they need to get better while rewarding and incentivizing the best to teach in high-need schools, hard to staff subjects. This study doesn't address that objective."

Maybe I'm wrong and there are more people who would've agreed with these four statements a few days ago than I think, but there were certainly more than a couple of people arguing that performance incentives would increase teachers' motivation, improve their classroom performance, and subsequently increase the academic performance of their students.  I've had conversations with people who've directly told me that lack of motivation is a huge problem in teaching and that providing proper incentives would fix this.

Without more research, I can't tell you whether people have conveniently changed their mind about the primary purpose of performance pay or whether those who believe it should be used primarily to alter the teacher labor force are now simply stepping to the forefront while those who believed in its motivational potential are shrinking into the background.  But I'd guess that it's a little of both.

On the plus side, might everyone now agree that teacher pay should be re-fashioned with the primary goal being to encourage the recruitment and retention of excellent teachers?  Do I hear a consensus emerging?  I guess time will tell . . .

Wednesday, September 22, 2010

Nashville Incentive Pay Experiment: Results Wrap-Up

The National Center on Performance Incentives released the final report on the Nashville performance pay experiment (known as POINT) yesterday.  The press release is available here, and the full report is available here.

The study involved 296 middle school math teachers in Nashville who were assigned to either a treatment group (eligible for bonuses of $5,000, $10,000, or $15,000) or a control group and then tracked for three years.

The main result was that students assigned to bonus-eligible teachers did not perform any better than students assigned to control group teachers.  The lone exception was 5th grade students in years 2 and 3 of the study, but the gains did not persist through the end of 6th grade.  The main portion of the executive summary reads as follows:

POINT was focused on the notion that a significant problem in American education is the absence of appropriate incentives, and that correcting the incentive structure would, in and of itself, constitute an effective intervention that improved student outcomes.
By and large, results did not confirm this hypothesis. While the general trend in middle school mathematics performance was upward over the period of the project, students of teachers randomly assigned to the treatment group (eligible for bonuses) did not outperform students whose teachers were assigned to the control group (not eligible for bonuses).

Prior to implementation, researchers calculated that 55% of teachers would be able to obtain bonuses if their students answered just an additional 2-3 questions correctly on the state test.  Across the three years of the study, 33.6% of bonus-eligible teachers earned a bonus in one or more years.

This means that most teachers did not dramatically improve, even for one year.  It also means that the tendency of test scores to bounce around significantly did not result in different random groups of teachers receiving a bonus each year.  About two-thirds of teachers in the treatment group never received a bonus.  And about two-thirds of the teachers who received a bonus earned one in multiple years.

Of potentially more import are the results from the teacher interviews and surveys.  These will continue to be analyzed in the coming months, but right now the main takeaway point is that teachers in the treatment group really didn't report changing much at all.  About 80% of teachers reported that they were already working as hard as they could before the incentives were implemented and were therefore unable to work any harder afterward.

There are two potential bright spots, however, for merit pay proponents:

1.) The project ran smoothly (e.g. the right teachers received the right bonuses at the right time) and didn't suffer any major backlash.  That this was truly a partnership between union, university, district, and other groups probably helped in this regard.

2.) It's unclear right now, but the bonuses may have had a small effect on the patterns of teacher attrition for those in the bonus group.  27 teachers in the control group left to teach in another district while only 15 teachers in the treatment group did.  The numbers are small, so conclusions are hard to draw, but the most popular criticism of the study by performance pay advocates seems to be that it didn't shed any light on how performance pay might affect teacher recruitment and retention.
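The report's "numbers are small" caveat can be made concrete with a quick significance check on the 27-vs-15 split.  This is a sketch only: the exact group sizes aren't given here, so an even split of the 296 participants (148 per group) is an assumption, and the test below is a generic Fisher's exact test rather than anything from the report itself.

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of every table with the same
    margins that is no more likely than the observed one.
    """
    n = a + b + c + d            # all teachers in the comparison
    row1 = a + b                 # size of the first group
    col1 = a + c                 # total number of leavers

    def pmf(k):
        # Probability of exactly k leavers in the first group
        return comb(col1, k) * comb(n - col1, row1 - k) / comb(n, row1)

    p_obs = pmf(a)
    lo = max(0, row1 + col1 - n)
    hi = min(row1, col1)
    return sum(pmf(k) for k in range(lo, hi + 1) if pmf(k) <= p_obs + 1e-12)

# Assumed split: 296 participants at roughly 148 per group (hypothetical)
p = fisher_exact_2x2(27, 148 - 27, 15, 148 - 15)
```

Under these assumed group sizes the two-sided p-value lands near, but not clearly below, the conventional 0.05 cutoff, which illustrates why the report treats the attrition gap as suggestive rather than conclusive.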

There was also no evidence that treatment group teachers were successful in gaming the system by having more kids (or lower performing kids) removed from their class, preventing new students from transferring into their classes, focusing more efforts on math instruction at the expense of other subjects, or helping their students cheat on the state tests.

What does it mean?

I wrote before that this was one of the most important studies this decade.  That said, it's still just one study.  And we have to be careful about how we extrapolate from the results of any one study.  The main question the study was set up to answer was whether offering performance incentives to individual teachers would result in their students performing better on standardized tests.  The study offered no evidence that this was the case, and little reason to believe that this would be the case for similarly designed incentive systems.

Why is this so?  The main reason seems to be the lack of any major changes by teachers.  But there are a couple of other possibilities.  Just because teachers reported that they didn't change anything doesn't necessarily mean that they didn't change anything.  So it's not out of the realm of possibility that at least some of the teachers made changes but that these changes didn't yield subsequent results.  It's also possible that, despite the fact that teachers received the project fairly well, they wanted to prove the working hypothesis wrong -- that is, that teachers eligible for a bonus, at least on some level, resented the fact that somebody thought they'd teach better if they were offered a carrot, and decided to not work harder despite the carrot looking awfully delicious.  At the same time, the control group teachers could've worked a little harder to prove that they were plenty motivated to teach solely because they wanted their students to succeed.  While the answer is likely a little of all of the above, I tend to think the most likely scenario is that teachers simply weren't all that much more motivated by the prospect of a bonus fairly far down the road (teachers were paid the following November).

What it doesn't mean is that all merit pay schemes forevermore are doomed to abject failure.  We don't know if different types of bonuses awarded in different ways (shorter time spans, group awards, non-monetary awards, etc.) might have a larger effect, and we know little about how performance pay affects the long-term make-up of the teacher labor force.  At the same time, it does call into serious question the application of the overly simplistic homo economicus model used by economists.  My gut feeling is that economists tend to view teacher motivation and the teaching profession in a manner guided too much by basic economic theory and not enough by the literature on the sociology of teaching or the psychology of motivation.

Where do we go from here?

NCPI has another study utilizing team-based incentives that should be out at some point over the next couple of years.  In addition, other non-randomized studies of the myriad incentive systems that have sprouted all over the country over the past couple of years are surely underway as well.  I'm not sure why the NCPI researchers chose to claim that this was "the first scientific study of performance pay ever conducted in the United States," since the definition of "scientific" is hotly debated, but the literature base will certainly continue to grow in the coming years regardless.

In addition to the continuing analysis of the data from this study, there are plenty of opportunities for other researchers and funding agencies to examine questions surrounding the impact of merit pay on the teacher labor force, on school culture, on student outcomes other than test scores, and many other areas.

In the meantime, it's important that merit pay opponents not claim that this study proves once and for all that merit pay does not, and will not ever, work in schools.  If nothing else, it appears likely that better teachers got paid more money than worse teachers -- which is arguably an improvement on the current system.  And, at the same time, it's important that merit pay proponents not claim that this study is meaningless and that we should recklessly proceed with merit pay in schools at breakneck speed.

Merit pay has grown considerably more popular among teachers over the past decade, so it's eminently possible that districts and unions can work together to design and implement better, more nuanced merit pay systems that might have a better chance of success.  I don't think anybody would argue that the status quo is the perfect system, so simply saying that merit pay won't work isn't a solution.  But these results indicate that we should proceed with caution.  The assumption that teachers will work harder for financial incentives is now a dangerous one to make.  As such, the performance pay systems that will undoubtedly continue to emerge should begin with a more nuanced and informed understanding of the practices and motivations of current and prospective teachers.  We can only hope that an informed and open-minded approach from both sides will eventually result in compensation systems that attract and reward good teachers in ways that current teachers find meaningful and fair.

For more information, see my previous posts on the experiment:

Part 1: Background Info
Part 2: What to Look For
Part 3: Why it Matters
Part 4: What We Can Learn
Live-Blog of Results

Tuesday, September 21, 2010

Live-Blogging the Release of the Nashville Performance Pay Experiment Results

Today marks the release of the National Center on Performance Incentives' report on the Nashville performance pay experiment (known as POINT).  The full report will be available on their website after the discussion.  In the meantime, I'm going to provide you with some snapshots of what's being said and written.  Please check out my previous posts on this study as well.  A live streaming video of the press conference is online here.  The press release (which is pretty good) is now available here.  The full report is available here.

Previous posts:
Part 1: Background Info
Part 2: What to Look For
Part 3: Why it Matters
Part 4: What We Can Learn

12:50pm: The press conference has begun.  I'm going to begin by posting the summary statement from the executive summary of the report:

POINT was focused on the notion that a significant problem in American education is the absence of appropriate incentives, and that correcting the incentive structure would, in and of itself, constitute an effective intervention that improved student outcomes.

By and large, results did not confirm this hypothesis. While the general trend in middle school mathematics performance was upward over the period of the project, students of teachers randomly assigned to the treatment group (eligible for bonuses) did not outperform students whose teachers were assigned to the control group (not eligible for bonuses).

Before you get excited, or disappointed, about this, bear in mind what I've written before -- the most important things we can learn from this study aren't what happens to test scores; they're the insights into teacher behavior from the interviews and surveys.  Keep checking back for more details on this, and other, topics.

Also, please keep in mind that this study does not definitively prove either that merit pay systems are a bad idea or a good idea.

1:00pm: The number of teachers who received bonuses remained steady throughout (40, 41, 44), but the number of eligible teachers declined significantly (143, 105, 84) -- meaning that, in the final year, over half of the eligible teachers scored above the historical 80th percentile of teachers and earned a bonus.  This could mean that less successful teachers tended to leave the study -- whether by switching subjects, schools, or careers.  Or it could mean that the tests became easier and more teachers were rewarded.  I would say that it could mean that it took three years for the incentives to have an effect, but the treatment group did not outperform the control group -- even in the final year.
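As a quick sanity check on those counts, the implied year-by-year win rates can be computed directly (a sketch; the winner and eligibility counts are the ones quoted above):

```python
# Bonus winners and bonus-eligible teachers in each of the three years
winners = [40, 41, 44]
eligible = [143, 105, 84]

win_rates = [w / e for w, e in zip(winners, eligible)]
# Roughly 28% in year 1, 39% in year 2, and 52% in year 3 --
# over half of the remaining eligible teachers won in the final year.
```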

from the executive summary: "attrition of teachers from POINT was high. By the end of the project, half of the initial participants had left the experiment."

The report says that the differences in attrition -- and reasons for attrition -- between control and treatment groups were not statistically significant.  But it reports that 27 control group teachers, and only 15 treatment group teachers, left the study because they'd switched districts during the experiment.  I'd like to know more about this and whether this might be evidence that incentives made teachers slightly more likely to remain in the Metro Nashville school system.

about teacher attrition, from the report:

Teachers who left the study tended to differ from stayers on many of the baseline variables. Teachers who dropped out by the end of the second year of the experiment were more likely to be black, less likely to be white. They tended to be somewhat younger than teachers who remained in the study all three years. These dropouts were also hired more recently, on average. They had less experience (including less prior experience outside the district), and more of them were new teachers without tenure compared to teachers who remained in the study at the end of the second year. Dropouts were more likely to have alternative certification and less likely to have professional licensure. Their pre-POINT teaching performance (as measured by an estimate of 2005-06 value added) was lower than that of retained teachers, and they had more days absent. Dropouts completed significantly more mathematics professional development credits than the teachers who stayed. Dropouts also tended to teach classes with relatively more black students and fewer white students. They were more likely to be teaching special education students.

I'm going to need a little time to digest that, but the next table demonstrates that treatment and control group teachers, in all three years, did not differ in terms of effectiveness (as measured by tests) in the years prior to the experiment -- meaning that more effective teachers didn't seem more likely to stay if they could possibly earn bonuses.  Treatment group teachers, however, were six percentage points less likely to leave the middle school in which they started during the three years.

1:03pm: There were, however, positive effects for 5th grade teachers in the 2nd and 3rd years, though the effects did not persist until the end of 6th grade.  The 5th grade teachers had the same students all day for multiple subjects, so it's possible that they shifted focus to math instruction or that they simply knew their students better and were able to get better results.  The center did analyze results on other subject tests and found no differences that would indicate teachers ignored these subjects and only focused on math.

33.6% of the treatment group received a bonus in at least one year (out of 152, 16 won once, 17 twice, and 18 thrice).  Analysis done by the researchers prior to the experiment found that 55% of teachers were within a few more correct questions per student of attaining scores that would earn them a bonus.
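Those figures reconcile with a couple of lines of arithmetic (a sketch using the counts just quoted):

```python
# Treatment-group teachers grouped by how many years they won a bonus
winners_by_years = {1: 16, 2: 17, 3: 18}
treatment_group = 152

ever_won = sum(winners_by_years.values())        # 51 teachers
share_ever_won = ever_won / treatment_group      # ~0.336, i.e. the 33.6% figure

# Of the teachers who won at all, roughly two-thirds won in multiple years
repeat_share = (winners_by_years[2] + winners_by_years[3]) / ever_won  # ~0.686
```

This matches the observation that about two-thirds of the treatment group never received a bonus while about two-thirds of bonus recipients earned one in multiple years.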

1:08pm: Here are some other interesting tidbits from the executive summary, report, and press conference (which is now in Q&A):

-80% of teachers reported that they were already working as hard as they could and didn't change their effort due to the opportunity to earn an incentive.

-from the executive summary: "The introduction of performance incentives in MNPS middle schools did not set off significant negative reactions of the kind that have attended the introduction of merit pay elsewhere. But neither did it yield consistent and lasting gains in test scores. It simply did not do much of anything."

-from the report: "From an implementation standpoint, POINT was a success. This is not a trivial result, given the widespread perception that teachers are adamantly opposed to merit pay and will resist its implementation in any form."

-I didn't mention this before, but the placement of teachers in control/treatment groups -- and whether or not they received a bonus -- was, officially, confidential.  The center didn't distribute this information to principals or teachers, and participating teachers signed statements saying that they wouldn't tell anybody their status.  It looks like this was at least moderately successful, as about 75% of teachers reported that they didn't know if anybody had won a bonus in their school.

-There were no differences in student attrition between groups -- meaning that there's no evidence that bonus-eligible teachers were more likely to get problem students removed from their class.  There were also no differences in students enrolling late, and students who left treatment teachers' classes were no lower scoring.

1:12pm: Dale Ballou responds to a question by saying that test scores bounced around a lot before the experiment, making it "difficult to extrapolate" from the data prior to the start of the experiment and tell whether teachers were doing better post-bonus than pre-bonus.

1:16pm: From the report, here are the results from the teacher survey.  In short, nothing huge or shocking (note TCAP is the TN state test):

There are few survey items on which we have found a significant difference between the responses of treatment teachers and control teachers. (We note all contrasts with p values less than 0.15.) Treatment teachers were more likely to respond that they aligned their mathematics instruction with MNPS standards (p = 0.11). They spent less time re-teaching topics or skills based on students’ performance on classroom tests (p = 0.04). They spent more time having students answer items similar to those on the TCAP (p = 0.09) and using other TCAP-specific preparation materials (p = 0.02). The only other significant differences were in collaborative activities, with treatment teachers replying that they collaborated more on virtually every measured dimension. Data from administrative records and from surveys administered to the district’s math mentors also show few differences between treatment and control groups. Although treatment teachers completed more hours of professional development in core academic subjects, the difference was small (0.14 credit hours when the sample mean was 28) and only marginally significant (p = 0.12). Moreover, there was no discernible difference in professional development completed in mathematics. Likewise, treatment teachers had no more overall contact with the district’s math mentors than teachers in the control group.

1:20pm: One question I really want answered (that I don't see in the report) is how the bonus winners and losers differed (experience, school type, etc.), if they differed at all.  A questioner just asked something along those lines, and the response was that they've just begun to look at that, but that they're wary to draw too many conclusions about this because the sample sizes will get smaller the more focused the question is.

1:27pm: not sure this is quite verbatim, from the panel, but it's a good point nonetheless: "just because we didn't find an effect from doing incentives this way doesn't mean you wouldn't find results by doing it another way."

1:43pm: The questions aren't really helping to advance the discussion all that much -- in part because there are too many speeches and not enough questions.  The line at the microphone is now too long for me to get a question in before the end of the session, but here's what I'd ask:

1.) There are two assumptions behind performance pay that this study was designed to test: one is that teachers will respond to the opportunity to earn incentives by changing their practice, and two is that these changes in practice will result in better student performance.  The report says that there were very few differences in reported teacher behavior between treatment and control groups, but is there enough data to draw any conclusions about the performance of those who did report making changes?

I had a second question, but it now completely escapes me.  I'll try to find an answer to at least that question.  That's going to do it for now, but expect a more concise summary of the report's findings in the next 24 hours or so.

Wait, just kidding: here are the other four questions that are rattling around in my brain right now.  I ran into one of the researchers in the hallway and asked him the question above in addition to the first three below.  The answer was basically that they're good questions, and ones they plan on looking into more -- the focus for today was completing the analysis of the trends in test scores.

2.) What, if any, differences were there between bonus winners and losers (e.g. were winners more experienced, teaching in better schools, teaching higher/lower performing students, teaching a different subject, etc.)?

3.) What, if any differences, were there between the reactions of bonus winners and losers (e.g. more likely to stay/leave, more/less critical of experiment, more/less enthusiastic, more/less likely to report subsequent changes in behaviors)?

4.) I think 15 treatment group and 27 control group teachers left the district during the experiment.  Can we consider that statistic (and maybe any relevant survey/interview questions) evidence that teachers might be less likely to leave an urban district if they can earn a bonus there but not elsewhere?

5.) A much higher percentage of the treatment group teachers (over half, compared to less than one-third) won bonuses in the final year.  Is that simply because the less experienced teachers were much more likely to leave their assignments, because the test changed, or something else?

Now that's (really) all for now.

Shame on Rick Hess

Rick Hess' post on the Nashville performance pay experiment yesterday received a lot of attention.  In the post, Hess, a merit pay proponent, argues that the results of this experiment will "tell us nothing of value".

I found the post somewhat surprising (as did many others, I think).  And I'm somewhat ashamed to admit that, as I read it, my inner cynic said "I wonder if Hess knows that the results weren't spectacular and he's just trying to discredit them ahead of time?"  But I quickly shook that off, since I tend to give people the benefit of the doubt.  Hess raises a lot of good points, and he seems sincere when he warns readers about the dangers of either side making wild claims based on the results.

Well, I was wrong.  Wrong to trust Rick Hess.  Re-read his post, but this time be aware that Hess already knew the results of the experiment -- and that the results weren't what he wanted.

That's right.  Hess already knew the results.  I was informed of this by a reliable source familiar with the project -- who also tells me that Hess has spent the last few years lauding the researchers and chomping at the bit to rub the positive results in the faces of unions across the country.  And now that they didn't turn out the way Hess had hoped, he's decided to pretend that he didn't know the results and tell everybody that the results don't matter.

Regardless of one's point of view, that's simply dishonest and disgraceful.  Shame on Rick Hess.

Now, that's not to say that Hess is the only one being dishonest.  I'm sure that there are plenty of people on both sides of the debate that have been, and/or will be, dishonest to some extent.  So I don't want this post to be construed as vindication that opponents of merit pay are always victims and proponents are always evil liars. 

But, at the same time, I find this post particularly galling.  Hess has made a name for himself lately by being somewhat independent-minded on a number of issues.  Sometimes I agree with him, sometimes I disagree -- but he always has something interesting and insightful to say.  In short, he gave me good reason to hold him to a higher standard than most policy wonks who too often seem incapable of seeing an issue from multiple sides.  Indeed, a good portion of what he wrote yesterday is useful and insightful (though he takes it too far), and that's what made the post so remarkable.  But now I don't know what to believe.  A number of his points should still stand (for example, examining the ways in which merit pay affects recruitment and retention is important), but I don't know which ones are sincere and which ones are not.

I respect honesty.  I respect people who seek to advance the discussion of education policy.  Today, I do not respect Rick Hess.

A Primer on the Nashville Incentive Pay Experiment, Part 4

Part 4: What Can We Learn?

Previous posts:
Part 1: Background Info
Part 2: What to Look For
Part 3: Why it Matters

Despite what Rick Hess says, we won't learn "nothing" from the results of the study (but do read his post, as most of his points are good ones).  So what, exactly, will we learn from the results of the study?  It's hard to say exactly, but here are some things that we can and cannot learn from the study:

We can learn how individual teachers respond to a financial incentive offered for individual results.
We cannot learn how individual teachers, groups of teachers, or entire schools respond to financial or other types of incentives offered to groups of teachers or schools.

We can learn whether teachers will change their teaching in ways that will raise student test scores in response to an individual financial incentive.
We cannot learn whether teachers will change their teaching in ways that will increase student engagement, critical thinking, creativity or a myriad of other factors in response to an individual financial incentive.

We can learn whether middle school math teachers in Nashville were more likely to switch schools/districts or leave the profession if they were in the control or treatment group.
We cannot learn whether talented people across the country are more likely to become teachers, and subsequently remain in teaching, if there are performance bonuses in place.

We can learn whether, under this particular system, teachers who are "better" as measured by standardized tests across all three years tend to be rewarded on a year-to-year basis.
We cannot learn whether, under performance pay systems, better teachers (as measured in any number of ways) tend to be paid more.

In short, yes, there are rather severe limitations on what we can learn from this one study.  But, at the same time, I'd argue that we can learn more from this study than from most others.  Despite what Rick Hess says, we will not learn "nothing of value," in part because different people value different things.  Hess might think that the teacher recruitment/retention aspect of performance pay is the most important, but plenty of others think the incentivizing of effort is the most important.

What does this mean for the interested observer watching from afar?  It means that your ears should perk up when you hear strongly worded statements from both sides of the debate.  This study is one piece in the puzzle -- and an important piece, at that.  It's neither the be-all and end-all of research into performance pay nor an utterly useless waste of time that fails to inform the debate in the least.

A Primer on the Nashville Incentive Pay Experiment, Part 3

Part 3: Why it Matters

Previous posts:
Part 1: Background Info
Part 2: What to Look For

You may have noticed that I'm devoting a fair amount of attention to the results of the Nashville incentive pay experiment that are being released today.  Let me take a couple of minutes to explain why.

The first, and most obvious, point is that this is the first randomized field trial evaluating the effectiveness of a merit pay system.  The debates to date on whether or not we should use some form of performance pay in schools have largely relied on ideology and theory.  This will give us the first concrete, empirical, and comprehensive evidence to inform our future policy decisions.  Given the importance of merit pay in the national discussion right now, this makes it one of the most important education studies of the decade.

Now, that is not to insinuate that there won't still be a ton of unanswered questions about merit pay after the results of this are digested (no one study is ever enough to close the book on such a wide-ranging topic) but, rather, that we will know significantly more about how merit pay plays out at the ground level after the release of this study.

Or, at least, we certainly hope we will -- especially given that this represents countless hours of effort by dozens of people over the past five years or so . . . and millions of dollars.  Things can always be done bigger and better, but there won't be anything bigger and better than this for quite some time (if ever), so expect the results to be bandied about by both sides of the debate for years to come.  In other words, expect this study to be the definitive study on how individual teachers respond to financial incentives well into the future.

My next post will address what, exactly, we might learn from the results.  It will likely be followed by an attempt to live-blog the release of the results beginning around 12:30pm central time.

Monday, September 20, 2010

A Primer on the Nashville Incentive Pay Experiment, Part 2

Part 2: What to Look For

The results from the Nashville incentive pay experiment are due to be released tomorrow (see last week's post for background info on the experiment) -- here are a few things to keep an eye out for in the final report:

Stability of scores

One of the issues with value-added scores has been their high variability from year to year.  Researchers were worried about the effects of "statistical noise" and random variations in scores before the start of the experiment.  In practical terms, you'll want to know how many teachers received a bonus each year versus how many earned a bonus one or two out of the three years -- and how well teachers who ever earned a bonus did across all three years (e.g. did they earn a bonus for being in the 85th percentile one year but then have a score in the 40th percentile the other two years?).  In other words, were bonuses going to the same teachers who tended to outperform other teachers, or were they essentially random from year to year?

What, if anything, did teachers do differently?

To me, this is the most interesting question.  Did treatment group teachers report putting more effort into teaching because of the incentive?  Did they assign more homework?  Did they crack down more on misbehaving students?  Did they work harder to get certain students out of their class?  Did they focus more on their math class than their other class (some of the teachers taught multiple subjects)?  Did they attend more PD?  Did they spend more time on test prep?  Or did they not bother to change at all?  The baseline numbers are going to get all the attention (i.e. did students of treatment group teachers score higher?), but I think answering this question is far more important -- both because it informs us as to why, or why not, treatment teachers did better and because it gives us valuable insights into how teachers think and how their actions influence student achievement.

Teacher reaction to receipt/non-receipt of bonus

Did teachers who earned a bonus enthusiastically redouble their efforts while those who didn't simply give up and maybe quit?  Or did those who failed to earn a bonus decide to redouble their efforts to make sure they got one the next year while those who earned one became cocky and put their teaching on cruise control?  In terms of the numbers, it will be interesting to see if there's any divergence between the scores of first-year winners versus first-year losers (in terms of bonus receipt) -- does one group go up in the second year and the other down, are both steady, or something else?  Of course, if the scores are simply random each year, then we'd expect first-year losers to do better than first-year winners in the 2nd/3rd years because of regression to the mean.
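That regression-to-the-mean point is easy to demonstrate with a quick simulation using made-up numbers: if each teacher's yearly score were pure noise, the year-1 "losers" would score better in year 2 and the year-1 "winners" worse, with no change in teaching at all.

```python
# Illustrative simulation (invented numbers, not POINT data): yearly "scores"
# are pure noise, independent across years -- no real skill differences.
import random

random.seed(0)
n = 1000
year1 = [random.gauss(0, 1) for _ in range(n)]
year2 = [random.gauss(0, 1) for _ in range(n)]  # independent of year 1

cutoff = sorted(year1)[int(n * 0.8)]  # roughly the 80th percentile in year 1
winners = [i for i in range(n) if year1[i] >= cutoff]
losers = [i for i in range(n) if year1[i] < cutoff]

def avg(group, scores):
    return sum(scores[i] for i in group) / len(group)

# Winners fall back toward the mean; losers drift up toward it.
print(avg(winners, year1), avg(winners, year2))  # well above average, then near zero
print(avg(losers, year1), avg(losers, year2))    # below average, then near zero
```

So if a report showed first-year losers "improving," that alone wouldn't tell us whether the incentive motivated them or whether their scores were simply noisy.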

Demographics of winning/losing teachers

Bonuses are not based on a value-added measure that attempts to control for everything and isolate the individual teacher's effect.  The only thing the computed score takes into account is prior achievement of the students.  So it will be interesting to see if winning teachers taught in better schools, were more experienced, had smaller classes, used a different curriculum, had lower student transience, or other factors that might influence students' gains on the state math test.

Performance in different types of math classes

Were teachers of advanced algebra classes more or less likely to earn bonuses than teachers of remedial math classes?  Since the formula for determining bonuses is somewhat simplified, it may be the case that it's a lot easier to get low-performing students to advance x points than high-performing students, especially if there's a ceiling effect.  Or it may be the case that some subjects are better aligned with the contents of that year's state test.

Improvement of scores

For merit pay systems to transform our schools, we need teachers to improve -- and continue to improve -- while they're eligible for these bonuses.  Why?  Let's say that the treatment teachers, as a group, have a score that ranks them in the 50th percentile historically each year.  Now let's say that the treatment group teachers average a score in the 60th percentile each year.  That would appear to be strong evidence that incentives make teachers better (at least as measured by gains on Tennessee's state tests, anyway), but it would only offer a limited amount of hope for the future because it would indicate only a one-time boost to scores.  Let's say another experiment on a professional development system of some sort yields growth instead of simply a step up -- teachers are in the 50th percentile the first year, the 55th the second, the 60th the third, and the 65th the fourth . . . that system would offer more promise for future growth in school success.

Keep your eyes open for some more pre- and post-release analysis of the experiment . . .

Friday, September 17, 2010

Today's Random Thoughts

-The results from the Nashville incentive pay experiment are being released on Tuesday, so keep your eyes open . . . and check back here for some more pre- and post-release thoughts.

-Hopefully everybody read this post entitled "Can Exercise Make Kids Smarter?".  The short answer: yes, it can. There's a fast-growing body of literature linking academic performance to all sorts of social conditions and environmental factors that I think we should start paying more attention to.  In the end, though, the bigger question will be what we do with this knowledge (e.g. figuring out how to get kids to exercise more is probably harder than figuring out how exercise affects brain development, body function, and academic performance).

-There seems to be an awful lot of gnashing of teeth over Fenty's loss and how it proves that mayors who tackle education reform can't get re-elected.  Isn't it instead possible that the objections of the public were more over style than substance?  Maybe people like Rhee's reforms but don't like the way she implemented them (or maybe they would just rather she implemented different reforms).  I think we'd need a heck of a lot more analysis before we could conclude that Fenty lost simply because he tried to change schools -- and that any other mayor who tries will share a similar fate.

Wednesday, September 15, 2010

The Most Surprising Result of the TIME Poll

I'm somewhat skeptical of national polling on most educational issues, since the public doesn't truly understand much of what they're being asked about (heck, half the wonks don't really understand NCLB).  So I read a few posts about the TIME poll, bookmarked it, and forgot about it.  Finally got around to reading it today . . . there's a mixture of results that anybody can use to support their pet reform or ideology, but I think they're all at least fairly close to what we'd expect.  Only one question really surprised me:

4. What do you think would improve student achievement the most?
More involved parents: 52%
More effective teachers: 24%
Student rewards: 6%
A longer school day: 6%
More time on test prep: 6%
No answer/don't know: 6%

Talk about flying in the face of recent rhetoric . . . if we took the last 100 editorials and op-eds on education policy from the nation's major newspapers, how many would call for better teachers and how many would call for more involved parents?  I can only remember one recent one that sort of called for the latter (Samuelson's op-ed about motivation), so I might have to guess 99 to 1.

A Primer on the Nashville Incentive Pay Experiment

Part 1: Background Information

According to Eduwonk, results from the Nashville incentive pay experiment are due to be released soon.  I've been meaning for a while now to write up some background information on the experiment so that we have some context when the results are released, so this seems like as good a time as any.

The National Center on Performance Incentives was started in 2006 with a 5 year, $10 million grant received from the Department of Education's Institute for Education Sciences.  The center is housed at Vanderbilt University's Peabody College and run in conjunction with various partners, including the RAND Corporation and the University of Missouri.  Peabody's Matthew Springer and James Guthrie (now of the George W. Bush Institute for Public Policy) are the directors, and the center is staffed by people from a range of institutions across the country (full list).  The funding was to cover two experiments plus other related costs.  The first experiment was conducted in Nashville from 2006-09 and was dubbed the Project on INcentives in Teaching (POINT).

The center started at Vanderbilt the same time that I did, and I worked there during my first year (2006-07) to earn my keep around here.  I haven't been involved with the center since then and have no information on what the results are.

The original experiment design was to encompass 200 middle school math teachers in the Metropolitan Nashville Public Schools -- 100 in the control group and 100 in the treatment group.  Teachers in the treatment group were eligible for bonuses of up to $15,000 for each of three consecutive school years.  Each teacher received $750 every year for participating as long as they completed all the required surveys, interviews, etc.  Teachers were recruited into the experiment in the fall of 2006, not long after the school year had begun.

Bonuses were based on student gain scores* (not quite the same as value-added; see the technical note at the end) on the Tennessee state test (TCAP).  Unlike virtually every other state's, TN's assessment system is vertically scaled, meaning that scores can be compared across years on the same scale (a score of, say, 250 in 7th grade means the same thing as a score of 250 in 6th grade).  This means that a student who goes from 240 to 260 from 6th to 7th grade gained 20 points.  Meanwhile, researchers looked at the years preceding the experiment to determine the average growth of students at each level.  Taking the previous example, let's assume that the average TN 6th grader scoring a 240 on the state test then scores a 255 the next year.  This would mean that a student who scored 260 was 5 points above average.  For that student, the teacher would receive a score of +5, and each of the teacher's students would be scored similarly; the teacher's final score would be the average across all of his or her students.  The purpose of calculating scores this way was to strike a balance between statistical rigor and transparency/ease of communication.  The result is a calculation that's not quite as rigorous as a value-added score, but a lot easier for teachers to understand.
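To make the arithmetic concrete, here's a rough sketch of that scoring logic in code.  This is my own illustration -- the expected-growth lookup table and the function names are invented, not taken from the actual POINT implementation:

```python
# Hypothetical sketch of the gain-score logic described above.
# Expected next-year scores by prior score are illustrative, not real TN data.
expected_next_year = {240: 255, 250: 266, 260: 277}

def student_score(prior, current):
    """Points above/below the historical average for students with this prior score."""
    return current - expected_next_year[prior]

def teacher_score(students):
    """A teacher's score: the average of his or her students' scores."""
    return sum(student_score(p, c) for p, c in students) / len(students)

# The student in the example who went 240 -> 260 is 5 points above the 255 average:
print(student_score(240, 260))  # 5
```

The appeal of this approach, as noted above, is that a teacher can check the arithmetic by hand, which isn't true of a full value-added model.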

Once a teacher's final score has been calculated, it's compared to the historic average for middle school math teachers in Nashville.  A teacher scoring in the 80th percentile earns a $5,000 bonus, the 85th percentile earns a $10,000 bonus, and the 95th percentile yields a $15,000 bonus.  The targets for the bonuses stay the same for the entire three years, so it's possible for every teacher in the treatment group to earn a bonus each year (in other words, they're not competing against each other).  It's my understanding that for the first year the bonuses were distributed along with paychecks the following fall, but I don't know what the procedures were the following two years.
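The bonus schedule amounts to a simple threshold lookup.  A sketch, with the percentile cutoffs taken from the description above (the function name is my own):

```python
def bonus(percentile):
    """Map a teacher's percentile (vs. the historic Nashville distribution of
    middle school math teachers) to a bonus amount, per the fixed thresholds."""
    if percentile >= 95:
        return 15000
    if percentile >= 85:
        return 10000
    if percentile >= 80:
        return 5000
    return 0

print(bonus(90))  # 10000
```

Note that because the thresholds are fixed against the historic distribution rather than ranked within the treatment group, every participating teacher could, in principle, clear a cutoff in the same year.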

The experiment ended in May 2009, and a large team of researchers has been poring over data from test scores, interviews, surveys, and other sources of information ever since.  This means that there is going to be a lot of analysis released at some point -- and that it's going to take a while for even the most informed reader to sort through.

technical note: A "gain score" is simply the gain in a student's score from year to year (260 - 240 = a gain of 20 points), while a "value-added score" is an attempt to isolate a teacher's effect on a student's score and might control not only for a student's previous achievement level but also the other teachers he/she has or has had, the school he/she attends, demographic factors, class size, peer effects, and any number of other things.  In other words, a gain score is just the raw growth a student exhibits while a value-added score is a more precise estimate of exactly how a specific teacher influenced that growth (though value-added could be computed for schools, states, etc. as well).

Tuesday, September 14, 2010

Today's Random Thoughts

-Jay Mathews echoes a point I've often made in private: most news stories about the cost of college attendance grossly overstate what the average student actually ends up paying.  Though student loan debt is not a trivial problem, I think there are probably more people scared off by misperceptions of the costs of college than there are people who are bankrupt because of their attendance.

-Aaron Pallas shoots holes in the claims made in a recent op-ed about the miracles worked by a group of CA schools.  In an op-ed I've seen mentioned numerous places, Caitlin Flanagan claims that the ICEF elementary schools closed the achievement gap.  Pallas does some number crunching and finds that's not even remotely true (except for one of the five schools, and only in 2nd grade reading) -- indeed, their students' test scores are only slightly better than district averages for African-American students.  I have to say I'm mildly surprised that her claims made it past the editor's desk.  There are a lot of reasons to be skeptical of the charter school movement writ large, but there's really no arguing the fact that some charters have achieved outstanding test results.  In other words, there's plenty of statistical evidence to support arguments for the proliferation of charter schools -- it seems odd that anybody would need to resort to misrepresenting the test scores of a few select charters.

-Stephen Sawchuk makes a reasonable point that value-added scores can save the jobs of unfairly maligned good teachers just as easily as they can unfairly malign them.  Both sides of the debate would do well to remember that there are many positive and negative aspects of value-added scores.

-Kevin Carey writes about a very interesting chart on the growth in college expenditures.  Basically, the chart shows that "student-oriented" expenditures at the top 1% of most selective colleges have skyrocketed, have grown quite quickly at other schools among the top 10%, and haven't grown terribly fast at the rest.  I suppose the takeaway is that the growing concern about runaway spending in higher education really only applies to a select few colleges (where money isn't really an issue in a lot of ways), and that our colleges are growing further and further apart in terms of resources.

Oh Meyer Goodness! Redux

Last week, Peter Meyer wrote a piece that I called "baffling".  Well, at least he's consistent, because today he did the same thing again.  Over at Flypaper, he almost seems to be calling for a return to segregated schools.

Here's some context: This NY Times article today referenced this report about suspensions in urban middle schools, the major finding of which was that black students (particularly males) were much more likely to be suspended than white students -- and that the gap had widened over the past few decades.

Meyer makes a leap at the end of his post, implying that the desegregation of schools is responsible for this disparity and referencing an MLK quote from decades ago to back up his support of segregated schools.

In the post where he first references the MLK quote, he does make a reasonable point that desegregation shouldn't be our sole policy aim (though, at the same time, I don't think very many people think it should be).

But there are two major problems with his latest post:

1.) As far as I can tell, the report says zero about any differences in suspension rates between more and less racially diverse schools.  In other words, there's really no readily apparent evidence for Meyer's claim.

2.) 1954 has come and gone.  We, as a society, have decided that separate but equal is inherently unequal.  Nostalgia for the past is one thing, but do we really have to go back and repeat all our mistakes?  Black males are also more likely to be arrested; does that mean we should create segregated neighborhoods to accompany our segregated schools?

I hardly think Mr. Meyer's next post is going to argue for separate drinking fountains and bathrooms, but he'd do well to remember that it's a slippery slope.  If you're going to advocate for segregated schools, please do so more thoughtfully and use actual evidence.

update: I originally misspelled Mr. Meyer's name as "Mayer" in this post. My apologies; no matter how much I disagree with him on this issue he still deserves to have his name spelled correctly.

Monday, September 13, 2010

Responsibility With No Responsibility

Researchers and practitioners all seem to agree that teachers are the most important factor within a school.  And many have taken that another step and asserted that teacher quality is almost the only thing that matters.  I've pushed back by pointing out that lots of things affect a teacher's performance other than a person's talent or moral character.

But here's what blows my mind.  Across the country, people seem to argue that teachers need to put up or shut up -- and that if a school fails it must be a result of poor teaching.  And, yet, all across the country, teachers are told to do things the way the principal, superintendent, board of ed, or whomever wants them done.  The last decade has seen the proliferation of scripted curricula ("teacher-proofing" they call it) and increasing micromanagement in urban schools (ask an NYC teacher if they have their "word wall" up or if their bulletin board properly displays student work).  If teachers bear all the responsibility for student success, why are they given so little responsibility for what and how students learn?

Think about it: if a teacher's not given any responsibility for how and what their students learn, then how can we hold them responsible for how and what students learn?  It's accountability without autonomy, responsibility with no actual responsibility.

When NYC started their principal accountability program, it was in the context of an "autonomy zone".  Principals signed contracts that basically said they would be fired if student achievement didn't improve in 5 years.  And, in return, principals had far more say over how their school was run and how professional development funds were spent.

Teachers, on the other hand, aren't really offered the same deal.  They're essentially being told that they will be held responsible for what happens in their classroom (which isn't entirely unfair) -- but also that they will run their classroom a certain way . . . or else.

If we don't trust teachers to do what's in the best interest of students, then maybe they're not the ones we should be pointing fingers at when students don't learn.  If a teacher follows a scripted curriculum and students don't learn, maybe we should point our fingers at the curriculum writers.  If a teacher follows the checklist the district passes down and students don't learn, maybe we should point our fingers at the district personnel.  If a teacher does everything their principal demands of them and students don't learn, maybe we should point our fingers at the principal.

If we think a teacher's primary responsibility should be to stick to the curriculum, decorate their rooms the way the superintendent says to, and follow the instructions of their principal, then, by all means, we should evaluate them on these things and hold them accountable when they fail to do them.  But if, instead, we think a teacher's primary responsibility is to ensure that students learn, maybe we should think about letting them determine what and how students learn before holding them accountable for this.

Friday, September 10, 2010

Today's Random Thoughts

-Tennessee has figured out a solution (hat tip: Stephen Lentz) to the fact that only about 1/3 of teachers teach tested subjects but that all teachers are supposed to have 35% of their evaluation based on value-added scores . . . all "non-TVAAS" teachers (those in subjects without scores from the state's value-added system) will simply have their school's average score used for their evaluation.  Problem solved!

-Aaron Pallas continues his critique of the LA value-added kerfuffle, arguing that the LA Times did not do enough to inform its readers about the statistical uncertainty in value-added measurements.  He argues that they should've used confidence intervals (something that popped into my head the other day) to more accurately describe the estimate of a teacher's effect on student test scores (they send you a confidence interval with your SAT scores, so why not with a value-added score?) in addition to better describing year-to-year and subject-to-subject variability.  This is a follow-up to his incisive critique of the Times' failure to follow normal standards of journalism when verifying the student data.

-Jay Mathews has Killian Betlach's take on what it's like to be told to restructure a school.

-Roger Garfield, a teacher in DC, provides an insider's view of some of the problems the schools face.  The first couple paragraphs brought back a lot of memories for me.

-Newark's answer to the Harlem Children's Zone is the Global Village, a group of five schools that have received federal turnaround dollars.

-Robert Samuelson says the real key to reform is student motivation.  It's a pretty short op-ed, and there's a lot more to it, but I think he raises a valid point.  If student motivation doesn't change, why would we expect student learning to change?  But I don't think it's quite as strong of a repudiation of other policies as he argues, since better principals, better curricula, better teachers, smaller classes, and so on could conceivably alter student motivation (but if they don't, they probably won't work).

Wednesday, September 8, 2010

First Day of School: Where Are You?

Today is the first day of school in NYC, which always brings back a flood of memories for me.  But today it's making me think about something a little different.

I left the classroom.  I left the classroom for a number of reasons, but near the top is that my experiences there were horrible.  In the end, even if I wanted to, I just couldn't stand the thought of teaching for the next 30 or so years.  In the end, I was too weak to make it.

And yet, I now find myself up in the ivory tower consorting with others who regularly cast stones at the lowly teachers (who simply need to put up or shut up if we ever want to fix our disaster of an educational system).  And you know what?  Most of them couldn't hack it in the classroom either.

There's a lot of lip service from us non-teachers about how important teachers are, but I'd say there's even more disrespect.  Whether anybody wants to admit it or not, the word "teacher" is said with at least some amount of disdain in many policy circles.

And that troubles me.  Not because there aren't bad teachers out there, but because most people not only couldn't do much better, they don't even have the courage to try.  No amount of money could convince me to go back and teach in the Bronx permanently.  I shudder even thinking about it.

I still remember my last day of teaching and the conversation I had with another teacher.  He was a former business executive twice my age, but had been a teaching fellow just like me.  "Worst two years of my life," he said.  "Mine too," I said (which he scoffed at because of our age differential).

Today's the first day of school (in NYC, here in TN some schools started a month ago).  Where are you?  Are you in the classroom?  If not, why?  And how should that make you feel about those who are?

Teachers catch a lot of flak for resisting change (among other things).  But to everyone else out there who couldn't hack it in the classroom (or doesn't want to try), I ask you this: shouldn't those who actually show up in a classroom every day be at least a little wary of what we say?

If you were a teacher, would you want somebody who can't or won't teach telling you what to do?  I'm not arguing that those of us outside the classroom never have anything valuable to suggest, just that we're usually arrogant in the way we suggest it.  If we can't hack it in the classroom, we can at least mind our p's and q's when talking to, or about, those who can.

To all you teachers out there who are still doing what I couldn't, I tip my hat to you.  Today I acknowledge that, in many ways, you are better than I.  Today I acknowledge that you are far more important to our educational system than anybody wearing a fancy suit or carrying fancy credentials.  And, as such, today I acknowledge that my role as a researcher should be to help you.  I'll do my best.  I trust you'll do the same.

Be Careful When Calling For "Great Teachers"

Davis Guggenheim, director of the forthcoming documentary "Waiting for Superman," writes an op-ed for the Huffington Post (hat tip: Gotham Schools) that fits pretty well with conventional wisdom on schools nowadays.  In it, he repeatedly asserts that "We can't have great schools without great teachers."

This is true.  We can't.  But it is also a dangerously oversimplified narrative.

I say this for three reasons:

1.) Teachers are the single most important within-school factor; there's really no dispute over this.  But estimates of teacher impact on student test scores find that teacher quality only explains about 20% of the variation in these scores.  So let's be careful not to insinuate that teachers are the only thing that matters or that teachers should be expected to fix everything.

2.) Lines like the one Guggenheim uses are great soundbites, but too many people assume that teachers are simply "good" or "bad" when they read or hear such things.  In reality, teachers don't come out of the womb either good or bad; they perform poorly or superbly for any number of reasons.  These include, but are not limited to: experience, class size, school quality, curricula, the actions of other teachers, the actions of administrators, and the particular students they've been assigned this year.  All of these factors are mostly out of a teacher's control in any given year.  So simply searching for great teachers isn't really enough: we have to search for them, train them, place them in a context where they can succeed, and then convince them to keep doing what they do (and doing it well).

3.) Guggenheim suggests that the solution to all our problems in education is a simple one: we need great teachers.  He further suggests that curriculum, class size, etc. don't really matter.  Both of these are false.  Finding, training, and retaining great teachers is anything but simple, and teacher quality is but one of many, many things that matter in education.

I can't fault Guggenheim for his obsession with teacher quality.  It's probably the one factor that's both important enough and manipulable enough for policy changes to have an immediate and significant impact.  But if there's one thing that I've learned about our educational system it's that changing one factor should never be expected to solve everything.

I need to add to my list of things people should remember about education policy that education is an enormously complicated process involving innumerable moving parts and, as such, we cannot -- and should not -- expect changing one factor to solve all of the system's problems.  There is no magic bullet, no simple fix; changing the course of one child's education is a lifelong process and changing the course of millions of kids' educations is infinitely more difficult.

Teacher quality seems like a good place to start, but let's recognize both that changing it won't be easy and that it's not a good place to stop.

Tuesday, September 7, 2010

Oh Meyer Goodness! What was he Thinking?

Peter Meyer's baffling post last night continues to astound me.  The post begins by criticizing Pedro Noguera for arguing that there are two sides to the issue of addressing the achievement gap (at which point I was ready to agree with him), and finishes by arguing that Noguera's side is really wrong and Meyer's side is right (at which point I became baffled).

The post continues to re-hash the idea that we have only two options in combating the achievement gap: fix schools and ignore everything else, or fix everything else and ignore schools.  I've written in the past that this is a "false distinction" and agreed with Geoffrey Canada's assessment that this is "a terrible, phony debate".  We need to choose option 'c' -- "all of the above".

I've already written a long description of this debate and what research actually says about the influence of non-school factors vs. in-school factors, in addition to a long explanation of how this played out on the ground at my school -- neither of which will be fully re-hashed here except to repeat this: If there's anything upon which education researchers agree it's that student achievement is influenced more by non-school factors than in-school factors -- and the evidence is overwhelming.

Meyer's piece goes downhill with the following paragraph when he asserts that the argument that "poverty causes low academic achievement" is "wrong."  Why is it wrong?  I'm not quite sure.  This is what he writes:

"What some of us have long known is that public schools were started mainly to educate the poor.  And the only reason poverty is a predictor of bad academic achievement results is that educators like Noguera have made it so.  Instead of schools as tools of liberation, we have made them into great houses of mirrors, reflecting back on students the environment they come from."

I'm genuinely unsure of exactly what this means or how, precisely, Pedro Noguera ensures that students from poor families perform worse in school.  But from what I can gather I assume he's arguing that high-poverty schools continue to perform poorly mostly because we expect them to . . . or something like that?

Where I sort of agree with Meyer is where he takes exception to Noguera's statement that "schools alone – not even the very best schools – cannot erase the effects of poverty".  Meyer is right to assert that there's growing evidence that a few select schools have achieved outstanding results virtually by themselves (which, let it be noted, is very different from arguing that we are able to replicate these isolated successes or that we should expect all schools (or at least all high-poverty schools) to work miracles).  But I only sort of agree with Meyer on this point because while I might have preferred that Noguera use slightly different wording, he's likely still technically correct -- and I say that because he specifically differentiates poverty from test scores in the next sentence.  The "effects of poverty" go far beyond just lower test scores, but we conclude that the KIPPs of the world have worked miracles almost solely because they have high test scores.  I, personally, haven't seen (not saying that none exists) research linking attendance at these miracle schools to broader outcomes (e.g. health, college graduation, occupational prestige, incarceration, etc.), and I'd have to guess that while one school can do a lot of good, it can't completely transform every single aspect of every single student's life.  Lastly, I find it odd that Meyer first argues that poverty doesn't cause low achievement and then that schools can, in fact, erase the effects of poverty.

But where Meyer completely loses me is with his assertion that "until we recognize that education is education and that poverty is poverty, we’re not going to fix our schools or enrich our population."  Here he couldn't be more wrong.  The truth is that poverty and education are deeply intertwined -- in both directions (i.e. living in poverty negatively affects students' educational outcomes, and students' educational outcomes affect their life outcomes and the odds they'll live in poverty).  And this is true regardless of whether or not Meyer's proposed reforms will work.  While we have plenty of evidence that social conditions and environmental factors experienced disproportionately by those living in poverty influence academic performance, we have precious little evidence that we can consistently change these conditions and factors in ways that will subsequently yield large gains in performance.

In other words, it might be the case that non-school factors are more powerful predictors of student performance but that in-school factors are much easier to change.  In which case, Meyer's call for school reform before community reform could, in the end, be the right way to go.  But even still, I find his utter disavowal of the relationship between poverty and academic performance to be more than a little disturbing (despite his more conciliatory tone today).  And it leaves me with two sets of questions for Mr. Meyer:

1.) What evidence, exactly, do we have that poverty does not influence students' academic performance?  If poverty doesn't cause worse performance, why do students from low-income families perform so much worse?  Is it solely because schools in low-income neighborhoods are worse?  If so, why do high-income students in low-performing schools outperform low-income students in high-performing schools?  And, even if it is only the school influencing achievement, is it not possible that neighborhood poverty is influencing school quality?

2.) Why must a disavowal of the relationship between poverty and academic performance be a prerequisite for the support of school reform?  Can it not be the case that poverty causes lower achievement but that schools can overcome some of these effects?  To argue that poverty is important and that schools are important are not mutually exclusive.

update: I originally misspelled Mr. Meyer's name as "Mayer" in this post.  My apologies; no matter how much I disagree with him on this issue he still deserves to have his name spelled correctly.

Today's Random Thoughts

*From an interesting piece on study habits come a few simple lessons on what helps people actually learn things:
1.) studying the same thing in different places (so that you learn things in different contexts)
2.) studying more than one thing at once (so that you make connections between the different subjects)
3.) studying the same thing at different times (so that you forget and re-learn things -- "forgetting is the friend of learning")

The piece would seem to support the idea that we should "spiral" when we teach students instead of "scaffold" -- things should be briefly introduced and re-introduced in different contexts instead of teaching one subject one day and a different one the next.  I'd also add that this seems like a pretty good argument for more interdisciplinary teaching/learning -- something that's difficult to do at the K-12 level and for which there's little incentive for college professors to try (indeed, there's probably a disincentive, since more specialization often equals a greater chance of earning tenure).

*Aaron Pallas proposes a good rule on his new blog: "the weight accorded to any one element of a teacher evaluation system should be proportional to the uncertainty about the inference which is drawn from that element".  In other words, if we're not sure about how meaningful something is then we shouldn't place the whole weight on that item (that goes for both principal evaluations and value-added scores).

* Are middle schools hurting students?  My experience teaching in a middle school has convinced me that those findings are at least plausible (as do my experiences when I was 12-14).  This isn't to suggest that K-8 schools are a silver bullet, just that, in theory, they have more potential than middle schools to advance the learning of 6th-8th grade students (especially if those students can take a leadership role in the school by tutoring younger kids, serving as crossing guards, etc.).

Thursday, September 2, 2010

National Test, Here We Come

I just noticed this item from the wire feed on the NY Times website.  Two groups of states have been given $330 million to develop new, better tests to be ready by 2014-15.  I've argued many times (see here, here, here, here, and here) before that the only way we can continue to use test-based accountability in the future is if we adopt national standards and a national test.  A year ago I thought we were likely at least a decade away from the former, but both seem to be approaching far more rapidly than I could've imagined.

If you're pro-test-based accountability, this should make you happy -- these developments will make such systems more accurate and more meaningful . . . plus, politically, this is the only way such systems will survive into the next decade.

If you're anti-test-based accountability, you might have mixed feelings -- these developments will make the current testing regime more accurate and more meaningful, but it also likely means that it's not going away any time soon.