As only one small piece of the puzzle, we shouldn't get carried away with the findings. But I was struck with what was -- and was not -- included in their list of differences between the schools. Below is the Executive Summary's list of differences:
We identified one major theme that cut across all ten components: personalization for academic and social learning. In the area of personalization, our findings show that the higher value-added (VA) schools made deliberate efforts through systematic structures to promote strong relationships between adults and students as well as to personalize the learning experience of students. In addition, the higher VA schools maintained strong and reliable disciplinary systems that, in turn, engendered feelings of caring and, implicitly, trust among both students and teachers. Leaders at the higher VA schools talked explicitly about looking for student engagement in classroom walkthroughs as well as in their interactions with students. Teachers at the higher VA schools were more likely to discuss instructional activities that drew on students’ experiences and interests. The higher VA schools also encouraged stronger linkages with parents (p. 5).

Included: "soft" factors like trust and relationships.
Not included: virtually everything currently discussed in ed policy circles (school choice, teacher evaluations, merit pay, data-driven decision-making, etc.)
Now, to be fair, many of these factors were off the table because the study examined four schools in the same county, which had much in common (no differences in merit pay or district leadership, for example). And there's always the possibility that implementing some of these reforms could change the factors included in the list even if they're not currently present in the schools.
Nonetheless, even when the measuring stick is value-added scores -- the latest, greatest measure being pushed on schools -- many of the factors emphasized by those pushing for its use don't seem to be drivers of the differences.
Most interestingly, the two low-scoring schools had higher scores on some measures of teaching practices and instructional quality than the two high-scoring schools. Here's the summary from the research team on this topic:
"Taken together, our indicators of the quality and nature of instruction across the schools -- CLASS-S*, course matrices, student shadowing, and interviews with multiple school stakeholders -- reveals no major differences in instructional quality across the four schools. We cannot turn to evidence in the area of Quality Instruction to explain the differences in value-added achievement between our high- and low-VA schools" (p. 32).
While we certainly shouldn't base our policy decisions on one study examining four schools in one county, I do think it's fair to say that this confirms what we should've known all along -- that high-quality instruction (like every other aspect of schools) is not, by itself, sufficient to create high-performing schools. I should also note that, in many areas, larger differences existed between the honors and regular tracks within the schools than between the high- and low-scoring schools themselves.
Again, I don't want to get carried away with the results of one small-scale study (and I'll refrain from addressing the other 50 or so topics covered by the report at the moment), but I do think that, at this point, we can take two important lessons from this ongoing research:
1.) Regardless of the amount of press coverage, foundation money, or policy directed toward a particular aspect of school reform, no single factor is sufficient to create a high-quality school.
2.) Even though it's easy (and, arguably, practical) to focus on the simplest, starkest issues, the most subtle, nuanced, and complicated ones are often at least as important.
From a 10,000-foot vantage point, the potential benefits of creating more charter schools, implementing a merit pay plan, or adopting a new curriculum are easy to see. But, on the ground, it probably matters more how than whether those things are implemented -- without strong relationships, trust, and commitment, it's unlikely any reform will turn around a school or district.
That fact is really difficult for policymakers to swallow because there's no easy way to change those types of things: what is Congress supposed to do in order to make teachers at the local elementary school get along better with their students? The relationship between policy and the factors discussed in the report is so indirect that it's easy to just ignore them and focus on simpler solutions. We should all try to resist that temptation.
*"the Classroom Assessment Scoring System for Secondary classrooms (CLASS-S), [is] an observational tool developed by researchers at the University of Virginia, to observe and assess the quality of teacher-student interactions in classrooms. Based on development theory and research suggesting that interactions between students and adults are the primary mechanism of student development and learning (Greenberg, Domitrovich, & Bumbarger, 2001; Hamre & Pianta, 2006; Morrison & Connor, 2002; Pianta, 2006; Rutter & Maughan, 2002), the CLASS-S focuses not on the presence of materials, the physical environment, or the adoption of a specific curriculum but on what teachers do with the materials they have and on the interactions teachers have with their students. The observation tool looks specifically at interactions between teachers and students across four domains: Emotional Support, Classroom Organization, Instructional Support, and Student Engagement" (p. 12).