Wednesday, February 8, 2012

In the fall, I wrote about an instance in which outsiders may be needed in education reform. Today I'll give you an example of how outsiders can also be dangerous (though this one pertains more to research).
Perhaps the largest change in educational research over the past decade or so has been the sizable increase in large-scale quantitative research, a fair amount of which is conducted by researchers outside of ed schools. Like any change, this has brought both positives and negatives. But one thing worries me: I consistently notice that the people most concerned with statistical rigor in quantitative analyses (both inside and outside of ed schools) tend to be the least concerned with understanding the context and processes of schooling.
And that's incredibly dangerous.
Methodology, statistics, and technical skills are very, very important in the development of good research. But without a proper understanding of how schools work and what is actually happening on the ground, one can't expect to ask the right questions. And if one fails to ask the right questions, it really doesn't matter how complex and rigorous the analysis is, because the answers to those questions are meaningless.
Here's one example of how such a process can unfold -- it's completely unrelated to ed policy, but I still think it's illustrative. The Freakonomics Blog posted a brief discussion yesterday of the ending to the Super Bowl. The post said two things (paraphrasing, of course):
1.) Isn't it amazing that the coaches of both teams realized that the Giants scoring a touchdown with about a minute left was actually a better outcome for the Patriots? The Patriots' coaches tried to let the Giants run the ball into the end zone while the Giants' coaches instructed their players not to score a TD. These counter-intuitive behaviors are an excellent example of game theory properly implemented.
2.) But then the Giants failed to take game theory into account when attempting their two point conversion. Wouldn't it have been much better for them to run time off the clock instead of trying to score to go up 6 points instead of 4? They might've been able to kill 20 seconds by running the ball 95 yards backwards and around in circles, and certainly being up 4 with 40 seconds left is better than being up 6 with a minute left. Why didn't the coaches think of this?
There's some clever thinking going on here. Yes, this is an interesting application of game theory. And, yes, running 20 seconds off the clock would've been a better strategy. So the application of economic theory to the situation is exemplary. In a short space, there's a cogent analysis and a provocative question. But there are two fatal flaws.
1.) It's not true that both coaches applied game theory. Tom Coughlin, the Giants' coach, said he preferred that the team take the guaranteed six points rather than run down the clock. So let's hold off on patting him on the back for correctly applying game theory.
2.) More importantly, the clock doesn't run during two-point conversions. The Giants could've run around in circles for ten minutes, and there still would've been exactly 59 seconds left on the clock.
So what we have here is a smart professor who's well trained in economic theory and statistics. This training has allowed him to make an important insight about a football game and ask an interesting question. Except that he doesn't actually seem to know much about the rules of football or the context of the situation, which renders his question moot.
And I see the same thing (in a much less dramatic and much less foolish way) happening in education research. Smart people with training in other fields and disciplines and serious methodological credentials come into the field and find some low-hanging fruit ripe for picking. At first, this seems like a great idea. We can never have too many smart, well-trained researchers in education. And the eye of the outsider can be sharp. But then the research starts and we realize that somebody can be smart and well-trained but, at the same time, fail to truly understand how schools work and the contexts under which students, teachers, principals, schools, etc. operate. And then we get smart, well-trained people asking the wrong questions (or interpreting their findings in silly ways). And that neither advances the field nor helps us improve our educational system.
Let's bring the analogy back to football. Let's say that football were a field of study in many universities. Grad students train under faculty who work for Schools of Football and/or Departments of Football Policy, Football Leadership, Football Teaching & Learning, Football Evaluation, Football Foundations, Football Studies, and so on. And most of the research on football is conducted by faculty and grad students from these schools and departments. There's no reason why an economist shouldn't do a study on the costs and benefits of attending school on a football scholarship; why a psychologist shouldn't conduct a study on the impact of playing football on one's personality; or why a sociologist shouldn't conduct a study of the impact of playing football on one's social capital. But in order to do these studies well, they first need to understand how the game of football is played, what a player does on the field, how much he practices, and so on. Otherwise they're just chucking their theories against a wall and hoping one sticks somewhere.
So, to all the smart economists, psychologists, sociologists, etc. out there who wish to conduct research on education: Welcome; we'd love to hear your insights and figure out if we can apply your theories and methods to help us advance our field and improve our schools. But before claiming that you've solved a problem none of us has been able to solve for the last 100 years, take some time to learn how schools operate. Read a massive and wide-ranging stack of literature. Go visit some schools. Talk to people who work in schools and education departments. Talk to people who study those who work in schools and education departments. Then begin your research.
At the very least, that should save you the embarrassment of asking students how many touchdowns they need to score in order to hit a home run on their fourth grade reading test.
Thursday, December 1, 2011
Moving on from the "Search for Universals"
As I've delved deeper into the field of education research, I've grown increasingly frustrated by two large problems I see in that research. While listening to Malcolm Gladwell's TED talk on consumer choice, something he said helped crystallize the second issue (I'll explore the first in another post).
In the talk, Gladwell discusses the shift of food science (starting with Prego pasta sauce) from trying to create the "one best" product for all to creating multiple products that fit specific groups. He compares that trend to cancer research, arguing that "the great revolution in science of the last 10-15 years" is "the movement from the search for universals to the understanding of variability".
If he's right, then it seems ed policy researchers missed a memo somewhere (to be fair, it's not just ed policy -- I see the same problem in lots of policy literature).
The vast majority of the policy research I've read, seen presented, or heard discussed focuses exclusively, or at least mainly, on the average effect of a particular variable, intervention, or policy. Over the years, we've developed increasingly sophisticated methods to more accurately estimate these average effects. But I rarely hear people discuss the differential effects of the variable, intervention, or policy of interest.
In other words, we keep trying to figure out if different policies "work," but we define "work" as making a statistically significant difference for the average student, teacher, principal, school, district, or state. If we instead asked for whom a given policy works, we'd likely find that it works very well for some while harming others.
In the talk, Gladwell mentions a food scientist hired to create the "one best Pepsi" who conducts taste tests of the product with varying amounts of sugar. To his surprise, no single, clear winner emerges. Some people prefer only a little sugar, some like a medium amount, and some like a lot. He then has an epiphany and realizes that there's no such thing as the "one best Pepsi" (an insight that eventually leads to the creation of a wide variety of Prego sauces that target different audiences).
I'd argue that education policy is almost exactly the same. For example, imagine a new math curriculum. How do we decide if it works? The gold standard of research would dictate that we randomly assign, say, 100 classrooms to use the new curriculum and 100 to use the old one. We'd then compare the average scores at the beginning and end of the year for the treatment and control groups. If kids in the treatment group score significantly higher, on average, than kids in the control group, then that curriculum earns a gold star.
We spend increasing amounts of time trying to figure out which math curricula will yield the largest achievement gains across the students who use them. But it seems exceedingly unlikely that "one best math curriculum" truly exists. Some states, districts, schools, teachers, and/or students will do best with curriculum A, some with curriculum B, and some with curriculum C.
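To make that concrete, here's a minimal simulation (a sketch in Python with numpy; every number in it is invented for illustration, not taken from any real study) in which the average effect of a new curriculum looks negligible even though it strongly helps one type of student and strongly hurts another:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # 100 "treatment" classrooms and 100 "control" classrooms

# Random assignment to new vs. old curriculum, plus two hypothetical student types
treated = rng.permutation(np.repeat([1, 0], n // 2))
subgroup = rng.integers(0, 2, size=n)

# Invented truth: the new curriculum adds 5 points for type 0, subtracts 5 for type 1
true_effect = np.where(subgroup == 0, 5.0, -5.0)
scores = 50 + treated * true_effect + rng.normal(0, 5, size=n)

avg = scores[treated == 1].mean() - scores[treated == 0].mean()
print(f"average effect: {avg:+.1f}")  # typically near zero -- it "doesn't work"

for g in (0, 1):
    m = subgroup == g
    sub = scores[m & (treated == 1)].mean() - scores[m & (treated == 0)].mean()
    print(f"type {g} effect:  {sub:+.1f}")  # large effects of opposite sign
```

Averaged together, a +5 and a -5 cancel out, so the gold-standard comparison shrugs -- even though, for half the students, the new curriculum is exactly what we'd want.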
Wouldn't our time be better spent figuring out for which students curriculum A would be best and for which students curriculum B would be best (and why)? And we could say the same about all of the largest issues in education policy -- teacher training, teacher pay, charter schools, and so on. In later posts, I'll explore how our research and policy would differ if we aimed to understand and account for variability rather than simply finding the one best policy.
I've long thought it odd that we spend most of our lives being told not to stereotype and make generalizations while the most educated people in the country strive to make the largest generalization possible in their research. Most (though not all) researchers focus incessantly on "generalizability" (one of those words we use in the field but the spellchecker won't recognize): if we can generalize your findings to 10 million students across the country, you're likely to publish the paper in a better journal than if we can only generalize your results to students in one school.
As it turns out, I was right to think that odd. Greater generalizability matters in many ways. But, at some point, we start missing the point -- and for exactly the same reasons teachers and parents across the country are telling our kids not to generalize. Everybody is different.
Every student is different. Every teacher is different. Every principal is different. Every school is different. Every district is different. Every state is different. The best policy, on average, will help one and hurt another. Until we understand this variability, we're doomed to fail.
Tuesday, September 29, 2009
Apples to Apples? Not Necessarily
Mike Petrilli has a good post over at Flypaper on the Hoxby et al. charter school study that was recently released. It's good to see I'm not the only one pointing out that the Wall St. Journal and others shouldn't be writing that NYC charters don't cream -- because they probably do.
But I have to take issue with one thing that Petrilli writes about the study: "because it’s a gold-standard random-assignment study, we can be sure that it’s an apples to apples comparison". Not so fast, Mike . . . the random assignment part might well be true, but there are plenty of reasons to think that this isn't quite an apples-to-apples comparison.
First, regarding the random assignment* designation: it may not actually be random assignment. I have yet to hear back regarding the number of charter schools each kid applied to, but it's certainly possible that those who won a charter school lottery applied to more charters, on average, than those who lost a charter school lottery. In other words, it's possible that the winners were more motivated than the losers to begin with. On the other hand, it's also possible that losers were more likely to apply to charters with long waiting lists -- and it might be true that those with longer waiting lists are superior schools that attract superior students.
In addition to these possibilities, I can think of at least one other way that the study design could result in a comparison that is not quite apples-to-apples (note that the authors did address some of these in the paper):
Attrition may not be the same for charter enrollees as it is for the others. 22% of the lottery-losing students transferred to a school outside of NYC or a private school, while 14% of the charter students transferred to a traditional NYC public school. While it's likely that both groups leave their current school due to some level of dissatisfaction, it may be the case that those who transfer from charter schools to another NYC school do so because they're struggling at their school and/or are encouraged to leave due to discipline or other problems. Meanwhile, it may be the case that those who transfer from traditional public schools to non-NYC schools do so because they view their schools as failing them and think they can do better elsewhere. In other words, it's plausible that the most motivated non-charter students transfer out while the least-motivated charter students transfer out. The authors do take a look at this and conclude that there's no difference between the two groups in the relationship between test scores and likelihood of leaving -- but there could be no difference in test scores and still a large difference in motivation, attitude, family support, or a myriad of other factors that are more likely to lead to growth in test scores (which, remember, is the outcome variable in the study).
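Here's a hedged sketch of that worry (Python with numpy; the only figures borrowed from the study are the 22% and 14% attrition rates -- the attrition story and every other number are invented). The lottery is genuinely random and the true charter effect is exactly zero, yet differential attrition manufactures a positive one:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
won = rng.random(n) < 0.5                       # a genuinely fair lottery
motivation = rng.normal(0, 1, size=n)           # unobserved by the researcher
baseline = rng.normal(50, 10, size=n)           # prior test scores, independent of motivation here
growth = 2 * motivation + rng.normal(0, 1, n)   # score growth driven by motivation; charter effect = 0

# Invented attrition story: the least motivated 14% of winners leave their charters,
# while the most motivated 22% of losers leave NYC schools (and the data) entirely.
stays = np.where(won,
                 motivation > np.quantile(motivation, 0.14),
                 motivation < np.quantile(motivation, 0.78))

effect = growth[won & stays].mean() - growth[~won & stays].mean()
print(f"apparent charter effect among stayers: {effect:+.2f}")  # positive despite a true effect of 0

# A check in the style the authors used comes back clean, because attrition here
# depends on motivation rather than on prior test scores:
for group, mask in (("winners", won), ("losers", ~won)):
    corr = np.corrcoef(baseline[mask], stays[mask])[0, 1]
    print(f"baseline-score/attrition correlation, {group}: {corr:+.3f}")  # ~0 in both groups
```

The randomization itself is flawless in this sketch; the bias comes entirely from who remains in the sample.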
*For those of you without a research background, a random assignment study is just what it sounds like -- a study in which those being studied were randomly assigned to a control group or treatment group (for example, if we flipped a quarter for every kid who walked in the door and those who got tails were assigned to Mrs. Johnson's class and those who got heads were assigned to Mr. Smith's class).
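In code, the quarter flip is just as simple as it sounds (a toy sketch in Python; the student names are made up):

```python
import random

def assign_classes(students):
    """Flip a quarter for each kid at the door: tails -> Mrs. Johnson, heads -> Mr. Smith."""
    return {kid: ("Mrs. Johnson's class" if random.choice(["heads", "tails"]) == "tails"
                  else "Mr. Smith's class")
            for kid in students}

print(assign_classes(["Ava", "Ben", "Cara"]))
```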
Wednesday, August 12, 2009
More on Contributing to Society vs. Earning Tenure
Debra Viadero reacts to my piece on blogging and tenure by surmising that there's a "revolution brewing." That's probably too strong of a conclusion to draw based largely on my strong reaction to about an hour's worth of discussion among a group of faculty and grad students. But maybe she's right. In some ways, I hope she is.
As I made abundantly clear in the previous post, I really don't like the idea that doing anything besides publishing scholarly books and articles should hurt one's chances of tenure. And I'm still somewhat confused by those who say it should. Here's why:
I understand the argument that tenure should be based largely on scholarly publications. I don't necessarily agree, but I understand it. At the very least, it's the easiest way for somebody to prove that they're a scholar in their field.
But what I don't understand is the argument that faculty shouldn't reach out to the world at large and attempt to convey their knowledge. Indeed, in some circumstances professors have a social responsibility to inform others of what they know. Pretend, for example, that a city council proposed a beautification and tourism program wherein all sidewalks are painted blue. But a team of researchers has just conducted a study on blue painted sidewalks and found that they increase depression, cause cancer, and shorten lifespans (this is totally made up, obviously). If they were aware of the proposal and failed to inform anybody of their findings, they would be in the wrong. It would be like knowing there was poison in a cup of coffee but sitting there and saying nothing while somebody picked up the cup and drank it.
Obviously, most situations aren't this clear cut -- but the point still stands. In at least some circumstances, academics are obligated to report what they know to the general public. Even in less dire situations a lot of good can come from sharing a little knowledge. So not only do I disagree with anybody who argues that professors shouldn't waste their time with the outside world, I don't even understand what the rationale behind the argument is. Should they not risk wasting time on trivial matters when they could be publishing journal articles? Is that really more important than contributing to the betterment of society? Or taking care of one's children? Or feeding one's pet(s)? Guess what: writing academic articles should never be the most important thing in somebody's life.
And if there's no reason to discourage professors from spending time doing things other than academic research, I see no reason why what they do during that time shouldn't be eligible to contribute to their case for tenure.
Friday, August 7, 2009
Contributing to Society vs. Earning Tenure
I mentioned in my previous post that we had quite the interesting -- in a bad way -- discussion at the meeting of the sociology of education section. What transpired may have been the most frustrating experience I've had in academia. Those of you who read my blog regularly know that I try to choose my words carefully -- that I aim to provide measured responses to the issues of the day and see all sides of an issue. I think that's important, so I took a few hours, ate a good dinner, calmed down, and thought this through before beginning this post.
That said, I have never taken part in a conversation where academics were so narrow-minded, pompous, or downright illogical. Here's what happened:
Throughout the day there was much discussion surrounding ways for sociologists of education to get their voices heard by the mainstream media, to influence policymaking, and, generally speaking, reach a larger audience. In the last session of the day, one senior scholar pushed back against this theme, arguing that there was nothing wrong with doing sociological research purely for the purpose of advancing the field of sociology. A perfectly valid point, in my mind: while I often wish academia would make more of an effort to conduct practically relevant research and translate that research into a readily accessible format, this need not apply to every last paper.
Over the course of the next hour the discussion veered into the decisions of students and faculty to reach out to the popular press, write policy memos, run blogs, and so forth. In other words, commenting on their areas of expertise in places other than academic journals and books. The room was decidedly mixed on the topic, with some encouraging everybody to reach out to the nearest publicist and some expressing more reservations.
And then one senior scholar suggested that people wait until they have tenure to do such things. The argument was that young scholars should focus on academic work until they've established themselves in their field; then they can spend the next thirty years trying to make a practical impact. The statement is not completely without merit -- more experienced scholars should have more knowledge and expertise to share, so it makes sense that they would be the first ones a reporter would call. But that wasn't really the context of the statement. Eventually the conversation turned to tenure and working on non-academic research and commentaries.
Said one: "you're not going to get tenure by blogging." Said another: "you don't have to focus exclusively on academic research, you could have a pet or have a child, but that won't get you tenure either. You have to decide what's important." (quotes probably aren't exact, but I wrote down what they said at the time almost verbatim so they're pretty close)
In other words, the only thing that counts toward tenure is "academic research" -- articles that are published in selected peer-reviewed scholarly journals, or books with an academic focus. And, worse yet, contributing to society in any other way actually hurts one's chances of obtaining tenure. I don't think I'm exaggerating when I say that I was flabbergasted.
Now, to be fair, I can see a strong argument for academic work being the main component when considering whether somebody should earn tenure. And tenure certainly shouldn't be a popularity contest -- (s)he with the most press mentions wins. But there is absolutely no reason why a narrow definition of academic work should be the sole consideration. Besides the fact that teaching and service ostensibly play a large role in the tenure decision at many institutions, I see no reason why all activity of an assistant professor shouldn't be taken into account.
What purpose, exactly, do faculty serve? I always thought they were there to do two main things: 1.) learn about the world, and 2.) teach others about the world. Why the heck would we interpret "others" to mean only the couple dozen people who read your article in a highly specialized academic journal? When a professor helps educate a wider audience they often provide a greater service to society at large than they do when they publish an academic article. And that should be taken into account. Professors should be encouraged to write op-eds, talk to reporters, write policy memos, and even blog. If all knowledge is concentrated in the hands of but a few professors, what's the point?
Where do these senior scholars get off making the judgment that contributing to their knowledge base is more valuable than helping society at large better understand the way the world works? I was more than a little distressed to hear such thoughts come out of the mouths of sociologists -- the same group that spends so much time studying things like stratification, oppression, and equality. I don't know what the explanation is for these thoughts. Are they jealous of those who get more press? Do they simply want young scholars to make it through the same gauntlet they did?
Oddly, it seemed that those who spoke about it being tough to earn tenure when communicating with a larger audience spoke as though some higher being or committee made tenure decisions. But many of these people are the ones sitting on tenure committees making these decisions. They seemed to shake their heads and say "this is the way the world is" even though they were perfectly capable of changing it.
I was always led to believe that academia, perhaps more than any other field, values people who color outside the lines, develop new theories, propose new hypotheses, and change the way we see the world. So it seems more than a little hypocritical for such a group to allow only those who meet a narrowly defined set of criteria to enter their fraternity.
I don't know how many people in the room viewed it this way, but I'm sure it wasn't everybody. I wasn't the only person who voiced displeasure with this way of thinking, and other senior scholars seemed to be warning young scholars in a voice that was more cautionary than foreboding. Said one: "it's hard to write for an audience outside your discipline." Translation (I think): it will take a lot of time to prepare material for non-academic sources, so be careful about starting down this path. But, in my mind, this simply underscores the idea that we should reward, rather than punish, those who successfully reach out to a larger audience.
Beyond this, I don't understand why reaching out to a larger audience is frowned upon by some academics when colleges and universities go out of their way to encourage professors to do just that. We get an e-mail every time somebody from our department is cited in the news. The university sends out weekly e-mails that highlight any Vanderbilt research that was in the media recently -- and has an office that works very hard to get that research in the media. When Vanderbilt researchers appear in the press it's good for the school -- it helps earn not only prestige, but also more research dollars. If professors are supposed to serve the university, shouldn't they be publicizing their findings?
Let's look at a practical example. Eduwonkette was written by an ambitious grad student who thought she had something to offer to the conversation on education policy. In my mind, she was right. For over a year, she ran the best education blog around. She quickly provided insightful and revealing information about all sorts of relevant topics. She altered the course of the conversation in many circles -- in a good way. In other words, in a year or two she accomplished more than most faculty will in a lifetime. But now that she's an assistant professor, we're no longer graced with her presence. Did anybody at her new institution or elsewhere suggest that her time would be better spent elsewhere? I have absolutely no idea. But if I were in charge of tenure in her department, I would have strongly recommended that she continue with the blog. Who cares if doing so cost her the time to write a couple more academic articles?
I guess the bottom line is this: there are more important things in this world than academic articles. Sure, we should judge professors by what they contribute to academia, but we're a smart group of people -- why can't we figure out a more meaningful definition of this phrase? We complain when students are reduced to a number (test scores), and yet we have no problem reducing professors to a number (of publications) as well. If academics want to be pompous and narrow-minded -- and irrelevant -- they should continue to discourage their peers from making practical contributions to society. But if they actually care about advancing knowledge, they should be rewarding those who do so -- even if it doesn't involve an academic journal.
Friday, April 3, 2009
DC Vouchers: Three Year Report
Speaking of ideology and research, I just got the following e-mail (below). If you believe vouchers work, good news: voucher students in D.C. did better in reading, and their parents were more satisfied and believed they attended safer schools. If you believe vouchers don't work, good news: voucher students in D.C. did no better in math, students who transferred from failing schools did no better, and students reported no higher satisfaction nor believed that their schools were safer.
------------------------------------------------------------------------------------------
Subject: NCEE Releases New Report: The Evaluation of the DC Opportunity Scholarship Program: Impacts After Three Years
The National Center for Education Evaluation and Regional Assistance within the Institute of Education Sciences has released the report "The Evaluation of the DC Opportunity Scholarship Program: Impacts After Three Years."
This congressionally mandated report on the impact of the DC Opportunity Scholarship Program measures the effects of the program on student achievement in reading and math, and on student and parent perceptions of school satisfaction and safety. The evaluation found that the OSP improved reading, but not math, achievement overall and for 5 of 10 subgroups of students examined. The group designated as the highest priority by Congress - students applying from "schools in need of improvement" (SINI) - did not experience achievement impacts. Students offered scholarships did not report being more satisfied or feeling safer than those who were not offered scholarships, however the OSP did have a positive impact on parent satisfaction and perceptions of school safety. This same pattern of findings holds when the analysis is conducted to determine the impact of using a scholarship rather than being offered a scholarship.
To view, download and print the report as a PDF file, please visit:
http://ies.ed.gov/ncee/pubs/20094050/
------------------------------------------------------------------------------------------
Today's Random Thoughts
Sorry for disappearing. Don't worry, I'm still alive -- I just got busy. I should be back in full swing next week. In the meantime, here are a few things that caught my eye.
-The NY Times has an interesting piece on ideology in medicine. I continue to wonder whether education research can ever truly be considered research or whether ideology will simply bias results. I always thought that research would be more reliable in fields like medicine and physics where there's little reason to take a rooting interest in a particular research outcome or to believe something works (or doesn't work) regardless of what the research says. According to David Neumann, I was wrong to believe that about medicine. He says that "The practice of medicine contains countless examples of elegant medical theories that belie the best available evidence."
-Meanwhile, a piece in Slate asks if you're hurting your local public schools by sending your kid to private school. Interesting question, easy answer -- of course you are. Unless, of course, your kid is some sort of detriment to their school (e.g. they have major behavioral problems). About 10% of students in the United States attend private schools. Would public schools improve if they didn't? Of course they would.
-In other news, the federal government has just announced grants of between $2.5 million and $9 million (a total of $150 million over five years) to 27 states to develop longitudinal data systems. On the one hand, this is good news. But, on the other, I have to wonder how much cheaper it would be for the DOE to develop one data system instead of paying states to develop 27.
Saturday, August 2, 2008
ASA Day 1
I'm here at the annual meetings of the American Sociological Association. Today was actually the second day of the conference, but the first I was able to attend. I heard a number of interesting presentations today -- here are some tidbits:
-In "School Disengagement and Problem Behavior: Distinguishing Cause from Consequence," Joseph Michael Gaspar and Paul Hirschfield examined the relationship between disengagement and delinquency in Chicago middle-schoolers. One model found that delinquency led to disengagement in school a year and a half later, but that disengagement did not lead to delinquency a year and a half later. Another model, however, using fixed-effects, found that both conditions led to more of the other condition a year later. The authors' preliminary conclusion is that delinquency causes more negative outcomes in the long-run while disengagement may affect students more in the short-term. One weakness was that "delinquency" was broadly defined and included many things that happen outside of school -- I'd be more interested in finding out if disengagement in school leads to more delinquency.
-In "Schools and Delinquency Revisited: Delinquent Affiliations in Middle and High School," Mark Warr and Robert Crosnoe looked at the actions of students' friends. They found that delinquent behavior increased steadily until about 10th grade, when it leveled off. Similarly, they found that moral condemnation of such behavior declined steadily until about 10th grade, when it also leveled off. The delinquency level of students in some schools was about 10x as high as in others -- meaning that significant differences do exist. The only students they found that were "peer-proof" and did not make friends with delinquent individuals were those that were highly religious and those that were socially isolated. Among those who said that all of their friends were going to college, 91% planned on attending college. Among those who said that none of their friends were going to college, less than half planned on attending college.
-In "Juvenile Delinquency, College Attendance, and the Paradoxical Role of Higher Education in Crime and Substance Use," Patrick Michael Seffrin and Stephen A. Cernkovich looked at the behavioral trends of those who do and do not attend college. They found that those who did not attend college drank alcohol and used drugs more often and committed more crimes before attending college. While attending college, however, college students drank more alcohol, used more drugs, and committed more crimes than their similarly aged peers who were not in college -- a surprising reversal. The authors attribute the increase in crime to the increase in alcohol consumption and the increase in unstructured socializing.
-In "School Disengagement and Problem Behavior: Distinguishing Cause from Consequence," Joseph Michael Gaspar and Paul Hirschfield examined the relationship between disengagement and delinquency in Chicago middle-schoolers. One model found that delinquency led to disengagement in school a year and a half later, but that disengagement did not lead to delinquency a year and a half later. Another model, however, using fixed-effects, found that both conditions led to more of the other condition a year later. The authors' preliminary conclusion is that delinquency causes more negative outcomes in the long-run while disengagement may affect students more in the short-term. One weakness was that "delinquency" was broadly defined and included many things that happen outside of school -- I'd be more interested in finding out if disengagement in school leads to more delinquency.
-In "Schools and Delinquency Revisited: Delinquent Affiliations in Middle and High School," Mark Warr and Robert Crosnoe looked at the actions of students' friends. They found that delinquent behavior increased steadily until about 10th grade, when it leveled off. Similarly, they found that moral condemnation of such behavior declined steadily until about 10th grade, when it also leveled off. The delinquency level of students in some schools was about 10x as high as in others -- meaning that significant differences do exist. The only students they found that were "peer-proof" and did not make friends with delinquent individuals were those that were highly religious and those that were socially isolated. Among those who said that all of their friends were going to college, 91% planned on attending college. Among those who said that none of their friends were going to college, less than half planned on attending college.
-In "Juvenile Delinquency, College Attendance, and the Paradoxical Role of Higher Education in Crime and Substance Use," Patrick Michael Seffrin and Stephen A. Cernkovich looked at the behavioral trends of those who do and do not attend college. They found that those who did not attend college drank alcohol and used drugs more often and committed more crimes before attending college. While attending college, however, college students drank more alcohol, used more drugs, and committed more crimes than their similarly aged peers who were not in college -- a surprising reversal. The authors attribute the increase in crime to the increase in alcohol consumption and the increase in unstructured socializing.
Tuesday, June 10, 2008
In Defense of Research
Kevin Carey weighed in last night with some advice for ed researchers. Liam Julian heartily agreed today in Fordham's blog.
You can read their posts if you want to know everything that was said but, in short, they argued that education research needs to:
-get to the point
-be relevant
-be readable
-simplify
-conclude with something other than "more research is needed"
Half of me agrees and half of me disagrees with what was said. On the one hand, some researchers do need to work harder to make sure that their work is relevant, and research journals are certainly not fun reading. When I first started my PhD, I would've made a lot of these same arguments. Our family Christmas letter that year said that my goal was to write a research article that could be read in one sitting without falling asleep.
But, on the other hand, things are the way they are for a reason. A snappy, simple, easy-to-read article has a place in ed policy -- but not usually in a top journal. It's simply not possible to fully explain research without nuance and details -- things that make reading it boring. Both complain about researchers who conclude by saying that more research is necessary. Authors do this for a reason: it's called being a responsible researcher. A responsible researcher acknowledges the shortcomings of their research. A responsible researcher acknowledges that no firm conclusions can be made in most circumstances. It is the rare article that can conclusively prove anything -- it takes a body of literature to do that. Asking authors to make strong conclusions based on weak evidence is the equivalent of asking them to be dishonest.
Before I get carried away, let me point out that the system is fundamentally broken. The average person is not going to sit down, read an education journal, and change the way they run their school(s) or classroom(s). As Mary Brabeck pointed out in EdWeek a couple weeks ago, we need research that will translate into actions on the ground level. Currently, the link between research and practice is tenuous at best. But the answer isn't to dumb down research articles -- the answer is to translate research articles into better resources for policymakers, principals, teachers, etc.
Friday, April 25, 2008
Limitations of Research and the Headlines that Ignore Them
I'm in the middle of finals week, so I only have a couple minutes, but I think this is worth looking at for a second.
A grand headline in the NY Times today reads "Study Suggests Math Teachers Scrap Balls and Slices." The article summarizes a study being published today that found people learned math better when taught abstractly (think: formulas) than when taught using manipulatives (the balls and slices to which the headline refers).
The finding is extremely interesting. And I saw it echoed and trumpeted all over the place. And then I read the article. It was a short study on one kind of problem done with college kids. Meaning that we have virtually no idea, based on this study, whether this is true for elementary school kids who learn this way for 6 years.
This is no critique of the authors -- there's value to their study -- but, for the life of me, I can't figure out why everybody seems to think this means that we should stop using manipulatives forevermore. This sheds little, if any, light on that issue.
I'm disappointed in the NY Times, and I'm disappointed in everybody else who parroted these results without explaining what they really meant.
(disclaimer: finals are making me a bit grumpier than usual)
Update: Thanks, Dr. Dorn
Thursday, April 17, 2008
Don't Interpret an Article's Findings Based on Its Title
I've mentioned a few times that research is nowhere near as definitive as I imagined it was before I started grad school. Every paper has flaws, limitations, and shortcomings -- no matter how prestigious the journal in which it was published. Peer review isn't perfect. Sometimes people are so focused on examining an issue in one manner that they forget to look at it in other ways.
I found one such example yesterday. An influential piece in the American Economic Review (a very prestigious journal), written by Caroline Hoxby* in 2000 ("Does Competition among Public Schools Benefit Students and Taxpayers?"), found that schools facing more competition were "more productive" (higher scores with less money spent). In the paper, competition is measured by the number of school districts within a metropolitan area, predicted by the number of streams (rivers/creeks, not any fancy academic term) in that same area. The theory is that more streams led to the creation of more towns (and, hence, more school districts) because they provided geographic boundaries. Areas where more streams led to the creation of more districts were said to have more choice b/c people could choose among more districts close to home and, therefore, the districts had to compete harder not to lose students. The statistics are quite sophisticated, and we devoted two full class periods to examining the underlying theory, the statistical analysis, and a later re-analysis (which got quite heated, and in which I have no desire to involve myself at this point in time), but that is the general gist.
Anyway, we were focused on understanding how more competition and choice led to more productive schools when a thought occurred to me. Metropolitan areas with more streams don't just have more school districts nearby -- they also have smaller towns and smaller districts. I wondered aloud whether the effects of smaller districts (perhaps a tighter-knit community, more parental involvement, less bureaucracy, etc.) might explain the positive effects in addition to, or instead of, more choice.
I'm still somewhat in doubt that the author and numerous reviewers missed this point, but I don't see any indication that district size was controlled for, and I can find no other way of ruling it out. So, for now, my conclusion is that people get stuck thinking about something one way and forget to think about it in others.
*I often refrain from naming names. I mention it in this case only b/c understanding my thought is wholly dependent upon knowing to which article I am referring. I have absolutely no desire to start a fight with Caroline Hoxby (who has proven many times over that she is quite smart -- I particularly like her outside-the-box idea of using streams as an exogenous predictor of the number of districts); I simply wanted to share an interesting thought on education policy.
Sunday, March 30, 2008
Complicated Statistics and Education Policy
I've encountered a number of surprises after switching from teaching to researching, but perhaps none have been more notable than the role of complicated statistical methods in education research. It seems that, historically, education research has been denigrated as non-rigorous but that, currently, the trend is toward more quantitative methods -- much of which is done by economists. Pick up virtually any journal and you'll find an article with statistical tables and strings of Greek letters that 99% of the population won't understand. In some cases this is a good thing, but in others I suspect it's hurting the field. And the question still remains: why have increasingly complicated statistical models become so commonplace in educational research?
Let me start with an anecdote (a decidedly non-rigorous research method):
When I presented my paper on teacher retention at a conference earlier this month, there were apparently a number of audience members who had not been trained in statistics (I used some fairly basic ones for my analysis). I didn't really receive any criticism about the presentation, and the feedback seemed to indicate that most people didn't really understand the tables I'd presented. Afterwards, one guy walked up to me and said "you lost me on the statistics, so I'll take your word for it."
At the time, I chuckled and continued with the conversation. But, in retrospect, this troubles me. Why should he take my word for it? Just because I used some statistics that he didn't understand? And it got me thinking. Does this happen on a larger scale as well? Are people scared to argue with the methods in these papers because they don't understand them? Do some people take economists and others at their word when they do complicated statistical analyses b/c they simply assume that, since they don't understand them, they must be thorough and correct?
Probably the other most notable thing I've learned is that there is no such thing as a perfect research study -- every single one has significant flaws (at least in the social sciences; I claim ignorance on physicists observing quarks). Part of the motivation for this post was the conversation in which I found myself embroiled on another website (here). The blog post and comments seem (at least to me) to assume that the results of the study provide a definitive answer -- in part b/c it uses incredibly complicated statistics. The paper is better than most, so I hesitate to use it as an example, but it nonetheless has flaws and limitations -- just like any other. And I wonder whether, if the methods were more accessible and understandable, people would be so willing to accept the findings without further discussion.
Thursday, March 13, 2008
The Federal Government and Education Research
Harvard is hosting a brief conference for grad students tomorrow (Friday), but they started off with a panel discussion on the state of education research tonight. The panel was highlighted by Grover "Russ" Whitehurst, the head of the Institute for Education Sciences (IES), which basically distributes all of the money the federal government devotes to educational research. He has been hailed by some for reforming education research and making it more rigorous and respectable, and reviled by others for limiting the scope of educational research to certain topics and methodologies.
I won't say that all 90 minutes of the panel was enthralling, but a number of interesting points were made. These include:
-Whitehurst noted that the education research community is not a powerful interest group in political discussions of policies surrounding education research. I've definitely noticed that. When you factor in that teachers and principals don't have much say in this either, it really makes you wonder who is controlling education research.
-There was much discussion of the fact that too much education research is not useful for people in schools, but few solutions were offered.
-Whitehurst said that he made the decision to focus research funding on studies of academic achievement not because other things aren't important but because they have a very limited amount of money and want to do one thing well before moving on to others. While I'm not convinced they couldn't throw a few million dollars toward other desirable outcomes of schooling, I'm heartened to hear that it has been considered and would say the decision seems rational.
-In the face of severe criticism from Mica Pollock, a professor at Harvard, Whitehurst emphasized that neither he nor the department believes that only quantitative methods or randomized field trials are worthwhile (although he did seem to imply that RFTs answer more interesting questions). This sentence means nothing to you if you're not in education research but, basically, he was refuting claims that many have made that he prioritizes certain types of research at the cost of other types that are more appropriate for certain questions.
Thursday, February 28, 2008
Convincing Schools to Change
I was forwarded an interesting piece that Robert Weisbuch, the new president of my alma mater, wrote in the Chronicle of Higher Education the other day. He argues that universities, rather than looking down on, or simply ignoring, K-12 schools, should form a partnership that he calls the "third culture" with primary and secondary schools across the country.
I'm not sure how much he's referencing higher education in general and how much he's directing his plea at education researchers, but I find his comments particularly relevant for researchers. It seems that every article I've read about effecting change in the way that schools are run (particularly regarding the way that teachers teach) basically asks one question: we know how to run schools/classrooms, so how can we convince administrators/teachers to do things the way we tell them to?
The flaw that I see in this question is that it's exceedingly arrogant, which is probably a large part of the reason that the question never seems to be answered. Yes, research is not effectively utilized in schools, but the fact is that people who work in schools know a great deal more about some things than researchers could ever hope to. The fact that (most) researchers are experts on something does not give them the right to treat teachers and other education officials as inferior beings. Perhaps the way to effect change in schools is to work with people instead of talking at them.
If that third culture is to develop, college faculty members might stop coming on to their school counterparts like gods delivering grace to undeserving sinners. We need to acknowledge that a strong teacher in the schools knows a great deal more about pedagogy than we do. Even beyond the obvious fact that we share the same kids at different stages and the more emotionally compelling fact that professors have kids, too, it is well past time to shed our pretensions, share our status as intellectual leaders, and acknowledge both what school teachers bring to the party and the mutual benefit that accrues from a partnership between equals.