Sunday, March 30, 2008

Complicated Statistics and Education Policy

I've encountered a number of surprises after switching from teaching to researching, but perhaps none has been more notable than the role of complicated statistical methods in education research. It seems that, historically, education research has been denigrated as non-rigorous but that, currently, the trend is toward more quantitative methods -- much of the work done by economists. Pick up virtually any journal and you'll find an article with statistical tables and strings of Greek letters that 99% of the population won't understand. In some cases this is a good thing, but in others I suspect it may actually be hurting the field. And the question still remains: why have increasingly complicated statistical models become so commonplace in education research?

Let me start with an anecdote (a decidedly non-rigorous research method):

When I presented my paper on teacher retention at a conference earlier this month, a number of audience members apparently had no training in statistics (my analysis used fairly basic ones). I didn't receive any real criticism of the presentation, and the feedback seemed to indicate that most people didn't understand the tables I'd presented. Afterwards, one guy walked up to me and said, "you lost me on the statistics, so I'll take your word for it."

At the time, I chuckled and continued with the conversation. But, in retrospect, this troubles me. Why should he take my word for it? Just because I used some statistics that he didn't understand? And it got me thinking. Does this happen on a larger scale as well? Are people scared to argue with the methods in these papers because they don't understand them? Do some people take economists and others at their word when they do complicated statistical analyses because they simply assume that, since they don't understand the methods, the methods must be thorough and correct?

Probably the other most notable thing I've learned is that there is no such thing as a perfect research study -- every single one has significant flaws (at least in the social sciences; I claim ignorance on physicists observing quarks). Part of the motivation for this post was a conversation in which I found myself embroiled on another website (here). The blog post and comments seem (at least to me) to assume that the results of the study provide a definitive answer -- in part because it uses incredibly complicated statistics. The paper is better than most, so I hesitate to use it as an example, but, nonetheless, it has flaws and limitations -- just like any other. And I wonder whether, if the methods were more accessible and understandable, people would be so willing to accept the findings without further discussion.
