I'm in the middle of finals week, so I only have a couple minutes, but I think this is worth looking at for a second.
A grand headline in the NY Times today reads "Study Suggests Math Teachers Scrap Balls and Slices." The article summarizes a study being published today that found people learned math better when taught abstractly (think: formulas) than when taught using manipulatives (the balls and slices to which the headline refers).
The finding is extremely interesting. And I saw it echoed and trumpeted all over the place. And then I read the article. It was a short study on one kind of problem, done with college kids, meaning that we have virtually no idea, based on this study, whether the same holds for elementary school kids who learn with manipulatives for six years.
This is no critique of the authors -- there's value to their study -- but, for the life of me, I can't figure out why everybody seems to think this means that we should stop using manipulatives forevermore. This sheds little, if any, light on that issue.
I'm disappointed in the NY Times, and I'm disappointed in everybody else who parroted these results without explaining what they really meant.
(disclaimer: finals are making me a bit grumpier than usual)
Update: Thanks, Dr. Dorn
The most telling point of the Times article, to me, is the paucity of randomized studies in education, particularly in math. Your beef, it seems to me, is with education schools for not doing much more similar research, so we can know what works and what does not.
The limitation is not in this research per se; it seems to be far better than most.
You are correct in saying that my problem is not with this particular paper. You're also right that there should be better research in education -- but that's not what I'm writing about in this post.
My beef is with everybody who claimed that this study proved something that it didn't. Every piece of research has limitations, and it's irresponsible to ignore these limitations and misinterpret the findings.