Monday, March 26, 2012

Education and the Declining Median Class

A column a while back by David Brooks and numerous other reviews of Charles Murray's new book raise an issue I've been meaning to write about: the growing gulf between different classes (or "social tribes," as Brooks labels them) in the U.S. -- not just in earnings, which we hear a lot about, but in both achievement and a number of behaviors related to achievement.  As Brooks writes about the White population Murray discusses:

There are vast behavioral gaps between the educated upper tribe (20 percent of the country) and the lower tribe (30 percent of the country) . . . Roughly 7 percent of the white kids in the upper tribe are born out of wedlock, compared with roughly 45 percent of the kids in the lower tribe. In the upper tribe, nearly every man aged 30 to 49 is in the labor force. In the lower tribe, men in their prime working ages have been steadily dropping out of the labor force, in good times and bad. People in the lower tribe are much less likely to get married, less likely to go to church, less likely to be active in their communities, more likely to watch TV excessively, more likely to be obese.

The NY Times followed up with a story about the achievement gap growing between rich and poor students in the United States.  The article was based largely on two chapters from a book that was released last fall by the Russell Sage Foundation (which, as I've previously mentioned, I highly recommend).

In one chapter, Sean Reardon from Stanford finds that "the achievement gap between children from high- and low-income families is roughly 30 to 40 percent larger among children born in 2001 than among those born twenty-five years earlier" and that the gap between students from families in the 90th and 10th percentiles of income is now "nearly twice as large" as the gap between blacks and whites (the opposite was true 50 years ago).

Since I've spent my entire post-college life either trying to teach low-income students or researching low-income students, the chapter jumped out at me when I first saw a preview of it last summer.  The fact that the achievement gap between rich and poor was actually growing (while the black-white achievement gap has essentially plateaued over the past 20 years) jibed with what I've read about, and seen in, both our schools and our society.

But then, when I was preparing to discuss the chapter with my students last fall, I took a closer look at the trends.  And this is what I discovered: the real growth in the gap between rich and poor isn't actually between the richest and the poorest, it's between the richest and those in the middle.

I've pasted one chart from the Reardon chapter to show what I mean.  The gap between those at the 50th percentile and those at the 10th percentile -- the middle-poor gap, if you will (represented by the dashed line) -- has been fairly steady.  The gap between those at the 90th percentile and those at the 50th percentile, meanwhile -- the rich-middle gap, if you will (represented by the solid line) -- has grown rapidly, from under a quarter of a standard deviation for children born in the 1940s to almost three-quarters of a standard deviation for children born in the 21st century.


I actually missed that point when I first skimmed the chapter, but to me it's maybe the most pressing policy issue of the next few decades: how will middle-income Americans work and live?

As somebody who studies the effects of poverty, I'm predisposed to believe that poverty, and the low performance of children living in poverty, poses the largest educational problem.  And there's no doubt that that problem is large.  Even if the poorest kids aren't falling further behind the middle-income students, they're still far behind where they ought to be and farther behind the wealthiest kids than before.

But if the middle of the income distribution falls far behind as well, that could leave us in really serious trouble.  This chart of wage growth that EPI offered in response to Brooks' op-ed is one way of looking at the issue.  The wages of the working poor are awful, but it's the hourly wages of the middle-income folks that have seen the lowest growth in the last 20 years.  Indeed, the median household made only slightly more in 2010 than it did in 1978 ($49K vs. $46K -- Table H-6) when adjusting for inflation.

As the table below indicates, from 1991-2010, median income by education level declined considerably for the nearly 60% of the population with a high school diploma, some college, or an associate's degree; held steady for the 20% or so with a bachelor's degree; and increased for the 10% or so with a master's or professional degree (to reduce visual clutter, I leave off the 13% or so of Americans who did not reach, or did not graduate from, high school and the 1% who earned a doctorate*).  Now, education levels increased a bit during this period -- so part of the explanation may be that there were more people in each of the higher groups -- but not by enough to change the fact that Americans with median levels of education are earning less now than they were 20 years ago.


Median income by education level.  Source: US Census, Table H-13

It's hard to imagine a burgeoning economy in any country that sees no real income growth for those at the middle of the income distribution.  But Murray, Brooks, and others (liberals included) also note other worrying trends concerning health, marriage, childbirth, etc.

Indeed, another recent NY Times article reported that the majority of babies born to women under 30 are now born out of wedlock.  But more important are the different rates by social class: "About 92 percent of college-educated women are married when they give birth, compared with 62 percent of women with some post-secondary schooling and 43 percent of women with a high school diploma or less".  And while the number of babies born out of wedlock has risen for all three groups, it's the middle group that's seen the starkest increase: in 1990 only 11% of children born to women in their 20's with some college were born out of wedlock, but by 2009 that number had more than tripled to 34%.

Again, the problem is more pervasive among the least-educated women (the comparable number is 51%), but the largest change is in the middle of the income distribution.

Now, you may have noticed that I titled the post "Median Class" instead of "Middle Class," and that's because what a lot of people define as "middle class" isn't actually composed of people in the middle of the income distribution.  People often talk about college-educated adults belonging to the middle class, but fewer than 30% of adults have a 4-year degree.  Depending on which model of social class one uses, those who fall right around the median of the income distribution -- the people I'm talking about here -- are usually classified as lower-middle-class or working class.

It seems to me that a large part of the challenge is economic.  Jobs that pay high wages to employees without high levels of educational attainment are fast disappearing (one could write "high wage/low skill" jobs as shorthand, but I don't possess the skills for most of these jobs so that seems inaccurate to me).  We still have a not-insignificant number of jobs in construction, trucking, the trades, manufacturing, and so forth that pay fairly well, but the number of those types of jobs has declined dramatically -- largely due to globalization and/or technology -- in recent decades.  And it seems unlikely that this trend will dramatically reverse.  In other words, it seems unrealistic to expect anywhere near 70% of the population to find stable employment with decent wages without a 4-year degree.

The result, it seems to me, is that instability and low wages are no longer the domain of only the poorest Americans.  And it seems reasonable to assume that the growth in the rich-middle achievement gap is due, at least in part, to the spread of this job instability to the median earners.

In short, it seems like there's a growing bulge in the middle of the income and education distributions that is lost.  Fewer and fewer can make a good living without a college degree.  They're falling further behind the wealthy academically.  And to make matters worse (to be intentionally and melodramatically blunt), recent reports say they're increasingly divorced, fat, and lazy as well.

That's worrying.  But what really worries me is that I don't see an easy solution. Brooks' integration idea may be a small step in the right direction, and improving our educational system would certainly be another. But neither seems likely to prevent the problem from getting worse in the next 20 or so years.  It seems unlikely we can do any, let alone all, of the following in a short period of time:

1.) dramatically increase the number of stable, high-paying jobs for those without college degrees
2.) dramatically** increase the number of college-educated adults while also increasing the number of stable, well-paying jobs for those with degrees accordingly
3.) reverse social trends and encourage more two-parent households, more civic engagement, less obesity, etc.

But I hope I'm wrong.  I hope 20 years from now I'm writing about the resurgence of the median class and not about the spread of poverty to children of middle-income households.  Either way, I think recent data indicate we need to adjust our focus when we discuss ways to boost performance of the lowest achievers.  If we want to focus research and policy on those lagging behind, we need to broaden our scope beyond just the 10 or 20% lowest-income Americans.  Those in the middle aren't doing that much better.



*Median income increased slightly from 1991-2010 for those with less than a 9th grade education -- from $20,640 to $21,254; decreased from $27,375 to $24,787 for those who attended, but did not graduate from, high school; and decreased slightly for those with a doctorate -- from $121,693 to $119,825.


**"dramatically," in this case, does not mean the 10 percentage point increase we've seen over the past 25 years for adults over age 25, and especially does not mean the 10 percentage point increase we've seen over the past 35 years for those aged 25-29

Wednesday, March 14, 2012

Thoughts on "Educational Productivity"

Last week, Matthew Ladner produced a stunning chart showing an "implosion" in our nation's educational productivity.  Productivity here seems to be defined as the ratio of per-pupil expenditures on public education to average NAEP test scores.  The former has tripled since 1970 while the latter has essentially remained flat for the upper grades.  I'm not sure of the impetus behind that particular post, but the Bush Center has written a similar post that makes all the same mistakes.

Before I delve into those mistakes, I'll point out that they've also created a nifty website that allows people to compare the standardized test scores of students in any district to those of other students in the state, nation, and world.*

So, what's wrong with comparing spending to achievement?  Seems straightforward.  And the graph is certainly compelling.  But, alas, the statistics that seem the most straightforward are often the least useful.  Among other issues:

1.) Spending and test scores are on different scales.  Spending can multiply almost infinitely while the test scores have a ceiling.  In the chart on the site, the average 17 year-old scored 306 out of 500 on the NAEP math test in 2008.  Which means that even if every kid in the country earned a perfect score the next time around, the average score would only increase about 63%.  Since school spending has tripled, the ratio of spending to achievement would still be far greater now than it was 40 years ago.
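To make the ceiling problem concrete, here's a quick sketch of the arithmetic using the figures quoted above (the score of 306 out of 500 and the tripling of spending come from the post; the code itself is just illustrative):

```python
# Back-of-the-envelope check on the ceiling problem.
spending_growth = 3.0    # per-pupil spending roughly tripled since 1970
avg_score_2008 = 306     # average 17 year-old NAEP math score in 2008
max_score = 500          # the test's ceiling

# Even if every student aced the test, scores could rise at most:
max_score_growth = max_score / avg_score_2008   # ~1.63x, i.e. about 63% higher

# So the spending-to-score ratio would STILL sit well above its 1970 level:
min_ratio_vs_1970 = spending_growth / max_score_growth

print(f"Maximum possible score growth: {max_score_growth:.2f}x")
print(f"Spending/score ratio would still be at least {min_ratio_vs_1970:.2f}x its 1970 value")
```

In other words, even under the impossible best case, the chart's "productivity" metric would show schools getting nearly half as productive, purely because one axis is bounded and the other isn't.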

2.) Why would we assume that it takes the same level of effort for a school to get a student to earn a certain score now as it did in 1970?  A zillion factors other than education spending influence achievement levels.  If parenting ability, economic circumstances, living conditions, and such improved dramatically, then we shouldn't think it miraculous if scores increased with no additional school spending.  Similarly, if societal conditions worsened in some way, then more effort would be needed to achieve the same scores.  I have no idea whether it's harder or easier to get the average 17 year-old to score 306 on the NAEP now than it was in 1970, but we'd need to know that answer to accurately measure educational productivity.

3.) Why would we assume that the same level of spending is commensurate with the same level of effort on behalf of districts now as it was in 1970?  The economic and social context of schooling is dramatically different.  Perhaps most importantly, the number of women in the workforce -- particularly in fields outside of education -- has exploded.  Simple economics dictates that it must cost more to buy the services of an equally qualified teacher.

4.) Test scores were not the sole goal of that increased spending.  Surely, we also aimed to increase the number of high school and college graduates (I don't have the HS stats handy, but almost twice as many 25-29 year olds have bachelor's degrees now as did in 1970 (though it's increased by only about a third since 1975)).  I think a reasonable argument could be made that it's increasingly costly to get each additional student to graduate (i.e. moving the HS graduation rate from 50 to 60% is easier than moving it from 80 to 90%), so we might not expect the same returns per dollar on those measures.  And, surely, we also aimed to improve many other skills (e.g. critical thinking, physical/emotional health, social skills, art appreciation, etc.) that aren't measured by the math and reading tests listed.

So, should Ladner's alarming chart worry us?  I wouldn't dismiss it out of hand.  I wouldn't be the least bit shocked if our returns to effort and spending have decreased the past 40 years.  But we can't tell whether productivity has decreased, remained steady, or increased by looking at that chart.  The most compelling figures he presents are the large increases in non-teacher staff in schools, but some unknown number of support staff are certainly invaluable so even that doesn't prove all that much.

And, by the way, those same four problems apply to any international comparison of a simple spending : test score ratio.  Were we to completely eliminate schools, culture, society, and a myriad of other contextual factors would still produce kids who scored much higher and lower on tests in different countries; it would be harder and easier and cheaper and more expensive to change that in different countries; and each country emphasizes different outcomes to a different degree.

I'll be the last person to argue that our nation's schools are just fine -- we face countless problems with a nearly infinite number of solutions -- so please don't interpret my criticism as an argument for the status quo. I hate the status quo.  But also realize that Ladner's chart gives us exactly zero information about what ails our schools.



*The "world" here is 25 developed nations.  On another note, here's a fun game: they don't seem to have compiled the list of the top-performing districts, so go see which ones you think might rank highest.  So far, I've found:

-The average student in Chappaqua, NY outscores 89% of students internationally in reading and 82% in math
-The average student in Chatham, NJ outscores 88% of students internationally in reading
-The average student in Brookline, MA outscores 77% in math.


Saturday, February 25, 2012

Is Teacher Quality Really Causing the Achievement Gap?

Yesterday, the NY Times released the value-added scores of thousands of teachers over the past five years.  Before and immediately after the release, people seemed to mostly argue the merit of the decision to release the data.  But I have a substantive question about the data.

What caught my eye was that, according to the NYT analysis of this particular set of scores, good teachers are evenly distributed between high-poverty and low-poverty schools.  From the article:

there was no relationship between a school’s demographics and its number of high- or low-performing teachers: 26 percent of math teachers serving the poorest of students had high scores, as did 27 percent of teachers of the wealthiest.

The LA Times reported a roughly similar situation in LA when it released teachers' scores a couple years ago.  Which is really quite shocking in a number of ways.  Most notably, researchers and practitioners have long assumed that higher-poverty schools had worse teachers than lower-poverty schools -- past studies have repeatedly found that teachers in high-poverty schools are less experienced, turn over at a much higher rate, score lower on achievement tests, attend less selective colleges, etc.  Accordingly, at least part of the theory of action behind the teacher quality movement has been that giving low-income students teachers who are as good as or better than those in higher-income schools would significantly narrow the achievement gap.

But these two measures of teacher quality indicate there may be no major differences between low- and high-poverty schools, while we know that large gaps in achievement still exist between low-income and high-income students.  Which means at least one of two things.

1.) Differences in teacher quality are not a major driver of the achievement gap.

2.) These value-added scores are not a good measure of teacher quality.

I don't think anybody seriously doubts -- or at least that anybody serious doubts -- that some teachers are much better than others and that the best teachers can make a large difference.  But if quality teachers, according to these value-added measures, are roughly evenly distributed between high- and low-poverty schools at the same time that we see differences between high- and low-income students growing, then improving the quality of teachers (again, as indicated by these value-added measures) in high-poverty schools seems unlikely to close the achievement gap.  Either other factors influence achievement far more, the effects of quality teachers on students are much less direct than many assume, or what we're measuring isn't what matters.

In short, these data indicate that we need to broaden our focus beyond teacher quality and/or re-evaluate the way we're currently measuring teacher quality.

Wednesday, February 8, 2012

How Education Research is like Football Research

In the fall, I wrote about an instance in which outsiders may be needed in education reform.  Today I'll give you an example of how outsiders can also be dangerous (though this one pertains more to research).

Perhaps the largest change in educational research over the past decade or so has been the sizable increase in large-scale quantitative research, a fair amount of which is conducted by researchers outside of ed schools.  Like any change, this has resulted in both positives and negatives.  But one thing that worries me is that the people who are most worried about statistical rigor in quantitative analyses (both inside and outside of ed schools) tend to be less concerned with understanding the context and processes of schooling.

And that's incredibly dangerous.

Methodology, statistics, and technical skills are very, very important in the development of good research.  But without a proper understanding of how schools work and what is actually happening on the ground, one can't expect to ask the right questions.  And if one fails to ask the right questions, it really doesn't matter how complex and rigorous their analysis is because the answers to those questions are meaningless.

Here's one example of how such a process can unfold -- it's completely unrelated to ed policy, but I still think it's illustrative.  The Freakonomics Blog posted a brief discussion yesterday of the ending to the Super Bowl.  The post said two things (paraphrasing, of course):

1.) Isn't it amazing that the coaches of both teams realized that the Giants scoring a touchdown with about a minute left was actually a better outcome for the Patriots?  The Patriots' coaches tried to let the Giants run the ball into the endzone while the Giants' coaches instructed their players not to score a TD.  These counter-intuitive behaviors are an excellent example of game theory properly implemented.

2.) But then the Giants failed to take game theory into account when attempting their two point conversion.  Wouldn't it have been much better for them to run time off the clock instead of trying to score to go up 6 points instead of 4?  They might've been able to kill 20 seconds by running the ball 95 yards backwards and around in circles, and certainly being up 4 with 40 seconds left is better than being up 6 with a minute left.  Why didn't the coaches think of this?

There's some clever thinking going on here.  Yes, this is an interesting application of game theory.  And, yes, running 20 seconds off the clock would've been a better strategy.  So the application of economic theory to the situation is exemplary.  In a short space, there's a cogent analysis and a provocative question.  But there are two fatal flaws.

1.) It's not true that both coaches applied game theory.  Tom Coughlin, the Giants' coach, said he preferred that the team take the guaranteed six points rather than run down the clock.  So let's hold off on patting him on the back for correctly applying game theory.

2.) More importantly, the clock doesn't run during two-point conversions.  The Giants could've run around in circles for ten minutes, and there still would've been exactly 59 seconds left on the clock.

So what we have here is a smart professor who's well-trained in economic theory and statistics.  This training has allowed him to make an important insight about a football game and ask an interesting question.  Except that he doesn't actually seem to know much about the rules of football or the context of the situation.  Which has rendered his question moot.

And I see the same thing (in a much less dramatic and much less foolish way) happening in education research.  Smart people with training in other fields and disciplines and serious methodological credentials come into the field and find some low-hanging fruit ripe for picking.  At first, this seems like a great idea.  We can never have too many smart, well-trained researchers in education.  And the eye of the outsider can be sharp.  But then the research starts and we realize that somebody can be smart and well-trained but, at the same time, fail to truly understand how schools work and the contexts under which students, teachers, principals, schools, etc. operate.  And then we get smart, well-trained people asking the wrong questions (or interpreting their findings in silly ways).  And that neither advances the field nor helps us improve our educational system.

Let's bring the analogy back to football.  Let's say that football were a field in many universities.  Grad students train under faculty who work for Schools of Football and/or Departments of Football Policy, Football Leadership, Football Teaching & Learning, Football Evaluation, Football Foundations, Football Studies, and so on.  And most of the research on football is conducted by faculty and grad students from these schools and departments.  There's no reason why an economist shouldn't do a study on the costs and benefits of attending school on a football scholarship; why a psychologist shouldn't conduct a study on the impact of playing football on one's personality; or why a sociologist shouldn't conduct a study of the impact of playing football on one's social capital.  But in order to do these studies well, they first need to understand how the game of football is played, what a player does on the field, how much he practices, and so on.  Otherwise they're just chucking their theories against a wall and hoping one sticks somewhere.

So, to all the smart economists, psychologists, sociologists, etc. out there who wish to conduct research on education: Welcome, we'd love to hear your insights and figure out if we can apply your theories and methods to help us advance our field and improve our schools.  But before claiming that you've solved a problem none of us have been able to solve for the last 100 years, take some time to learn how schools operate.  Read a massive and wide-ranging stack of literature.  Go visit some schools.  Talk to people who work in schools and education departments.  Talk to people who study those who work in schools and education departments.  Then begin your research.

At the very least, that should save you the embarrassment of asking students how many touchdowns they need to score in order to hit a home run on their fourth grade reading test.

Tuesday, February 7, 2012

The Logistics of "Thinning Out" Bad Teachers

Nick Kristof recently wrote another column calling for more high-quality teachers based on the latest paper on value-added measures of teacher quality.  There's a whole lot to discuss about both the column and the research paper, but let me focus for a minute on one small part of it.

Near the end of the column, Kristof writes that "If we want to recruit and retain the best teachers, we simply have to pay more — while also more aggressively thinning out those who don’t succeed. It’s worth it."  Recruiting, retaining, paying (and training, which is left out of this sentence) are all complex endeavors, but the "thinning out" part of the equation is often taken for granted.

Here's my question for Kristof: even if (and that's a big if) we can find a fair, accurate, and agreeable way to identify and dismiss the worst teachers, how many teachers are we actually going to dismiss in such a scenario?

The first question would obviously be whether we need to fire the bottom 5%, 10%, 25% or some other number.  That's up for discussion.

But the logistical question, then, is how many teachers among the bottom X% 1.) can be readily identified and 2.) are planning on teaching again next year.  This will differ greatly by school and district, but in some places, this is going to be a very small number.  Why?  Let's take a look at what the research says.

First, research consistently finds that it takes 3-5 years for a teacher to reach their potential.  So a good number of the lowest-performing teachers are simply going to be novices who will be better teachers next year.  We don't want to fire a first-year teacher who was in the bottom X% if we have reason to believe they'll be a really good teacher in a couple of years.  That would be incredibly counterproductive.

Second, research has consistently found that value-added measures of teacher effectiveness bounce around considerably from year to year -- particularly for teachers who teach a small number of students (e.g. a 4th grade reading teacher with 18 students versus an 8th grade math teacher with 150 students).  At least one paper has found that averaging scores over three years provides a much better, and more stable, estimate of teacher performance than does any single-year estimate.
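To illustrate why small classes and single years make for noisy estimates -- this is a toy simulation, not real value-added data; the class sizes of 18 and 150 come from the example above, and everything else (the noise model, the "true effect" framing) is an assumption for illustration -- consider:

```python
import random
import statistics

random.seed(42)

def yearly_estimate(true_effect, n_students, student_sd=1.0):
    """One year's value-added estimate: the teacher's true effect plus
    sampling noise, which shrinks with the number of students tested."""
    noise_sd = student_sd / (n_students ** 0.5)
    return true_effect + random.gauss(0, noise_sd)

def estimate_spread(n_students, n_years, trials=20000):
    """Standard deviation of the (possibly multi-year-averaged) estimate
    around the true effect, approximated by simulation."""
    errors = [
        statistics.mean(yearly_estimate(0.0, n_students) for _ in range(n_years))
        for _ in range(trials)
    ]
    return statistics.pstdev(errors)

reading_1yr = estimate_spread(n_students=18, n_years=1)   # 4th grade reading, one year
math_1yr = estimate_spread(n_students=150, n_years=1)     # 8th grade math, one year
reading_3yr = estimate_spread(n_students=18, n_years=3)   # reading teacher, 3-year average

# The 18-student estimate is roughly 3x noisier than the 150-student one,
# and averaging three years shrinks the noise by roughly 1/sqrt(3).
```

Under this model the noise scales as one over the square root of (students per year × years), which is consistent with the finding that three-year averages are far more stable than any single-year estimate.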

Third, a number of recent papers have found that, at least in the first few years, many of the least successful teachers exit teaching.  This makes sense -- if you start a new career and find yourself completely overwhelmed, you're not likely to stay very long.

Fourth, teacher attrition is exceedingly high in many high-poverty schools.  The general consensus is that about half of urban teachers leave the field within their first 3 years.

So, what does this mean?  We probably don't want to fire a whole lot of teachers in the first 3-5 years of their career because a.) they're still learning and improving; b.) we can't be that sure who the worst teachers are anyway; and c.) a good portion of the catastrophically bad teachers are self-selecting out of the field anyway.  If we discount the first two years, when teachers are still learning their craft, and then take three more years to compute accurate value-added scores, it would only be teachers who'd taught for 5+ years who would really be ripe for firing due to low value-added scores.

Which means that the main herd we're trying to thin is the teachers who've made it through those first few years, reached their potential, and for whom we have accurate value-added estimates.  But how many teachers is that?  When I looked at high-poverty NYC middle schools a few years ago, I found that in the average school, only 1/3 of teachers had 5 or more years of experience.

Let's say that we're very confident in our ability to recruit and retain teachers who are better than our current teaching force and so we decide to fire all below average teachers (a full 50%) -- which would be a far more aggressive plan than any I've seen proposed.  First, the majority of these below average teachers are novices who are still improving and for whom we don't have particularly good estimates of ability.  Given that the majority of struggling beginning teachers either improve or self-select out of the profession, let's estimate that 2/3 of all teachers in their first 5 years are identified as below average teachers.  This would mean that only 1/6 of all teachers in their sixth year and beyond are below average teachers.  And since only 1/3 of teachers are in their sixth year or beyond, this would mean that only 1/18 of all teachers would both have 5+ years of experience and be rated below average.  This is a little under 6% of all teachers.

The average school in my sample had 72 teachers.  So, that's the equivalent of firing four teachers.  And that's under an extremely aggressive scenario.  Besides, now that you've rid your school of the chaff, who, exactly, do you want to fire next year?  And if you want to argue that we could be more aggressive and fire some of the novice teachers, that would mean there'd be fewer low-performing experienced teachers (since teachers tend to be roughly equally effective pre- and post-tenure).  So, for now, let's stick with the assumption that, under an aggressive plan, we'd fire four teachers this year in the average school.
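The arithmetic in the last two paragraphs can be checked directly; all the fractions below are the post's stated assumptions (fire all below-average teachers, 2/3 of novices rated below average, 1/3 of teachers with 5+ years), not empirical estimates:

```python
from fractions import Fraction

below_avg_share = Fraction(1, 2)    # fire ALL below-average teachers
novice_share = Fraction(2, 3)       # share of teachers in their first 5 years
novice_below_avg = Fraction(2, 3)   # share of novices rated below average

# Below-average teachers who are novices, as a share of ALL teachers:
below_avg_novices = novice_share * novice_below_avg          # 4/9

# Below-average teachers with 5+ years, as a share of ALL teachers:
below_avg_experienced = below_avg_share - below_avg_novices  # 1/2 - 4/9 = 1/18

# As a share of experienced teachers only (1/3 of the workforce):
experienced_share = 1 - novice_share
within_experienced = below_avg_experienced / experienced_share  # 1/6

# In the average 72-teacher school from my sample:
teachers_per_school = 72
fired = teachers_per_school * below_avg_experienced             # 4 teachers
```

1/18 of all teachers is a little under 6%, which is where the four-teachers-per-school figure comes from.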

Now, other districts have far more experienced teachers, and firing might make more of a dent there.  But a good number of our poorest-performing schools and districts are churning through teachers too fast for the firing of low performers to make much of an impact.  Certainly, we should make every effort to rid our schools of the worst teachers (by increasing the performance of, and/or dismissing, the lowest performers) -- I don't think anybody seriously disputes that notion.  Or, at the very least, I don't think anybody serious disputes that notion.  But will firing the lowest-performing 6% of teachers in high-poverty NYC schools make a difference?  It's possible.  But let's be reasonable -- it's not going to make much of a difference.

So, yes, let's work harder to rid our schools of the worst teachers.  But let's not pretend it will be easy to do.  And, perhaps more importantly, let's not hold our breath while we wait to see if that bullet is actually silver.  In most places, other problems loom far larger.

Monday, February 6, 2012

Evaluating the Evidence on Non-School Interventions

I've been meaning to finish writing this piece for six weeks, and now I finally have.  Enjoy.

One of the most e-mailed articles in the NY Times shortly before Christmas was this piece by Helen Ladd and Edward Fiske on social class and educational achievement, in which the authors call for more non-school interventions ("education policy makers should try to provide poor students with the social support and experiences that middle-class students enjoy as a matter of course"). Overall, I thought it was a pretty good piece, but two things in particular struck me.

1.) That they build an argument for focusing on what happens outside of schools and then their first recommendation is to expand pre-schools.

2.) The recommendations after the pre-school discussion are fairly vague.

While the first is interesting, I'm more intrigued by the second -- and I wonder to what extent it's because they want to recommend that we change 30 things they can't possibly list in the limited space and to what extent it's because they're not sure exactly what to address.

Which raises the question: what do we know about which non-school programs will make a difference?  One particularly promising young scholar has argued that we don't yet know enough (you'll get the joke if you click on the link) to draw many conclusions on the topic.

The authors are certainly right that "Large bodies of research have shown how poor health and nutrition inhibit child development and learning" and they could've included numerous other factors at the family and neighborhood level.  Since we know that these social factors and environmental conditions are causally related to academic performance, trying to ameliorate their impact on low-income children makes all the sense in the world.  But, at the same time, I have yet to find (after extensive searching) a whole lot of evidence that we've been able to successfully do this in ways that rigorous research has found subsequently improved academic performance.  And Russ Whitehurst argues the point even more strongly, writing in a recent report that "There is no compelling evidence that investments in parenting classes, health services, nutritional programs, and community improvement in general have appreciable effects on student achievement in schools in the U.S."

Let's take a look at the few programs they do mention in the piece.  When I search Google Scholar for research on the programs they name, this is about all I can find on the East Durham Children's Initiative, Syracuse's Say Yes to Education program, Omaha's Building Bright Futures, and Boston's Citizen Schools.  Only the last one links the program to any educational outcomes, and it appears to be an internal report.  If there's evidence in peer-reviewed academic journals that these programs have improved students' academic performance, I've yet to see it (note: this is not to say that any of these four aren't working, just that we don't yet have really good evidence that they are).

At this point, some of you may be saying "you forgot about the Harlem Children's Zone!".  That's certainly the most-cited example of social policy impacting academics.  But there's a funny thing about that.  As far as I can tell, only one study has linked HCZ to academic outcomes.  And one thing that recently caught my eye is a chapter by Roland Fryer and others in the new Duncan/Murnane book on inequality and schools (highly recommended, btw).  In particular, I find it interesting how they've changed their tune on HCZ the past couple years.

In 2009, Fryer put out an NBER working paper with PhD student Will Dobbie arguing that the HCZ had effectively closed the black-white achievement gap.  The paper got all sorts of play in the press, with David Brooks claiming it proved once and for all that the "no excuses" schools were all that we needed and some of the Broader, Bolder folks replying that, no, it proved once and for all that community resources made the difference.

Shortly thereafter, I asked Geoffrey Canada which it was when he visited Vanderbilt -- he said that we needed both and that it was a "terrible, phony debate" to try and separate them.  Nor could Dobbie and Fryer definitively separate them; in the introduction, they write (emphasis theirs) "We cannot, however, disentangle whether communities coupled with high-quality schools drive our results, or whether the high-quality schools alone are enough to do the trick." (p. 4)

But now they've updated the paper and, according to Fryer's Harvard info page, it's been accepted at the American Economic Journal: Applied Economics. This is from the abstract: "We conclude with evidence that suggests high-quality schools are enough to significantly increase academic achievement among the poor. Community programs appear neither necessary nor sufficient."

This would go nicely with the new book chapter (here's a slightly different version) in which they write, on the first page:

The evaluation of the Harlem Children's Zone allows us to conclude that a high-quality school coupled with community-based interventions does not produce better results than a high-quality school alone, offering further evidence that school investments offer higher social returns than community-based interventions.

That seems like a rather sweeping statement to make based on one preliminary estimate of one program's effects but, nonetheless, their findings do put the burden of proof back on those supporting the Broader, Bolder position.

The closest thing I've seen to a collection of research citations indicating that we do have evidence that community-based interventions can work is David Kirp's recent book, but even that involved a good deal of cherry-picking and mostly discussed small programs not explicitly linked with local schools.

So, where does this leave us?  As I wrote above, we have plenty of evidence that a wide range of experiences associated with living in poverty negatively impact kids' academic performance.  And we have plenty of reason to believe that altering these experiences could, potentially, improve kids' academic performance.  But I, and others, would argue that we have precious little empirical evidence that social policy has (or will) alter kids' lives in ways that will subsequently improve their grades, test scores, graduation rate, attainment, etc.  So I find it a bit odd that Ladd and Fiske conclude by writing

But let’s not pretend that family background does not matter and can be overlooked. Let’s agree that we know a lot about how to address the ways in which poverty undermines student learning. Whether we choose to face up to that reality is ultimately a moral question.

I'd make a different pitch if I were they.  I'd write something more along the lines of this: Let's not pretend that family background and living conditions don't matter and can or should be overlooked.  Let's agree that we know a lot about how poverty undermines student learning and how large this impact is.  And let's agree that we urgently need more research on ways to address the links between poverty and education.  The Promise Neighborhoods and other initiatives deserve our full attention and support in the short run and can potentially provide evidence that will help us better address the problem in the long run.

Of course, twice as many words with half the certainty is a really bad formula for an op-ed.  And there's no quicker way to frustrate policymakers than to write "more research is needed."

But, at the same time, I'm not sure it's helping their cause to claim that we know how to solve the problem.  If I'm in charge of a new Promise Neighborhood, my immediate reaction would be "We do? Great!"  Quickly followed by asking "which factors should I aim to address and which programs do we know are best to address these?"  I don't know the answer to that, and I've yet to hear from anyone who does.

So, in the end, I'd say there's about as much empirical evidence that social policy will close the achievement gap as there is that charter schools, merit pay, and vouchers will close the gap.  That is, very little.  So if we insist on arguing for an either/or approach, this leaves us at a standstill.  Both sides can yell that the other side's evidence is weak.  Which doesn't seem particularly productive to me.

As a researcher, this seems like an excellent argument to conduct a lot more research on the links between social policy and academic performance (as well as on in-school interventions).  Were I a policymaker, I'd want to avoid putting all my eggs in one basket.  We know the status quo doesn't work, but we can't really say for sure what else would be better.  That seems like a golden opportunity for policymakers and researchers to work together and experiment (literally) with a wide variety of reforms -- the former would get to hedge their bets and look prudent and open-minded while the latter would get to conduct groundbreaking research on a crucial issue.

In sum: Do we have conclusive evidence that a particular set of non-school interventions will close the achievement gap?  No, we don't.  So let's not claim we do.  But, let's also vow to keep searching for it.

Tuesday, January 31, 2012

Top 50 Endowments Per Pupil

I'm sure I missed a couple schools (I'm human, I didn't include public schools, and there may be some ultra-small schools I overlooked), but, glancing through the rankings and endowment data, these are all the schools I could find that are able to spend at least $10K per student per year (assuming a 5% endowment spending rate).


Rank | School | Endowment ($K) | Enroll | End/Stu | 5%/stu | USNWR
1 | Princeton | $17,109,508 | 7,802 | $2,192,964 | $109,648 | Res 1
2 | Yale | $19,374,000 | 11,701 | $1,655,756 | $82,788 | Res 3
3 | Harvard | $31,728,080 | 19,627 | $1,616,553 | $80,828 | Res 1
4 | Pomona | $1,700,454 | 1,560 | $1,090,035 | $54,502 | LAC 4
5 | Swarthmore | $1,508,483 | 1,524 | $989,818 | $49,491 | LAC 3
6 | MIT | $9,712,628 | 10,566 | $919,234 | $45,962 | Res 5
7 | Amherst | $1,641,511 | 1,795 | $914,491 | $45,725 | LAC 2
8 | Grinnell | $1,500,219 | 1,655 | $906,477 | $45,324 | LAC 19
9 | Williams | $1,784,305 | 2,083 | $856,603 | $42,830 | LAC 1
10 | Stanford | $16,502,606 | 19,535 | $844,771 | $42,239 | Res 5
11 | CIT | $1,772,369 | 2,175 | $814,882 | $40,744 | Res 5
12 | Rice | $4,451,452 | 5,879 | $757,178 | $37,859 | Res 17
13 | Cooper Union | $607,135 | 910 | $667,181 | $33,359 | RCN 2
14 | Wellesley | $1,499,872 | 2,411 | $622,095 | $31,105 | LAC 6
15 | Berea | $978,735 | 1,613 | $606,779 | $30,339 | LAC 27
16 | Washington & Lee | $1,218,132 | 2,173 | $560,576 | $28,029 | LAC 12
17 | Dartmouth | $3,413,406 | 6,141 | $555,839 | $27,792 | Res 11
18 | Notre Dame | $6,259,598 | 11,992 | $521,981 | $26,099 | Res 19
19 | Richmond | $1,877,193 | 3,618 | $518,848 | $25,942 | LAC 27
20 | Chicago | $6,575,126 | 12,781 | $514,445 | $25,722 | Res 5
21 | Bowdoin | $904,215 | 1,762 | $513,175 | $25,659 | LAC 6
22 | Smith | $1,429,527 | 3,113 | $459,212 | $22,961 | LAC 19
23 | Claremont McKenna | $543,236 | 1,278 | $425,067 | $21,253 | LAC 9
24 | Emory | $5,400,367 | 13,381 | $403,585 | $20,179 | Res 20
25 | Trinity (TX) | $962,829 | 2,417 | $398,357 | $19,918 | RUW 1
26 | Duke | $5,747,377 | 14,983 | $383,593 | $19,180 | Res 10
27 | Bryn Mawr | $671,103 | 1,755 | $382,395 | $19,120 | LAC 25
28 | Wash U | $5,280,143 | 13,820 | $382,065 | $19,103 | Res 14
29 | Northwestern | $7,182,745 | 19,389 | $370,455 | $18,523 | Res 12
30 | Berry | $752,544 | 2,087 | $360,586 | $18,029 | LAC 121
31 | Middlebury | $907,668 | 2,532 | $358,479 | $17,924 | LAC 5
32 | Hamilton | $657,529 | 1,861 | $353,320 | $17,666 | LAC 17
33 | Columbia | $7,789,578 | 22,283 | $349,575 | $17,479 | Res 4
34 | Haverford | $402,730 | 1,177 | $342,167 | $17,108 | LAC 10
35 | Colby | $611,441 | 1,825 | $335,036 | $16,752 | LAC 21
36 | Vassar | $814,130 | 2,446 | $332,841 | $16,642 | LAC 14
37 | Penn | $6,582,029 | 19,842 | $331,722 | $16,586 | Res 5
38 | Carleton | $653,465 | 2,020 | $323,498 | $16,175 | LAC 6
39 | Macalester | $654,465 | 2,033 | $321,921 | $16,096 | LAC 25
40 | Harvey Mudd | $243,125 | 773 | $314,521 | $15,726 | LAC 18
41 | Davidson | $509,583 | 1,742 | $292,528 | $14,626 | LAC 11
42 | Denison | $654,584 | 2,275 | $287,729 | $14,386 | LAC 49
43 | Brown | $2,496,926 | 8,695 | $287,168 | $14,358 | Res 15
44 | Lafayette | $658,146 | 2,414 | $272,637 | $13,632 | LAC 40
45 | Vanderbilt | $3,414,514 | 12,714 | $268,563 | $13,428 | Res 17
46 | Mount Holyoke | $602,481 | 2,345 | $256,922 | $12,846 | LAC 29
47 | Cornell | $5,059,406 | 20,939 | $241,626 | $12,081 | Res 15
48 | Colgate | $693,436 | 2,903 | $238,869 | $11,943 | LAC 21
49 | Oberlin | $699,895 | 2,974 | $235,338 | $11,767 | LAC 24
50 | Holy Cross | $606,074 | 2,899 | $209,063 | $10,453 | LAC 29
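For anyone who wants to check my math, here's how the per-student columns are computed, using Princeton's row as an example (this assumes, as the figures imply, that the endowment column is reported in thousands of dollars):

```python
# Recompute the per-student columns for one row of the table (Princeton),
# assuming the Endowment column is in thousands of dollars.
endowment_thousands = 17_109_508   # Princeton endowment, $K
enrollment = 7_802
spend_rate = 0.05                  # assumed annual endowment payout rate

endowment_per_student = endowment_thousands * 1_000 / enrollment
annual_spend_per_student = endowment_per_student * spend_rate

print(f"End/Stu: ${endowment_per_student:,.0f}")       # ≈ $2,192,964
print(f"5%/stu:  ${annual_spend_per_student:,.0f}")    # ≈ $109,648
```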



There seem to be four tiers here:

1.) Three schools can spend over $80,000 per student in a given year
2.) Eight schools can spend between $40K and $55K per student
3.) 13 schools can spend between $20K and $40K per student
4.) 26 schools can spend between $10K and $20K per student

I was curious whether national research universities or liberal arts colleges had more resources per student but, outside the top three, they're pretty evenly mixed.