Tuesday, March 18, 2014

Face Validity and the New Ed School Rankings

For those of you outside academia, "face validity" is a fancy term academics use that simply means that something makes sense at first glance (or "on its face").  US News and World Report released their latest grad school rankings last week, and one thing I noticed is the lack of face validity.

First, I should note that I'm a Vanderbilt alum and Vanderbilt dropped to #2 in the education school rankings after ranking #1 for five consecutive years.  I hesitated to write this post lest anybody think it's simply sour grapes.  Or maybe an attempt to draw attention to the fact that Vanderbilt dropped in the rankings as soon as I left . . .

In all seriousness, I really don't care all that much where Vanderbilt ranks.  I don't even know what it actually means to be the top-ranked school of education.  What matters most?  Outcomes of students?  Research of faculty?  Selectivity?  The current rankings measure the latter two but not the first (my biggest criticism of them would be that they make virtually no attempt to measure the education students actually receive).  We could construct the rankings 100 different ways that would all make some amount of sense, so it's a bit ridiculous that the USNWR rankings draw so much attention (and yet, here I am, helping them draw even more attention).

Though I just claimed to have no idea what it means to be the top-ranked school of education, if pressed I'd posit that the top four, in some order, are Vanderbilt (Peabody), Columbia (Teachers College), Stanford, and Harvard.  I think those four have the most history and prestige, but I could be wrong.  By one measure, they have the most recognizable scholars (Stanford has 21; Harvard 19; Columbia 12; and Vanderbilt 11 of the 200 scholars ranked) -- so I don't think I'm totally off-base here.  So it's interesting to me that those schools rank second, third, fourth, and eighth.

Another measure would be to look at which schools have the top-ranked programs.  USNWR ranks the top 20 or so programs in 10 different fields, though those rankings are based solely on nominations by Deans.  I'd consider these rankings a test of face validity because they simply ask knowledgeable people what makes sense to them rather than running any sort of comprehensive multivariate analysis.  By clicking on each school's profile, one can see how many programs that school has that made the cut.  Below are the top 25 ranked schools of education and the number of fields in which they were ranked:

2.) Vanderbilt: 9
5.) Wisconsin: 9
8.) Columbia: 9
15.) Michigan St.: 8
16.) Ohio St.: 8
4.) Stanford: 7
8.) Columbia: 7
8.) Michigan: 7
11.) UCLA: 7
7.) Washington: 6
10.) Texas: 6
22.) Virginia: 6
25.) Indiana: 6
3.) Harvard: 5
14.) UC-Berkeley: 5
5.) Penn: 4
20.) NYU: 4
20.) Minnesota: 4
18.) USC: 3
13.) Oregon: 1
17.) Kansas: 1
22.) Pitt: 1
24.) BC: 1
1.) Johns Hopkins: 0
11.) Northwestern: 0
18.) Arizona St.: 0

A few things stick out here:

-There appears to be only a mild correlation between a school's overall rank and the number of top programs it has (a rough check of this is sketched after this list).

-A number of schools have quite a few top-ranked programs but are outside the top 10 -- Michigan State, Ohio State, Virginia, and Indiana are particularly notable.

-Meanwhile, Northwestern and Johns Hopkins have exactly zero top-ranked programs and yet rank above all of those schools.

-Yes, you read that right: Johns Hopkins -- the new #1 School of Education -- has exactly zero programs ranked among the top 20 or so in the country.  Now, I freely admit that I have absolutely no idea whether or not Johns Hopkins has the best faculty, students, research, or anything else we try to measure, but that's pretty striking.
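
Since "mild correlation" is doing some work in that first observation, here's a quick way to check it: a rank correlation between the two columns in the list above.  The Python sketch below uses SciPy's spearmanr on pairs hand-copied from that list (Columbia is left out because it appears twice above with different counts); the choice of Spearman's rho is mine and is purely illustrative.

```python
# Rank correlation between overall USNWR rank and the number of specialty
# fields in which a school is ranked, using pairs copied from the list above.
from scipy.stats import spearmanr

# (overall rank, number of fields ranked) -- Columbia omitted (listed twice above)
pairs = [
    (2, 9),   # Vanderbilt
    (5, 9),   # Wisconsin
    (15, 8),  # Michigan St.
    (16, 8),  # Ohio St.
    (4, 7),   # Stanford
    (8, 7),   # Michigan
    (11, 7),  # UCLA
    (7, 6),   # Washington
    (10, 6),  # Texas
    (22, 6),  # Virginia
    (25, 6),  # Indiana
    (3, 5),   # Harvard
    (14, 5),  # UC-Berkeley
    (5, 4),   # Penn
    (20, 4),  # NYU
    (20, 4),  # Minnesota
    (18, 3),  # USC
    (13, 1),  # Oregon
    (17, 1),  # Kansas
    (22, 1),  # Pitt
    (24, 1),  # BC
    (1, 0),   # Johns Hopkins
    (11, 0),  # Northwestern
    (18, 0),  # Arizona St.
]

overall_rank = [rank for rank, _ in pairs]
fields_ranked = [count for _, count in pairs]

# A lower overall rank is better, so if the two measures agreed we'd expect
# a strongly negative correlation.
rho, p_value = spearmanr(overall_rank, fields_ranked)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```

A strongly negative rho would mean the overall rankings and the program counts largely agree; anything closer to zero would back up the "mild" characterization.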


So, something doesn't make sense here.  How can Deans perceive that one school has a slew of top programs while the overall rankings say it's merely very good, and that another has a dearth of top programs while the overall rankings crown it the cream of the crop?

One possibility is that the Deans' perceptions are wrong and that the data collected by USNWR are a better indicator of quality.  Another possibility is the opposite -- that the rankings are a sham and that we should listen to the Deans.  A third possibility is that the program rankings are misleading in some way (e.g., some fields matter more than others, important fields are missed entirely, the margins between programs are too narrow to mean much, or a few top-5 programs count for more than a bunch of top-20 programs).

If it's the first or second possibility, then I'd argue that either the overall rankings or the program rankings are lacking in face validity.  In reality, it's probably some mixture of all of the above.  But if memory serves, Texas, Oregon, and Johns Hopkins have each been second one year and 10th or lower another year just in the past five years.  It's possible that school quality changes that fast, but it seems rather unlikely.

Another way of looking at the rankings would be to focus on programs that rank at the very top of their fields.  If we count, for each school, the number of programs ranked in the top 10, the top 5, and at #1 in their field, we get a picture that differs both from school to school and from the list above:

Wisconsin: 8/7/1
Vanderbilt: 8/5/2
Michigan State: 7/4/2
Columbia: 6/5/0
Michigan: 6/5/0
Ohio State: 6/1/0
Stanford: 5/5/2
Virginia: 4/1/0
Harvard: 3/2/0
UCLA: 3/1/1
Texas: 3/0/0
Indiana: 3/0/0
UC-Berkeley: 2/0/0
Washington: 2/0/0
Penn: 2/0/0
Minnesota: 2/0/0
Oregon: 1/1/0
Kansas: 1/1/0
USC: 1/1/0

When we look at it this way, we notice that four schools ranked in the top 10 don't have a single program ranked in the top five of its field (#1 Johns Hopkins, #5 Penn, #7 Washington, and #10 Texas).
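
Incidentally, a tally like the one above is straightforward to reproduce if you have each school's placement in each specialty field.  Here's a minimal sketch of that counting logic; the field names and placements in it are made-up placeholders for illustration, not USNWR's actual data.

```python
from collections import defaultdict

# Hypothetical per-field placements: field -> schools ordered best-first.
# These are placeholders to illustrate the counting, not real USNWR data.
field_rankings = {
    "Education Policy": ["Stanford", "Vanderbilt", "Harvard"],
    "Special Education": ["Vanderbilt", "Kansas", "Oregon"],
    "Curriculum & Instruction": ["Wisconsin", "Michigan St.", "Columbia"],
}

# For each school, count programs in the top 10, top 5, and at #1.
tally = defaultdict(lambda: [0, 0, 0])  # school -> [top10, top5, number_one]
for field, placements in field_rankings.items():
    for position, school in enumerate(placements, start=1):
        if position <= 10:
            tally[school][0] += 1
        if position <= 5:
            tally[school][1] += 1
        if position == 1:
            tally[school][2] += 1

# Print in the same "top 10 / top 5 / #1" format used above.
for school, (top10, top5, firsts) in sorted(
        tally.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{school}: {top10}/{top5}/{firsts}")
```

Run against real per-field placements, this would reproduce the top 10/top 5/#1 columns above.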

Another thing you may notice is that there are only eight #1 rankings across the 10 fields.  That's because the schools ranked first in student counseling and personnel services (Maryland) and technical/vocational education (Penn State) didn't make the top 25.  Which may confirm my earlier hypothesis that some fields are viewed as more important than others.  Or not.  If we count, for each school, the number of programs ranked in their field at all, plus those in the top 10, the top 5, and at #1, we find a few schools outside the top 25 that dwarf most of the top-25 schools:

33.) Penn State: 9/4/2/1
33.) Georgia: 9/6/3/0
26.) Maryland: 7/2/1/1
26.) Illinois: 6/2/1/0

All of which leads to a whole lot of confusion.  I'm not really sure what the rankings are measuring to begin with, but it sure seems odd that their specialty rankings would be so misaligned with their comprehensive rankings.  I'd bet that a lot of the Deans polled for the specialty rankings (and academics who think like they do) probably think the overall rankings are lacking in face validity.
