As I looked through Brian Huddleston’s color-coded table of this year’s US News data (h/t, TaxProf), I saw something interesting. In this time of gloom and doom, almost every school’s overall score went up. Of the ranked schools, ten had the same overall score as last year (not counting Yale — you can’t improve on 100) and nine went down. Everyone else’s overall score went up. Wayne’s World! Party Time! Excellent!
In this law school economy, how could that be? If you go through the rest of his chart, you’ll see a lot of red (as in, falling numbers in student selectivity metrics, employment, and bar passage rates) mixed in with the yellows and greens. The health of the law school economy doesn’t look as good as the trend in overall scores would indicate.
However, one column is noticeably green: the student to faculty ratio. This isn’t surprising and has been discussed in lots of places — schools shrink class sizes quickly to maintain student quality metrics, schools can’t shrink faculty as quickly, and so the ratio drops.
Schools can’t shrink staff or brick and mortar as quickly, either, and all of this translates into increased per capita expenditures. According to the published US News methodology, “faculty resources” accounts for 15% of the overall score. Within this indicator, 65% comes from expenditures per student on instruction, library, and supporting services; 10% from financial aid; 20% from the student/faculty ratio; and 5% from volumes in the library. This score is based on a two-year average.
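To make the arithmetic concrete, here is a minimal sketch of how those published sub-weights would combine, assuming (as US News does not fully spell out) that each sub-indicator has already been standardized. The component values are invented for illustration, not real school data:

```python
# Hypothetical sketch of the "faculty resources" weighting described in
# the published US News methodology. The sub-indicator values below are
# invented standardized scores, not real data.

FACULTY_RESOURCE_WEIGHTS = {
    "expenditures_per_student": 0.65,  # instruction, library, support (2-yr avg)
    "financial_aid": 0.10,
    "student_faculty_ratio": 0.20,
    "library_volumes": 0.05,
}

def faculty_resources_score(components):
    """Weighted sum of (assumed pre-standardized) sub-indicator scores."""
    return sum(FACULTY_RESOURCE_WEIGHTS[k] * v for k, v in components.items())

# A school cuts class size: per-student expenditures and the
# student/faculty ratio both look better, so this indicator jumps
# even though nothing else about the school changed.
before = {"expenditures_per_student": 0.2, "financial_aid": 0.0,
          "student_faculty_ratio": 0.1, "library_volumes": 0.0}
after = dict(before, expenditures_per_student=0.8, student_faculty_ratio=0.4)

print(round(faculty_resources_score(before), 2))  # 0.15
print(round(faculty_resources_score(after), 2))   # 0.6
```

That weighted value then feeds into the overall score at the indicator’s 15% weight.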
When schools cut class sizes, student selectivity metrics (25% of total score) stay the same or go down slightly, but this metric goes way up (in the short term). And schools get a better overall score.
Essentially, when the economy slumps, these per capita expenditures kick in like automatic fiscal stabilizers — the unemployment insurance of the US News rankings, if you will. Only these expenditures don’t do anything good for the law school economy — they just obscure the trends and prop up numbers. This metric is often criticized, and maybe this is another small criticism to add to the more substantial critiques.
Maintaining elevated ratios in this metric isn’t sustainable (many in Congress say the same thing about unemployment insurance). After a few years, we should expect that expenditures will return to sustainable levels and the “overall score” column will destabilize. But it is interesting to see the blip in the chart.
Posted by Eric Carpenter on March 31, 2015 at 07:58 AM
Comments
I agree. I think all the change really does is make it more difficult to model the data. My guess is that if someone put the energy into modeling the current data, that model would only do an ever so slightly better job with classification (as in, barely noticeable) than Seto’s model — definitely not enough to make it worth the time to do that exercise. His “overall score equivalents” table should still be good.
Posted by: Eric Carpenter | Apr 2, 2015 4:19:21 PM
There’s no material difference between what they did before and what they do now, excepting a discount for school-funded jobs.
Posted by: dave hoffman | Apr 1, 2015 7:17:05 PM
The general score increase actually comes from a revised version of the Simkovic and McIntyre article “The Economic Value of a Law Degree.” They’ve adjusted for a bit of inflation and taken into account changes in the consumer price index and other factors (most notably the sharp decline in the cost of oil). The new version of the article now finds a $1.08 million premium for law degrees, an 8% increase over the original $1 million mark. Thus, even when a school’s employment numbers fall a bit, it can still see an uptick in its overall score because the total value conferred to students has gone up.
With all such studies, the most important caveat to keep in mind is, of course, April Fools.
Posted by: Derek Tokaz | Apr 1, 2015 8:22:49 AM
I’ll also add, regarding employment gains, that the percentage of the class employed also ties into shrinking class sizes and thus has some relationship to the faculty-to-student and expenditures-to-student ratios. In other words, if there are 100 placements a school can generally count on for its graduates each year in its local market, then a class of 200 will predictably have a lower placement rate than a class of 120.
Posted by: Former Editor | Apr 1, 2015 7:58:58 AM
Hi Andrew: When I scroll down Brian’s chart, I see a lot of red in that column with some schools having pretty dramatic drops in employment but still seeing gains (sometimes big) against Yale. I’m with you that employment translates into big gains. I’m not sure that it explains this blip, though. Of 150 schools, about 130 made gains while about ten stayed the same and about ten lost some ground.
Posted by: Eric Carpenter | Mar 31, 2015 8:55:16 PM
I am fairly certain that rising employment rates for middle and lower tier schools account for most of the “improving” scores. Employment rates are big percentage categories with wide variance in scores among schools, and therefore account for a lot of the gaps between schools. As schools with low employment rates have moved their rates up while schools with high rates have not (because they were already near the maximum), the overall scores of the improving schools have gone up, often quite substantially. I did some rough calculations a year and a half ago; going from memory, I think that given the shape of the data set, an increase of 5% in employment rate for a mid-tier school might account for a 2 point increase in raw score, assuming that the highest ranked schools do not see any increase in their rate.
Posted by: Andrew Siegel | Mar 31, 2015 8:21:12 PM
Hi Dave: I think the methodology has changed some since the article came out. In 2007, it looks like US News had twelve discrete variables. They standardized each one, weighted each one, summed, and then reported as a percentage of the top school. Now, they have four categories, which have subcategories. They arrive at some value within those categories, then standardize within that category, weight the category, sum with the other categories, and report.
I’m not entirely sure what they do within categories. For example, for “faculty resources,” they do 75% for per capita expenditures, 20% for the student/faculty ratio, and 5% for number of books. I assume they standardize within the sub-categories, weight, and then sum to get a value for that category. Once they get that value, they standardize and then weight by .15.
Which means more steps (and more assumptions) if I want to model this. Ugh.
Please let me know if you see this differently (feel free to get me via email).
Posted by: Eric Carpenter | Mar 31, 2015 4:45:55 PM
Eric
I don’t think you are accurately characterizing how USNews goes about its calculations. Have you read Seto? http://papers.ssrn.com/sol3/papers.cfm?abstract_id=937017
Posted by: dave hoffman | Mar 31, 2015 12:54:55 PM
“By saying ‘everybody’s score went up,’ what we are saying is that just about everybody gained on Yale. In terms of actual quality, I don’t expect that every school got better as compared to Yale. My guess is that Yale is doing what Yale does and stayed relatively unchanged in student size and resources and a blip showed up in the report.”
I’m guessing that Yale’s job outcomes have stayed very good. This means that the overwhelming majority of schools’ scores should have declined.
That gives us a clue as to how little job outcomes matter.
Posted by: Barry | Mar 31, 2015 12:33:19 PM
I think we may be saying something similar but coming at it from different directions. The overall score tells us how everybody compares to Yale. After US News runs its magic, it gives the top school a score of 100 and then reports everybody else as a percentage of that school. So Yale has 100, and a school whose tally of points is half of Yale’s gets a final score of 50.
By saying “everybody’s score went up,” what we are saying is that just about everybody gained on Yale. In terms of actual quality, I don’t expect that every school got better as compared to Yale. My guess is that Yale is doing what Yale does and stayed relatively unchanged in student size and resources and a blip showed up in the report.
US News uses four indexes (quality assessment, placement success, student selectivity, and faculty resources). Within each index, they tally a bunch of points based on subcategories and then run some form of means analysis (“scores on each indicator were standardized about their means”). I am assuming they convert these scores to standard units. US News then weights those numbers (.4, .25, .2, .15) and totals them.
I’m assuming that not much changed on the first three indexes (when I looked at the chart for the other indexes, those were all reasonably well mixed). The one indicator in the chart that looked pretty green was a subcategory of the faculty resources index.
In this index, many schools increased their expenditures ratios. I’m assuming that Yale stayed the same. The mean shifted closer to Yale, so Yale gave up some standard units. After that runs through the weighting, everyone gets a little closer to Yale, and it looks like everybody gains a few points.
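A toy sketch of that mean-shift effect, with invented numbers and assuming simple z-score standardization (US News doesn’t spell out its exact method):

```python
# Toy sketch (invented numbers) of the mean-shift effect: indicators are
# standardized about their means, so when most schools raise per capita
# expenditures while Yale stands pat, the mean moves toward Yale, Yale's
# z-score falls, and most schools close the gap in standard units.

from statistics import mean, pstdev

def z_scores(values):
    """Standardize each school's raw value about the group mean."""
    mu, sd = mean(values.values()), pstdev(values.values())
    return {school: (v - mu) / sd for school, v in values.items()}

# Per-student expenditures in arbitrary units; only Yale stays flat.
year1 = {"Yale": 10, "A": 6, "B": 5, "C": 4, "D": 3}
year2 = {"Yale": 10, "A": 8, "B": 7, "C": 6, "D": 5}

z1, z2 = z_scores(year1), z_scores(year2)

assert z2["Yale"] < z1["Yale"]  # Yale gives up some standard units
for s in ("A", "B", "C"):
    assert z1["Yale"] - z1[s] > z2["Yale"] - z2[s]  # gap to Yale shrinks
# The bottom school can still fall further behind in standard units,
# consistent with the handful of schools that lost ground.
assert z1["Yale"] - z1["D"] < z2["Yale"] - z2["D"]
```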
Posted by: Eric Carpenter | Mar 31, 2015 10:01:06 AM
I’m not sure this makes sense as an explanation when you realize that the scoring system turns on category variance. If it were in fact true that everyone’s F/S ratios & expenditures got better, no one’s scores would improve.
Also you can’t observe the actual scores of half of the nation (the ones USNWR doesn’t report raw scores for).
Posted by: dave hoffman | Mar 31, 2015 9:11:34 AM
These two effects have the particularly perverse consequence of propping up the numbers of schools that are behind the curve on right-sizing. Schools that have been able to shrink their faculty and staff more effectively to match declining enrollment, thereby improving their long-term financial outlook, actually suffer on both metrics. Schools that are struggling with right-sizing maintain elevated ratios, boosting their rankings, even as they place additional financial strain on their endowments or parent universities.
Posted by: Former Editor | Mar 31, 2015 8:22:58 AM
