September 02, 2025

Rank Methods

It’s time to replace college rankings with something better.

By Daniel Diermeier

Illustration by Janelle Delia (used with permission).

Higher education has a curious relationship with college rankings. Almost no one thinks they’re valid. University leaders openly criticize them. A handful of universities and a number of leading law schools and medical schools refuse to even participate in them. Yet most universities still go along with the process and use rankings to their advantage when they can. Collectively, we are at sea: far from the time when rankings had a veneer of credibility but not yet arrived at a post-rankings era.

We must move faster toward that opposite shore. With their use of low-quality data, subjective standards, and flawed, ever-shifting methodologies, college rankings too often do the opposite of what they purport to do. They make it harder for students to find the right school.

One of the most comprehensive critiques of rankings can be found in former Reed College president Colin Diver’s 2022 book, Breaking Ranks: How the Rankings Industry Rules Higher Education and What to Do about It (Hopkins Press). Diver had good reason to reflect on the subject. Reed College was the first school to pull out of the rankings on principle, in 1995.

Among other issues, Diver focuses on the muddled methodology employed by U.S. News & World Report and other rankings publishers. He points out the problems inherent in arbitrarily selecting and assigning weight to the variables that rankings schemes measure and looks skeptically at the peer assessments, financial measures, selectivity rates, and other factors that go into rankings. “The pseudoscientific precision of the mathematical formulas used in the most popular rankings,” Diver notes, “is really quite comical.”

Three years after the publication of Diver’s book, further analysis is showing criticisms like his to be no laughing matter. 

Last year, a report published by NORC at the University of Chicago and funded by Vanderbilt University assessed the construct validity of five prominent rankings systems, including that of U.S. News. Among the issues it found were subjective weights, proxy measures of questionable relevance, inconsistencies in data quality, and a lack of transparency. Key data points were missing, while data for other important metrics, such as graduate outcomes, were incomplete. A major problem, the study found, is that there is no shared definition of what “good” looks like for colleges. Each ranking creates a target and then purports to hold colleges to that subjective standard.

These flaws matter for the students and families who look to rankings for some semblance of guidance. The rankings obscure one of the great strengths of U.S. higher education, which is the range of institutions that can serve students with very different wants and needs. Students are instead steered toward one view of what makes a good college that may not reflect what matters to them.

In addition, data issues lead to a situation where important attributes of colleges are misrepresented. Consider how U.S. News treats affordability and graduate indebtedness, the amount of loan debt students incur to pay for an education at a given school. Its assessment is based solely on data about students who receive federal aid. But at some top-tier private schools, including Vanderbilt (where I serve as chancellor), many students receive loan-free aid, and some lower-income students may even be able to attend for free.


At Vanderbilt, that holds for families making less than $150,000 a year, about 4 in 5 American families. By ignoring students without loans in its methodology, U.S. News makes these schools seem less affordable. Ironically, students may be persuaded to instead choose colleges that could cost more, or conclude college isn’t for them at all. In this way, U.S. News exacerbates the problem researchers call “undermatching,” which can inhibit the futures of high-achieving students from lower-income households.

Problems with rankings are not limited to domestic rankings systems. A forthcoming NORC analysis of global systems finds that they are similarly flawed, with additional challenges that stem from comparing institutions across different national and cultural contexts.

What should we do with rankings in light of their flaws and negative impact? Diver says, “Their costs have outweighed their benefits, and my primary advice to both applicants and educators is to ignore them.” Recognizing that most students and university leaders won’t be able to do that, he offers suggestions for grappling with these flawed systems.

This advice is useful, but ignoring or making the best of rankings isn’t enough. The fact is that students and their advisers need a way to make sense of the complex decision about college options. We need to go further and develop an alternative that eventually displaces rankings. What we need is not another rankings system, but a ratings system, one that quantifies true measures of academic quality and accessibility. It should be data-driven, transparent, stable, and applied to every institution, public and private, in the country. It should also allow students to personalize their list based on what matters to them, rather than relying on someone else’s subjective idea of what “good” should look like. 

Such a system would give students and their families the ability to make choices based on clear, accurate information about cost and quality, not on the self-interested, shifting methodologies of profit-driven rankings organizations.

At Vanderbilt, we are working with other U.S. higher education institutions and are funding NORC to develop and pilot such a system, collecting and analyzing data in a new way. Our goal is to provide richer and more usable information so that students can find the best college for them. The challenge of setting up this system is considerable, but we believe it is necessary to support the next generation of students.

Back in 2001, Bard College president Leon Botstein pulled no punches in his assessment of college rankings. Diver quotes him: “It is the most successful journalistic scam I have seen in my entire adult lifetime. A catastrophic fraud. Corrupt, intellectually bankrupt, and revolting.” Almost a quarter-century later, enough is enough. Many universities have mastered the art of playing the rankings game. But rankings regimes are failing students and their families. We owe it to them to build something better.


About the author

Daniel Diermeier, Ph.D., is the chancellor of Vanderbilt University.