admissions at Harvard University, who works for a college that frequently is at the top.
What accounts for this disconnect? How can something be viewed as
a trusted resource for consumers yet be dismissed by nearly all educators,
including those who actually benefit from its findings?
For Arthur Rothkopf, president of Lafayette College in Pennsylvania, the ranking system fails, in part, because the numbers gathered by the newsmagazines are not reliable measurements of educational quality. "Some [of the indicators measured] are perfectly valid factors to look at and much of the data identifies things students want to know," he concedes. "But the aggregate number--the number that generates its ranking--is inherently flawed. It is useful to know the range of SAT scores necessary for admission. It's good, too, to know the percentage of classes with enrollments of 50 or less. But the logic of the system falls apart when these numbers are combined and used to pass judgment on the institution's overall quality. Putting them together is necessarily arbitrary." Why, Rothkopf asks, should these 10 or 15 factors measure quality more reliably than some other combination of factors? What would happen if a category were removed, or if existing categories were given greater or lesser weight? Even minor tinkering with the formula--which U.S. News does from time to time--produces a dramatic realignment of the list.
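To make that sensitivity concrete, consider a toy calculation (a rough sketch in Python; the two schools, their indicator scores, and the weights below are invented for illustration and are not U.S. News' actual formula). Nothing about either school changes, yet the order at the top flips when the weights shift:

    # A minimal sketch of a weighted-aggregate ranking. All names, scores,
    # and weights are hypothetical.
    scores = {
        "School A": {"selectivity": 0.95, "class_size": 0.70, "reputation": 0.99},
        "School B": {"selectivity": 0.90, "class_size": 0.95, "reputation": 0.80},
    }

    def rank(weights):
        """Order schools by the weighted sum of their indicator scores."""
        totals = {
            name: sum(weights[k] * v for k, v in indicators.items())
            for name, indicators in scores.items()
        }
        return sorted(totals, key=totals.get, reverse=True)

    # Emphasize reputation: School A comes out on top (0.92 vs. 0.86).
    print(rank({"selectivity": 0.3, "class_size": 0.2, "reputation": 0.5}))
    # Emphasize class size instead: School B now wins (0.905 vs. 0.833).
    print(rank({"selectivity": 0.3, "class_size": 0.5, "reputation": 0.2}))

The underlying data never moves; only the editors' judgment about what matters does.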
Consider the rise and fall of the California Institute of Technology (Cal Tech). For many years, this rigorous and selective institution ranked highly but never broke through the Ivy League juggernaut. Then a new editor, Amy Graham, joined U.S. News. Examining the magazine's ranking methodology, she saw a built-in bias toward Harvard, Princeton, and Yale. So the
magazine shifted the weights for a few categories in which several other schools outperformed those three, and Cal Tech shot to the top of the list. But its days in the number one spot were numbered because it is an institution "where none of the U.S. News' editors went."