An International Herald Tribune journalist's notes from the UNESCO Global Forum on World University Rankings, Paris, May 16-17, 2011

Debating the Merits of University Rankings
The New York Times
Published: May 29, 2011

PARIS — When Nian Cai Liu and his colleagues at Shanghai Jiao Tong University decided to compile a list of the world’s 500 leading universities back in 2003, their aim was simply “to find out the gap between Chinese universities and world-class universities, particularly in terms of academic or research performance.”

To minimize controversy they decided to use only indicators that could be quantified: the number of Nobel Prizes or Fields Medals awarded to an institution’s faculty or alumni; the number of papers published in two leading scientific journals or cited in other published research. To avoid the taint of partiality, they would use only data collected by other organizations.

Yet the effect of their investigation on the academic landscape was roughly akin to what would happen if, lost in a fireworks factory, you decided to light a match to find your way out.

Though some universities found ample grounds for satisfaction — Harvard has been at the top of the Shanghai ranking since the survey began — others were less flattered. And since this was the first large-scale effort to rank universities without regard to national boundaries, there were unwelcome surprises, and cries of outrage, from countries whose flagship universities were nowhere to be seen on the horizon.

There are now rankings that measure everything from the prominence of universities on the Web — the Webometrics ranking, sponsored by the Spanish Ministry of Education — to dozens of national comparisons published by magazines and newspapers. Institutions that do well trumpet the tidings, hoping to turn enhanced prestige into greater government support or increased donations from status-conscious alumni. Those that do less well try to challenge the rationale by which the complex business of educating students, inculcating a culture and enabling cutting-edge research can be boiled down into a single number.

The reverberations from the explosion set off by the Shanghai researchers could be heard very clearly at the global forum on “Rankings and Accountability in Higher Education” held at Unesco headquarters in Paris this month. Delegates from 68 countries spent two days debating the uses and misuses of rankings as, perhaps for the first time since 2004, representatives from all of the major international ranking organizations, and many of the national ones, confronted their critics. Although, since this was done under the auspices of Unesco, the Organization for Economic Cooperation and Development and the World Bank, and almost all of the participants were academics, the confrontations were mostly intellectual rather than physical or emotional.

Even so, there was palpable tension in the auditorium when the conference keynote speaker, Ellen Hazelkorn, vice president of research at the Dublin Institute of Technology, delivered a shorter, blunter, but no less damning version of the indictment set out in her recent book, “Rankings and the Reshaping of Higher Education.” Although conceding that “rankings appear to be a simple and easy way to measure performance,” Dr. Hazelkorn warned that appearances were deceptive.

Reminding the audience of Einstein’s dictum that not everything that counts can be measured, she asked, “Do rankings measure what counts?” and whether “they actually raise standards,” as their proponents claim. She suggested instead that by increasing the emphasis on factors like reputation, or on medical and scientific research that many institutions cannot afford, rankings end up reshaping universities’ priorities rather than raising their standards.

For a university to do well in the rankings, Dr. Hazelkorn said, “age and size matter.” The tables are dominated by a “small league of well-endowed universities, in English-speaking countries, usually with a medical school,” she said. And while the ranking organizations say their research is aimed at prospective students or administrators, politicians have been quick to seize on rankings to justify radical changes to funding.

Dr. Hazelkorn cited the example of President Nicolas Sarkozy of France, who declared: “We want the best universities in the world. How many universities do we have? Eighty-three? We’re not going to divide the money by 83.”

Reflecting one widespread criticism of ranking — the lack of any objective measure of teaching quality — Richard Yelland of the O.E.C.D. spoke about his organization’s work on the Assessment of Higher Education Learning Outcomes, an effort to measure what university students know and can do by the time they graduate. Modeled on the O.E.C.D.’s influential PISA tests taken by 15-year-olds, the outcomes assessment, which is still in the early stages of development, may someday pose a significant challenge to the private companies that currently dominate rankings.

Criticism also came from those countries that feel not only shut out but ill-served by the ranking process. Imanol Ordorika from the National Autonomous University of Mexico said that none of the international rankings pay attention to the effect of higher education on social mobility, a crucial priority in much of the developing world. Nor, he said, did they consider the role of a university in building the nation. “Yet throughout Latin America, we have seen universities generate the policy ideas that gave birth to our nations,” he said. “In Mexico after the revolution, the universities provided physicians, ministers of public health, engineers.”

Calling the spread of rankings “a homogenizing force,” Dr. Ordorika warned: “They push our policy makers to copy Harvard. It’s not that we can’t have Harvards everywhere. It’s that we don’t need and don’t want them.”

His objections were echoed by Najib Abdul Wahed, deputy minister of higher education in Syria, who urged the delegates to go beyond the model of a university dominated by research to allow for “a teaching model, a developmental model, or an entrepreneurial model” in which success would be measured by more diverse goals than simply getting Nobel Prizes or producing reams of published research.

But the case for the defense was equally robust. Phil Baty, deputy editor of the Times Higher Education magazine, conceded that “rankings are rather crude; they can’t be completely objective,” adding that “governments should treat them with real caution.” He insisted, however, that “as long as they are serious and transparent, rankings can be a useful tool.”

Ben Sowter, director of research for QS, Mr. Baty’s main competitor, agreed with critics that rankings “were designed on an Anglophone model of what a university looks like.” But he also said that with 3.4 million students studying outside their home countries, his organization was committed to “helping international students make more informed choices.”

“Rankings are a democratic instrument,” said Gero Federkeil of the Center for Higher Education, a German research organization that publishes its own rankings of German universities. “They offer information to young people who come from a family background where they don’t have university-educated parents who can assist them,” he said.

Riri Fitri Sari, director of information technology at the University of Indonesia, had a suggestion for countries unhappy at how their own institutions are depicted: Start your own ranking, as her country did this year with its “Green Metric,” which ranks campus sustainability and the eco-friendliness of university policies. Although U.S. universities still dominate the top 10, there are 22 Indonesian universities in the top 100.

“Ranking will raise standards by increasing competition, because what gets measured is what gets done,” said Dr. Sari. Simply by introducing its own evaluation, Indonesia has “improved our Webometrics,” raising the country’s profile on the Web. “Now we can be the rankers,” she said.

Despite their disagreements, few of the delegates would argue with Barbara Ischinger, director for education at the O.E.C.D., who said that the phenomenon of rankings was a byproduct of “the shift from a world where higher education was reserved for an elite” to one of mass participation.

Nor did any of the delegates seem to expect rankings would go away. But many seemed to endorse the call by Jamil Salmi, higher education specialist at the World Bank and author of “The Challenge of Establishing World Class Universities,” for a move toward a less narrow set of measurements. Dr. Salmi said it would be futile, and mistaken, to try to go back to the era before rankings. Instead he urged researchers and policy makers to establish benchmarks that can be used as more accurate gauges of performance.

Instead of generating a league table of elite universities, benchmarks would look at the whole higher education system of a country. “You could then compare the performance of that system with other systems,” Dr. Salmi said.

Such an approach would be more useful because “you could also compare similar institutions both within and across national borders. That would help to identify and transmit the best practice,” he said.

A version of this article appeared in print on May 30, 2011, in The International Herald Tribune with the headline: Debating the Merits of University Rankings.