Breaking Ranks: Assessing Quality in Higher Education

 

The ranks’ progress

A few years ago, the head of a university in Bavaria was asked by an educational organisation whether he would like the university to be included in a league table. "Definitely not" was his answer. When he was finally reassured that his university would not be included, he asked, merely out of curiosity, where it would have been ranked compared to the others.


The story illustrates the ambivalence of universities towards rankings. Since the "U.S. News and World Report" published its first league table in 1983, universities have felt the pressure to stay in the ranks or risk failure. Come out on top, and your success is bound to breed further success; slide into the lower ranks, and potential students will go elsewhere. As rankings moved from being a tool to compare national universities to one comparing some 17,000 worldwide, competition for the top slots grew fierce.


But do rankings, in fact, measure what they claim to measure? Are they doing more harm than good? These questions were at the centre of the OECD/IMHE General Conference "Higher education: quality, relevance and impact", held on 8-10 September in Paris. Some 330 participants from 53 countries attended, making it one of the largest conferences in the history of IMHE and an indication of how urgent these issues are becoming.

 

Strait is the gate

A single number cannot encapsulate all the elements of higher education. The two most popular international rankings, the "Shanghai Jiao Tong University Ranking (SJT)" and the "Times Higher Education-Quacquarelli Symonds (THE-QS)", have been criticised for giving too much weight to research, scholarly publications, citations in learned journals and the prestige of older universities, and not enough to teaching and learning.


Research is the brightest lure for students and faculty. "Research is sexy," said a respondent to an international survey conducted by conference speaker Ellen Hazelkorn, Director of the "Dublin Institute of Technology". "Reputation, unfortunately, is always based on research, and research attracts the best talent." Research is the most salient example of a nation's intellectual resources, economic strength and global competitiveness.

        
One reason for seizing on research, especially scientific research, as an indicator is that it is easier to measure. Compared to the thousands of scientific articles appearing in Nature, the British Medical Journal or the Thomson-ISI scientific journals, humanities scholars publish only 5% of their work in article form; the rest is in books. Scientific and technological advances are the measure of global competitiveness. One respondent in Professor Hazelkorn's survey said "the easiest way to boost rankings is to kill the humanities." Another, a faculty member in the social sciences at a German university, was told by administrators "to find ways to connect to the natural sciences, our new strategic focus."

Arguments that the natural sciences are more relevant are disingenuous. Robert Berdahl, President of the "Association of American Universities", pointed out that the problems of contemporary society, among them migration, ageing, climate change, the legacy of colonialism and religious extremism, cannot be solved by the natural sciences alone.

This is not to say that scientific research is the spoiled sibling. Basic research today is pursued almost exclusively in universities and a few government institutions; industrial laboratories have abandoned it entirely. Mr. Berdahl noted, however, that commercial programmes are beginning to crowd out "blue-skies" research in the United Kingdom, as projects proposed by investigators are rejected in favour of government initiatives.

 

The SJT gives a 40% weighting to research, which includes rewarding faculty for publications and high citation rates in journals. It gives another 40% to high-profile faculty, those who have won Nobel Prizes and Fields Medals. The THE-QS gives each of these indicators a more modest 20% weighting, but places 40% on peer review, a category not even included in the SJT. The trouble with these and other weightings is that they lack a sound theoretical basis and merely reflect the view of the publisher, and although the data may be valid, it is likely to be misinterpreted. The SJT itself cautions against using the tables as an overall assessment of a university, but of course that is exactly what people do. There is no evidence that a Nobel Prize winner on campus benefits students, but there is plenty of evidence that prestige does, at least in terms of career opportunities. How can a university gain prestige? The best way is by already having it.
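How arbitrary such weightings are is easy to demonstrate. The sketch below assembles a composite score under two schemes that echo the proportions just described; the two universities and their indicator scores are invented, and the real tables spread their remaining weight over indicators omitted here.

```python
# A toy composite score under two weighting schemes. The proportions
# mirror those described above (SJT-like: 40% research + 40% star
# faculty; THE-QS-like: 20% + 20% + 40% peer review); all university
# names and indicator scores are invented.

indicators = {
    # name: (research, star_faculty, peer_review), each scored 0-100
    "University A": (90, 85, 60),
    "University B": (70, 60, 95),
}

def composite(scores, weights):
    """Weighted sum of indicator scores."""
    return sum(s * w for s, w in zip(scores, weights))

sjt_like = (0.4, 0.4, 0.0)     # peer review ignored entirely
the_qs_like = (0.2, 0.2, 0.4)  # peer review dominant

for name, scores in indicators.items():
    print(f"{name}: SJT-like {composite(scores, sjt_like):.1f}, "
          f"THE-QS-like {composite(scores, the_qs_like):.1f}")
```

On these invented numbers, University A tops the SJT-like table and University B the THE-QS-like one: the "best" university is whichever the publisher's weights happen to favour.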

 

Older, pre-1920s universities are consistently ranked as the best in the world. They attract high-calibre students and faculty, and receive generous funding. Few universities can match Harvard with its endowment of some 35 billion dollars, and most never attempt to do so. But league tables leave them little choice. Rankings are one-dimensional: you're either one better or one bested. Plucked from their context, universities are subject to a scrutiny that takes no account of the historical, cultural and economic forces affecting performance.

 

Peer review is commonly regarded as the most reliable method for assessing quality. But how reliable is it? When the THE-QS queried 200,000 academics worldwide to name what they felt were the top thirty universities, it provided them with no indicators on which to form an opinion. Respondents resorted to the usual and most visible criteria of reputation and published research. With an average response rate of 1%, mostly from academics in countries where the THE-QS is published, and who, as might be expected, gave their highest marks to British institutions, the survey can hardly be considered representative. These institutions may or may not deserve to be there, but how they got there should be made clear.
The idea of becoming "world class" has tucked universities into a Procrustean bed of indicators. The move is dangerous, according to speaker Frans van Vught, Chairman of the "European Centre for Strategic Management of Universities". He even found the term grandiose. How could a university be "world-class" in every respect?


With an estimated 17,000 institutions in the world offering higher education (according to the International Association of Universities), Professor van Vught wondered how those failing to rise to the top 3% could be judged failures. And should a university appear high in the ranks one year and slip ten places the following, as has happened in the THE-QS, how credible is the measurement? Who is to say it won't climb fifteen places next year? Hitting an air pocket this big is improbable.


The IMHE and the International Association of Universities (IAU) have entered the second phase of a study begun in 2006 to evaluate the impact of rankings on university faculty, administrators and students. The second phase looks at countries whose institutions, armed with new policies encouraging educational excellence, have only recently put their heads into the lion's jaws of rankings. It also broadens the scope to include policy makers and businesses, which have become as attentive to rankings as university leaders and students.


Rankings are an inescapable reality in higher education, and though many institutions deplore them, some 50% do not hesitate to use them for publicity and marketing. There is no doubt they can be beneficial, especially to middle-ranking universities with reputations to be made. But rankings fail to do justice to the unique mission of each university. Universities' missions differ widely and are an expression of their values and character. A miasma of competition surrounds league tables. Vice-Chancellors, picking up the scent, eagerly trim programmes, reorient missions or seek mergers with higher-ranking institutions. Conversely, those higher up jealously guard their hard-won reputations and shy away from collaboration with anyone but their peers. Unless policy encourages diversity, unbridled competition induces copycat behaviour among universities.

 

Turning the tables

 

A university unable to hoist itself into the higher ranks may use its lower position as a marketing strategy. In Australia, 19.3% of the student body is international, well above the 6.7% average of other OECD countries; in some universities it is 50%.  


Immigration is a new selling point among less prestigious Australian universities, according to Andrys Onsman, Academic Co-ordinator at Monash University. He referred to an article in the Times of India, which reported that many Indian students applied to the University of Ballarat because it was easier to earn a degree there than at higher-ranked universities, while scoring the same number of immigration points. Other universities encourage more applications because a larger pool lets them be more selective in choosing students, and tighter selectivity is seen as a sign of quality.

 

Such strategies have little or nothing to do with education, and only mask the desperation of university leaders. Teaching and learning get far less attention than they should. An indicator like “teacher/student ratio” reveals nothing about a teacher’s ability to teach or a student’s capacity to learn. Why has so little attention been given to the essential task for which universities were created? 


An assessment is only as good as the available data. Too often, data is scarce. No one may have even thought of collecting it. Even when data does exist, a university may be nervous about divulging it for fear of harming its reputation. Like success, failure tends to reproduce itself.

 

New measures

An alternative to comparing universities is to compare subjects. Compilers of league tables could also broaden their scope by assessing students, not just administrators and faculty, taking into account their different aims. The "Centre for Higher Education (CHE)" in Germany publishes a variety of data from which students can construct their own rankings, depending on their needs.

The CHE "clusters" universities instead of ranking them. Universities are judged solely on performance. They are not stamped with a number but categorised as "good", "medium" or "bad", and listed alphabetically. Universities in one category are of comparable quality, whereas those in different categories show a marked contrast in performance. Multiple perspectives give a clearer picture of a university's performance than the distorted lens of rankings, which exaggerate marginal differences of a perhaps trivial nature, creating the false impression that one university is clearly better than another.

The CHE also surveys graduates. Speaker Gero Federkeil, Programme Manager of the CHE, said that compared to primary and secondary education, "there are very few direct measures of learning outcomes on a national or even international level in higher education." Surveys are helping the CHE to gather more information about graduates' entry into the job market. But graduate surveys are not an untarnished indicator of a university's performance: the CHE found that graduates who were dissatisfied with their careers or unemployed tended to rate their universities harshly.
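A rough sketch of the clustering idea follows; the institutions, scores and band thresholds are invented for illustration, and the CHE's actual grouping is statistical and done subject by subject.

```python
# CHE-style "clustering": institutions are banded rather than ranked,
# and listed alphabetically within each band. All names, scores and
# cut-offs here are invented.

scores = {
    "Universität C": 78, "Universität A": 74, "Universität D": 52,
    "Universität B": 55, "Universität E": 31,
}

def band(score):
    # hypothetical cut-offs standing in for the CHE's statistical grouping
    if score >= 70:
        return "good"
    if score >= 45:
        return "medium"
    return "bad"

clusters = {"good": [], "medium": [], "bad": []}
for name, score in scores.items():
    clusters[band(score)].append(name)

for label, names in clusters.items():
    # alphabetical order within a band suppresses trivial differences
    print(f"{label}: {', '.join(sorted(names))}")
```

The point of the design is that a four-point gap within a band stays invisible, whereas a rank would inflate it into "better" and "worse".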


A better indicator would be a student's competence. Methodologies like the "Tuning System" or the OECD's Assessment of Higher Education Learning Outcomes (AHELO) separate a student's education into two strands: "disciplinary" and "generic" skills. Assessing what a student can actually do after completing his or her studies is a more transparent method than comparing degree programmes. Generic skills might include "abstract thinking", "ability to apply knowledge in practical situations" and "ability to work in a group".

 

Yet there are questions as to whether comparative assessments are even possible between institutions in countries where culture, language, and political systems radically differ. AHELO attempts to identify criteria that could serve as universal benchmarks of quality, permitting institutions in any national context to identify their strengths and weaknesses and to improve their performance. Alongside the “disciplinary” and “generic skills” strands, AHELO weaves in two others:  a “value-added” and a “contextual” strand. When the study concludes in 2010, the result will be the most comprehensive overview of learning assessments thus far attained. 

 

A+ universities for B students

Complaints abound that rankings focus too much on “input” in terms of student admissions, and not enough on “output”, in other words, what a graduate has learned and can apply. An assessment at the beginning and end of a degree would give a more accurate idea of the “value-added” component of a degree programme.  
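The arithmetic of value-added is simple once both measurements exist. A minimal sketch, with invented institutions and figures:

```python
# The "value-added" idea: assess a cohort at entry and at graduation,
# and credit the institution with the gain, not the final level alone.
# Both institutions and all scores below are invented.

cohorts = {
    # name: (mean score at entry, mean score at graduation), 0-100 scale
    "Top-ranked university": (88, 92),
    "B-student university":  (65, 84),
}

for name, (entry, graduation) in cohorts.items():
    print(f"{name}: entry {entry}, graduation {graduation}, "
          f"value added {graduation - entry}")
```

On these figures the top-ranked university still graduates the stronger cohort, but adds far less to the students it admits.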


Top universities draw A+ students, as one would expect; it is no surprise if they yield A+ graduates. But what of universities that accept B students and produce A-level graduates? The added value of the B-student's degree programme could be considered higher than that offered by the top university.

One might go further and see how much a student can wring from a programme. Chris Brink, now Vice-Chancellor of the University of Newcastle in England, said that while he was Vice-Chancellor of a South African university in the aftermath of apartheid, he was involved in creating a flexible admissions programme aimed largely at disadvantaged black students. There was criticism that standards would drop, but the strategy was "flexible on access, firm on success". The goal was to turn weak starters into strong finishers, not just to produce straight-A students.

 

Begging to differ

As students and faculty become more international and mobile, universities struggle to accommodate them. The American university is the standard model on which other universities pattern their programmes. This convergence allows for a fairer comparison of degree programmes and facilitates student mobility, through a seamless transfer of credits earned at other institutions and the ease with which students can enrol in foreign masters and PhD programmes. Although the success in Europe of the "Bologna Process" has inspired other countries to harmonise their programmes, convergence has limits, particularly when it conflicts with social norms. The principle of selectivity, the bedrock of American universities, runs counter to the educational philosophy of many European and Latin American countries, where higher education should be available to all. The danger is in sacrificing one value to save the other.


The Anglo-American hegemony is also evident in science, economics and business, where the English language is dominant and often the only language of publication. Fields in which the indigenous language is essential, such as the social sciences, humanities and vocational studies, may end up marginalised.

The proverb "chase two hares and you'll lose both" sums up what Vice-Chancellors feel when compelled to choose between convergence and diversity. The dilemma arises from a misunderstanding, of which the early Bologna Process was a victim. Dirk Van Damme, head of the OECD's "Centre for Educational Research and Innovation (CERI)", pointed out that the aim of the Bologna Process was not the convergence of curricula, but of learning outcomes. Leaving it to universities to demonstrate how they achieve these outcomes also preserves their autonomy. Both convergence and diversity are possible, but only if there is a transparency of methodology to protect them. Otherwise, Dr. Van Damme warned, "we will end up in confusion".

 

Through a glass darkly

Not everyone is enthused over transparency, however. Institutions with gold-leaf reputations, the Oxbridges and Ivy Leaguers, have nothing to gain from it. It is the mid-ranked universities that have the most to gain, or to lose. Highly ranked universities enjoy an unfair privilege. To be sure, uniqueness is their most powerful asset, but without transparency it risks turning higher education into what Dr. Van Damme characterised as a "bazaar of undemonstrated reputations", rather than a system based on evidence of superior learning outcomes.

The advent of mass higher education in the 1980s was a consequence of the explosion in technology that required new skills to master it. University students shed their elitist notions and came down to rub elbows in the workplace. Getting a good job meant, and still means, getting a good degree. Providing greater access to higher education placed a new burden on national budgets, forcing governments to revise policy. Employers and taxpayers also had a keen interest in seeing that it paid off, both in terms of skills and money. Universities and government would now be held to account.

To prove that public money was not being squandered, governments began to pursue policies of quality assurance. With the rise of the global economy, governments and universities had to look in two directions: towards a labour market increasingly dependent on technical competence, and towards a new international "arms race" in research. In Europe, the Bologna Process is an example of a programme aimed at improving the quality of education for all; the "Lisbon Process" is designed to hone a nation's competitive edge by nurturing champions.

 

Beyond the walls

More people enter higher education than ever before in history. Some 135 million students are enrolled in postsecondary education. According to the OECD's "Education at a Glance", published in September, enrolment has doubled in the last ten years.


But students, faculty and administrators are not the only stakeholders. Public investment in higher education averages 1% of GDP in OECD countries. The returns are hard to calculate, but Wendy Purcell of the University of Plymouth in the United Kingdom estimates that universities contribute close to £45 billion (€56 billion) annually to the nation's economy and employ over 600,000 people. Although student bodies are increasingly international, universities help build communities. Figures show that 62% of graduates in the northwest of the UK live and work in the region where they graduated. A university makes hidden contributions to a region, a fact not reflected in rankings. This is a dangerous oversight, as it leaves universities vulnerable in ranking systems that disregard everything except a narrow range of indicators.

In 2004, the IMHE, in collaboration with the OECD Public Governance and Territorial Development Directorate, undertook the first phase of a three-year comparative review to determine the regional impact of higher education. On 10 September, the study entered its second phase, which focuses on urban centres in participating G8 countries and in rapidly developing economies such as Brazil, China and India. Phase II seeks to broaden the evidence base, create a regional steering committee made up of public and private stakeholders along with universities, and will culminate in a High-Level Global Roundtable to be held in Kansas City in spring 2009. A final synthesis report will be published when the study concludes in 2010.

 

The flow of knowledge drives modern society. With the arrival of the Internet, the flow has become a torrent. It has given fresh impetus to distance learning and life-long education, and open source projects such as "OpenCourseWare", offered by the Massachusetts Institute of Technology (MIT), distribute course material online, free of charge to anyone, anywhere.

Traditional higher education, symbolised by the ivory tower, thrives on status nurtured by the idea of exclusivity, such as a highly selective student body. What speaker Simon Marginson from the "Centre for the Study of Higher Education" at the University of Melbourne in Australia called the "open source ecology" is laying siege to this conception by making knowledge "hyper-abundant, with the potential of limitless dissemination". This sudden, expanded availability of knowledge has raised the stakes in its acquisition and use.

Higher education is struggling to adapt. The IMHE provides a unique forum where education professionals and policy makers can collaborate to design the tools necessary to ensure that quality does not suffer in a global context. Dusty league tables are not going to disappear, but finer instruments will complement them. Today's stewards of higher education are plucking the ivy from the tower walls and cultivating it wherever it takes root.

 

Lyndon Thompson