Abstracts: Reputation and ranking – the impact on institutional strategy and behaviour of international ranking tables

 

League Tables: what chance of promotion or relegation?

Author: Andrys Onsman, Associate Professor, Coordinator of Academic Development Programs, Monash University, Australia

 

Australian universities pay great heed to some ranking tables and little to others. Of the international rankings, The Times Higher Education Supplement is preferred to the Jiao Tong, probably because Australian universities fare better in the former. Nationally, the rankings of the Melbourne Institute of Applied Economic and Social Research enjoy widespread status. However, the ranking of greatest interest to Australian universities is that put forward by the federal Government’s Learning and Teaching Performance Fund, because it has the most immediate and public financial impact. Like American universities’ single-minded concern with the rankings published in the United States News and World Report, Australian universities are much more concerned with comparisons amongst themselves than with international comparisons. Universities ranked at the lower end claim that ranking of any kind is fundamentally flawed and institutionally biased, and that any evaluation of quality based on generic performance criteria will be qualitatively unbalanced. To what extent is it justifiable to claim that ranking tables suggest better and worse universities when in fact there are only different universities? Or is that simply an excuse put forward by poorly performing universities?

Can’t Get No Satisfaction? Promises and Problems of the CHE Ranking in Comparing Student Satisfaction between Germany, the Netherlands and Flanders

Author: Don F. Westerheijden, Dr, CHEPS, University of Twente, Netherlands

 

This paper summarises the results of a pilot project in which Dutch and Belgian (Flemish) higher education institutions participated in the CHE Ranking, which the literature regards as ‘best practice’ for ranking study programmes to inform (prospective) students. The project’s aim was to test the CHE Ranking methodology in a non‐German-language context. It included data collection from the participating institutions and a survey among their students.

The paper stresses the promises and problems of comparing student satisfaction measurements internationally. Questions addressed include: what are the effects of the different scales used, and of rating cultures (different mid‐points, different tendencies to use extremes)? How should results be interpreted: what are the effects of expectation levels? Do such differences result in bias against certain countries? Lessons will be drawn for further pilot projects, and an update on expected further pilots will be given.

International collaborations and the effects of rankings

Author: Rebecca Hughes, Director, Centre for English Language Education, University of Nottingham, UK

 

This paper asks whether transparent rankings of Higher Education providers will encourage or discourage the development of collaborative awards across borders. Two scenarios are discussed: one outlines standard competitive responses, and the other describes the potential for rankings to promote broader goals, including capacity building.

Rankings can be seen as an analytical tool to support institutions’ strategic planning and risk management of collaborations. They should permit easier access to information about the nature and quality of provision when an institution may be little known beyond its national borders. By providing specific details of curricula, they could enhance the targeted development, by both mature and emerging institutions, of partnerships of national strategic importance – a particular discipline evolving in support of specific capacity-building needs, for instance. There is therefore great potential for some of the more aspirational goals of HE providers and national policy makers to be facilitated by well-designed criteria for international rankings.

Conversely, the availability of market information about the reputation of institutions and awards may lead to risk-averse behaviour among the leaders of institutions high in the rankings or wishing to rise in them. This behaviour could lead, potentially, to collaborations emerging only between institutions with similar rankings, or to the active discouragement of international collaborations. Such ‘playing safe’ would exclude potentially beneficial links between (to name but a few dichotomies) newer and older, Northern and Southern, Anglo-Saxon and non-Anglo-Saxon institutional partners.

Drawing on several years’ experience in the development and quality assurance of collaborative links at one institution, the presenter argues for criteria that encourage academically stimulating, ethically informed and imaginative collaborations between institutions and across cultures.

Graduate Surveys as a Measure in University Rankings

Author: Gero Federkeil, CHE Centre for Higher Education Development

 

The landscape of rankings of higher education institutions has become diverse, both nationally and internationally. The two influential global rankings, compiled by Shanghai Jiaotong University and the Times Higher Education Supplement, mainly focus on research. Although there is some critique concerning field-specific problems and national biases, bibliometric analyses and citations are widely accepted as a measure of research output. With regard to teaching and learning, however, measures of educational outcomes or “value added” are still missing in rankings of higher education, in particular in international rankings.

The CHE Centre for Higher Education Development has published a ranking of German higher education institutions since 1998, which faces the same problem: most indicators refer to input and process variables. In Germany, only a very few fields (medicine, pharmacy) have national exams that could be used as a comparative measure of educational achievement in the ranking.

The CHE has started to incorporate graduate surveys into its ranking in order to add information from a labour market perspective. Graduate surveys can provide some information about graduates’ labour market entry and careers. Evidence shows that differences between institutions in labour market success are largely influenced by regional labour market effects, and to date there is no valid method of controlling for those effects in order to attribute differences between universities to the institutions themselves. At the same time, however, graduate surveys can provide information about graduates’ evaluation of programmes and study conditions against the background of their work experience, and graduates can give a self-assessment of the competencies and qualifications they gained in their programmes compared to the demands of their jobs. While graduate surveys cannot directly measure outcomes or competencies, they can be seen as a proxy for outcome measures of higher education. The paper tries to illustrate this with some examples from CHE graduate studies.