The ‘rough beast’ that is the REF
(HB, 128 pp., £45.00, ISBN: 9781473906563)
This important book should be read, and reflected on, by academics, academic managers, university managers, HEFCE, and those parts of Government that are responsible for creating the ‘rough beast’ (Yeats, 1994) that the research assessment exercise (RAE) has become.
It is no exaggeration to say that the various RAEs (the Research Excellence Framework (REF) in its latest, 2014, guise) have become the single most important driver of university behaviour in the UK over the last twenty years. Originally intended only to help the Government allocate research funding, they now generate league table positions of particular importance for the research-intensive universities. The exercise has become increasingly bloated in terms of the time and resources that it consumes – 2014’s was estimated to have cost £250 million (Times Higher, 2015a)! One could forgive this if the exercise were seen to have beneficial results in terms of improving the UK’s research quality and evaluating universities’ contributions equitably, but it is manifestly the case that it does not, and that it has serious deleterious effects that Sayer forensically documents.
There are many negative impacts of the REF/RAE, not least the closure of healthy departments, job losses, discriminatory practices, huge amounts of unnecessary psychological stress, and ultimately a loss of innovatory, blue-skies research that will undermine the research of future generations. Sayer accepts all these but makes clear that his book intends to focus primarily on only one element of the assessment regime, that is ‘the claim from which the REF derives its entire authority as a mechanism for funding allocation and on which it stakes its entire legitimacy as a process of research evaluation – the claim that it is a process of expert peer review’ [original emphasis, 2]. This makes the book very focussed but at the same time obviously limits the scope of the critique, which is a shame.
After the Introduction, Chapter 1 sets out what Sayer calls international benchmarks for peer review. Here, he largely draws on American models used for tenure and promotion decisions. As he states, apart from Australia and New Zealand, which are in any case modelled on the RAE, there are no other national research evaluation systems against which ours can be compared. The main points he wishes to make are that a properly constituted peer review evaluation system should:
- Be transparent in making absolutely clear all the stages and processes that will lead to the decisions.
- Be accountable in identifying all the people involved in the process (except expert reviewers), specifying their responsibilities, and justifying their conclusions.
- Be expert in ensuring that the reviewers are both eminent in the profession and specialists in the appropriate academic area.
Most of the rest of the book is devoted to showing that the REF does not measure up to these criteria and so does not constitute an equitable peer review system.
Chapter 2 outlines the history of the many different forms of research exercise, beginning with the very small-scale one in 1986 that, at first, was little remarked on until its results were used to create league tables with very unexpected results. Relatively new universities such as Warwick came in well above some of the long-established ones such as Liverpool or Birmingham. From these fairly small scale beginnings, the REF has become ever more complex, rule-governed and, perhaps because of this, opaque. This is the central ‘hypocrisy’ of the title – the official claims about how transparent the exercise was in contrast with the reality that the real decisions, such as Panel membership, selection of academics, and the final evaluations, were shrouded in mystery.
This chapter covers the debate about the use of metrics, i.e., citation counts, either alongside or instead of peer review. It was proposed to use metrics in 2008, but after much protest from academics the idea was dropped, and even in 2014 only a few Panels used them at all. The suspicion is that metrics would not favour the established universities such as the Russell Group and might instead reveal that there is much high-quality research going on elsewhere. The Establishment fought, and won, the battle to ensure that their ‘academic judgements’ were not in any way clouded by actual data about which papers were read and cited. My own Panel – that of Business and Management – resolutely refused even to see any citation data!
Sayer goes on to highlight many other shortcomings of the REF procedures:
- The secretive and opaque nature of the appointment of Panel members
- The extent to which Panels merely represented the established pecking order
- Problems with the Panel having the necessary expertise to properly evaluate all the areas of the submissions
- The huge workload which meant that in practice, whatever the rhetoric, often only the titles and abstracts of papers were read and reliance was placed on things like journal ranking lists
- The refusal to use external indicators such as citations
- The lack of international members when it was supposed to be an international benchmark
- The effects of the changes to the funding formula in favour of only 4* papers, which pushed universities into being highly selective in the staff they submitted.
These deficiencies at the level of the REF nationally were complemented by many equally poor practices at the university level, especially in terms of procedures for selecting staff. This is documented in Chapter 3 through a detailed analysis of the author’s own History Department at Lancaster University, and also a survey of staff who were not submitted at Warwick University. The facts of the matter are that virtually all universities were driven – both by the funding formula and by the presumption that league tables would be based only on the overall grade point average, thus excluding the volume of staff – to be much more selective in terms of the staff they submitted. The most research-intensive, which had been at around 90% in 2008, often went down to around 70%, and some universities were at 20% or below. This chapter reveals the machinations that went on, often very secretively and often in contradiction to the codes of practice that had been agreed. This led to many staff being omitted, almost certainly to the detriment of their careers, with little transparency as to the who, how, and why.
I should issue one word of warning: this is purely Sayer’s account of the situation, and in fact a number of members of his History Department wrote a letter to the Times Higher disagreeing with it (Times Higher, 2015b), to which he responded (Times Higher, 2015c).
The final chapter is in some ways the most interesting. It asks why, given that the REF is so flawed (and I am sure that the vast majority of academics would agree that it is), HEFCE and, complicitly, university managements defend and maintain it. And why, given that they could produce broadly similar results at much lower cost, are metrics shunned? The answer given, and I do not disagree, is that it must be that ‘it works admirably as a disciplinary tool for university management. It also provides an excellent vehicle for the legitimation and replication of the country’s established academic elites’.
Overall, this is an important book in uncovering the profound dysfunctions of the processes of the REF. Its main weakness is that it focusses very much on the REF as an illegitimate form of peer review but does not elaborate on the many other more general effects that the REF has had on individual academics, departments, innovative research and ultimately our research culture as a whole.