How can next REF more strongly emphasise the unimportance of Impact Factor?
July 10, 2015
I spent much of yesterday morning at the launch meeting of HEFCE’s new report on the use of metrics, The Metric Tide: Report of the Independent Review of the Role of Metrics in Research Assessment and Management. (Actually, thanks to the combination of a tube strike and a train strike, I spent most of the day stationary in traffic jams, but that’s not important right now.)
There’s a lot to like about the report, which is a fantastically detailed piece of work. (It weighs in at 178 pages for the main report, plus 200 pages for Supplement I and another 85 for Supplement II. I suspect that most people, including me, will content themselves with the Executive Summary, which is itself no lightweight at 12 pages.) Much has been written about it elsewhere — see the LSE’s link farm — but I want to focus on one issue that came up in the discussion.
As we’ve noted here a couple of times before, the REF (Research Excellence Framework) is explicit in disavowing impact factors and other rankings in its assessments. See its official answer to the question “How will journal impact factors, rankings or lists, or the perceived standing of publishers be used to inform the assessment of research outputs?”:
No sub-panel will make any use of journal impact factors, rankings, lists or the perceived standing of publishers in assessing the quality of research outputs. An underpinning principle of the REF is that all types of research and all forms of research outputs across all disciplines shall be assessed on a fair and equal basis.
The problem is, people tend not to believe it. Universities continue to select which papers to submit to the REF on the basis of what journals they were in. And this propagates all the problems of journal rank and the absurdly disproportionate influence that two or three scientifically weak journals have on the whole field of scholarship.
As Richard Butler said in a comment on an earlier post:
Most people inside UK universities that I have talked to say that journal reputation is being considered by departments when preparing their REF submissions, and this has been documented by various articles in The Guardian and THE.
So that’s the background.
Then at the Metric Tide launch, the question was asked: what more can HEFCE do to convey that they’re looking for good work, not work from high-IF journals?
That was the only point in the meeting where I stuck my hand up — I had things to say, but at that point the chairman of the panel chipped in with a different question, and the moment had passed. So rather than yank the discussion back to that point, I decided it would be better to blog about it.
There are two possible reasons why universities depend on journal rank in general, and impact factor in particular, when deciding which papers to submit to the REF:
1. They simply don’t believe what the REF says about not caring what venue a paper is in; or
2. They believe the REF is telling the truth, but think that Impact Factor is a good proxy for the qualities that the REF does care about.
The solution to #1 is just a bigger, bolder statement in the 2020 REF. Instead of being somewhat buried, the 2020 documents should begin with the following statement in 20-point bold font:
Submitted works will be assessed according to their intrinsic quality (clarity, replicability, statistical power, significance) and not according to the venue they appear in. If you use Impact Factors to assess works, you are statistically illiterate.
The solution to #2 is a little more complicated. What it comes down to is education: helping administrators to see and understand that the Impact Factor of the journal a work appears in is not, in fact, a good proxy for any of the things that we care about.
- Counter-intuitively, there is no statistically significant correlation between the citation count of a paper and the IF of the journal that it appears in.
- Neither is there significant correlation between a journal’s impact factor and the statistical power of the articles that appear in it.
- On the other hand, there is a significant correlation between impact factor and retraction rate: articles appearing in high-IF journals are more likely to be retracted than those in regular journals.
In short, part of the job that the 2020 REF needs to do is to demonstrate to administrators that submitting papers from high-IF journals is not a good strategy for them. It won’t optimise their REF results. Selecting on IF will give them papers that are no more likely to be highly cited or statistically powerful than any others, but which are more likely to be retracted.