All right then, how should we evaluate research and researchers?

January 25, 2016

Like Stephen Curry, we at SV-POW! are sick of impact factors. That’s not news. Everyone now knows what a total disaster they are: how they are significantly correlated with retraction rate but not with citation count; how they are higher for journals whose studies are less statistically powerful; how they incentivise bad behaviour including p-hacking and over-hyping. (Anyone who didn’t know all that is invited to read Brembs et al.’s 2013 paper Deep impact: unintended consequences of journal rank, and weep.)

It’s 2016. Everyone who’s been paying attention knows that impact factor is a terrible, terrible metric for the quality of a journal, a worse one for the quality of a paper, and not even in the same ballpark as a metric for the quality of a researcher.

Unfortunately, “everyone who’s been paying attention” doesn’t seem to include such figures as search committees picking people for jobs, department heads overseeing promotion, tenure committees deciding on researchers’ job security, and, I guess, granting bodies. In the comments on this blog, we’ve been told time and time and time again — by people whom we like and respect — that, however much we wish it weren’t so, scientists do need to publish in high-IF journals for their careers.

What to do?

It’s a complex problem, not well suited to discussion on Twitter. Here’s what I wrote about it recently:

The most striking aspect of the recent series of Royal Society meetings on the Future of Scholarly Scientific Communication was that almost every discussion returned to the same core issue: how researchers are evaluated for the purposes of recruitment, promotion, tenure and grants. Every problem that was discussed – the disproportionate influence of brand-name journals, failure to move to more efficient models of peer-review, sensationalism of reporting, lack of replicability, under-population of data repositories, prevalence of fraud – was traced back to the issue of how we assess works and their authors.

It is no exaggeration to say that improving assessment is literally the most important challenge facing academia.

This is from the introduction to a new paper which came out today: Taylor (2016), Better ways to evaluate research and researchers. In eight short pages — six, really, if you ignore the appendix — I try to get to grips with the historical background that got us to where we are, discuss some of the many dimensions we should be using to evaluate research and researchers, and propose a family of what I call Less Wrong Metrics — LWMs — that administrators could use if they really, absolutely have to put a single number on things.
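
The paper gives the real definition, and the next post will walk through how an LWM is actually calculated. Purely to illustrate the general shape of such a metric (a single number distilled from several independently scored dimensions, with weights chosen by whoever is doing the evaluating), here is a minimal sketch in Python; the dimension names, scores and weights are hypothetical, invented for this example rather than taken from the paper.

    # A hypothetical weighted-combination metric, for illustration only.
    # These dimensions and weights are NOT the LWM definition from
    # Taylor (2016); they just show the weighted-average shape of the idea.

    def combined_score(scores, weights):
        """Collapse several 0-to-1 scores for one paper into one number.

        scores  -- dict mapping each dimension to a score in [0, 1]
        weights -- dict mapping the same dimensions to non-negative
                   weights, chosen to reflect the evaluator's priorities
        """
        total = sum(weights.values())
        return sum(weights[d] * scores[d] for d in weights) / total

    # Example: an evaluator that values rigour and openness over novelty.
    paper   = {"significance": 0.7, "rigour": 0.9, "clarity": 0.6, "openness": 1.0}
    weights = {"significance": 1.0, "rigour": 3.0, "clarity": 1.0, "openness": 2.0}
    print(round(combined_score(paper, weights), 3))  # 0.857

The point of this shape is that the weights, not the scores, carry what an evaluator values: two institutions could score the same paper identically and still rank it differently.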

(I was solicited to write this by SPARC Europe, I think in large part because of things I have written around this subject here on SV-POW! My thanks to them: this paper becomes part of their Briefing Papers series.)

Next time I’ll talk about the LWM and how to calculate it. Those of you who are impatient might want to read the actual paper first!

References

Brembs, B., Button, K. and Munafò, M. 2013. Deep impact: unintended consequences of journal rank. Frontiers in Human Neuroscience 7:291. doi:10.3389/fnhum.2013.00291

Taylor, M. P. 2016. Better ways to evaluate research and researchers. SPARC Europe Briefing Papers.

5 Responses to “All right then, how should we evaluate research and researchers?”


  1. […] said last time that my new paper on Better ways to evaluate research and researchers proposes a family of Less […]


  2. Mike,

    I’m with you on the inappropriateness of the current methods for judging the worthiness of publications. They should be replaced. The only points in their favor are that they are based on quantitative items that are easily measured by machines (citation count, for example).

    Unfortunately, the new criteria you propose, in the yellow box on page 5 of your article, raise a new set of problems. They are subjective, requiring a human to read the paper and make a judgement. “How significant is this result?” “How clearly is this written?” In order to use these metrics, the community will have to identify a group of experts who will read every paper and score them in these categories. That’s (number one) a lot of work for no pay and (number two) dependent on the whims of each reviewer.

    I guess one might claim that peer reviewers are _already_ reading every paper which makes it into the refereed literature, so if we could just get those reviewers to fill out a scoresheet in addition to writing their reports, and then share all those scores, we would have the required data. Is that your idea?

    Not trying to discourage you and your quest, just trying to figure out how it might be put into practice.

  3. Mike Taylor Says:

    Note: this comment was also added to the next post. I replied to it there.


  4. […] the fourth in a series of posts on how researchers might better be evaluated and compared. In the first post, Mike introduced his new paper and described the scope and importance of the problem. Then in the […]


  5. […] researchers and the introduction of LWM (Less Wrong Metrics) by Mike Taylor. You can find the posts here, here, here, and […]

