How to become good at peer-review: three points of disagreement
January 20, 2014
Jennifer Raff wrote a useful guest post on the PeerJ Blog: How To Become Good At Peer-Review. Most of its advice is excellent, and I’d heartily recommend it to anyone starting out on reviewing. But there are three points where I disagree with it. Here are the three things Jennifer said, and my counter-points.
1. Communicating with authors
“Don’t communicate with the authors about their manuscript. All thoughts and comments on it should only go to the editor.”
This may vary between academic fields, but I’ve been contacted by reviewers of my material, and have contacted the authors of papers I’m reviewing, too. Palaeo may be less formal in this respect than fields such as medical research. It’s often useful, for example, to get the authors to send higher-resolution versions of the specimen photographs than the downscaled ones the journal passes on; or to get the manuscript in a read-write format that lets you more easily add notes and corrections. Most importantly, I’ve sometimes had to send my marked-up copy of the manuscript directly to the corresponding author because the journal’s automated system has no way to attach it to the formal response.
Perhaps the idea that you shouldn’t communicate with authors comes from confidentiality concerns. But I know who the authors are. (There are no palaeo journals that do double-blind reviewing, and it would be impossible anyway in a field small enough that you pretty much know who everyone is and what they work on.) And since I never review anonymously, I don’t mind the authors knowing who I am while I’m still doing the review.
In the end, one of the main goals of peer-review — I would say the main goal — is to help the authors make their work the best it can be. Often, contacting them directly is the most effective way to do that.
2. Novelty

“Ask yourself whether the questions the authors are addressing are really advancing the field in a meaningful way. This does not mean that an article has to be completely novel, but it does mean that the work contributes to the sum of knowledge in the field and does not, for example, simply repeat well known results.”
I only agree with this for certain values of “well known”. In experimental sciences, replication is hugely important, and it’s one of the worst consequences of the prestige-obsessed journal system that it’s so hard to get a replication published. You could almost say that an experimental result that’s only been published once is worthless.
Equally important as replication, and maybe even more important, is the failed replication. When Doyen et al. (2012) tried and failed to replicate the findings of Bargh et al. (1996) on psychological priming, it was an important check on the influence of an article that has been cited more than 2,500 times. Bargh himself was not happy about it, but to quote a much-loved SV-POW! maxim due to Tom Holtz, “Sorry if that makes some people feel bad, but I’m not in the ‘make people feel good’ business; I’m a scientist.”
So a reviewer should only complain about lack of novelty if the experiment has already been replicated several times. (There’s no value in a research paper showing that large and small cannonballs fall at the same speed from the top of the leaning tower of Pisa.)
3. Changing the subject
“Can you think of a better way to address the research questions than what the authors did?” … “You have every right to ask the authors to do a different experiment.”
Ugh. I just hate this. There is literally nothing I detest more in a review than “You should have written this different paper instead”. Please, reviewers, review what’s in front of you, not what you would have done instead.
If you think of another approach that seems promising, by all means suggest it as a follow-up project. But please, in the name of all that we hold dear, don’t let it become a roadblock that delays this work from being published.