Some more of peer-review’s greatest mistakes
August 6, 2012
Last time I argued that traditional pre-publication peer-review isn’t necessarily worth the heavy burden it imposes. I guess no-one who’s been involved in the review process — as an author, editor or reviewer — will deny that it imposes significant costs, both in the time of all the participants, and in the delay in getting new work to press. Where I expected more pushback was in the claim that the benefits are not great.
Remember that the benefits usually claimed for peer-review are in two broad categories: that it improves the quality of what gets published, and that it filters out what’s not good enough to be published at all. I’ll save the second of these claims for next time. This time I want to look at the first.
The immediate catalyst is the two brief reviews with which a manuscript of mine (with Matt as co-author) was rejected two days ago from a mid-to-low ranked palaeo journal. I’m going to quote two sentences that rankle from one of the anonymous reviewers:
The manuscript reads as a long “story” instead of a scientific manuscript. Material and methods, results, and interpretation are unfortunately not clearly separated.
This, in the eyes of the reviewer, was one of the two deficiencies of the manuscript that led him or her to recommend rejection. (The other was that the manuscript is “too long”.) But as I’ve said repeatedly here and elsewhere, all good writing is story-telling. Scientific ideas, like all others, need to be presented as stories in order to sink into our brains. So what’s happened here is that, so far as I’m concerned, the reviewer has praised our manuscript; but he or she thinks it’s a criticism.
It’s not clear what we should do in response to such a review. To get our paper published in this journal, we could restructure the paper. We could draw every observation, comparison, discussion and illustration out of its current position in a single, flowing argument; and instead cram them all, out of their natural order, into the “scientific” structure that the reviewer evidently prefers. But I’m not willing to do this, because our judgement is that this would reduce the value of the resulting paper — making it harder to follow the argument. And I hate to do work with negative net value.
In this case, it’s even worse: the initial draft of this paper was in a much more conventional “scientific” structure. In this form, we submitted it to a different journal, whence it was rejected for completely different reasons which I won’t go into. Before submitting the new version, we re-tooled the paper to tell its story in the order that makes most sense. So the reviewer who recommended that the new version be rejected was (albeit unknowingly) requiring us to revert to an older and inferior version of the manuscript.
So this is a case where reviewers’ comments really don’t help at all. (To be fair to the reviewer in question, he or she did also say a lot of positive and encouraging things. But they are really just decoration on the big, fat REJECT.)
And sadly this kind of thing is not too unusual. Other reviews I’ve been sent have (A) demanded that we add a phylogenetic analysis even though it could not possibly tell us anything; (B) demanded that we remove a phylogenetic analysis; (C) rejected a paper due to consistently misunderstanding our anatomical nomenclature; (D) rejected my manuscript because they’d never heard of me (ad hominem review); and — my all-time most hated favourite — (E) basically told me to write a completely different paper instead.
(E) in particular is a disaster and makes me just want to throw my hands up in the air. Written a paper that analyses a diversity database to draw conclusions about changing clade sizes through time? Too bad, the reviewer wants you to write a literature review on how you assembled the database instead! Written a paper showing that two species are not conspecific? Tough luck, the reviewer wants you to write a paper on the meaning of “genus” instead! I just loathe this. I am increasingly of the opinion that the best response may be “I’ve written that manuscript too, it’s in review elsewhere” or similar.
So those kinds of reviews are a complete waste of time and energy.
Where is the value, then? Well, I’ve mentioned Jerry Harris several times before as someone whose reviews are full of detailed, helpful comments that really do improve papers. I aspire to review like Jerry when it falls to me to provide this service, and I hope I can be as constructive as he is. But really, this is a tiny minority. Most reviews that avoid the traps I mentioned above don’t have much of substance to say. Some “reviews” I’ve received have been only a few sentences long: in such cases it’s hard to believe that the reviewers have actually read the manuscript, and very hard to accept that they have life-or-death power over its fate.
I’ve been wary of writing this post because of the fear that it reads like a catalogue of whining about how hard-done-by I am. That’s not the intention: I want to talk about widespread problems with peer-review, and I wanted to make them concrete by giving real examples. And of course the examples available to me are the reviews my own work has received. Let me note once again that I have made it through the peer-review gauntlet many times, and that I’m therefore criticising the system from within. I’m not a malcontent claiming there’s a conspiracy to keep my work out of journals. I am a small-time but real publishing author who’s sick of jumping through hoops.
Next time: what peer-review is really for.