What is this peer-review process anyway?
August 7, 2012
Let me begin with a digression. (Hey, we may as well start as we mean to go on.)
Citations in scientific writing are used for two very different reasons, but because the two cases have the same form we often confuse them. We may cite a work as an authority, to lend its weight to our own assertion, as in “Diplodocus carnegii seems to have had fifteen cervical vertebrae (Hatcher 1901)”; or we may cite a work to give it credit for an observation or concept, as in “… using the extant phylogenetic bracket (Witmer 1995)”.
The conflation of these two very different modes of citation causes some difficulty because, while many authors would never cite (say) a blog post as an authority, most would feel some obligation to cite it in order to give credit. You might not want to cite this SV-POW! post as authority for a length of 49 m for Amphicoelias fragillimus; but you would hardly use the sacrum illustration from this post in your own work without crediting the source.
So the fact that citations do two rather different jobs causes confusion.
When it comes to peer-review, things are worse: not only is it trying to do two different things at once (filtering manuscripts, and improving the ones it retains), but the filtering itself has two components — deciding (A) whether a manuscript is good science, and (B) whether it’s “a good fit” for the journal.
What does “a good fit” mean? Anything or nothing, unfortunately. In the case of Science ‘n’ Nature, of course, it means “is about a new feathered theropod”. For Paleobiology, it seems to mean “has lots of graphs and no specimen photos”. In the case of other journals, it’s much less predictable, and can often, it seems, come down to the whim of the individual reviewer. In many cases, infuriatingly, it can be a matter of whether the reviewer’s guess is that a work is “important” enough for the journal — something that you can never tell in advance, but which only becomes apparent in the years following publication.
As a result, a perfectly good piece of work — one which passes the peer-review process’s “is it good science?” filter with flying colours — can still get rejected, and indeed can be bounced around from journal to journal until the author loses interest, leaves the field or dies; or, of course, until the paper is accepted somewhere.
(The already disheartening process of shopping a paper around multiple journals until it finds a home is made utterly soul-crushing by the completely different formats that the different journals require their submissions in. But that’s a completely different rant.)
This is the Gordian knot that PLoS ONE set out to cut by simply declaring that all scientifically satisfactory work is a good fit. Reviewers for PLoS ONE are explicitly told not to make judgements about the probable impact of papers, and to judge only whether the science is good. In this way, it’s left to the rest of the world to evaluate the importance of the work — just as in fact it always does anyway (through citation, blog discussion, formal responses, and so on).
But eliminating the “good fit” criterion still leaves PLoS ONE reviewers with two jobs: judging whether a manuscript is scientifically sound, and helping the author to improve it. I find myself wondering whether there might be a way to decouple these functions, too.
Perhaps not: after all, they are somewhat intertwingled. The question “is this scientifically sound?” does not always receive a yes-or-no answer. The answer might be “yes, provided that the author’s conclusions are corroborated by a phylogenetic analysis”, for example.
I don’t have any good answers to propose here. (Not yet, anyway.) At this stage, I am just trying to think clearly about the problem, not to come up with solutions. But I do think we can see what’s going on with more clarity if our minds can separate out all the different roles that reviewers play. To my mind, dumping “impact assessment” from the review process is PLoS ONE’s greatest achievement. If we can pick things apart yet further, there may well be even greater gains to be had.
I guess we should be asking ourselves this: what, when it comes right down to it, is peer-review for? Back in the day, the filtering aspect was crucial because paper printing and distribution meant that there was a strict limit on how many papers could be published. That’s not true any more, so that aspect of peer-review is no longer relevant. But what else has the Internet changed? What can we streamline?