Well, that about wraps it up for peer-review
November 26, 2012
[See part 1, part 2 and part 3 from a few months ago.]
I’m horrified, but not as surprised as I would like to be, by a new paper (Welch 2012) which analyses peer-reviewer recommendations for eight prestigious journals in the field of economics.
The principal finding is that the reviewers’ recommendations were made up of 1/3 signal (i.e. consistent judgements on the quality of the manuscript) and 2/3 noise (i.e. randomness). Of that 2/3 noise, 1/3 was down to reviewer bias (some are nicer, some are nastier) and 2/3 seemed to be purely random.
And to quote directly from the study:
The bias measured by average generosity of the referee on other papers is about as important in predicting a referee’s recommendation as the opinion of another referee on the same paper.
What this means is that the likelihood of a submission being accepted depends more on a coin-toss than it does on how good your work is. Which seems to validate my earlier speculation that
The best analogy for our current system of pre-publication peer-review is that it’s a hazing ritual. It doesn’t exist because of any intrinsic value it has, and it certainly isn’t there for the benefit of the recipient. It’s basically a way to draw a line between In and Out. Something for the inductee to endure as a way of proving he’s made of the Right Stuff.
So: the principal value of peer-review is that it provides an opportunity for authors to demonstrate that they are prepared to undergo peer-review.
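For the numerically inclined, here's a toy simulation. This is my own sketch, not anything from Welch's paper: it just restates the proportions above, modelling each recommendation as quality + referee bias + pure noise in a 1/3 : 2/9 : 4/9 split, and shows that two referees reading the same manuscript then correlate at only about 0.33.

```python
import numpy as np

rng = np.random.default_rng(0)
n_papers = 100_000

# Variance shares implied by the figures above: 1/3 signal (quality);
# the remaining 2/3 is noise, of which 1/3 is referee bias and 2/3 is
# pure randomness -- i.e. 2/9 and 4/9 of the total, respectively.
v_quality, v_bias, v_noise = 1/3, 2/9, 4/9

quality = rng.normal(0.0, np.sqrt(v_quality), n_papers)       # one per paper
bias    = rng.normal(0.0, np.sqrt(v_bias),  (n_papers, 2))    # one per referee
noise   = rng.normal(0.0, np.sqrt(v_noise), (n_papers, 2))    # one per reading

recommendations = quality[:, None] + bias + noise             # two referees each

# Only the shared quality term correlates across the two referees,
# so this prints a value near 1/3.
print(np.corrcoef(recommendations[:, 0], recommendations[:, 1])[0, 1])
```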
There’s more discussion of this over on the Dynamic Ecology blog.
It’s also well worth reading Brian McGill’s comment on that post: he quotes multiple reviewers of a manuscript that he submitted, completely contradicting each other. Yes, this is merely anecdote, not data; but I have to admit that it chimes with my own experience.
If this research is correct, and if it applies to science as it does to economics, then here is one horrible consequence: it suggests that the best way to get your papers into the high-impact journals that make a career (Science, Nature, etc.) is not necessarily to do great research, but just to be very persistent in submitting everything to them. Keep rolling the dice till you get a double six. I would hate to think that prestige is allocated, and fields are shaped, on that basis.
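To put a very rough number on the dice-rolling strategy, here's a back-of-the-envelope simulation. The parameters are made up: I've kept Welch's one-third signal share, assumed the referee's score is all that decides acceptance, and set the threshold at 1.28 standard deviations (roughly a 10% overall acceptance rate).

```python
import numpy as np

rng = np.random.default_rng(0)

def p_accept(quality, v_signal=1/3, threshold=1.28, trials=100_000):
    """Chance that a single submission clears the bar, when the referee's
    score is 1/3 signal and 2/3 noise. quality is in standard deviations;
    threshold=1.28 corresponds to a ~10% overall acceptance rate.
    Both numbers are assumptions, not estimates from Welch's paper."""
    score = np.sqrt(v_signal) * quality + rng.normal(0.0, np.sqrt(1 - v_signal), trials)
    return (score > threshold).mean()

for q in (0.0, 1.0, 2.0):
    p = p_accept(q)
    print(f"quality {q:+.0f} sd: per-submission {p:.2f}, "
          f"within 5 tries {1 - (1 - p)**5:.2f}, within 10 {1 - (1 - p)**10:.2f}")
```

On those made-up numbers, a dead-average manuscript submitted ten times has about the same chance of landing a glamour-journal slot (around 0.45) as a manuscript two standard deviations better than average submitted once. Persistence beats quality.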
I’d be really interested to know, from those of you who’ve had papers published in Science or Nature, roughly how many submissions you’ve made for each acceptance in those venues; and to what extent you feel that the ones that were accepted represent your best work.
November 26, 2012 at 1:55 pm
Well, it’s an incredibly small sample size, but since you asked: I’ve gone one for three with submissions to Science and Nature, and yes, I think the one they accepted was the best of the three.
As discussed over at Dynamic Ecology (thanks for the link!), I don’t draw the same dire implications from the Welch study as you do. Whether that means I’ve just grown used to a system I ought to find appalling, I leave it to others to decide.
November 26, 2012 at 2:00 pm
Well, now we have a datum! One more, and we can call it data! :-)
Yes, it’s a surprise to me that this paper doesn’t seem to worry you. I’m still trying to get inside your head. The finding is that only 1/3 of the work that we do as reviewers has any value. Would we be happy with the idea that only 1/3 of the work we do in other areas has value?
By the way, it’s probably even worse than that. From p4 of the paper: “My paper’s estimates of the quality of the peer review process should be viewed as conservative — an upper bound on the accuracy and fairness of referees and thus ultimately on the scientific process this peer review creates.”
November 26, 2012 at 3:50 pm
Re: getting inside my head, afraid I can’t do much more to clarify my thoughts besides the comments we’ve exchanged over on Dynamic Ecology. I guess the only other thing I can say is that while my views on this clearly differ from yours, I don’t think my views are *that* unusual. I have a number of colleagues who basically feel the same way I do, and while that’s a small, non-random sample I’m sure my friends aren’t the only people in the world who share my views on this. I note this not at all as a criticism of you–I can see why someone who holds your views would find it difficult to see how anyone could hold mine–but just to emphasize that this seems to be a topic on which there is quite a wide range of views. I’m not some extreme outlier, I don’t think.
November 26, 2012 at 11:39 pm
Low agreement among reviewers is yesterday’s news.
Cole et al. 1981. Chance and consensus in peer review. Science 214(4523): 881-886.
Abstract
An experiment in which 150 proposals submitted to the National Science Foundation were evaluated independently by a new set of reviewers indicates that getting a research grant depends to a significant extent on chance. The degree of disagreement within the population of eligible reviewers is such that whether or not a proposal is funded depends in a large proportion of cases upon which reviewers happen to be selected for it. No evidence of systematic bias in the selection of NSF reviewers was found.
November 27, 2012 at 5:07 pm
Comparing your numbers with Sturgeon’s Law (“90% of everything is crap”) and the factoid that signal adds while noise cancels, it sounds like the peer review system is not doing too badly after all.
Re publication in Nature etc: how many pages of text and pictures does it take to communicate something really new and significant? I guess it depends how much of the work is done by the pictures, and how much by the text. If the fossil is stunning enough, one gets the impression that the analysis may be pedestrian or quite overblown and still get up. Of the two papers of mine submitted (and accepted) there, I’ve got to say of the second: if that snake had been a dinosaur, it would have had feathers.
November 27, 2012 at 8:31 pm
If the best we can say about the review system is “at least only 66% of it is crap” then I submit that’s not good enough.
“If the fossil is stunning enough” — you mean like the all-but-complete skeletons representing Jobaria and Nigersaurus, both of which were disposed of in a single paper that also managed to squeeze in an incorrect temporal hypothesis? They are two of the most inadequate sauropod descriptions ever written; Jobaria, despite its completeness, is still effectively undescribed thirteen years after its “description”. That’s what the glam-mags do to descriptive papers.
If you want to see a stunning fossil given the treatment it deserves, try this.
April 20, 2013 at 8:35 pm
[…] course, anyone who’s actually been through peer-review a few times knows how hit-and-miss the process is. Only someone who’s never experienced it directly could retain blind faith in it. (In this […]
May 3, 2013 at 7:15 am
[…] to be able to say, for sure. Peer review is important as a stamp of serious intent. But it’s a long way from a mark of reliability, and enormous damage is done by the widespread assumption that it means more than it […]
January 21, 2014 at 8:55 am
[…] wrong, Some more of peer-review’s greatest mistakes, What is this peer-review process anyway?, Well, that about wraps it up for peer-review), I likened peer-review to […]
January 29, 2016 at 10:59 am
[…] assign to a corpus of existing papers, and derive our parameters from that. But we know that experts are really bad at assessing the quality of research. So what would our carefully parameterised LWM be approximating? Only the flawed judgement of […]
February 4, 2021 at 4:41 pm
[…] years ago; at the time, I was pretty much in agreement with Mike’s post from November, 2012, “Well, that about wraps it up for peer-review”. But then in 2014 I became an academic editor at PeerJ. And as I gained first-hand experience from […]
May 19, 2021 at 8:20 pm
[…] the last of these that pains me the most. Of all the comforting lies we tell ourselves about conventional peer review, the worst is that it’s worth all the extra time and effort because it makes the paper […]
September 16, 2022 at 8:06 am
[…] Peer-review ain’t nuthin’ … but it ain’t much. We know from experiment that the chance of an article passing peer review is made up of one third article quality, one third how …. More recently we found that papers with a prestigious author’s name attached are far more […]