Where peer-review went wrong

August 5, 2012

[Note: this post is by Mike. Matt hasn’t seen it, may not agree with it, and would probably have advised me not to post it if I’d asked him.]

The magic is going out of my love-affair with peer-review. When we started seeing each other, back in 2004, I was completely on board with the idea that peer-review improves the quality of the scientific record in two ways: by keeping bad science from getting published, and by improving the good science that does get published. Eight years on, I am not convinced that either of those things is as true as I initially thought, and I’m increasingly aware of the monumental amount of time and effort it soaks up. Do the advantages outweigh the disadvantages? Five years ago I would have unhesitatingly said yes. A year ago, I’d have been unsure. Now, I am pretty much convinced that peer-review — at least as we practice it — is an expensive hangover from a former age, and does more harm than good.

What’s that? Evidence, you say? There’s plenty. We all remember Arsenic Life: pre-publication peer-review didn’t protect us from that. We all know of papers in our own fields that should never have been published — for a small sample, google “failure of peer-review” in the Dinosaur Mailing List archives. Peer-review doesn’t protect us from these and they have to be sorted out after publication by criticism, rebuttal and refinement. In other words, by the usual processes of science.

So pre-publication peer-review is not getting the job done as a filter. What about its role in improving papers that do get published? This does happen, for sure; but speaking as a veteran of 30 submissions, my experience has been that no more than half of my reviews have had anything constructive to suggest at all, and most of those that have improved my manuscripts have done so only in pretty trivial ways. If I add up all the time I’ve spent handling and responding to reviewer comments and balance it up against the improvement in the papers, my honest judgement is that it’s not been worth it. The improvement in the published papers is certainly not worth as much as all the extra science I could have been making instead of jumping through hoops.

And that of course is ignoring the long delays that peer-review imposes even in the best-case scenario of resubmit-with-revisions, and the much longer ones that result when reviews result in your having to reformat a manuscript and start the whole wretched process again at another journal.

All of this is much to my surprise, having been a staunch advocate of peer-review until relatively recently. But here’s where I’ve landed up, despite myself: I think the best analogy for our current system of pre-publication peer-review is that it’s a hazing ritual. It doesn’t exist because of any intrinsic value it has, and it certainly isn’t there for the benefit of the recipient. It’s basically a way to draw a line between In and Out. Something for the inductee to endure as a way of proving he’s made of the Right Stuff.

So: the principal value of peer-review is that it provides an opportunity for authors to demonstrate that they are prepared to undergo peer-review.

It’s a way of separating the men from the boys. (And, yes, the women from the girls, though I can’t help thinking there’s something very stereotypically male about our confrontational and oppositional review system.)

Finally, I should address one more thing: is this just whining from an outsider who can’t get in and thinks it’s a conspiracy? I don’t think so. I’ve run the peer-review gauntlet quite a few times now — my publications are into double figures, which doesn’t make me a seasoned professional but does show that I am pretty serious. In other words, I am inside the system that I’m criticising.

For full disclosure, I should make it clear that I am writing this a day after having had a paper rejected by reviewers. So if you like, you can write it off as the bitter ramblings of a resentful man. But the truth is, while this was the immediate trigger for writing this post, the feeling has been building up for a while.

Next time: some of the details of why my paper was rejected, and why I think they’re dumb reasons. In part three: what peer-review should actually be for, and what I plan to do with the paper now.

39 Responses to “Where peer-review went wrong”


  1. I agree with you. I’ve started posting my pre-prints on arXiv.org. That way it’s freely available right away (it shows up on Google Scholar within a few days) while the paper is making its way through the regular publication process.

    arXiv is for research and journals are for CVs: http://proteinsandwavefunctions.blogspot.dk/2012/04/arxiv-is-for-research-and-journals-are.html

  2. Mike Taylor Says:

    I would unhesitatingly post my manuscripts on arXiv if it accepted palaeontology — alas, it does not: the closest thing in its subject list is “Quantitative Biology”, which is not very close. To my mind the most glaring absence from the open-access scene at the moment is something like arXiv for the other sciences — or, better still, the extension of arXiv to all of science.


  3. There isn’t a “chemistry” category either, but I post there anyway. They have accepted the manuscripts thus far.

  4. Mike Taylor Says:

    Interesting. What category do you post in?


  5. Usually “chemical physics”. I think it is largely irrelevant, though; people are not going to discover my papers by browsing arXiv, they are going to find them via search engines like Google Scholar.

    Also, I don’t think anyone at arXiv checks/cares.


  6. Thank you! As an ex-university press director, I agree with you, having learned this stuff by experience in the trenches, so that I came to your conclusion many years ago. My favourite study was by Paul Zeleza in the 1990s, when he found that scholarly literature on African Studies was being peer reviewed overwhelmingly by white men in the UK. I like your analogy with hazing, because, of course, peer review is not a peer-to-peer activity at all, but riddled with hierarchical rituals and plagued by turf wars.
    More research needs to be done on the gender and racial biases of our dominant scholarly publishing system.

  7. Matt Wedel Says:

    Two relevant, timely anecdotes:

    The Cerda et al. paper on rampant saltasaurine pneumaticity–which I said, and still believe, is the most important paper on sauropod pneumaticity ever–was rejected from several outlets before being accepted at Palaeontologische Zeitschrift. Who did this serve? I don’t know where it was sent before. One could argue that maybe the authors were too ambitious and sent it to Nature. I would argue back that that particular paper deserved to be in Nature.

    My wife had a phys anth paper rejected last week. One of the reviewers said that she should have involved a radiologist in the study. Apparently that friggin’ moron didn’t realize that a radiologist WAS AN AUTHOR.

  8. Mike Taylor Says:

    The Cerda et al. paper on rampant saltasaurine pneumaticity … was rejected from several outlets before being accepted at Palaeontologische Zeitschrift. Who did this serve?

    Exactly. Who, among those in the world who care about saltasaurine pneumaticity, is going to think, “Oh, I won’t bother to read that, it was only in Palaeontologische Zeitschrift”? No-one. Because that just isn’t how anyone makes their reading choices from the vast firehose of new papers.

    All that was achieved was (A) to prevent the authors from getting on with new work by making them repeatedly revise and reformat work they’d already done, and (B) to prevent everyone else from learning about what they’d discovered.

    I’ll say no more now, for fear of treading on the toes of Part 3 of this series.


  9. […] Last time I argued that traditional pre-publication peer-review isn’t necessarily worth the heavy burden it imposes. I guess no-one who’s been involved in the review process — as an author, editor or reviewer — will deny that it imposes significant costs, both in the time of all the participants, and in the delay in getting new work to press. Where I expected more pushback was in the claim that the benefits are not great. […]

  10. Richard Says:

    As the editor of the Cerda et al. paper at Paläontologische Zeitschrift, I would just like to point out that that paper, in my opinion, benefited substantially from peer review from two referees. Indeed, nearly all papers that I have edited across five different journals in recent years, as well as my own papers, came out stronger after peer review. If the Cerda et al. paper was rejected at different journals first it was likely because of perceived “impact” and importance, which is a separate issue to whether or not peer review is a good idea.

    Your “evidence” for the failure of peer review is simple anecdote as far as I can see. Many of us give weeks (or even months) of our time unpaid each year to reviewing and editing manuscripts to improve the quality of published science; from what I see the system is not perfect (we all have a horror story or two), but it generally works well and manuscripts are substantially improved as a result.

  11. Bill Parker Says:

    Mike,

    I fear you may be taking some steps here towards the ‘dark side’ (you know who/where I mean). Don’t give in. The system most definitely is not perfect, but we can realize this as individuals and strive to provide good reviews ourselves. Hopefully/eventually this will improve the system. But we can’t really go down the path certain other individuals have chosen to take (for simplicity and complete control) and against which we fought so valiantly.

    Overall I have to agree with Richard, ignoring the poor reviewers and the deliberate rejectors (which do certainly exist and are all based on personal politics), that the majority of the time the reviews do serve their purpose and improve the manuscript.

    I do, however, strongly agree with your posting that rejection simply because the paper ‘doesn’t fit the journal’ (or rather certain editors’ ambitions, I suppose) is absolute BS.

  12. Mike Taylor Says:

    I fear you may be taking some steps here towards the ‘dark side’ (you know who/where I mean).

    I do know who/where you mean, and I hear your warning.

    Don’t give in. The system most definitely is not perfect, but we can realize this as individuals and strive to provide good reviews ourselves.

    As indeed I do. The problem is, things are not set up in a way that causes any correlation between the quality of the reviews I provide and those I get in return. I can be as Harris-like as I want, and I still might roll a one on the Handling Editor’s D6, and have my work sent to a hostile reviewer who will not recommend acceptance whatever it says. Or I might give out crappy one-liner reviews, but roll a six and get a detailed, positive, constructive critique of my work. Or indeed I might get a blithe yeah-that’ll-do one-liner, in which case I haven’t really undergone peer-review at all, yet my work gets to wear the Peer Reviewed badge.

    The more I write about this, the more I wonder whether my biggest issue with peer-review as it currently stands is what a crap-shoot it is. So very much depends on who the editor happens to send the manuscript to. That can’t be right, can it?

    (An experiment occurs to me. Send the exact same manuscript repeatedly to the same journal. If the journal is big enough that a different HE is assigned each time, they might not realise what’s going on. Then if the second HE happens to choose nicer/better reviewers than the first, a paper once rejected as not good enough for the journal could still end up published there. I wonder how often this happens.)

    Hopefully/eventually this will improve the system. But we can’t really go down the path certain other individuals have chosen to take (for simplicity and complete control) and against which we fought so valiantly.

    And yet … the problem with the group you allude to wasn’t that their journal wasn’t peer-reviewed — indeed, as you know, it is a peer-reviewed journal, at least in some box-checky way. The problem was the plagiarism, not the lack of peer-review. If anything, your experiences stand as a fine warning of why the Peer-Reviewed stamp should not be given credence as an indication of respectability.

    I really don’t know what I think about all this. I just know that I’m nowhere near as certain about it now as I was a few years ago.


  13. I’m curious about something. Perhaps I asked this question before during the last time peer-review was a series, but here goes:

    What would be the value of presenting a submittable ms to a public hosting site (like, say, a “blog”) and have it crowd-reviewed? There are a ridiculous number of professional and interested science nerds who follow THIS blog in particular, and of those, many are people with a foot in the subject areas of both the authors and the blog. We already do a form of this by presenting the paper as “drafts” for review with particular colleagues/friends/frenemies/collenemies prior to submission, so why not present this at large, then submit? You’ll manage, in many ways, to do the job that professional reviewers would be doing by allowing these reviewers to choose the time of review and not be as burdened by time constraints. (It might even save your editor some hassle.)

    Of course, the caveat is that this isn’t “blind review,” but then we’ve noted that in some specialities, there’s no such thing as “blind review” — you all know each other.

  14. Mike Taylor Says:

    I’m curious about something. Perhaps I asked this question before during the last time peer-review was a series …

    I’m pretty sure we’ve never done a series on peer-review before.

    What would be the value of presenting a submittable ms to a public hosting site (like, say, a “blog”) and have it crowd-reviewed?

    Well, that’s essentially what we did with the sequence of six posts starting with Neural spine bifurcation in sauropods, Part 1: what we knew a month ago. Those posts together pretty much constitute the first draft of a paper. The six parts garnered 11, 18, 11, 16, 9 and 37 comments, for a total of 102. Now by no means all those comments were anything like “reviews”, but I think it’s still fair to say that we got a good, solid helping of pre-submission feedback along the way.

    The resulting paper is now in review at an open-access journal. (Although the version we submitted had changed quite a bit more than either of us had imagined from being a simple concatenation of those six posts.)

  15. Frederik Says:

    @Mike Taylor

    “Overall I have to agree with Richard, ignoring the poor reviewers and the deliberate rejectors (which do certainly exist and are all based on personal politics), that the majority of the time the reviews do serve their purpose and improve the manuscript.”

    Why should we leave aside the existence of poor reviewers (not only in terms of quality but also considering their income which – again – can have a relevant impact on the quality of the review) and deliberate rejectors? I think we all agree that papers CAN be elaborated and improved by a focused dialogue with a smart person.
    However, the dialogue in peer-review proceedings is hierarchical. If the author doesn’t find a peer-reviewed journal that believes in the plausibility, relevance and suitability of his or her paper, he or she won’t get it published. I wouldn’t consider this a problem (because you can always publish it online) if there weren’t a trend to treat peer review as a quality criterion, when it actually shows only the appreciation of the reviewer(s). This affects citation strategies as well as science politics. Impact factors decide upon the evaluation (and thus the financing) of scientific institutes and careers; consequently there is pressure towards a certain publication habit, and this I find highly problematic.

    Peer review isn’t necessarily “bad” or anything – I strongly believe we have no alternative to the sorting and filtering of knowledge – but in a larger context today’s review proceedings conceal the political effect they have on science.

  16. Mike Taylor Says:

    Why should we leave aside the existence of poor reviewers (not only in terms of quality but also considering their income which – again – can have a relevant impact on the quality of the review)

    Frederik, I don’t understand your point about income. You know that reviewers are never paid for their work, right?

  17. Frederik Says:

    Yes, Mike, I know. I probably misspoke here – pardon my English – because that’s exactly what I was referring to. And while I am not convinced it should be different, I think we all know that professional scientists are constantly running low on time, and we are probably all aware of the economic pressure towards efficiency (maybe things are different in your country), which – at least in my opinion – has an important impact on the quality of reviews.

    Some publishing houses recommend ~4 hours for a review. I don’t find that a lot of time in which to create a profound review, even if you are an expert, but it is still a lot to ask considering the full timetables of most researchers.


  18. […] — Bora Zivkovic (@BoraZ) August 7, 2012 New on SV-POW: Some More of Peer-Review’s Greatest Mistakes svpow.com/2012/08/06/som… See pt1: Where Peer-Review Went Wrong svpow.com/2012/08/05/whe… […]

  19. Margot Says:

    There may be a positive side to your rant. If the reviewers don’t add much to your manuscripts, it might mean there isn’t much to improve. I’ve never had an article published that didn’t seriously benefit from peer review. (Read: I submit rubbish.) And of course, you do get some unnecessarily sour and/or nitpicky comments, but in my case that’s worth it. Maybe next time consider submitting your first draft, and see if then you get more constructive reviews! I know reviewers should just say “publish as is” when the manuscript merits that, but well, if they don’t, you might as well see if you can make that work for you…


  20. One could say that reviewers are paid implicitly when their job requirements as research scientists require them to perform review tasks. Thus, the cost of review is passed down as a function of academia: the reviewer is paid indirectly by being given greater allowances on other tasks, or greater access, or the ability to publish in the house journal, etc., thus defraying the overhead costs of review. It is not fiscal, for sure; the reviewer is not paid by the house to which he offers his service, but this doesn’t mean he receives no benefit.

  21. Matt Wedel Says:

    Oh, there is a benefit to doing good reviews, but it’s not institutional. There are probably institutions that track whether their faculty provide reviews, and some may track how many, but I doubt if anyone in a position of power over a given researcher has any idea how good those reviews actually are (unless possibly that researcher writes exceptionally good or bad reviews). Rather, I think the primary benefit–or deficit–is in terms of professional reputation. There are some folks that are just known for writing good reviews. Likewise, there are a handful that are known for writing terrible reviews. How much those reputations actually matter is up for discussion, but they do exist and from what I’ve seen they’re mostly on point.

  22. Frederik Says:

    Actually I didn’t want to get too deep into the discussion on payment in science. I have personally rarely met a scientist who didn’t routinely work longer hours without getting paid for it, so I find it hard to discuss whether there is a reward or not – but of course for your career it can always be worthwhile to do something even if you are not paid for it.
    Much more relevant, it seems to me, is the fact that peer review is widely accepted as an indicator of quality while technically it means nothing more than “approved by person X”; and if the review is not published afterwards, I (as a reader) cannot even retrace whether this opinion is trustworthy or not. To me this is a systemic problem, because it affects not only reading lists and citations but also trends in science funding. While I don’t think there should be only self-publishing, I think that peer review has to work more like an open debate in which the process of selection is transparent. I wrote about this on my blog, in case it is of interest.

  23. Mike Taylor Says:

    Frederik makes an excellent point: peer-review as it is currently practiced is extremely unscientific, in that it gives us only the results of a process with no insight into the process at all, and therefore no way to replicate it or validate it. In many cases, that output is literally a single bit of information (“pass” or “fail”).

    For peer-review to be scientific, it needs to be subject to examination and replication, which means that the following ALL need to be freely available along with the resulting paper: the submitted manuscript, the reviews themselves, the editor’s letter that results from them, the author’s response to the editor (which often contains explanations of why the reviewers’ suggestions should not be followed) and the paper that results from this process.

    (For true transparency, the identities of the reviewers and editor should also be known. In that way, the quality of their input can be judged. I know there are arguments for anonymous peer-review; I don’t find any of them compelling enough to override the enormous disadvantage of obfuscation and lack of consequences.)


  24. […] Where peer-review went wrong and Some more of peer-review s greatest mistakes and What is this peer-review process anyway? by Mike Taylor […]


  25. I think the reasoning behind blind peer-review is sound, and almost certainly fully grounded in the premise that testers and tested individuals should NOT necessarily be aware of one another. This is usually why in blind tests a secondary set of individuals is used for various “priming” studies, to divorce the testers from any interaction with the subjects. A person is given a task, blind to who he is reviewing, in order to segregate personal bias from review. This is also why editors (by and large) remove author identifiers when possible. I know you guys know this. This IS scientifically founded. It ISN’T useful when the field is small enough that the reviewers KNOW the work’s authors in question, but otherwise should be.

    As I’m sure you know, the difficulty of performing “blind” review increases as the specialty of a discipline becomes narrower: sauropod biomechanics, phylogeny, dietary mechanics and ecology … these are NOT broad groups, and in many cases their principal researchers largely overlap. But given this quality (reviewers in small disciplines know each other), one can notice certain things that “give away” the researcher:

    Personal qualities in one’s work, and broadcast research directions.

  26. Martin Hill Says:

    I agree that peer review is too slow and doesn’t do the quality filtering that some advocates claim for it. At the moment, though, the only ‘fixes’ seem to be around trying to fix peer review, rather than approaching the problem from first principles and borrowing from other industries: i.e., if you want quality, how do you control quality? How do people do it now? What transfers to research?

    There are existing mechanisms and some institutions implement them, but they are patchy. Those mechanisms should be at the forefront, with the appropriate certificates, not a rather poorly defined ‘peer review’.

    So. How do we encourage that to happen?

  27. Martin Hill Says:

    Adding to that – it’s not enough just to publish openly; that just defers peer review to crowd review and adds to the vast clutter of stuff of varying quality that even more people have to read. We should be able to do better than that.


  28. […] “principle value of peer-review…opportunity for authors to demonstrate that they are prepared to undergo peer-review” https://svpow.com/2012/08/05/where-peer-review-went-wrong/ […]


  29. […] the effectiveness (or rather, ineffectiveness) of peer review and the standard publishing system (here, here and here are the most recent entries), and this included a small discussion on the form and […]


  30. […] Because it has not seen any peer review, and as much as some of my colleagues hate peer review (here, here and here), it does sort out a lot of papers with major flaws. And it shows a lot of signs of […]

  31. David Marjanović Says:

    I think the real problem with peer review as currently practiced lies elsewhere: there are usually only two reviewers in the case of biology journals.

    Even the linguists have gone to three.

    And yes, it’s quite stupid that you have to format your manuscript before you even submit it. Why not after it’s accepted?!? Comptes Rendus Palevol is moving in that direction, if it hasn’t already.


  32. […] H on my post a while ago, and to an interesting series of posts by Mike Taylor (posts 1, 2, 3), I wondered to what extent the differences in point of view on peer review […]


  33. […] manuscript in 108 words. So two words per page, or about 2/3 of a word per day of review time. But let’s not dwell on that.) Figure 6. Basic cervical vertebral architecture in archosaurs, in posterior and lateral views. […]


  34. […] about my increasing disillusionment with the traditional pre-publication peer-review process [post 1, post 2, post 3]. By coincidence, it was in between writing the second and third in that series of […]

  35. Mike Taylor Says:

    Six months on, I just re-read my own comment above:

    For peer-review to be scientific, it needs to be subject to examination and replication, which means that the following ALL need to be freely available along with the resulting paper: the submitted manuscript, the reviews themselves, the editor’s letter that results from them, the author’s response to the editor (which often contains explanations of why the reviewers’ suggestions should not be followed) and the paper that results from this process.

    How delightful that this is exactly what did happen, half a year later, with our PeerJ paper!


  36. […] that is a useful thing to be able to say, for sure. Peer review is important as a stamp of serious intent. But it’s a long way from a mark of reliability, and enormous damage is done by the […]


  37. […] that is a useful thing to be able to say, for sure. Peer review is important as a stamp of serious intent. But it’s a long way from a mark of reliability, and enormous damage is done by the widespread […]


  38. […] as a mark of correctness, but of seriousness. Back in the original SV-POW! series on peer-review (Where peer-review went wrong, Some more of peer-review’s greatest mistakes, What is this peer-review process anyway?, Well, […]

