My new article is up at the Guardian. This time, I have taken off the Conciliatory Hat, and I’m saying it how I honestly believe it is: publishing your science behind a paywall is immoral. And the reasons we use to persuade ourselves it’s acceptable really don’t hold up.

Read Choose open access: publishing your science behind a paywall is immoral

Because for all that we rightly talk about the financial efficiencies of open access, when it comes right down to it OA is primarily a moral, or if you prefer ideological, issue. It’s not really about saving money, though that’s a welcome side-effect. It’s about doing what’s right.

I’m expecting some kick-back on this one. Fire away; I’ll enjoy the discussion.

Counting beans

October 10, 2012

The reason most of my work is in the form of journal articles is that I didn’t know there were other ways to communicate. Now that I know that there are other and in some ways demonstrably better ways (arXiv, etc.), my enthusiasm for sending stuff to journals is flagging. Whereas before I was happy to do it and the tenure beans were a happy side-effect, now I can see that the tenure beans are in fact shackles preventing me from taking a better path.

I’ve recently written about my increasing disillusionment with the traditional pre-publication peer-review process [post 1, post 2, post 3]. By coincidence, it was in between writing the second and third in that series of posts that I had another negative peer-review experience — this time from the other side of the fence — which has left me even more ambivalent about the way we do things.

On 17 July I was asked to review a paper for Biology Letters. Having established that it was to be published as open access, I agreed, was sent the manuscript, and two days later sent a response that recommended acceptance after only minor revision. Eleven days later, I was sent a copy of the editor’s decision — a message that included all three reviewers’ comments. I can summarise those reviewers’ comments by directly quoting as follows:

Reviewer 1: “It is good to have this data published with good histological images. I have only minor comments – I think the ms should generally be accepted as it is.”

Reviewer 2 (that’s me): “This is a strong paper that brings an important new insight into a long-running palaeobiological issue […] and should be published in essentially its current form.”

Reviewer 3: “This manuscript reports exciting results regarding sauropod biomechanics […] The only significant addition I feel necessary is to the concluding paragraph.”

So imagine my surprise when the decision letter said:

I am writing to inform you that your manuscript […] has been rejected for publication in Biology Letters.

This action has been taken on the advice of referees, who have recommended that substantial revisions are necessary. With this in mind we would like to invite a resubmission, provided the comments of the referees are taken into account. This is not a provisional acceptance.

The resubmission will be treated as a new manuscript.

I can’t begin to imagine how they turned three “accept with very minor revisions” reviews into “your manuscript has been rejected … on the advice of referees, who have recommended that substantial revisions are necessary”.

In fact, let’s dump the “I can’t imagine how” euphemism and say it how it is: “reviewers recommended substantial revisions” is an outright lie. The reviewers recommended no such thing. The rejection can only be because it’s what the editor wanted to do in spite of the reviewers’ comments, not because of them. It left me wondering why I bothered to waste my time offering them an opinion that they were only ever going to ignore.

Then six days ago I heard from the lead author, who had just had a revised version of the same manuscript accepted. (It had not come back to me for review, as the editor had said would happen with any resubmission).

The author wrote to me:

The paper will be published (open access) at the 3rd of Octobre. When I had submitted the corrected version of the ms acceptance was only a formality. So [name] was right, they just want to keep time between submission and publishing date short.

Well. We have a word for this. We call it “lying”. When the editor wrote “your manuscript […] has been rejected for publication in Biology Letters … With this in mind we would like to invite a resubmission … This is not a provisional acceptance. The resubmission will be treated as a new manuscript”, what she really meant was “your manuscript […] has been provisionally accepted, please send a revision. The resubmission will not be treated as a new manuscript”.

I find this lack of honesty disturbing.

Because we’re not talking here about some shady, obscure little third-world publisher that no-one’s ever heard of with fictional people on the editorial board. We’re talking about the Royal Freaking Society of London. We’re talking about a journal (Biology Letters) that was calved off a journal (Proceedings B) that emerged from the oldest continuously published academic journal in the world (Philosophical Transactions). We’re talking about nearly three and a half centuries of academic heritage.

And they’re lying to us about their publication process.

When did they get the idea that this was acceptable?

And what else are they lying to us about? Can we trust (for example) that when editors or members submit papers, they are subjected to the same degree of rigorous filtering as every other submission? I would have assumed that, yes, of course they do. But I just don’t know any more.

Sampled specimens, sampling locations and cross sections of sauropod cervical ribs. (a) Anterior neck of Brachiosaurus brancai (Museum für Naturkunde, Berlin) with hyperelongated and overlapping cervical ribs. (b) Three cross sections were taken along the proximal part of the posterior process of a left mid-neck cervical rib of Mamenchisaurus sp. (SIPB 597) in ventral view. Note the medially pointed ventral part of the cervical rib. (c) Seven cross sections were taken along the left ninth cervical rib of B. brancai (MB.R.2181.90), which is figured in lateral view. (d) Neck of Diplodocus carnegi (cast in the Museum für Naturkunde, Berlin) with short cervical ribs. (e) Six cross sections were taken along the right mid-neck cervical rib of cf. Diplodocus sp. (Sauriermuseum Aathal, Aathal HQ2), which is figured in ventral view. Note the morphological differences of this cervical rib when compared with the hyperelongated cervical rib of B. brancai. (Klein et al. 2012:figure 1)

The paper in question is Klein et al.’s (2012) histological study confirming that the bony cervical ribs of sauropods are, as we suspected, ossified tendons — as we assumed in our recently arXiv’d sauropod-neck paper. I am delighted to be able to say that it is freely available. At the bottom of the first page, it says “Received 21 August 2012; Accepted 13 September 2012”, for a submission-to-acceptance time of 23 days. But I know that the initial submission — and remember, the final published version is essentially identical to that initial submission — was made before 17 July, because that’s when I was asked to provide a peer-review. Honest reporting would give a submission-to-acceptance time of 58 days, which is two and a half times as long as the claimed figure.
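For anyone who wants to check the arithmetic, the gap between those dates is easy to compute. A minimal sketch (the 17 July date is when I was asked to review, so the true submission was no later than that):

```python
from datetime import date

# Dates printed on the paper's first page (for the resubmission).
received = date(2012, 8, 21)   # "Received 21 August 2012"
accepted = date(2012, 9, 13)   # "Accepted 13 September 2012"

# The original manuscript was already out for review by this date.
review_request = date(2012, 7, 17)

reported_days = (accepted - received).days        # what the journal reports
honest_days = (accepted - review_request).days    # a lower bound on the truth

print(reported_days)  # 23
print(honest_days)    # 58
```

So the honest figure is at least 58 days against the reported 23, a factor of about two and a half.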

Now the only reason for a journal to report dates of submission and acceptance at all is to convey the speed of turnaround, and lying about that turnaround time completely removes any utility those numbers might have. It would be better to not report them at all than to fudge the data.

This is another way that the high-impact fast-turnaround publishing system is so ridiculously gamed that it actually hurts science. We have the journal lying to authors about the status of their manuscripts so that it can then lie to the readers about its turnaround times. That’s deeply screwed up. And it’s hard for authors to blow the whistle — they don’t want to alienate the journals and the editors who have some veto power over their tenure beans, and reviewers don’t usually have all the information. The obvious solution is to make the peer-review process more open, and to make editorial decisions more transparent.

That, really, is only what we’d expect from the Royal Society. Isn’t it?

Note. Nicole Klein did not know I was going to post about this. I want to make that clear so that no-one at the Royal Society thinks that she or any of her co-authors is making trouble. All the trouble is of my making (and, more to the point, the Royal Society’s). Someone really has to shine a light on this misbehaviour.

Update (12 March 2014)

I should have noted this before, but on 10 May 2013, the Royal Society sent me an update, explaining some improvements in their process. But as noted in my write-up, it doesn’t actually solve the problem. Doing so would simply require giving three dates: Received, Revised and Accepted. But as I write this, new Proc. B articles still only show Received and Accepted dates.


Subsequent posts discuss how this issue is developing:

Posting palaeo papers on arXiv

September 28, 2012

Over on Facebook, where Darren posted a note about our new paper, most of the discussion has not been about its content but about where it was published. We’re not too surprised by that, even though we’d love to be talking about the science. We did choose arXiv with our eyes open, knowing that there’s no tradition of palaeontology being published there, and wanting to start a new tradition of palaeontology being routinely published there. Having now made the step for the first time, I see no reason ever to not post a paper on arXiv, as soon as it’s ready, before — or maybe even instead of — submitting it to a journal.

(Instead of? Maybe. We’ll discuss that below.)

The key issue is this: science isn’t really science until it’s out there where it can be used. We wrote the bulk of the neck-anatomy paper back in 2008 — the year that we first submitted it to a journal. In the four years since then, all the observations and deductions that it contains have been unavailable to the world. And that is stupid. The work might just as well never have been done. Now that it’s on arXiv, that’s over. I was delighted to get an email less than 24 hours after the paper was published, from an author working on a related issue, thanking us for posting the paper, saying that he will now revise his own in-prep manuscript in light of its findings, and cite our paper. Which of course is the whole point: to get our science out there where it can do some damage.

Because the alternative is horrible, really. Horribly wasteful, horribly dispiriting, horribly retarding for science. For example, a couple of weeks ago in his SVPCA talk, David Norman was lamenting again that he never got around to publishing the iguanodont systematic work that was in his dissertation, I-don’t-know-how-many-years-ago. The result of that interminable delay is that others have done other, conflicting iguanodont systematic work, and Norman is now trying belatedly to undo that and bring his own perspective. A terrible and unnecessary slowing of ornithopod science, and a waste of duplicated effort. (Thankfully it’s only ornithopods.)

And of course David Norman is very far from being alone. Pretty much any palaeontologist you talk to will tell you of a handful of papers — many more in some cases — that were finished many years previously but have never seen the light of day. (I still have a couple myself, but there is no point in resurrecting them now because progress has overtaken them.) I wonder what proportion of all Ph.D work ever sees the light of day? Half? Less? It’s crazy.

Figure 8. Sauropod cervical vertebrae showing anteriorly and posteriorly directed spurs projecting from neurapophyses. 1, cervical 5 of Sauroposeidon holotype OMNH 53062 in right lateral view, photograph by MJW. 2, cervical 9 of Mamenchisaurus hochuanensis holotype CCG V 20401 in left lateral view, reversed, from photograph by MPT. 3, cervical 7 or 8 of Omeisaurus junghsiensis Young, 1939 holotype in right lateral view, after Young (1939, figure 2). (No specimen number was assigned to this material, which has since been lost. D. W. E. Hone personal communication, 2008.)

Publish now, publish later

So, please folks: we all need to be posting our work on preprint servers as soon as we consider it finished. It doesn’t mean that the posted versions can’t subsequently be obsoleted by improved versions that have gone through peer-review and been published in conventional journals. But it does mean that the world can know about the work, and build on it, and get the benefit of it, as soon as it’s done.

You see, we have a very fundamental problem in academia: publishing fulfils two completely separate roles. Its primary role (or at least the role that should be primary) is to make work available to the community; the secondary role is to provide a means of keeping score — something that can be used when making decisions about who to appoint to jobs, when to promote, who gets grants, who gets tenure and so on. I am not going to argue that the latter shouldn’t happen at all — clearly a functioning community needs some way to infer the standing of its participants. But I do think it’s ridiculous when the bean-counting function of publication trumps the actual publication role of publication. Yet we’ve all been in a position where we have essentially complete work that could easily go on a blog, or in the PalAss newsletter, or in a minor journal, or somewhere — but we hang onto it because we want to get it into a Big Journal.

Let me say again that I do realise how unusual and privileged my own position is: that a lot of my colleagues do need to play the Publication Prestige game for career reasons (though it terrifies me how much time some colleagues waste squeezing their papers into two-and-a-half-page format in the futile hope of rolling three sixes on the Science ‘n’ Nature 3D6). Let’s admit right now that most palaeontologists do need to try to get their work into Proc B, or Paleobiology, or what have you. Fair enough. They should feel free. But the crucial point is this: that is no reason not to post pre-prints so we can all get on with actually benefitting from your work in the mean time.

Actually, I feel pretty stupid that it’s taken me this long to realise that all my work should go up on arXiv.

Figure 11. Archosaur cervical vertebrae in posterior view, showing muscle attachment points in phylogenetic context. Blue arrows indicate epaxial muscles attaching to neural spines, red arrows indicate epaxial muscles attaching to epipophyses, and green arrows indicate hypaxial muscles attaching to cervical ribs. While hypaxial musculature anchors consistently on the cervical ribs, the principal epaxial muscles migrate from the neural spine in crocodilians to the epipophyses in non-avian theropods and modern birds, with either or both sets of muscles being significant in sauropods. 1, fifth cervical vertebra of Alligator mississippiensis, MCZ 81457, traced from 3D scans by Leon Claessens, courtesy of MCZ. 2, eighth cervical vertebra of Giraffatitan brancai paralectotype HMN SII, traced from Janensch (1950, figures 43 and 46). 3, eleventh cervical vertebra of Camarasaurus supremus, reconstruction within AMNH 5761/X, “cervical series I”, modified from Osborn and Mook (1921, plate LXVII). 4, fifth cervical vertebra of the abelisaurid theropod Majungasaurus crenatissimus, UA 8678, traced from O’Connor (2007, figures 8 and 20). 5, seventh cervical vertebra of a turkey, Meleagris gallopavo, traced from photographs by MPT.


So are there any special cases? Any kinds of papers that we should keep dry until they make it into actual journals? I can think of two classes that you could argue for — one of them convincingly, the other not.

First, the unconvincing one. When I discussed this with Matt (and half the fun of doing that is that usually neither of us really knows what we think about this stuff until we’re done arguing it through), he suggested to me that we couldn’t have put the Brontomerus paper on arXiv, because that would have leaked the name, creating a nomen nudum. My initial reaction was to agree with him that this is an exception. But when I thought about it a bit more, I realised there’s actually no compelling reason not to post such a paper on arXiv. So you create a nomen nudum? So what? Really: what is the negative consequence of that? I can’t think of one. OK, the name will appear on Wikipedia and mailing lists before the ICZN recognises it — but who does that hurt? No-one that I can think of. The only real argument against posting is that it could invite scooping. But is that a real threat? I doubt it. I can’t think of anyone who would be barefaced enough to scoop a taxon that had already been published on arXiv — and if they did, the whole world would know unambiguously exactly what had happened.

So what is the one real reason not to post a preprint? I think that might be a legitimate choice when publicity needs to be co-ordinated. So while nomenclatural issues should not have stopped us from arXiving the Brontomerus paper, publicity should. In preparation for that paper’s publication day, we did a lot of careful work with the UCL publicity team: writing non-specialist summaries, press-releases and FAQs, soliciting and preparing illustrations and videos, circulating materials under embargo, and so on. In general, mainstream media are only interested in a story if it’s news, and that means you need to make sure it’s new when they first hear about it. Posting the article in advance on a publicly accessible archive would mess that up, and probably damage the work’s coverage in the press, TV and radio.

Publication venues are a continuum

It’s become apparent to us only gradually that there’s really no clear cut-off where a paper becomes “properly published”. There’s a continuum that runs from least to most formal and exclusive:

SV-POW! — arXiv — PLOS ONE — JVP — Nature

1. On SV-POW!, we write what we want and publish it when we want. We can promise you that it won’t go away, but you only have our word for it. But some of what we write here is still science, and has been cited in papers published in more formal venues — though, as far as I know, only by Matt and me so far.

2. On arXiv, there is a bit more of a barrier to clear: you have to get an existing arXiv user to endorse your membership application, and each article you submit is given a cursory check by staff to ensure that it really is a piece of scientific research rather than a diary entry, movie review or spam. Once it’s posted, the paper is guaranteed to remain at the same URL, unchanged, so long as arXiv endures (and it’s supported by Cornell). Crucially, the maths, physics and computer science communities that use arXiv uncontroversially consider this degree of filtering and permanence sufficient to constitute a published, citeable source.

3. At PLOS ONE, your paper only gets published if it’s been through peer-review — but the reviewing criteria pertain only to scientific soundness and do not attempt to evaluate likely impact or importance.

4. At JVP and other conventional journals, your paper has to make it through a two-pronged peer-review process: it has to be judged both sound scientifically (as at PLOS ONE) and also sufficiently on-topic and important to merit appearing in the journal.

5. Finally, at Nature and Science, your paper has to be sound and be judged sexy — someone has to guess that it’s going to prove important and popular.

Where along this continuum does the formal scientific record begin? We could make a case that all of it counts, provided that measures are taken to make the SV-POW! posts permanent and immutable. (This can be done by submitting them to WebCite, or to a service such as the one Nature Precedings used to provide.) But whether or not you accept that, it seems clear that arXiv and upwards is permanent, scientific and citeable.

This raises an interesting question: do we actually need to go ahead and publish our neck-anatomy paper in a more conventional venue? I’m honestly not sure at the moment, and I’d be interested to hear arguments in either direction. In terms of the progress of science, probably not: our actual work is out there, now, for the world to use as it sees fit. But from a career perspective, it’s probably still worth our while to get it into a journal, just so it can sit more neatly on our publication lists and help Matt’s tenure case more. And yet I don’t honestly expect any eventual journal-published version to be better in any meaningful way than the one on arXiv. After all, it’s already benefitted from two rounds of peer-review, three if you count the comments of my dissertation examiners. More likely, a journal will be less useful, as we have to cut length, eliminate illustrations, and so on.

So it seems to me that we have a hard choice ahead of us now. Call that paper done and move on to making more science? Or spend more time and effort on re-publishing it in exchange for prestige? I really don’t know.

For what it’s worth, it seems that standard practice in maths, physics and computer science is to republish arXiv articles in journals. But there are some scientists who routinely do not do this, instead allowing the arXiv version to stand as the only version of record. Perhaps that is a route best left to tenured greybeards rather than bright young things like Matt.

Figure 5. Simplified myology of the sauropod neck, in left lateral view, based primarily on homology with birds, modified from Wedel and Sanders (2002, figure 2). Dashed arrows indicate muscle passing medially behind bone. A, B. Muscles inserting on the epipophyses, shown in red. C, D, E. Muscles inserting on the cervical ribs, shown in green. F, G. Muscles inserting on the neural spine, shown in blue. H. Muscles inserting on the ansa costotransversaria (“cervical rib loop”), shown in brown. Specifically: A. M. longus colli dorsalis. B. M. cervicalis ascendens. C. M. flexor colli lateralis. D. M. flexor colli medialis. E. M. longus colli ventralis. In birds, this muscle originates from the processes carotici, which are absent in the vertebrae of sauropods. F. Mm. intercristales. G. Mm. interspinales. H. Mm. intertransversarii. Vertebrae modified from Gilmore (1936, plate 24).

Citing papers in arXiv

Finally, a practicality: since it’ll likely be a year or more before any journal-published version of our neck-anatomy paper comes out, people wanting to use it in their own work will need to know how to cite a paper in arXiv. Standard procedure seems to be just to use authors, year, title and arXiv ID. But in a conventional-journal citation, I like the way that the page-range gives you a sense of how long the paper is. So I think it’s worth appending page-count to the citations. And while you’re at it, you may as well throw in the figure and table counts, too, yielding the version that we’ve been using:

  • Taylor, Michael P., and Mathew J. Wedel. 2012. Why sauropods had long necks; and why giraffes have short necks. arXiv:1209.5439. 39 pages, 11 figures, 3 tables.

Let me begin with a digression. (Hey, we may as well start as we mean to go on.)

Citations in scientific writing are used for two very different reasons, but because the two cases have the same form we often confuse them. We may cite a work as an authority, to lend its weight to our own assertion, as in “Diplodocus carnegii seems to have had fifteen cervical vertebrae (Hatcher 1901)”; or we may cite a work to give it credit for an observation or concept, as in “… using the extant phylogenetic bracket (Witmer 1995)”.

The conflation of these two very different modes of citation causes some difficulty because, while many authors would never cite (say) a blog post as an authority, most would feel some obligation to cite it in order to give credit. You might not want to cite this SV-POW! post as authority for a length of 49 m for Amphicoelias fragillimus; but you would hardly use the sacrum illustration from this post in your own work without crediting the source.

So the fact that citations do two rather different jobs causes confusion.

When it comes to peer-review, things are worse: not only is it trying to do two different things at once (filtering manuscripts, and improving the ones it retains) but the filtering itself has two components — deciding (A) whether a manuscript is good science, and (B) whether it’s “a good fit” for the journal.

What does “a good fit” mean? Anything or nothing, unfortunately. In the case of Science ‘n’ Nature, of course, it means “is about a new feathered theropod”. For Paleobiology, it seems to mean “has lots of graphs and no specimen photos”. In the case of other journals, it’s much less predictable, and can often, it seems, come down to the whim of the individual reviewer. In many cases, infuriatingly, it can be a matter of whether the reviewer’s guess is that a work is “important” enough for the journal — something that you can never tell in advance, but which only becomes apparent in the years following publication.

As a result, a perfectly good piece of work — one which passes the peer-review process’s “is it good science?” filter with flying colours — can still get rejected, and indeed can be bounced around from journal to journal until the author loses interest, leaves the field or dies; or, of course, until the paper is accepted somewhere.

(The already disheartening process of shopping a paper around multiple journals until it finds a home is made utterly soul-crushing by the completely different formats that the different journals require their submissions in. But that’s a completely different rant.)

This is the Gordian knot that PLoS ONE set out to cut by simply declaring that all scientifically satisfactory work is a good fit. Reviewers for PLoS ONE are explicitly told not to make judgements about the probable impact of papers, and only judge whether the science is good. In this way, it’s left to the rest of world to evaluate the importance of the work — just as in fact it always does anyway (through citation, blog discussion, formal responses, and so on).

But eliminating the “good fit” criterion still leaves PLoS ONE reviewers with two jobs: judging whether a manuscript is scientifically sound, and helping the author to improve it. I find myself wondering whether there might be a way to decouple these functions, too.

Perhaps not: after all, they are somewhat intertwingled. The question “is this scientifically sound?” does not always receive a yes-or-no answer. The answer might be “yes, provided that the author’s conclusions are corroborated by a phylogenetic analysis”, for example.

I don’t have any good answers to propose here. (Not yet, anyway.) At this stage, I am just trying to think clearly about the problem, not to come up with solutions. But I do think we can see what’s going on with more clarity if our minds can separate out all the different roles that reviewers play. To my mind, dumping “impact assessment” from the review process is PLoS ONE’s greatest achievement. If we can pick things apart yet further, there may well be even greater gains to be had.

I guess we should be asking ourselves this: what, when it comes right down to it, is peer-review for? Back in the day, the filtering aspect was crucial because paper printing and distribution meant that there was a strict limit on how many papers could be published. That’s not true any more, so that aspect of peer-review is no longer relevant. But what else has the Internet changed? What can we streamline?

Last time I argued that traditional pre-publication peer-review isn’t necessarily worth the heavy burden it imposes. I guess no-one who’s been involved in the review process — as an author, editor or reviewer — will deny that it imposes significant costs, both in the time of all the participants, and in the delay in getting new work to press. Where I expected more pushback was in the claim that the benefits are not great.

Remember that the benefits usually claimed for peer-review are in two broad categories: that it improves the quality of what gets published, and that it filters out what’s not good enough to be published at all. I’ll save the second of these claims for next time. This time I want to look at the first.

The immediate catalyst is the two brief reviews with which a manuscript of mine (with Matt as co-author) was rejected two days ago from a mid-to-low ranked palaeo journal. I’m going to quote two sentences that rankle from one of the anonymous reviewers:

The manuscript reads as a long “story” instead of a scientific manuscript. Material and methods, results, and interpretation are unfortunately not clearly separated.

This, in the eyes of the reviewer, was one of the two deficiencies of the manuscript that led him or her to recommend rejection. (The other was that the manuscript is “too long”.) But as I’ve said repeatedly here and elsewhere, all good writing is story-telling. Scientific ideas, like all others, need to be presented as stories in order to sink into our brains. So what’s happened here is that, so far as I’m concerned, the reviewer has praised our manuscript; but he or she thinks it’s a criticism.

It’s not clear what we should do in response to such a review. To get our paper published in this journal, we could restructure the paper. We could draw every observation, comparison, discussion and illustration out of its current position in a single, flowing argument; and instead cram them all, out of their natural order, into the “scientific” structure that the reviewer evidently prefers. But I’m not willing to do this, because our judgement is that this would reduce the value of the resulting paper — making it harder to follow the argument. And I hate to do work with negative net value.

In this case, it’s even worse: the initial draft of this paper was in a much more conventional “scientific” structure. In this form, we submitted it to a different journal, whence it was rejected for completely different reasons which I won’t go into. Before submitting the new version, we re-tooled the paper to tell its story in the order that makes most sense. So the reviewer who recommended that the new version be rejected was (albeit unknowingly) requiring us to revert to an older and inferior version of the manuscript.

So this is a case where reviewers’ comments really don’t help at all. (To be fair to the reviewer in question, he or she did also say a lot of positive and encouraging things. But they are really just decoration on the big, fat REJECT.)

And sadly this kind of thing is not too unusual. Other reviews I’ve been sent have (A) demanded that we add a phylogenetic analysis even though it could not possibly tell us anything; (B) demanded that we remove a phylogenetic analysis; (C) rejected a paper due to consistently misunderstanding our anatomical nomenclature; (D) rejected my manuscript because they’d never heard of me (ad hominem review); and, my all-time most-hated, (E) basically told me to write a completely different paper instead.

(E) in particular is a disaster and makes me just want to throw my hands up in the air. Written a paper that analyses a diversity database to draw conclusions about changing clade sizes through time? Too bad, the reviewer wants you to write a literature review on how you assembled the database instead! Written a paper showing that two species are not conspecific? Tough luck, the reviewer wants you to write a paper on the meaning of “genus” instead! I just loathe this. I am increasingly of the opinion that the best response may be “I’ve written that manuscript too, it’s in review elsewhere” or similar.

So those kinds of reviews are a complete waste of time and energy.

Where is the value, then? Well, I’ve mentioned Jerry Harris several times before as someone whose reviews are full of detailed, helpful comments that really do improve papers. I aspire to review like Jerry when it falls to me to provide this service, and I hope I can be as constructive as he is. But really, this is a tiny minority. Most reviews that avoid the traps I mentioned above don’t have much of substance to say. Some “reviews” I’ve received have been only a few sentences long: in such cases it’s hard to believe that the reviewers have actually read the manuscript, and very hard to accept that they have life-or-death power over its fate.

I’ve been wary of writing this post because of the fear that it reads like a catalogue of whining about how hard-done-by I am. That’s not the intention: I want to talk about widespread problems with peer-review, and I wanted to make them concrete by giving real examples. And of course the examples available to me are the reviews my own work has received. Let me note once again that I have made it through the peer-review gauntlet many times, and that I’m therefore criticising the system from within. I’m not a malcontent claiming there’s a conspiracy to keep my work out of journals. I am a small-time but real publishing author who’s sick of jumping through hoops.

Next time: what peer-review is really for.

[Note: this post is by Mike. Matt hasn’t seen it, may not agree with it, and would probably have advised me not to post it if I’d asked him.]

The magic is going out of my love-affair with peer-review. When we started seeing each other, back in 2004, I was completely on board with the idea that peer-review improves the quality of the scientific record in two ways: by keeping bad science from getting published, and by improving the good science that does get published. Eight years on, I am not convinced that either of those things is as true as I initially thought, and I’m increasingly aware of the monumental amount of time and effort it soaks up. Do the advantages outweigh the disadvantages? Five years ago I would have unhesitatingly said yes. A year ago, I’d have been unsure. Now, I am pretty much convinced that peer-review — at least as we practice it — is an expensive hangover from a former age, and does more harm than good.

What’s that? Evidence, you say? There’s plenty. We all remember Arsenic Life: pre-publication peer-review didn’t protect us from that. We all know of papers in our own fields that should never have been published — for a small sample, google “failure of peer-review” in the Dinosaur Mailing List archives. Peer-review doesn’t protect us from these and they have to be sorted out after publication by criticism, rebuttal and refinement. In other words, by the usual processes of science.

So pre-publication peer-review is not getting the job done as a filter. What about its role in improving papers that do get published? This does happen, for sure; but speaking as a veteran of 30 submissions, my experience has been that no more than half of my reviews have had anything constructive to suggest at all, and most of those that have improved my manuscripts have done so only in pretty trivial ways. If I add up all the time I’ve spent handling and responding to reviewer comments and balance it up against the improvement in the papers, my honest judgement is that it’s not been worth it. The improvement in the published papers is certainly not worth as much as all the extra science I could have been making instead of jumping through hoops.

And that of course is ignoring the long delays that peer-review imposes even in the best-case scenario of resubmit-with-revisions, and the much longer ones that result when reviews result in your having to reformat a manuscript and start the whole wretched process again at another journal.

All of this is much to my surprise, having been a staunch advocate of peer-review until relatively recently. But here’s where I’ve landed up, despite myself: I think the best analogy for our current system of pre-publication peer-review is that it’s a hazing ritual. It doesn’t exist because of any intrinsic value it has, and it certainly isn’t there for the benefit of the recipient. It’s basically a way to draw a line between In and Out. Something for the inductee to endure as a way of proving he’s made of the Right Stuff.

So: the principal value of peer-review is that it provides an opportunity for authors to demonstrate that they are prepared to undergo peer-review.

It’s a way of separating the men from the boys. (And, yes, the women from the girls, though I can’t help thinking there’s something very stereotypically male about our confrontational and oppositional review system.)

Finally, I should address one more thing: is this just whining from an outsider who can’t get in and thinks it’s a conspiracy? I don’t think so. I’ve run the peer-review gauntlet quite a few times now — my publications are into double figures, which doesn’t make me a seasoned professional but does show that I am pretty serious. In other words, I am inside the system that I’m criticising.

For full disclosure, I should make it clear that I am writing this a day after having had a paper rejected by reviewers. So if you like, you can write it off as the bitter ramblings of a resentful man. But the truth is, while this was the immediate trigger for writing this post, the feeling has been building up for a while.

Next time: some of the details of why my paper was rejected, and why I think they’re dumb reasons. In part three: what peer-review should actually be for, and what I plan to do with the paper now.