Learn to be sceptical by seeing how the sausage is made
September 16, 2022
Years ago, when I was young and stupid, I used to read papers containing phylogenetic analyses and think, “Oh, right, I see now, Euhelopus is not a mamenchisaurid after all, it’s a titanosauriform”. In other words, I believed the result that the computer spat out. Some time after that, I learned how to use PAUP* and run my own phylogenetic analyses, and realised how vague and uncertain such results are, and how easily they are changed by tweaking a few parameters.
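To make that concrete, here is a toy demonstration in Python (a sketch only: the four taxa and five binary characters are invented for illustration, and bear no relation to any real matrix). It scores the three possible unrooted four-taxon topologies by Fitch parsimony, then upweights a single character and reruns:

```python
# Toy example: Fitch parsimony over the three unrooted four-taxon topologies.
# The character matrix is invented for illustration; a real analysis would use
# hundreds of characters and proper software such as PAUP* or TNT.

MATRIX = {  # four taxa, five binary characters (all states hypothetical)
    "Euhelopus":      (1, 1, 1, 1, 1),
    "Mamenchisaurus": (1, 1, 1, 0, 0),
    "Camarasaurus":   (0, 0, 0, 0, 0),
    "Saltasaurus":    (0, 0, 0, 1, 1),
}

# The three possible unrooted topologies on four taxa, written as rooted
# pairs-of-pairs (rooting does not change parsimony length).
TOPOLOGIES = [
    (("Euhelopus", "Mamenchisaurus"), ("Camarasaurus", "Saltasaurus")),
    (("Euhelopus", "Camarasaurus"), ("Mamenchisaurus", "Saltasaurus")),
    (("Euhelopus", "Saltasaurus"), ("Mamenchisaurus", "Camarasaurus")),
]

def fitch_length(tree, char):
    """Fitch parsimony: minimum number of state changes for one character."""
    def walk(node):
        if isinstance(node, str):                 # leaf: its observed state
            return {MATRIX[node][char]}, 0
        (ls, lc), (rs, rc) = walk(node[0]), walk(node[1])
        if ls & rs:
            return ls & rs, lc + rc               # subtrees agree: no change here
        return ls | rs, lc + rc + 1               # conflict: count one change

    return walk(tree)[1]

def tree_length(tree, weights):
    return sum(w * fitch_length(tree, i) for i, w in enumerate(weights))

for weights in [(1, 1, 1, 1, 1), (1, 1, 1, 3, 1)]:    # then upweight character 4
    best = min(TOPOLOGIES, key=lambda t: tree_length(t, weights))
    print(f"weights {weights}: shortest tree is {best[0]} + {best[1]}")
```

With equal weights the shortest tree groups Euhelopus with Mamenchisaurus; triple the weight of one conflicting character and it groups with Saltasaurus instead. Real matrices are orders of magnitude bigger, but the same sensitivity to analytical choices lurks inside them.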
These days, good papers that present phylogenetic analyses are very careful to frame the results as the tentative hypotheses that they are. (Except when they’re in Glam Mags, of course: there’s no space for that kind of nuance in those venues.)
It’s common now for careful work to present multiple different and contradictory phylogenetic hypotheses, arrived at by different methods or based on different matrices. For just one example, see how Upchurch et al.’s (2015) redescription of Haestasaurus (= “Pelorosaurus”) becklesii presents that animal as a camarasaurid (figure 15, arrived at by modifying the matrix of Carballido et al. 2011), as a very basal macronarian (figure 16, arrived at by modifying the continuous-and-discrete-characters matrix of Mannion et al. 2013), and as a basal titanosaur (figure 17, arrived at by modifying the discrete-characters-only matrix from the same paper). This is careful and courageous reporting, shunning the potential headline “World’s oldest titanosaur!” in favour of doing the work right. [1]
But the thing that really makes you understand how fragile phylogenetic analyses are is running one yourself. There’s no substitute for getting your hands dirty and seeing how the sausage is made.
And I was reminded of this principle today, in a completely different context, by a tweet from Alex Holcombe:
Some of us lost our trust in science, and in peer review, in a journal club. There we saw how many problems a bunch of ECRs notice in the average article published in a fancy journal.
Alex relays (with permission) this anecdote from an anonymous student in his Good Science, Bad Science class:
In the introduction of the article, the authors lay forth four very specific predictions that, upon fulfillment, would support their hypothesis. In the journal club, one participant actually joked that it read very much as though the authors ran the analysis, derived these four key findings, and then copy-pasted them into the introduction as though they were thought of a priori. I’m not an expert in this field and I don’t intend to insinuate that anything untoward was done in the paper, but I remember several participants agreeing that the introduction and general framework of the paper indeed felt very “HARKed”.
Here’s the problem: as the original tweet points out, this is about “problems a bunch of ECRs notice in the average article published in a fancy journal”. These are articles that have made it through the peer-review gauntlet and reached the promised land of publication. Yet still these foundational problems persist. In other words, peer-review did not resolve them.
I’m most certainly not suggesting that the peer-review filter should become even more obstructive than it is now. For my money it’s already swung way too far in that direction.
But I am suggesting we should all remain sceptical of peer-reviewed articles, just as we rightly are of preprints. Peer-review ain’t nuthin’ … but it ain’t much. We know from experiment that the chance of an article passing peer review is made up of roughly one third article quality, one third how nice the reviewer is, and one third totally random noise. More recently we found that papers with a prestigious author’s name attached are far more likely to be accepted, irrespective of the content (Huber et al. 2022).
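To get a feel for what those proportions mean in practice, here is a throwaway simulation (purely illustrative: the equal-thirds split is taken from the loose summary above, not from fitted parameters in any study). It models a review score as quality plus reviewer leniency plus noise, all with equal variance, and asks how often a genuinely better paper comes out ahead:

```python
# Illustrative only: model a review score as equal parts paper quality,
# reviewer leniency, and random noise, and ask how often a paper one full
# standard deviation better than average actually gets the higher score.

import random

def review_score(quality):
    reviewer = random.gauss(0, 1)   # how lenient this reviewer happens to be
    noise = random.gauss(0, 1)      # mood, workload, plain luck
    return quality + reviewer + noise

TRIALS = 100_000
wins = sum(review_score(1.0) > review_score(0.0) for _ in range(TRIALS))
print(f"Better paper outscores the average one {wins / TRIALS:.0%} of the time")
# Prints roughly 69%: with two thirds of the score outside the authors'
# control, quality only tilts the odds; it does not decide them.
```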
We need to get away from a mystical or superstitious view of peer-review as a divine seal of approval. We need to push back against wise-sounding pronouncements such as “Good reporting would have noted that the paper has not yet been peer-reviewed” as though this one bit of information is worth much.
Yeah, I said it.
References
- Carballido, Jose L., Oliver W. M. Rauhut, Diego Pol and Leonardo Salgado. 2011. Osteology and phylogenetic relationships of Tehuelchesaurus benitezii (Dinosauria, Sauropoda) from the Upper Jurassic of Patagonia. Zoological Journal of the Linnean Society 163:605–662. doi:10.1111/j.1096-3642.2011.00723.x
- Huber, Juergen, Sabiou Inoua, Rudolf Kerschbamer, Christian König-Kersting, Stefan Palan and Vernon L. Smith. 2022. Nobel and novice: author prominence affects peer review. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4190976
- Mannion, Philip D., Paul Upchurch, Rosie N. Barnes and Octávio Mateus. 2013. Osteology of the Late Jurassic Portuguese sauropod dinosaur Lusotitan atalaiensis (Macronaria) and the evolutionary history of basal titanosauriforms. Zoological Journal of the Linnean Society 168(1):98–206. doi:10.1111/zoj.12029
- Upchurch, Paul, Philip D. Mannion and Michael P. Taylor. 2015. The anatomy and phylogenetic relationships of “Pelorosaurus” becklesii (Neosauropoda, Macronaria) from the Early Cretaceous of England. PLOS ONE 10(6):e0125819. 51 pages. doi:10.1371/journal.pone.0125819
Notes
- Although I am on the authorship of Upchurch et al. (2015), I can take none of the credit for the comprehensiveness and honesty of the phylogenetics section: all of that is Paul and Phil’s work.
New article in the Journal of Data and Information Science: I don’t peer-review for non-open journals, and neither should you
April 28, 2022
I have a new article out in the Journal of Data and Information Science (Taylor 2022), on a subject that will be familiar to long-time readers. It’s titled “I don’t peer-review for non-open journals, and neither should you”, and honestly if you’ve read the title, you’ve sort of read the paper :-)
But if you want the reasons why I don’t peer-review for non-open journals, and the reasons why you shouldn’t either, you can find them in the article, which is a quick and easy read of just three pages. I’ll be happy to discuss any disagreements in the comments (or indeed any agreements!).
Reference
- Taylor, Michael P. 2022. I don’t peer-review for non-open journals, and neither should you. Journal of Data and Information Science.
How papers are published, in 343 words
February 7, 2022

Many aspects of scholarly publishing are presently in flux. But for most journals the process of getting a paper published remains essentially the same as it was decades ago, the main change being that documents are sent electronically rather than by post.
It begins with the corresponding author of the paper submitting a manuscript — sometimes, though not often, in response to an invitation from a journal editor. The journal assigns a handling editor to the manuscript, and that editor decides whether the submission meets basic criteria: is it a genuine attempt at scholarship rather than an advertisement? Is it written clearly enough to be reviewed? Is it new work not already published elsewhere?
Assuming these checks are passed, the editor sends the manuscript out to potential reviewers. Since review is generally unpaid and qualified reviewers have many other commitments, review invitations may be declined, and the editor may have to send many requests before obtaining the two or three reviews that are typically used.
Each reviewer returns a report assessing several aspects of the manuscript (soundness, clarity, novelty, perhaps perceived impact) and recommending a verdict. The handling editor reads these reports and sends them to the author along with a verdict: this may be rejection, in which case the paper is not published (and the author may try again at a different journal); acceptance, in which case the paper is typeset and published; or, more often, a request for revisions along the lines suggested by the reviewers.
The corresponding author (with the co-authors) then prepares a revised version of the manuscript and a response letter, the latter explaining what changes have been made and which have not: the authors can push back on reviewer requests that they do not agree with. These documents are returned to the handling editor, who may either make a decision directly, or send the revised manuscript out for another round of peer review (either with the original reviewers or less often with new reviewers). This cycle continues as many times as necessary to arrive at either acceptance or rejection.
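For the programmatically minded, the whole process reduces to a loop with two exits. The sketch below is a toy model of the generic cycle described above, not any journal’s actual workflow; the verdict probabilities are invented purely so that the example runs:

```python
# Toy model of the editorial cycle: desk check, then rounds of review that
# end only in acceptance or rejection. Probabilities are invented for
# illustration; real journals do not roll dice (one hopes).

import random

def review_round():
    """One round of peer review, collapsed to the editor's verdict."""
    return random.choices(["accept", "reject", "revise"],
                          weights=[0.1, 0.2, 0.7])[0]

def handle_submission(passes_basic_checks=True, max_rounds=10):
    if not passes_basic_checks:      # genuine? readable? not published before?
        return "desk rejection"
    for round_number in range(1, max_rounds + 1):
        verdict = review_round()     # editor weighs the two or three reports
        if verdict != "revise":
            return f"{verdict} after {round_number} round(s) of review"
        # otherwise the authors revise, write a response letter, and resubmit
    return "withdrawn in despair"

print(handle_submission())
```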
I misconstrued the review history of the paper discussed in the last post: my apologies
May 21, 2021
Two days ago, I wrote about what seemed to be an instance of peer review gone very wrong. I’ve now heard from two of the four authors of the paper and from the reviewer in question — both by email, and in comments on the original post — and it’s apparent that I misinterpreted the situation. When the lead author’s tweet mentioned “pushing it through eight rounds of review”, I took this at face value as meaning eight rounds at the same journal with the same reviewers — whereas in fact the reviewer in question reviewed only four drafts. (That still seems like too many to me, but clearly it’s not as ludicrous as the situation as I misread it.) In this light, my assumption that the reviewer was being obstructive was not warranted.
I have decided to retract that article and I offer my apologies to the reviewer, Dave Grossnickle, who approached me very politely off-list to offer the corrections that you can now read in his comment.
When peer-review goes bad … really bad.
May 19, 2021
THIS POST IS RETRACTED. The reasons are explained in the next post. I wish I had never posted this, but you can’t undo what is done, especially on the Internet, so I am not deleting it but marking it as retracted. I suggest you don’t bother reading on, but it’s here if you want to.
Neil Brocklehurst, Elsa Panciroli, Gemma Louise Benevento and Roger Benson have a new paper out (Brocklehurst et al. 2021, natch), showing that the post-Cretaceous radiation of modern mammals was driven not primarily by the removal of dinosaurs, as everyone assumed, but by the removal of more primitive mammal-relatives. Interesting stuff, and it’s open access. Congratulations to everyone involved!

Neil summarised the new paper in a thread of twelve tweets, but it was the last one in the thread that caught my eye:
Thanks to all my co-authors for their tireless work on this, pushing it through eight rounds of review (my personal best)
I’m impressed that Neil has maintained his equanimity about this — in public at least — but if he is not going to be furious about it then we, the community, need to be furious on his behalf. Pushed to explain, Neil laid it out in a further tweet:
Was just one reviewer who really didn’t seem to like certain aspects, esp the use of discrete character matrices. Fair enough, can’t please everyone, but the editor just kept sending it back even when two others said our responses to this reviewer should be fine.
Again, somehow this tweet is free of cursing. He is a better man than I would be in that situation. He also doesn’t call out the reviewer by name, nor the spineless handling editor, which again shows great restraint — though I am not at all sure it’s the right way to go.
There is so, so much to hate about this story:
- The obstructive peer reviewer, who seems to have got away with his reputation unblemished by these repeated acts of vandalism. (I’m assuming he was one of the two anonymous reviewers, not the one who identified himself.)
- The handling editor who had half a dozen opportunities to put an end to the round-and-round, and passed on at least five of them. Do your job! Handle the manuscript! Don’t just keep kicking it back to a reviewer who you know by this stage is not acting in good faith.
- The failure of the rest of the journal’s editorial board to step in and bring some sanity to the situation.
- The normalization of this kind of thing — arguably not helped by Neil’s level-headed recounting of the story as though it’s basically reasonable — as something authors should expect, and just have to put up with.
- The time wasted: the other research not done while the authors were pithering around back and forth with the hostile reviewer.
It’s the last of these that pains me the most. Of all the comforting lies we tell ourselves about conventional peer review, the worst is that it’s worth all the extra time and effort because it makes the paper better.
It’s not worth it, is it?
Maybe Brocklehurst et al. 2021 is a bit better for having gone through the 3rd, 4th, 5th, 6th, 7th and 8th rounds of peer review. But if it is, then it’s a marginal difference, and my guess is that in fact it’s no better and no worse than what they submitted after the second round. All that time, they could have been looking at specimens, generating hypotheses, writing descriptions, gathering data, plotting graphs, writing blogs, drafting papers — instead they have been frittering away their time in a pointless and destructive conflict with someone whose only goal was to prevent the advancement of science because an aspect of the paper happened to conflict with a bee he had in his bonnet. We have to stop this waste.
This incident has reinforced my growing conviction that venues like Qeios, Peer Community in Paleontology and bioRxiv (now that it’s moving towards support for reviewing) are the way to go. Our own experience at Qeios has been very good — if it works this well the next time we use it, I think it’s a keeper. Crucially, I don’t believe our paper (Taylor and Wedel 2021) would have been stronger if it had gone through the traditional peer-review gauntlet; instead, I think it’s stronger than it would otherwise have been, because it’s received reviews from more pairs of eyes, and each of them with a constructive approach. Quicker publication, less work for everyone involved, more collegial process, better final result — what’s not to like?
References
- Brocklehurst, Neil, Elsa Panciroli, Gemma Louise Benevento and Roger Benson. 2021. Mammaliaform extinctions as a driver of the morphological radiation of Cenozoic mammals. Current Biology. doi:10.1016/j.cub.2021.04.044
- Taylor, Michael P., and Mathew J. Wedel. 2021. Why is vertebral pneumaticity in sauropod dinosaurs so variable? Qeios 1G6J3Q. doi:10.32388/1G6J3Q
Good experiences of peer-review at Qeios
March 9, 2021
A month after Matt and I published our paper “Why is vertebral pneumaticity in sauropod dinosaurs so variable?” at Qeios, we were bemoaning how difficult it was to get anyone to review it. But what a difference the last nineteen days have made!
In that time, we’ve had five reviews, and posted three revisions: version 2 in response to a review by Mark McMenamin, version 3 in response to a review by Ferdinand Novas, and version 4 in response to reviews by Leonardo Cotts, by Alberto Collareta, and by Eduardo Jiménez-Hidalgo.

Taylor and Wedel (2021: Figure 2). Proximal tail skeleton (first 13 caudal vertebrae) of LACM Herpetology 166483, a juvenile specimen of the false gharial Tomistoma schlegelii. A: close-up of caudal vertebrae 4–6 in right lateral view, red circles highlighting vascular foramina: none in Ca4, two in Ca5 and one in Ca6. B: right lateral view. C: left lateral view (reversed). D: close-up of caudal vertebrae 4–6 in left lateral view (reversed), red circles highlighting vascular foramina: one each in Ca4, Ca5 and Ca6. In right lateral view, vascular foramina are apparent in the centra of caudal vertebrae 5–7 and 9–11; they are absent or too small to make out in vertebrae 1–4, 8 and 12–13. In left lateral view (reversed), vascular foramina are apparent in the centra of caudal vertebrae 4–7 and 9; they are absent or too small to make out in vertebrae 1–3, 8, and 10–13. Caudal centra 5–7 and 9 are therefore vascularised from both sides; 4 and 10–11 from one side only; and 1–3, 8 and 12–13 not at all.
There are a few things to say about this.
First, this is now among our most reviewed papers. Thinking back across all my publications, most have been reviewed by two people; the original Xenoposeidon description was reviewed by three; the same was true of my reassessment of Xenoposeidon as a rebbachisaur, and there may have been one or two more that escape me at the moment. But I definitely can’t think of any papers that have been under five sets of eyes apart from this one in Qeios.
Now I am not at all saying that all five of the reviews on this paper are as comprehensive and detailed as a typical solicited peer review at a traditional journal. Some of them have detailed observations; others are much more cursory. But they all have things to say — which I will return to in my third point.
Second, Qeios has further decoupled the functions of peer review. Traditional peer review combines three rather separate functions: A, Checking that the science is sound before publishing it; B, assessing whether it’s a good fit for the journal (often meaning whether it’s sexy enough); and C, helping the authors to improve the work. When PLOS ONE introduced correctness-only peer-review, they discarded B entirely, reasoning correctly that no-one knows which papers will prove influential[1]. Qeios goes further by also inverting A. By publishing before the peer reviews are in (or indeed solicited), it takes away the gatekeeper role of the reviewers, leaving them with only function C, helping the authors to improve the work. Which means it’s no surprise that …
Third, all five reviews have been constructive. As Matt has written elsewhere, “There’s no way to sugar-coat this: getting reviews back usually feels like getting kicked in the gut”. This is true, and we both have a disgraceful record of allowing harshly-reviewed projects to sit fallow for far too long before doing the hard work of addressing the points made by the reviewers and resubmitting[2].
The contrast with the reviews from Qeios has been striking. Each one has sent me scampering back to the manuscript, keen to make (most of) the suggested changes — hence the three revised versions that I’ve posted in the last fortnight. I think there are at least two reasons for this, a big one and a small one.
- The big reason, I think, is that the reviewers know their only role is to improve the paper. Well, that’s not quite true: they also have some influence over its evaluation, both in what they write and in assigning a 1-to-5 star score. But they know when they’re writing their reviews that whatever happens, they won’t block publication. This means, firstly, that there is no point in their writing something like “This paper should not be published until the authors do X”; but equally importantly, I think it puts reviewers in a different and more constructive mindset. They feel themselves to be allies of the authors rather than (as can happen) adversaries.
- The smaller reason is that it’s easier to deal with one review at a time. I understand why journals solicit multiple reviews: so the handling editor can consider them all in reaching a decision. And I understand why the authors get all the reviews back at once: once the decision has been made, they’re all on hand, and there’s no point in stringing them out. But that process can’t help being discouraging. One at a time may not be better, exactly; but it’s emotionally easier.
Is this all upside? Well, it’s too early to say. We’ve only done this once. The experience has certainly been more pleasant — and, crucially, much more efficient — than the traditional publishing lifecycle. But I’m aware of at least two potential drawbacks:
First, the publish-first lifecycle could be exploited by cranks. If the willingness to undergo peer-review is the mark of seriousness in a researcher — and if non-serious researchers are unwilling to face that gauntlet — then a venue that lets you make an end-run around peer-review is an obvious loophole. How serious a danger is this? Only time will tell, but I am inclined to think maybe not too serious. Bad papers on a site like Qeios will attract negative reviews and low scores, especially if they start to get noticed in the mainstream media. They won’t be seen as having the stamp of having passed peer-review; rather, they will be branded with having publicly failed peer-review.
Second, it’s still not clear where reviewers will come from. We wrote about this problem in some detail last month, and although it’s worked out really well for our present paper, that’s no guarantee that it will always work out this well. We know that Qeios itself approached at least one reviewer to solicit their comments: that’s great, and if they can keep doing this then it will certainly help. But it probably won’t scale, so either a different reviewing culture will need to develop, or we will need people who — perhaps only on an informal basis — take it on themselves to solicit reviews from others. We’re interested to see how this develops.
Anyway, Matt and I have found our first Qeios experience really positive. We’ve come out of it with what I think is a good paper, relatively painlessly, and with much less friction than the usual process. I hope that some of you will try it, too. To help get the process rolling, I personally undertake to review any Qeios article posted by an SV-POW! reader. Just leave a comment here to let me know about your article when it’s up.
Notes
[1] “No-one knows which papers will prove influential”. As purely anecdotal evidence for this claim: when I wrote “Sauropod dinosaur research: a historical review” for the Geological Society volume Dinosaurs: A Historical Perspective, I thought it might become a citation monster. It’s done OK, but only OK. Conversely, it never occurred to me that “Head and neck posture in sauropod dinosaurs inferred from extant animals” would be of more than specialist interest, but it’s turned out to be my most cited paper. I bet most researchers can tell similar stories.
[2] One example: my 2015 preprint on the incompleteness of sauropod necks was submitted for publication in October 2015, and the reviews[3] came back that same month. Five and a half years later, I am only now working on the revision and resubmission. If you want other examples, we got ’em. I am not proud of this.
[3] I referred above to “harsh reviews” but in fact the reviews for this paper were not harsh; they were hard, but 100% fair, and I found myself agreeing with about 90% of the criticisms. That has certainly not been true of all the reviews I have found disheartening!
How can we get post-publication peer-review to happen?
February 20, 2021
Today marks the one-month anniversary of my and Matt’s paper in Qeios about why vertebral pneumaticity in sauropods is so variable (Taylor and Wedel 2021). We were intrigued to publish on this new platform that supports post-publication peer-review, partly just to see what happened.

Taylor and Wedel (2021: figure 3). Brontosaurus excelsus holotype YPM 1980, caudal vertebrae 7 and 8 in right lateral view. Caudal 7, like most of the sequence, has a single vascular foramen on the right side of its centrum, but caudal 8 has two; others, including caudal 1, have none.
So what has happened? Well, as I write this, the paper has been viewed 842 times, downloaded a healthy 739 times, and acquired an Altmetric score of 21, based rather incestuously on two SV-POW! blog-posts, 14 tweets and a single Mendeley reader.
What hasn’t happened is even a single comment on the paper. Nothing that could be remotely construed as a post-publication peer-review. And therefore no progress towards our being able to count this as a peer-reviewed publication rather than a preprint — which is how I am currently classifying it in my publications list.
This, despite our having actively solicited reviews both here on SV-POW!, in the original blog-post, and in a Facebook post by Matt. (Ironically, the former got seven comments and the latter got 20, but the actual paper none.)
I’m not here to complain; I’m here to try to understand.
On one level, of course, this is easy to understand: writing a more-than-trivial comment on a scholarly article is work, and it garners very little of the kind of credit academics care about. Reputation on the Qeios site is nice, in a that-and-two-bucks-will-buy-me-a-coffee kind of way, but it’s not going to make a difference to people’s CVs when they apply for jobs and grants — not even in the way that “Reviewed for JVP” might. I completely understand why already overworked researchers don’t elect to invest a significant chunk of time in voluntarily writing a reasoned critique of someone else’s work when they could be putting that time into their own projects. It’s why so very few PLOS articles have comments.
On the other hand, isn’t this what we always do when we write a solicited peer-review for a regular journal?
So as I grope my way through this half-understood brave new world that we’re creating together, I am starting to come to the conclusion that — with some delightful exceptions — peer-review is generally only going to happen when it’s explicitly solicited by a handling editor, or someone with an analogous role. No-one’s to blame for this: it’s just reality that people need a degree of moral coercion to devote that kind of effort to other people’s projects. (I’m the same; I’ve left almost no comments on PLOS articles.)
Am I right? Am I unduly pessimistic? Is there some other reason why this paper is not attracting comments when the Barosaurus preprint did? Teach me.
References
- Taylor, Michael P., and Mathew J. Wedel. 2021. Why is vertebral pneumaticity in sauropod dinosaurs so variable? Qeios 1G6J3Q. doi:10.32388/1G6J3Q
A funny thing happened on the way to the Shiny Digital Future
February 4, 2021

Picture is unrelated. Seriously. I’m just allergic to posts with no visuals. Stand by for more random brachiosaurs.
Here’s something I’ve been meaning to post for a while, about my changing ideas about scholarly publishing. On one hand, it’s hard to believe now that the Academic Spring was almost a decade ago. On the other, it’s hard for me to accept that PeerJ will be only 8 years old next week — it has loomed so large in my thinking that it feels like it has been around much longer. The very first PeerJ Preprints went up on April 4, 2013, just about a month and a half after the first papers in PeerJ. At that time it felt like things were moving very quickly, and that the landscape of scholarly publishing might be totally different in just a few years. Looking back now, it’s disappointing how little has changed. Oh, sure, there are more OA options now — even more kinds of OA options, and things like PCI Paleo and Qeios feel genuinely envelope-pushing — but the big barrier-based publishers are still dug in like ticks, and very few journals have fled from those publishers to re-establish themselves elsewhere. APCs are ubiquitous now, and mostly unjustified and ruinously expensive. Honestly, the biggest changes in my practice are that I use preprint servers to make my conference talks available, and I use SciHub instead of interlibrary loan.
But I didn’t sit down to write this post so I could grumble about the system like an old hippie. I’ve learned some things in the past few years, about what actually works in scholarly publishing (at least for me), and about my preferences in some areas, which turn out to be not what I expected. I’ll focus on just two areas today: peer review and preprints.
How I Stopped Worrying and Learned to Love Peer Review
Surprise #1: I’m not totally against peer review. I realize that the way it is implemented in many places is deeply flawed, and that it’s no guarantee of the quality of a paper, but I also recognize its value. This is not where I was 8 years ago; at the time, I was pretty much in agreement with Mike’s post from November, 2012, “Well, that about wraps it up for peer-review”. But then in 2014 I became an academic editor at PeerJ. And as I gained first-hand experience from the other side of the editorial desk, I realized a few things:
- Editors have broad remits in terms of subject areas, and without the benefit of peer reviews by people who specialize in areas other than my own, I’m not fit to handle papers on topics other than Early Cretaceous North American sauropods, skeletal pneumaticity, and human lower extremity anatomy.
- Even at PeerJ, which only judges papers based on scientific soundness, not on perceived importance, it can be hard to tell where the boundary is. I’ve had to reject a few manuscripts at PeerJ, and I would not have felt confident about doing that without the advice of peer reviewers. Even with no perceived importance criterion, there is definitely a lower bound on what counts as a publishable observation. If you find a mammoth toe bone in Nebraska, or a tyrannosaur tooth in Montana, there should probably be something more interesting to say about it, beyond the bare fact of its existence, if it’s going to be the subject of a whole paper.
- In contentious fields, it can be valuable to get a diversity of opinions. And sometimes, frankly, I need to figure out if the author is a loony, or if it’s actually Reviewer #2 that’s off the rails. Although I think PeerJ generally attracts fairly serious authors, a handful of things that get submitted are just garbage. From what I hear, that’s the case at almost every journal. But it’s not always obvious what’s garbage, what’s unexciting but methodologically sound, and what’s seemingly daring but also methodologically sound. Feedback from reviewers helps me make those calls. Bottom line, I do think the community benefits from having pre-publication filters in place.
- Finally, I think editors have a responsibility to help authors improve their work, and reviewers catch a lot of stuff that I would miss. And occasionally I catch something that the reviewers missed. We are collectively smarter and more helpful than any of us would be in isolation, and it’s hard to see that as anything other than a good thing.
The moral here probably boils down to, “white guy stops bloviating about Topic X when he gains actual experience”, which doesn’t look super-flattering for me, but that’s okay.
You may have noticed that my pro-peer-review comments are rather navel-gaze-ly focused on the needs of editors. But who needs editors? Why not chuck the whole system? Set up an outlet called Just Publish Everything, and let fly? My answer is that my time in the editorial trenches has convinced me that such a system will silt up with garbage papers, and as a researcher I already have a hard enough time keeping up with all of the emerging science that I need to. From both perspectives, I want there to be some kind of net to keep out the trash. It doesn’t have to be a tall net, or strung very tight, but I’d rather have something than nothing.
What would I change about peer review? Since it launched, PeerJ has let reviewers either review anonymously, or sign their reviews, and it has let authors decide whether or not to publish the reviews alongside the paper. Those were both pretty daring steps at the time, but if I could I’d turn both of those into mandates rather than options. Sunlight is the best disinfectant, and I think almost all of the abuses of the peer review system would evaporate if reviewers had to sign their reviews, and all reviews were published alongside the papers. There will always be a-holes in the world, and some of them are so pathological that they can’t rein in their bad behavior, but if the system forced them to do the bad stuff in the open, we’d all know who they are and we could avoid them.

Femur of Apatosaurus and right humerus of the Brachiosaurus altithorax holotype on a wooden pedestal (exhibit) with labels and 6-foot ruler for scale, geology specimen, Field Columbian Museum, 1905. (Photo by Charles Carpenter/Field Museum Library/Getty Images)
Quo Vadis, Preprints?
Maybe the advent of preprints was more drawn out than I know, but to me it felt like preprints went from being Not a Thing, Really, in 2012, to being ubiquitous in 2013. And, I thought at the time, possibly transformative. They felt like something genuinely new, and when Mike and I posted our Barosaurus preprint and got substantive, unsolicited review comments in just a day or two, that was pretty awesome. Which is why I did not expect…
Surprise #2: I don’t have much use for preprints, at least as they were originally intended. When I first confessed this to Mike, in a Gchat, he wrote, “You don’t have a distaste for preprints. You love them.” And if you just looked at the number of preprints I’ve created, you might get that impression. But the vast majority of my preprints are conference talks, and using a preprint server was just the simplest way to get the abstract and the slide deck up where people could find them. In terms of preprints as early versions of papers that I expect to submit soon, only two really count, neither more recent than 2015. (I’m not counting Mike’s preprint of our vertebral orientation paper from 2019; he’s first author, and I didn’t mind that he posted a preprint, but neither is it something I’d have done if the manuscript was mine alone.)
My thoughts here are almost entirely shaped by what happened with our Barosaurus preprint. We put it up on PeerJ Preprints back in 2013, we got some useful feedback right away, and…we did nothing for a long time. Finally in 2016 we revised the manuscript and got it formally submitted. I think we both expected that, since the preprint had already been “reviewed” by commenters and we’d revised it accordingly, formal peer review would be very smooth. It was not. And the upshot is that only now, in 2021, are we finally talking about dealing with those reviews and getting the manuscript resubmitted. We haven’t actually done this, mind, we’re just talking about planning to make a start on it. (Non-committal enough for ya?)
Why has it taken us so long to deal with this one paper? We’re certainly capable — the two of us got four papers out in 2013, each of them on a different topic and each of them substantial. So why can’t we climb Mount Barosaurus? I think a big part of it is that we know the world is not waiting for our results, because our results are already out in the world. We’re the only ones being hurt by our inaction — we’re denying ourselves the credit and the respect that go along with having a paper finally and formally published in a peer-reviewed journal. But we can comfort ourselves with the thought that if someone needs our observations to make progress on their own project, we’re not holding them up. Just having the preprint out there has stolen some of our motivation to get the paper done and out, apparently enough to keep us from doing it at all.
Mike pointed out that according to Google Scholar, our Barosaurus preprint has been cited five times to date, once in its original version and four times in its revised version. But to me, the fact that the Baro manuscript has been cited five times is a fail. Because all of my peer-reviewed papers from 2014-2016, which have been out for less long, have been cited more. So I read that as people not wanting to cite it. And who can blame them? Even I thought it would be supplanted by the formally-published, peer-reviewed paper within a few weeks or months.
Mike then pointed me to his 2015 post, “Four different reasons to post preprints”, and asked how many of those arguments still worked for me now. Number 2 is good, posting material that would otherwise never see the light of day — it’s basically what I did when I put my dissertation on arXiv. Ditto for 4, which is posting conference presentations. I’m not moved by either 1 or 3. Number 3 is getting something out to the community as quickly as possible, just because you want to, and number 1 is getting feedback as quickly as possible. The reason that neither of those move me is that they’re solved to my satisfaction by existing peer-reviewed outlets. I don’t know of any journals that let reviewers take 2-4 months to review a paper anymore. I don’t know how much credit for the acceleration should go to PeerJ, which asks for reviews in 10 to 14 days, but surely some. And I don’t usually have a high enough opinion of my own work to think that the community will suffer if it takes a few months for a paper to come out through the traditional process.
(If it seems like I’m painting Mike as relentlessly pro-preprint, it’s not my intent. Rather, I’d dropped a surprising piece of news on him, and he was strategically probing to determine the contours of my new and unexpected stance. Then I left the conversation to come write this post while the ideas were all fresh in my head. I hope to find out what he thinks about this stuff in the comments, or ideally in a follow-up post.)
Back to task: at least for me, a preprint of a manuscript I’m going to submit anyway is a mechanism to get extra reviews I don’t want*, and to lull myself into feeling like the work is done when it’s not. I don’t anticipate that I will ever again put up a preprint for one of my own manuscripts if there’s a plausible path to traditional publication.
* That sounds awful. To people who have left helpful comments on my preprints: I’m grateful, sincerely. But not so grateful that I want to do the peer review process a second time for zero credit. I didn’t know that when I used to file preprints of manuscripts, but I know it now, and the easiest way for me to not make more work for both of us is to not file preprints of things I’m planning to submit somewhere anyway.
So much for my preprints; what about those of other people? Time for another not-super-flattering confession: I don’t read other people’s preprints. Heck, I don’t have time to keep up with the peer-reviewed literature, and I have always been convinced by Mike’s dictum, “The real value of peer-review is not as a mark of correctness, but of seriousness” (from this 2014 post). If other people want me to part with my precious time to engage with their work, they can darn well get it through peer review. And — boomerang thought — that attitude degrades my respect for my own preprint manuscripts. I wouldn’t pay attention to them if someone else had written them, so I don’t really expect anyone else to pay attention to the ones that I’ve posted. In fact, it’s extremely flattering that they get read and cited at all, because by my own criteria, they don’t deserve it.
I have to stress how surprising I find this conclusion, that I regard my own preprints as useless at best, and simultaneously extra-work-making and motivation-eroding at worst, for me, and insufficiently serious to be worthy of other people’s time, for everyone else. It’s certainly not where I expected to end up in the heady days of 2013. But back then I had opinions, and now I have experience, and that has made all the difference.
The comment thread is open. What do you think? Better still, what’s your experience?