How can we get post-publication peer-review to happen?
February 20, 2021
Today marks the one-month anniversary of my and Matt’s paper at Qeios about why vertebral pneumaticity in sauropods is so variable (Taylor and Wedel 2021). We were intrigued to publish on this new platform that supports post-publication peer-review, partly just to see what happened.

Taylor and Wedel (2021: figure 3). Brontosaurus excelsus holotype YPM 1980, caudal vertebrae 7 and 8 in right lateral view. Caudal 7, like most of the sequence, has a single vascular foramen on the right side of its centrum, but caudal 8 has two; others, including caudal 1, have none.
So what has happened? Well, as I write this, the paper has been viewed 842 times, downloaded a healthy 739 times, and acquired an Altmetric score of 21, based rather incestuously on two SV-POW! blog-posts, 14 tweets and a single Mendeley reader.
What hasn’t happened is even a single comment on the paper. Nothing that could be remotely construed as a post-publication peer-review. And therefore no progress towards our being able to count this as a peer-reviewed publication rather than a preprint — which is how I am currently classifying it in my publications list.
This, despite our having actively solicited reviews both here on SV-POW! (in the original blog-post) and in a Facebook post by Matt. (Ironically, the former got seven comments and the latter got 20, but the actual paper none.)
I’m not here to complain; I’m here to try to understand.
On one level, of course, this is easy to understand: writing a more-than-trivial comment on a scholarly article is work, and it garners very little of the kind of credit academics care about. Reputation on the Qeios site is nice, in a that-and-two-bucks-will-buy-me-a-coffee kind of way, but it’s not going to make a difference to people’s CVs when they apply for jobs and grants — not even in the way that “Reviewed for JVP” might. I completely understand why already overworked researchers don’t elect to invest a significant chunk of time in voluntarily writing a reasoned critique of someone else’s work when they could be putting that time into their own projects. It’s why so very few PLOS articles have comments.
On the other hand, isn’t this what we always do when we write a solicited peer-review for a regular journal?
So as I grope my way through this half-understood brave new world that we’re creating together, I am starting to come to the conclusion that — with some delightful exceptions — peer-review is generally only going to happen when it’s explicitly solicited by a handling editor, or someone in an analogous role. No-one’s to blame for this: it’s just reality that people need a degree of moral coercion to devote that kind of effort to other people’s projects. (I’m the same; I’ve left almost no comments on PLOS articles.)
Am I right? Am I unduly pessimistic? Is there some other reason why this paper is not attracting comments when the Barosaurus preprint did? Teach me.
References
Taylor, Michael P., and Mathew J. Wedel. 2021. Why is vertebral pneumaticity in sauropod dinosaurs so variable? Qeios.
February 21, 2021 at 9:16 pm
I tend to go along with your “on one level”: it’s carrots and sticks, time is money, everyone struggles, I put in the effort and then you profit and outcompete me, etc.
HOWEVER, StackExchange and Wikipedia (to name a few well-known examples) make me believe that good things can be made, and that people will show up both to contribute AND to profit from them.
I don’t have answers, but I guess it has A LOT to do with communities, platforms and critical mass — and with the reasons/motives that the community is built around. If you only manage to attract people because of money (or CV lines that will eventually get it), then it’ll be no surprise when people stop showing up once the money is over (or a peer-review “doesn’t pay”).
February 21, 2021 at 10:59 pm
I think you’re right, Fair Miles: the key here is community. We’re much more likely to volunteer reviews for works by people who we know or whose work we respect.
February 22, 2021 at 3:09 pm
Indeed, but I suspect it’s bigger than that. The biological explanation for (inclusive/reciprocal) altruism in a close (related) group [“I scratch your back, you’ll scratch mine”] is classical nowadays. But those are examples of communities on a new [global] scale, based on (almost) anonymous peers, in which the expectation of a reward is much more subtle. Maybe our psychological mechanisms are the same, I don’t know, but at least in scale those networks are getting farther and farther away from tribalism. I suspect some key issues are a general feeling of purpose, a pack of strong “believers”, thousands of enthusiasts that may come and go as they see fit expecting no more reward than a “thank you” or an arbitrary+fun “reputation counter”, some critical mass for the immediate reward (“the product”) to be perceived by users, …
Problem in the academic context (e.g., for a peer-review platform) is that we were/are trained to compete in a scarce environment (“the rat race”). In fact, we were filtered at some point by our abilities to collect some form of coinage (e.g., papers × IF). Of course each of us individually can get somehow apart from those rules and exercise altruism, but only as much as survival allows, and that depends on carrying capacities and (increasing) saturation.
So I guess we are not a very nice bunch, nor in a very good environment, for communities other than “show-me-the-money” ones to flourish…
February 23, 2021 at 12:13 am
I suspect the short answer is “publish something worse”.
Will I comment if I see really bad science being pushed? Probably. Will I comment to say, “Pretty good, but I wish you had worded paragraph 6 differently?” No.
My experience with peer-review is that most comments don’t change the paper very much, unless the paper is bad or the reviewer is bad. They just clean it up some. And that’s not worth writing a comment for, most of the time.
February 23, 2021 at 9:16 am
That’s a really interesting thought, Eric. We have a stronger impulse to correct something wrong than we have to affirm something correct — not just out of contrariness, but because it feels like making a more substantial contribution. Whereas, of course, as authors we crave the affirming comments at least as much as the correcting ones! :-)
I’m not sure what the answer is here. Purely as a matter of tactics — I am trying to be honest here — I am keen to get a couple of comments on our manuscript so I can promote it from the category of “preprint” to the category of “paper”. So I wonder if the smart play would have been to leave one very obvious error in there as comment-bait. Then we could have waited for someone to point it out, issued an updated version of the manuscript, and said “Look, it’s been peer-reviewed!”
Yeah. I don’t like that any more than you do. I hate the idea of deliberately polluting the scientific record with known-wrong information, even if only for a short time until the revision is out. I hate the idea of waiting around for someone to point out the mistake, feeling the weight of the known-wrong information until I can correct it in a way that lets me claim the coveted “peer-reviewed” stamp.
We definitely have some perverse incentives at work here. I really don’t know what the solution is.
February 23, 2021 at 6:42 pm
To be honest, I did look at your paper, but the reason I didn’t try reviewing it is that I don’t have any expertise in that area, and I didn’t think anything I could say would be helpful. I don’t think you would want post-publication peer review to just turn into a YouTube comments section — the expertise of the reviewer is important too.
February 23, 2021 at 7:21 pm
I think this is really a problem with the post-publication peer review model, at least as far as scientific publication goes. Theoretically, the primary goal of peer-review in science is to prevent inaccurate or shoddy work from entering the scientific record. If the work has already entered the “official” record, why publish a comment on a paper with content issues when you can publish a responding paper that is more likely to be cited?
February 23, 2021 at 8:43 pm
why publish a comment on a paper with content issues when you can publish a responding paper that is more likely to be cited?
I think that is an excellent point. For all that we talk about open peer review allowing reviewers to get credit for their work, no-one acts like a published peer review counts for anything like as much as a published paper. If someone else’s paper is sufficiently flawed to inspire you to write a detailed critique, then it’s only a little more work to submit that as a stand-alone paper instead of posting it as a review, for a HUGE return in visibility and credit.
In an old post from 2014, Mike wrote that “the real value of peer-review is not as a mark of correctness, but of seriousness”. That immediately struck me as correct, and I’ve never seen any persuasive case to the contrary. Similarly, although those of us who are pro-peer-review say we value the reviews, what we actually value is work that has been through the review process, not the reviews themselves. I barely have time to keep up with the crucial papers in my field; I’m definitely not going to take the time to read reviews of other people’s papers.
So if a review is going to be substantial, the reviewer has every incentive to turn the review into a paper. And if the review is not going to be substantial, why go to the effort?
February 24, 2021 at 2:05 am
The problem here reminds me a lot of the problem of grading discussion board assignments (which seem to be oh-so-popular in remote COVID-19 learning environments). What’s actually happening is that there is an ongoing dialog about a topic which we want to see participants add original thoughts (and, in science, analysis) to. However, we also want to assign some sort of credit to participants. It’s hard to say, “This thought wasn’t original enough,” at least in an environment where a pre-med student will drag a vague grade to the Dean to get it “fixed”, and so we settle on sub-par things like, “Must start one new thread and respond to one thread, at least five sentences.”
Similarly, scientific papers originated out of written correspondence (dialog) about science and then we developed a way to “grade” contributions to this dialog. The problem is two-fold:
1) The grading is fairly arbitrary, with a sharp cut-off between “counts” and “doesn’t count”. A substantial contribution might not count, but a slight push might make it count.
2) The grading system has been in place longer than the generation time of scientists, and so everyone has optimized strategies around that grading scheme.
In this way of thinking, post-publication peer-review includes things like other people citing you favorably or disagreeing with you in their own publications. The problem is that we rely on peer-review to act as a filter because there are a lot more people engaged in the scientific dialog now than there were in 1700, and “just read all the papers that cite this work” is impractical (the data isn’t centralized, much of it is paywalled) and takes even more time.
March 1, 2021 at 11:05 am
A couple of thoughts. In the author-guided post-publication peer review model that we have been advocating for more than 10 years and that we had the opportunity to test in several initiatives, authors send personalised invitations to specific colleagues to comment on concrete aspects of the manuscript. Something like: “Dear Dr. X, could you please review the methodology section and tell me if you consider the statistical model to be appropriate”, or “do you find the interpretation of the results plausible or should I consider an alternative explanation?” The fact that the invitation is personalised and does not vaguely ask for a review of the article could significantly increase the likelihood of getting a positive response.
As for incentives to perform a review rather than publish a new article, it is beyond doubt that reviews should be treated as stand-alone items with their own DOIs, linked via appropriate metadata tags to the original manuscripts. CVs should include a whole new category for these reviews, which should be considered valuable scientific contributions. At the same time, reviews should themselves accept comments or other reviews, which could form the basis of a reviewer reputation system like the one we developed in this initiative: https://www.openscholar.org.uk/open-peer-review-module-for-repositories/
Having said that, I also tend to think that a new system has to look as much like the old one as possible. Big leaps are far more difficult compared to incremental changes. That is why we are currently supporting the transition of existing journals, with their editorial teams and accumulated reputation, to overlay journals in institutional repositories that implement open post-publication peer review. In this way, reviewers continue to respond to editorial requests, but we get rid of publishers (including publishing platforms other than institutional repositories) and gradually create a culture of post-publication peer review that could eventually start to function even without the mediation of publishers.
March 2, 2021 at 10:19 pm
Plenty of interesting thoughts in the comments here — thank you, all. I will be writing a followup post soon where I’ll try to properly address them.
March 9, 2021 at 2:06 pm
[…] is vertebral pneumaticity in sauropod dinosaurs so variable?” at Qeios, we were bemoaning how difficult it was to get anyone to review it. But what a difference the last nineteen days have […]