I was a bit shaken to read this short article, Submit It Again! Learning From Rejected Manuscripts (Campbell et al. 2022), recently posted on Mastodon by open-access legend Peter Suber.

For example:

Journals may reject manuscripts because the paper is not in the scope of the journal, because they recently published a similar article, because the formatting of the article is incorrect, or because the paper is not noteworthy. In addition, editors may reject a paper expecting authors to make their work more compelling.

Let’s pick this apart a bit.

“Because they recently published a similar article”? What is this nonsense? Does the Journal of Vertebrate Paleontology reject a paper on, say, ornithopod ontogeny because “we published something on ornithopod ontogeny a few months ago”? No, it doesn’t, because it’s a serious journal.

“Because the formatting of the article is incorrect”? What is this idiocy? If the formatting is incorrect, the job of the publisher is to correct it. That’s literally what they’re there for.

“Expecting authors to make their work more compelling”. This is code for sexing up the results, maybe dropping that inconvenient outlier, getting p below 0.05 … in short, fraud. The very last thing we need more of.

Elsewhere this paper suggests:

… adjusting an original research paper to a letter to the editor or shifting the focus to make the same content into a commentary or narrative essay.

Needless to say, this is putting the cart before the horse. Once we start prioritising what kind of content a journal would like to have ahead of what our work actually tells us, we’re not scientists any more.

Then there is this:

Most manuscripts can eventually find a home in a PubMed-indexed journal if the authors continually modify the manuscript to the specifications of the editors.

I’m not saying this is incorrect. I’m not even saying it’s not good advice. But I worry about the attitude that it communicates — that editors are capricious gods whose whims are to be satisfied. Editors should be, and good editors are, partners in the process of bringing a work to publication, not barriers.

Next up:

Studies confirming something already well known and supported might not be suitable for publication, but looking for a different perspective or a new angle to make it a new contribution to the literature may be useful.

In other words, if you run an experiment, however well you do the work and however well you write the paper, you should expect to have it rejected if the result doesn’t excite the editor. But if you can twist it into something that does excite the editor, you might be OK. Is this really how we want to encourage researchers to behave?

I’ve seen studies like this. I have seen projects that set out to determine how tibia shape correlates with lifestyle in felids, find out the rather important fact that there is no correlation, and instead report that Principal Component 1, which explains 4.2% of the morphological variation, sort of shows a slight grouping if you squint hard and don’t mind all your groups overlapping. (Note: all details changed to protect the guilty. I know nothing of felid tibiae.) I don’t wish to see more such reporting. I want to know what a study actually showed, not what an editor thought might be exciting.

But here is why I am so unhappy about this paper.

It’s that the authors seem so cheerful about all this. That they serenely accept it as a law of the universe that perfectly good papers can be rejected for the most spurious of reasons, and that the proper thing to do is smile broadly and take your ass to the next ass-kicking station.

It doesn’t seem to occur to them that there are other ways of doing scientific communication: ways that are constructive rather than adversarial, ways that aim to get at the truth rather than aiming at being discussed in a Malcolm Gladwell book[1], ways that make the best use of researchers’ work instead of discarding what is inconvenient.

Folks, we have to do better. Those of us in senior positions have to make sure we’re not teaching our students that the psychopathic systems we had to negotiate are a law of the universe.

References

Campbell, Kendall M., Judy C. Washington, Donna Baluchi and José E. Rodríguez. 2022. Submit It Again! Learning From Rejected Manuscripts. PRiMER. 6:42. doi:10.22454/PRiMER.2022.715584

Notes

  1. I offer the observation that any finding reported and discussed in a Malcolm Gladwell book seems to have about an 80% chance of being shown to be incorrect some time in the next ten years. In the social sciences, particularly, a good heuristic for guessing whether or not a given result is going to replicate is to ask: has it been in a Gladwell book?

 

Years ago, when I was young and stupid, I used to read papers containing phylogenetic analyses and think, “Oh, right, I see now, Euhelopus is not a mamenchisaurid after all, it’s a titanosauriform”. In other words, I believed the result that the computer spat out. Some time after that, I learned how to use PAUP* and run my own phylogenetic analysis and realised how vague and uncertain such results are, and how easily changed by tweaking a few parameters.

These days good papers that present phylogenetic analysis are very careful to frame the results as the tentative hypotheses that they are. (Except when they’re in Glam Mags, of course: there’s no space for that kind of nuance in those venues.)

It’s common now for careful work to present multiple different and contradictory phylogenetic hypotheses, arrived at by different methods or based on different matrices. For just one example, see how Upchurch et al.’s (2015) redescription of Haestasaurus (= “Pelorosaurus“) becklesii presents that animal as a camarasaurid (figure 15, arrived at by modifying the matrix of Carballido et al. 2011), as a very basal macronarian (figure 16, arrived at by modifying the continuous-and-discrete-characters matrix of Mannion et al. 2013), and as a basal titanosaur (figure 17, arrived at by modifying the discrete-characters-only matrix from the same paper). This is careful and courageous reporting, shunning the potential headline “World’s oldest titanosaur!” in favour of doing the work right. [1]

But the thing that really makes you understand how fragile phylogenetic analyses are is running one yourself. There’s no substitute for getting your hands dirty and seeing how the sausage is made.

And I was reminded of this principle today, in a completely different context, by a tweet from Alex Holcombe:

Some of us lost our trust in science, and in peer review, in a journal club. There we saw how many problems a bunch of ECRs notice in the average article published in a fancy journal.

Alex relays (with permission) this anecdote from an anonymous student in his Good Science, Bad Science class:

In the introduction of the article, the authors lay forth four very specific predictions that, upon fulfillment, would support their hypothesis. In the journal club, one participant actually joked that it read very much as though the authors ran the analysis, derived these four key findings, and then copy-pasted them in to the introduction as though they were thought of a priori. I’m not an expert in this field and I don’t intend to insinuate that anything untoward was done in the paper, but I remember several participants agreeing that the introduction and general framework of the paper indeed felt very “HARKed“.

Here’s the problem: as the original tweet points out, this is about “problems a bunch of ECRs notice in the average article published in a fancy journal”. These are articles that have made it through the peer-review gauntlet and reached the promised land of publication. Yet still these foundational problems persist. In other words, peer-review did not resolve them.

I’m most certainly not suggesting that the peer-review filter should become even more obstructive than it is now. For my money it’s already swung way too far in that direction.

But I am suggesting we should all remain sceptical of peer-reviewed articles, just as we rightly are of preprints. Peer-review ain’t nuthin’ … but it ain’t much. We know from experiment that the chance of an article passing peer review is made up of one third article quality, one third how nice the reviewer is and one third totally random noise. More recently we found that papers with a prestigious author’s name attached are far more likely to be accepted, irrespective of the content (Huber et al. 2022).

Huber et al. 2022, figure 1.

We need to get away from a mystical or superstitious view of peer-review as a divine seal of approval. We need to push back against wise-sounding pronouncements such as “Good reporting would have noted that the paper has not yet been peer-reviewed” as though this one bit of information is worth much.

Yeah, I said it.


Notes

  1. Although I am on the authorship of Upchurch et al. (2015), I can take none of the credit for the comprehensiveness and honesty of the phylogenetics section: all of that is Paul and Phil’s work.

 

I have a new article out in the Journal of Data and Information Science (Taylor 2022), on a subject that will be familiar to long-time readers. It’s titled “I don’t peer-review for non-open journals, and neither should you”, and honestly if you’ve read the title, you’ve sort of read the paper :-)

But if you want the reasons why I don’t peer-review for non-open journals, and the reasons why you shouldn’t either, you can find them in the article, which is a quick and easy read of just three pages. I’ll be happy to discuss any disagreements in the comments (or indeed any agreements!).


The peer-review cycle as it works at most established journals. Green lines show the positive path; red lines show the negative path; amber lines show the path of delay. Modified from Taylor and Wedel (in press: figure 1).

Many aspects of scholarly publishing are presently in flux. But for most journals the process of getting a paper published remains essentially the same as decades ago, the main change being that documents are sent electronically rather than by post.

It begins with the corresponding author of the paper submitting a manuscript — sometimes, though not often, in response to an invitation from a journal editor. The journal assigns a handling editor to the manuscript, and that editor decides whether the submission meets basic criteria: is it a genuine attempt at scholarship rather than an advertisement? Is it written clearly enough to be reviewed? Is it new work not already published elsewhere?

Assuming these checks are passed, the editor sends the manuscript out to potential reviewers. Since review is generally unpaid and qualified reviewers have many other commitments, review invitations may be declined, and the editor may have to send many requests before obtaining the two or three reviews that are typically used.

Each reviewer returns a report assessing the manuscript in several aspects (soundness, clarity, novelty, perhaps perceived impact) and recommending a verdict. The handling editor reads these reports and sends them to the author along with a verdict: this may be rejection, in which case the paper is not published (and the author may try again at a different journal); acceptance, in which case the paper is typeset and published; or more often a request for revisions along the lines suggested by the reviewers.

The corresponding author (with the co-authors) then prepares a revised version of the manuscript and a response letter, the latter explaining what changes have been made and which have not: the authors can push back on reviewer requests that they do not agree with. These documents are returned to the handling editor, who may either make a decision directly, or send the revised manuscript out for another round of peer review (either with the original reviewers or less often with new reviewers). This cycle continues as many times as necessary to arrive at either acceptance or rejection.
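For readers who like seeing a process spelled out, here is a minimal sketch of that cycle as a toy state machine in Python. It simply mirrors the loop described above; the function names, verdict strings and eight-round cap are my own illustrative choices, not any real journal’s system.

```python
# Toy model of the editorial cycle described above: review, decision,
# revise-and-resubmit, looping until acceptance or rejection.
# All names and the round cap are illustrative, not any journal's real workflow.

from typing import Callable, List

def editorial_cycle(get_reviews: Callable[[int], List[str]],
                    decide: Callable[[List[str]], str],
                    max_rounds: int = 8) -> str:
    """Run rounds of review until the handling editor accepts or rejects.

    get_reviews(round_number) -> list of reviewer reports
    decide(reports) -> "accept", "reject" or "revise"
    """
    for round_number in range(1, max_rounds + 1):
        reports = get_reviews(round_number)   # editor solicits reviews
        verdict = decide(reports)             # editor weighs the reports
        if verdict in ("accept", "reject"):
            return verdict                    # cycle ends
        # "revise": the authors prepare a revised manuscript and a response
        # letter, and the loop continues with another round of review
    return "reject"                           # give up if no decision is reached

# Example: major revisions in round 1, acceptance in round 2.
verdict = editorial_cycle(
    get_reviews=lambda n: ["minor revisions"] if n > 1 else ["major revisions"],
    decide=lambda reports: "accept" if reports == ["minor revisions"] else "revise",
)
print(verdict)  # accept
```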

As I was clearing out some old documents, I stumbled on this form from 2006:

This was back when Paul Upchurch’s dissertation, then only 13 years old, contained much that was still unpublished in more formal venues, notably the description of what was then “Pelorosaurus” becklesii. As a fresh young sauropod researcher I was keen to read this and other parts of what was then almost certainly the most important and comprehensive single publication about sauropods.

I remember contacting Paul directly to ask if he could send a copy, but he didn’t have it in electronic form. So I wrote (on his advice, I think) to Cambridge University Library to request a copy from them. The form is what I got back, saying sure I could have a copy — for £160.08 if I wanted a photocopy, or £337.80 for a microfilm. Since inflation in the UK has run at about 2.83% per year since 2006, that price of £160.08 back then is equivalent to about £243 in today’s money.
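The inflation arithmetic is just compound growth. For what it’s worth, here is a tiny Python check of that figure; the 2.83% annual rate and the 15-year span are the only inputs, taken from the paragraph above.

```python
# Compound-inflation check of the figure above: £160.08 in 2006,
# grown at roughly 2.83% per year over the 15 years to 2021.
price_2006 = 160.08
annual_rate = 0.0283
years = 2021 - 2006

price_today = price_2006 * (1 + annual_rate) ** years
print(round(price_today))  # ~243
```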

Needless to say, I didn’t pursue this any further (and to my shame I’m not even sure I bothered replying to say no thanks). To this day, I have never read Paul’s dissertation — though 28 years after it was completed, it’s obviously less relevant now than it was back in the day.

What is the point of this story? What information pertains?

My point isn’t really “Look how exploitative Cambridge University Library’s pricing is” — I have no experience in running a library, and no realistic sense of what it costs in staff time and materials to make a photocopy of a substantial dissertation. Perhaps the price was only covering costs.

The point instead is: look how things have changed. Newly minted Ph.Ds now routinely deposit copies of their dissertations in repositories where they can be freely read by anyone, anywhere. Here is one recent example from my own university: Logan King’s 2021 dissertation, Macroevolutionary and Ontogenetic Trends in the Anatomy and Morphology of the Non-Avian Dinosaur Endocranium.

This is important. I often think about the Library Loon’s 2012 blog-post Framing Incremental Gains. It’s easy, if you’re in open-access advocacy, to feel the burden of how ssslllooowwwlllyyy things seem to change. Sometimes I’ve heard people claim that nothing has changed in the last 30 years. I completely understand the frustration that leads people to say such things. But it’s not true. Things have changed a lot, and are still changing fast. We seem to be past the tipping point where more than half of all newly published papers are open access. There is a lot to celebrate.

Of course that’s not to deny that there’s plenty more work to be done, and plenty of other ways our present scholarly publishing infrastructure desperately needs changing. But in pushing for that, let’s not neglect how much things have improved.

Last time, we looked at the difference between cost, value and price, and applied those concepts to simple markets like the one for chairs, and to the complex market that is scholarly publication. We finished with the observation that the price our community pays for the publication of a paper (about $3,333 on average) is about 3–7 times as much as it costs to publish ($500-$1000).

How is this possible? One part of the answer is that the value of a published paper to the community is higher still: were it not so, no-one would be paying. But that can’t be the whole reason.

In an efficient market, competing providers of a good will each try to undercut each other until the prices they charge approach the cost. If, for example, Elsevier and Springer-Nature were competing in a healthy free market, they would each be charging prices around one third of what they are charging now, for fear of being outcompeted by their lower-priced competitor. (Half of those price-cuts would be absorbed just by decreasing the huge profit margins; the rest would have to come from streamlining business processes, in particular things like the costs of maintaining paywalls and the means of passing through them.)

So why doesn’t the Invisible Hand operate on scholarly publishers? Because they are not really in competition. Subscriptions are not substitutable goods because each published article is unique. If I need to read an article in an Elsevier journal then it’s no good my buying a lower-priced Springer-Nature subscription instead: it won’t give me access to the article I need.

(This is one of the reasons why the APC-based model — despite its very real drawbacks — is better than the subscription model: because the editorial-and-publication services offered by Elsevier and Springer-Nature are substitutable. If one offers the service for $3000 and the other for $2000, I can go to the better-value provider. And if some other publisher offers it for $1000 or $500, I can go there instead.)

The last few years have seen huge and welcome strides towards establishing open access as the dominant mode of publication for scholarly works, and currently output is split more or less 50/50 between paywalled and open. We can expect OA to dominate increasingly in future years. In many respects, the battle for OA is won: we’ve not got to VE Day yet, but the D-Day Landings have been accomplished.

Yet big-publisher APCs still sit in the $3000–$5000 range instead of converging on $500-$1000. Why?

Björn Brembs has been writing for years about the fact that every market has a luxury segment: you can buy a perfectly functional wristwatch for $10, yet people spend thousands on high-end watches. He’s long been concerned that if scholarly publishing goes APC-only, then people will be queuing up to pay the €9,500 APC for Nature in what would become a straightforward pay-for-prestige deal. And he’s right: given the outstandingly stupid way we evaluate researchers for jobs, promotion and tenure, lots of people will pay a 10x markup for the “I was published in Nature” badge even though Nature papers are an objectively bad way to communicate research.

But it feels like something stranger is happening here. It’s almost as though the whole darned market is a luxury segment. The average APC funded by the Wellcome Trust in 2018/19 was £2,410 — currently about $3,300. Which is almost exactly the average per-article revenue of $3,333 that we calculated earlier. What’s happening is that the big publishers have landed on APCs at rates that preserve the previous level of income. That is understandable on their part, but what I want to know is: why are we still paying them? Why are all Wellcome’s grantees not walking away from Elsevier and Springer-Nature, and publishing in much cheaper alternatives?

Why, in other words, are market forces not operating here?

I can think of three reasons why researchers prefer to spend $3000 instead of $1000:

  1. It could be that they are genuinely getting a three-times-better service from the big publishers. I mention this purely for completeness, as no evidence supports the hypothesis. There seems to be absolutely no correlation between price and quality of service.
  2. Researchers are coasting on sheer inertia, continuing to submit to the journals they used to submit to back in the bad old days of subscriptions. I am not entirely without sympathy for this: there is comfort in familiarity, and convenience in knowing a journal’s flavour, expectations and editorial board. But are those things worth a 200% markup?
  3. Researchers are buying prestige — or at least what they perceive as prestige. (In reality, I am not convinced that papers in non-exceptional Elsevier or Springer-Nature journals are at all thought of as more prestigious than those in cheaper but better born-OA journals. But for this to happen, it only needs people to think the old journals are more prestigious; it doesn’t need them to be right.)

But underlying all these reasons to go to a more expensive publisher is one very important reason not to bother going to a cheaper one: researchers are spending other people’s money. No wonder they don’t care about the extra few thousand pounds.

How can funders fix this, and get APCs down to levels that approximate publishing cost? I see at least three possibilities.

First, they could stop paying APCs for their grantees. Instead, they could add a fixed sum onto all grants they make — $1,500, say — and leave it up to the researchers whether to spend more on a legacy publisher (supplementing the $1,500 from other sources of their own) or to spend less on a cheaper born-OA publisher and redistribute the excess elsewhere.

Second, funders could simply publish the papers themselves. To be fair, several big funders are doing this now, so we have Wellcome Open Research, Gates Open Research, etc. But doesn’t it seem a bit silly to silo research according to what body awarded the grant that funded it? And what about authors who don’t have a grant from one of these bodies, or indeed any grant at all?

That’s why I think the third solution is best. I would like to see funders stop paying APCs and stop building their own publishing solutions, and instead collaborate to build and maintain a global publishing solution that all researchers could use irrespective of grant-recipient status. I have much to say on what such a solution should look like, but that is for another time.

We have a tendency to be sloppy about language in everyday usage, so that words like “cost”, “value” and “price” are used more or less interchangeably. But economists will tell you that these words have distinct meanings, and picking them apart is crucial to understanding economic transactions. Suppose I am a carpenter and I make chairs:

  • The cost of the chair is what it costs me to make it: raw materials, overheads, my own time, etc.
  • The value of the chair is what it’s worth to you: how much it adds to your lifestyle.
  • The price of the chair is how much you actually pay me for it.

In a functioning market, the value is more than the cost. Say it costs me £60 to make the chair, and it’s worth £100 to you. Then there is a £40 range in which the price could fall and we would both come out of the deal ahead. If you buy the chair for £75, then I have made £15 more than what it cost me to make, so I am happy; and you got it for £25 less than it was worth to you, so you’re happy, too.

(If the value is less than the cost, then there is no happy outcome. The best I can do is dump the product on the market at below cost, in the hope of making back at least some of my outlay.)
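To put the chair numbers in one place: the £40 gap between value and cost is the total surplus, and the price simply decides how that surplus is split between buyer and seller. A trivial sketch using the figures above:

```python
# The chair example: any price between cost and value leaves both
# parties better off; the price decides how the surplus is split.
cost, value, price = 60, 100, 75   # pounds

seller_surplus = price - cost      # £15 over what the chair cost to make
buyer_surplus = value - price      # £25 under what the chair is worth to the buyer
total_surplus = value - cost       # £40 of room in which the price can fall

assert seller_surplus + buyer_surplus == total_surplus
print(seller_surplus, buyer_surplus, total_surplus)  # 15 25 40
```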

So far, so good.

Now let’s think about scientific publications.

There is a growing consensus that the cost of converting a scientific manuscript into a published paper — peer-reviewed, typeset, made machine-readable, references extracted, archived, indexed, sustainably hosted — is on the order of $500-$1000.

The value of a published paper to the world is incredibly hard to estimate, but let’s for now just say that it’s high. (We’ll see evidence of this in a moment.)

The price of a published paper is easier to calculate. According to the 2018 edition of the STM Report (which seems to be the most recent one available), “The annual revenues generated from English-language STM journal publishing are estimated at about $10 billion in 2017 […] collectively publishing over 3 million articles a year” (p5). So, bundling together subscription revenues, APCs, offsets deals and what have you, the average revenue accruing from a paper is $10,000,000,000/3,000,000 = $10,000/3 = $3,333.

(Given that these prices are paid, we can be confident that the value is at least as much, i.e. somewhere north of $3,333 — which is why I was happy earlier to characterise the value as “high”.)

Why is it possible for the price of a paper to be 3–7 times as high as its cost? One part of the answer is that the value is higher still. Were it not so, no-one would be paying. But that can’t be the whole reason.
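Putting the numbers above together makes the ratio explicit. The only inputs are the STM Report’s revenue and article counts and the $500-$1000 cost estimate, so take this as a back-of-the-envelope sketch rather than anything precise:

```python
# Back-of-the-envelope: average revenue per paper from the 2018 STM Report
# figures, and the price-to-cost ratio against the $500-$1000 cost estimate.
revenue = 10_000_000_000   # ~$10 billion in English-language STM revenue, 2017
articles = 3_000_000       # ~3 million articles published per year

avg_price = revenue / articles
print(round(avg_price))    # ~3333 dollars per paper

for cost in (500, 1000):
    # prints "500 6.7" then "1000 3.3": the 3-7x range quoted above
    print(cost, round(avg_price / cost, 1))
```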

Tune in next time to find out the exciting reason why the price of scholarly publishing is so much higher than the cost!

Two days ago, I wrote about what seemed to be an instance of peer review gone very wrong. I’ve now heard from two of the four authors of the paper and from the reviewer in question — both by email, and in comments on the original post — and it’s apparent that I misinterpreted the situation. When the lead author’s tweet mentioned “pushing it through eight rounds of review”, I took this at face value as meaning eight rounds at the same journal with the same reviewers — whereas in fact the reviewer in question reviewed only four drafts. (That still seems like too many to me, but clearly it’s not as ludicrous as the situation as I misread it.) In this light, my assumption that the reviewer was being obstructive was not warranted.

I have decided to retract that article and I offer my apologies to the reviewer, Dave Grossnickle, who approached me very politely off-list to offer the corrections that you can now read in his comment.

THIS POST IS RETRACTED. The reasons are explained in the next post. I wish I had never posted this, but you can’t undo what is done, especially on the Internet, so I am not deleting it but marking it as retracted. I suggest you don’t bother reading on, but it’s here if you want to.

 


Neil Brocklehurst, Elsa Panciroli, Gemma Louise Benevento and Roger Benson have a new paper out (Brocklehurst et al. 2021, natch), showing that the post-Cretaceous radiation of modern mammals was not primarily due to the removal of dinosaurs, as everyone assumed, but to the removal of more primitive mammal-relatives. Interesting stuff, and it’s open access. Congratulations to everyone involved!

Neil Brocklehurst’s “poster” explaining the new paper in broad detail. From the tweet linked below.

Neil summarised the new paper in a thread of twelve tweets, but it was the last one in the thread that caught my eye:

Thanks to all my co-authors for their tireless work on this, pushing it through eight rounds of review (my personal best)

I’m impressed that Neil has maintained his equanimity about this — in public at least — but if he is not going to be furious about it then we, the community, need to be furious on his behalf. Pushed to explain, Neil laid it out in a further tweet:

Was just one reviewer who really didn’t seem to like certain aspects, esp the use of discrete character matrices. Fair enough, can’t please everyone, but the editor just kept sending it back even when two others said our responses to this reviewer should be fine.

Again, somehow this tweet is free of cursing. He is a better man than I would be in that situation. He also doesn’t call out the reviewer by name, nor the spineless handling editor, which again shows great restraint — though I am not at all sure it’s the right way to go.

There is so, so much to hate about this story:

  • The obstructive peer reviewer, who seems to have got away with his reputation unblemished by these repeated acts of vandalism. (I’m assuming he was one of the two anonymous reviewers, not the one who identified himself.)
  • The handling editor who had half a dozen opportunities to put an end to the round-and-round, and passed on at least five of them. Do your job! Handle the manuscript! Don’t just keep kicking it back to a reviewer who you know by this stage is not acting in good faith.
  • The failure of the rest of the journal’s editorial board to step in and bring some sanity to the situation.
  • The normalization of this kind of thing — arguably not helped by Neil’s level-headed recounting of the story as though it’s basically reasonable — as something authors should expect, and just have to put up with.
  • The time wasted: the other research not done while the authors were pithering around back and forth with the hostile reviewer.

It’s the last of these that pains me the most. Of all the comforting lies we tell ourselves about conventional peer review, the worst is that it’s worth all the extra time and effort because it makes the paper better.

It’s not worth it, is it?

Maybe Brocklehurst et al. 2021 is a bit better for having gone through the 3rd, 4th, 5th, 6th, 7th and 8th rounds of peer review. But if it is, then it’s a marginal difference, and my guess is that in fact it’s no better and no worse than what they submitted after the second round. All that time, they could have been looking at specimens, generating hypotheses, writing descriptions, gathering data, plotting graphs, writing blogs, drafting papers — instead they have been frittering away their time in a pointless and destructive conflict with someone whose only goal was to prevent the advancement of science because an aspect of the paper happened to conflict with a bee he had in his bonnet. We have to stop this waste.

This incident has reinforced my growing conviction that venues like Qeios, Peer Community in Paleontology and bioRxiv (now that it’s moving towards support for reviewing) are the way to go. Our own experience at Qeios has been very good — if it works this well the next time we use it, I think it’s a keeper. Crucially, I don’t believe our paper (Taylor and Wedel 2021) would have been stronger if it had gone through the traditional peer-review gauntlet; instead, I think it’s stronger than it would otherwise have been, because it’s received reviews from more pairs of eyes, and each of them with a constructive approach. Quicker publication, less work for everyone involved, more collegial process, better final result — what’s not to like?


A month after Matt and I published our paper “Why is vertebral pneumaticity in sauropod dinosaurs so variable?” at Qeios, we were bemoaning how difficult it was to get anyone to review it. But what a difference the last nineteen days have made!

In that time, we’ve had five reviews, and posted three revisions: revision 2 in response to a review by Mark McMenamin, version 3 in response to a review by Ferdinand Novas, and version 4 in response to reviews by Leonardo Cotts, by Alberto Collareta, and by Eduardo Jiménez-Hidalgo.

Taylor and Wedel (2021: Figure 2). Proximal tail skeleton (first 13 caudal vertebrae) of LACM Herpetology 166483, a juvenile specimen of the false gharial Tomistoma schlegelii. A: close-up of caudal vertebrae 4–6 in right lateral view, red circles highlighting vascular foramina: none in Ca4, two in Ca5 and one in Ca6. B: right lateral view. C: left lateral view (reversed). D: close-up of caudal vertebrae 4–6 in left lateral view (reversed), red circles highlighting vascular foramina: one each in Ca4, Ca5 and Ca6. In right lateral view, vascular foramina are apparent in the centra of caudal vertebrae 5–7 and 9–11; they are absent or too small to make out in vertebrae 1–4, 8 and 12–13. In left lateral view (reversed), vascular foramina are apparent in the centra of caudal vertebrae 4–7 and 9; they are absent or too small to make out in vertebrae 1–3, 8, and 10–13. Caudal centra 5–7 and 9 are therefore vascularised from both sides; 4 and 10–11 from one side only; and 1–3, 8 and 12–13 not at all.

There are a few things to say about this.

First, this is now among our most reviewed papers. Thinking back across all my publications, most have been reviewed by two people; the original Xenoposeidon description was reviewed by three; the same was true of my reassessment of Xenoposeidon as a rebbachisaur, and there may have been one or two more that escape me at the moment. But I definitely can’t think of any papers that have been under five sets of eyes apart from this one in Qeios.

Now I am not at all saying that all five of the reviews on this paper are as comprehensive and detailed as a typical solicited peer review at a traditional journal. Some of them have detailed observations; others are much more cursory. But they all have things to say — which I will return to in my third point.

Second, Qeios has further decoupled the functions of peer review. Traditional peer review combines three rather separate functions: A, Checking that the science is sound before publishing it; B, assessing whether it’s a good fit for the journal (often meaning whether it’s sexy enough); and C, helping the authors to improve the work. When PLOS ONE introduced correctness-only peer-review, they discarded B entirely, reasoning correctly that no-one knows which papers will prove influential[1]. Qeios goes further by also inverting A. By publishing before the peer reviews are in (or indeed solicited), it takes away the gatekeeper role of the reviewers, leaving them with only function C, helping the authors to improve the work. Which means it’s no surprise that …

Third, all five reviews have been constructive. As Matt has written elsewhere, “There’s no way to sugar-coat this: getting reviews back usually feels like getting kicked in the gut”. This is true, and we both have a disgraceful record of allowing harshly-reviewed projects to sit fallow for far too long before doing the hard work of addressing the points made by the reviewers and resubmitting[2].

The contrast with the reviews from Qeios has been striking. Each one has sent me scampering back to the manuscript, keen to make (most of) the suggested changes — hence the three revised versions that I’ve posted in the last fortnight. I think there are at least two reasons for this, a big one and a small one.

  • The big reason, I think, is that the reviewers know their only role is to improve the paper. Well, that’s not quite true: they also have some influence over its evaluation, both in what they write and in assigning a 1-to-5 star score. But they know when they’re writing their reviews that whatever happens, they won’t block publication. This means, firstly, that there is no point in their writing something like “This paper should not be published until the authors do X”; but equally importantly, I think it puts reviewers in a different and more constructive mindset. They feel themselves to be allies of the authors rather than (as can happen) adversaries.
  • The smaller reason is that it’s easier to deal with one review at a time. I understand why journals solicit multiple reviews: so the handling editor can consider them all in reaching a decision. And I understand why the authors get all the reviews back at once: once the decision has been made, they’re all on hand and there’s no point in stringing them out. But that process can’t help but be discouraging. One at a time may not be better, exactly; but it’s emotionally easier.

Is this all upside? Well, it’s too early to say. We’ve only done this once. The experience has certainly been more pleasant — and, crucially, much more efficient — than the traditional publishing lifecycle. But I’m aware of at least two potential drawbacks:

First, the publish-first lifecycle could be exploited by cranks. If the willingness to undergo peer-review is the mark of seriousness in a researcher — and if non-serious researchers are unwilling to face that gauntlet — then a venue that lets you make an end-run around peer-review is an obvious loophole. How serious a danger is this? Only time will tell, but I am inclined to think maybe not too serious. Bad papers on a site like Qeios will attract negative reviews and low scores, especially if they start to get noticed in the mainstream media. They won’t be seen as having the stamp of having passed peer-review; rather, they will be branded with having publicly failed peer-review.

Second, it’s still not clear where reviewers will come from. We wrote about this problem in some detail last month, and although it’s worked out really well for our present paper, that’s no guarantee that it will always work out this well. We know that Qeios itself approached at least one reviewer to solicit their comments: that’s great, and if they can keep doing this then it will certainly help. But it probably won’t scale, so either a different reviewing culture will need to develop, or we will need people who — perhaps only on an informal basis — take it on themselves to solicit reviews from others. We’re interested to see how this develops.

Anyway, Matt and I have found our first Qeios experience really positive. We’ve come out of it with what I think is a good paper, relatively painlessly, and with much less friction than the usual process. I hope that some of you will try it, too. To help get the process rolling, I personally undertake to review any Qeios article posted by an SV-POW! reader. Just leave a comment here to let me know about your article when it’s up.

 

Notes

[1] “No-one knows which papers will prove influential”. As purely anecdotal evidence for this claim: when I wrote “Sauropod dinosaur research: a historical review” for the Geological Society volume Dinosaurs: A Historical Perspective, I thought it might become a citation monster. It’s done OK, but only OK. Conversely, it never occurred to me that “Head and neck posture in sauropod dinosaurs inferred from extant animals” would be of more than specialist interest, but it’s turned out to be my most cited paper. I bet most researchers can tell similar stories.

[2] One example: my 2015 preprint on the incompleteness of sauropod necks was submitted for publication in October 2015, and the reviews[3] came back that same month. Five and a half years later, I am only now working on the revision and resubmission. If you want other examples, we got ’em. I am not proud of this.

[3] I referred above to “harsh reviews” but in fact the reviews for this paper were not harsh; they were hard, but 100% fair, and I found myself agreeing with about 90% of the criticisms. That has certainly not been true of all the reviews I have found disheartening!