Last time, we looked at the difference between cost, value and price, and applied those concepts to simple markets like the one for chairs, and the complex market that is scholarly publication. We finished with the observation that the price our community pays for the publication of a paper (about $3,333 on average) is about 3–7 times as much as it costs to publish ($500–$1,000).

How is this possible? One part of the answer is that the value of a published paper to the community is higher still: were it not so, no-one would be paying. But that can’t be the whole reason.

In an efficient market, competing providers of a good will each try to undercut each other until the prices they charge approach the cost. If, for example, Elsevier and Springer-Nature were competing in a healthy free market, they would each be charging prices around one third of what they are charging now, for fear of being outcompeted by their lower-priced competitor. (Half of those price-cuts would be absorbed just by decreasing the huge profit margins; the rest would have to come from streamlining business processes, in particular things like the costs of maintaining paywalls and the means of passing through them.)

So why doesn’t the Invisible Hand operate on scholarly publishers? Because they are not really in competition. Subscriptions are not substitutable goods because each published article is unique. If I need to read an article in an Elsevier journal then it’s no good my buying a lower-priced Springer-Nature subscription instead: it won’t give me access to the article I need.

(This is one of the reasons why the APC-based model — despite its very real drawbacks — is better than the subscription model: because the editorial-and-publication services offered by Elsevier and Springer-Nature are substitutable. If one offers the service for $3000 and the other for $2000, I can go to the better-value provider. And if some other publisher offers it for $1000 or $500, I can go there instead.)

The last few years have seen huge and welcome strides towards establishing open access as the dominant mode of publication for scholarly works, and currently output is split more or less 50/50 between paywalled and open. We can expect OA to dominate increasingly in future years. In many respects, the battle for OA is won: we’ve not got to VE Day yet, but the D-Day Landings have been accomplished.

Yet big-publisher APCs still sit in the $3,000–$5,000 range instead of converging on $500–$1,000. Why?

Björn Brembs has been writing for years about the fact that every market has a luxury segment: you can buy a perfectly functional wristwatch for $10, yet people spend thousands on high-end watches. He’s long been concerned that if scholarly publishing goes APC-only, then people will be queuing up to pay the €9,500 APC for Nature in what would become a straightforward pay-for-prestige deal. And he’s right: given the outstandingly stupid way we evaluate researchers for jobs, promotion and tenure, lots of people will pay a 10x markup for the “I was published in Nature” badge even though Nature papers are an objectively bad way to communicate research.

But it feels like something stranger is happening here. It’s almost as though the whole darned market is a luxury segment. The average APC funded by the Wellcome Trust in 2018/19 was £2,410 — currently about $3,300. Which is almost exactly the average article cost of $3,333 that we calculated earlier. What’s happening is that the big publishers have landed on APCs at rates that preserve the previous level of income. That is understandable on their part, but what I want to know is why are we still paying them? Why are all Wellcome’s grantees not walking away from Elsevier and Springer-Nature, and publishing in much cheaper alternatives?

Why, in other words, are market forces not operating here?

I can think of three reasons why researchers prefer to spend $3000 instead of $1000:

  1. It could be that they are genuinely getting a three-times-better service from the big publishers. I mention this purely for completeness, as no evidence supports the hypothesis. There seems to be absolutely no correlation between price and quality of service.
  2. Researchers are coasting on sheer inertia, continuing to submit to the journals they used to submit to back in the bad old days of subscriptions. I am not entirely without sympathy for this: there is comfort in familiarity, and convenience in knowing a journal’s flavour, expectations and editorial board. But are those things worth a 200% markup?
  3. Researchers are buying prestige — or at least what they perceive as prestige. (In reality, I am not convinced that papers in non-exceptional Elsevier or Springer-Nature journals are at all thought of as more prestigious than those in cheaper but better born-OA journals. But for this to happen, it only needs people to think the old journals are more prestigious, it doesn’t need them to be right.)

But underlying all these reasons to go to a more expensive publisher is one very important reason not to bother going to a cheaper publisher: researchers are spending other people’s money. No wonder they don’t care about the extra few thousand pounds.

How can funders fix this, and get APCs down to levels that approximate publishing cost? I see at least three possibilities.

First, they could stop paying APCs for their grantees. Instead, they could add a fixed sum onto all grants they make — $1,500, say — and leave it up to the researchers whether to spend more on a legacy publisher (supplementing the $1,500 from other sources of their own) or to spend less on a cheaper born-OA publisher and redistribute the excess elsewhere.

Second, funders could simply publish the papers themselves. To be fair, several big funders are doing this now, so we have Wellcome Open Research, Gates Open Research, etc. But doesn’t it seem a bit silly to silo research according to what body awarded the grant that funded it? And what about authors who don’t have a grant from one of these bodies, or indeed any grant at all?

That’s why I think the third solution is best. I would like to see funders stop paying APCs and stop building their own publishing solutions, and instead collaborate to build and maintain a global publishing solution that all researchers could use irrespective of grant-recipient status. I have much to say on what such a solution should look like, but that is for another time.

We have a tendency to be sloppy about language in everyday usage, so that words like “cost”, “value” and “price” are used more or less interchangeably. But economists will tell you that the words have distinct meanings, and picking them apart is crucial to understanding economic transactions. Suppose I am a carpenter and I make chairs:

  • The cost of the chair is what it costs me to make it: raw materials, overheads, my own time, etc.
  • The value of the chair is what it’s worth to you: how much it adds to your lifestyle.
  • The price of the chair is how much you actually pay me for it.

In a functioning market, the value is more than the cost. Say it costs me £60 to make the chair, and it’s worth £100 to you. Then there is a £40 range in which the price could fall and we would both come out of the deal ahead. If you buy the chair for £75, then I have made £15 more than what it cost me to make, so I am happy; and you got it for £25 less than it was worth to you, so you’re happy, too.
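The arithmetic of that transaction can be sketched in a few lines of Python (the numbers are taken directly from the chair example above; the code is purely illustrative):

```python
# Toy model of the chair transaction described above.
cost = 60    # what it costs the carpenter to make the chair (GBP)
value = 100  # what the chair is worth to the buyer (GBP)
price = 75   # what the buyer actually pays (GBP)

seller_surplus = price - cost   # carpenter's gain over cost: 15
buyer_surplus = value - price   # buyer's gain over price: 25
bargaining_range = value - cost # room for a mutually beneficial price: 40

# Any price strictly between cost and value leaves both parties ahead.
assert seller_surplus > 0 and buyer_surplus > 0
print(seller_surplus, buyer_surplus, bargaining_range)
```

Note that the two surpluses always sum to the £40 bargaining range; the price just determines how that range is split between the carpenter and the buyer.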

(If the value is less than the cost, then there is no happy outcome. The best I can do is dump the product on the market at below cost, in the hope of making back at least some of my outlay.)

So far, so good.

Now let’s think about scientific publications.

There is a growing consensus that the cost of converting a scientific manuscript into a published paper — peer-reviewed, typeset, made machine-readable, references extracted, archived, indexed, sustainably hosted — is on the order of $500–$1,000.

The value of a published paper to the world is incredibly hard to estimate, but let’s for now just say that it’s high. (We’ll see evidence of this in a moment.)

The price of a published paper is easier to calculate. According to the 2018 edition of the STM Report (which seems to be the most recent one available), “The annual revenues generated from English-language STM journal publishing are estimated at about $10 billion in 2017 […] collectively publishing over 3 million articles a year” (p5). So, bundling together subscription revenues, APCs, offsets deals and what have you, the average revenue accruing from a paper is $10,000,000,000/3,000,000 = $10,000/3 = $3,333.
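As a sanity check, that back-of-envelope division can be reproduced directly (figures as quoted from the 2018 STM Report; illustrative only):

```python
# Average revenue per paper, using the STM Report's 2017 figures quoted above.
annual_revenue = 10_000_000_000  # ~$10 billion from English-language STM journals
articles_per_year = 3_000_000    # ~3 million articles published per year

average_price = annual_revenue / articles_per_year
print(round(average_price))  # ~3333 dollars per paper

# Ratio of that average price to the estimated $500-$1,000 publication cost:
print(average_price / 1000, average_price / 500)  # roughly 3.3x to 6.7x
```

That last line is where the “3–7 times” figure used throughout these posts comes from.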

(Given that these prices are paid, we can be confident that the value is at least as much, i.e. somewhere north of $3,333 — which is why I was happy earlier to characterise the value as “high”.)

Why is it possible for the price of a paper to be 3–7 times as high as its cost? One part of the answer is that the value is higher still. Were it not so, no-one would be paying. But that can’t be the whole reason.

Tune in next time to find out the exciting reason why the price of scholarly publishing is so much higher than the cost!

A month after Matt and I published our paper “Why is vertebral pneumaticity in sauropod dinosaurs so variable?” at Qeios, we were bemoaning how difficult it was to get anyone to review it. But what a difference the last nineteen days have made!

In that time, we’ve had five reviews, and posted three revisions: revision 2 in response to a review by Mark McMenamin, version 3 in response to a review by Ferdinand Novas, and version 4 in response to reviews by Leonardo Cotts, by Alberto Collareta, and by Eduardo Jiménez-Hidalgo.

Taylor and Wedel (2021: Figure 2). Proximal tail skeleton (first 13 caudal vertebrae) of LACM Herpetology 166483, a juvenile specimen of the false gharial Tomistoma schlegelii. A: close-up of caudal vertebrae 4–6 in right lateral view, red circles highlighting vascular foramina: none in Ca4, two in Ca5 and one in Ca6. B: right lateral view. C: left lateral view (reversed). D: close-up of caudal vertebrae 4–6 in left lateral view (reversed), red circles highlighting vascular foramina: one each in Ca4, Ca5 and Ca6. In right lateral view, vascular foramina are apparent in the centra of caudal vertebrae 5–7 and 9–11; they are absent or too small to make out in vertebrae 1–4, 8 and 12–13. In left lateral view (reversed), vascular foramina are apparent in the centra of caudal vertebrae 4–7 and 9; they are absent or too small to make out in vertebrae 1–3, 8, and 10–13. Caudal centra 5–7 and 9 are therefore vascularised from both sides; 4 and 10–11 from one side only; and 1–3, 8 and 12–13 not at all.

There are a few things to say about this.

First, this is now among our most reviewed papers. Thinking back across all my publications, most have been reviewed by two people; the original Xenoposeidon description was reviewed by three; the same was true of my reassessment of Xenoposeidon as a rebbachisaur, and there may have been one or two more that escape me at the moment. But I definitely can’t think of any papers that have been under five sets of eyes apart from this one in Qeios.

Now I am not at all saying that all five of the reviews on this paper are as comprehensive and detailed as a typical solicited peer review at a traditional journal. Some of them have detailed observations; others are much more cursory. But they all have things to say — which I will return to in my third point.

Second, Qeios has further decoupled the functions of peer review. Traditional peer review combines three rather separate functions: A, Checking that the science is sound before publishing it; B, assessing whether it’s a good fit for the journal (often meaning whether it’s sexy enough); and C, helping the authors to improve the work. When PLOS ONE introduced correctness-only peer-review, they discarded B entirely, reasoning correctly that no-one knows which papers will prove influential[1]. Qeios goes further by also inverting A. By publishing before the peer reviews are in (or indeed solicited), it takes away the gatekeeper role of the reviewers, leaving them with only function C, helping the authors to improve the work. Which means it’s no surprise that …

Third, all five reviews have been constructive. As Matt has written elsewhere, “There’s no way to sugar-coat this: getting reviews back usually feels like getting kicked in the gut”. This is true, and we both have a disgraceful record of allowing harshly-reviewed projects to sit fallow for far too long before doing the hard work of addressing the points made by the reviewers and resubmitting[2].

The contrast with the reviews from Qeios has been striking. Each one has sent me scampering back to the manuscript, keen to make (most of) the suggested changes — hence the three revised versions that I’ve posted in the last fortnight. I think there are at least two reasons for this, a big one and a small one.

  • The big reason, I think, is that the reviewers know their only role is to improve the paper. Well, that’s not quite true: they also have some influence over its evaluation, both in what they write and in assigning a 1-to-5 star score. But they know when they’re writing their reviews that whatever happens, they won’t block publication. This means, firstly, that there is no point in their writing something like “This paper should not be published until the authors do X”; but equally importantly, I think it puts reviewers in a different and more constructive mindset. They feel themselves to be allies of the authors rather than (as can happen) adversaries.
  • The smaller reason is that it’s easier to deal with one review at a time. I understand why journals solicit multiple reviews: so the handling editor can consider them all in reaching a decision. And I understand why the authors get all the reviews back at once: once the decision has been made, they’re all on hand and there’s no point in stringing them out. But that process can’t help but be discouraging. One at a time may not be better, exactly; but it’s emotionally easier.

Is this all upside? Well, it’s too early to say. We’ve only done this once. The experience has certainly been more pleasant — and, crucially, much more efficient — than the traditional publishing lifecycle. But I’m aware of at least two potential drawbacks:

First, the publish-first lifecycle could be exploited by cranks. If the willingness to undergo peer-review is the mark of seriousness in a researcher — and if non-serious researchers are unwilling to face that gauntlet — then a venue that lets you make an end-run around peer-review is an obvious loophole. How serious a danger is this? Only time will tell, but I am inclined to think maybe not too serious. Bad papers on a site like Qeios will attract negative reviews and low scores, especially if they start to get noticed in the mainstream media. They won’t be seen as having the stamp of having passed peer-review; rather, they will be branded with having publicly failed peer-review.

Second, it’s still not clear where reviewers will come from. We wrote about this problem in some detail last month, and although it’s worked out really well for our present paper, that’s no guarantee that it will always work out this well. We know that Qeios itself approached at least one reviewer to solicit their comments: that’s great, and if they can keep doing this then it will certainly help. But it probably won’t scale, so either a different reviewing culture will need to develop, or we will need people who — perhaps only on an informal basis — take it on themselves to solicit reviews from others. We’re interested to see how this develops.

Anyway, Matt and I have found our first Qeios experience really positive. We’ve come out of it with what I think is a good paper, relatively painlessly, and with much less friction than the usual process. I hope that some of you will try it, too. To help get the process rolling, I personally undertake to review any Qeios article posted by an SV-POW! reader. Just leave a comment here to let me know about your article when it’s up.

 

Notes

[1] “No-one knows which papers will prove influential”. As purely anecdotal evidence for this claim: when I wrote “Sauropod dinosaur research: a historical review” for the Geological Society volume Dinosaurs: A Historical Perspective, I thought it might become a citation monster. It’s done OK, but only OK. Conversely, it never occurred to me that “Head and neck posture in sauropod dinosaurs inferred from extant animals” would be of more than specialist interest, but it’s turned out to be my most cited paper. I bet most researchers can tell similar stories.

[2] One example: my 2015 preprint on the incompleteness of sauropod necks was submitted for publication in October 2015, and the reviews[3] came back that same month. Five and a half years later, I am only now working on the revision and resubmission. If you want other examples, we got ’em. I am not proud of this.

[3] I referred above to “harsh reviews” but in fact the reviews for this paper were not harsh; they were hard, but 100% fair, and I found myself agreeing with about 90% of the criticisms. That has certainly not been true of all the reviews I have found disheartening!

 

Picture is unrelated. Seriously. I’m just allergic to posts with no visuals. Stand by for more random brachiosaurs.

Here’s something I’ve been meaning to post for a while, about my changing ideas about scholarly publishing. On one hand, it’s hard to believe now that the Academic Spring was almost a decade ago. On the other, it’s hard for me to accept that PeerJ will be only 8 years old next week; it has loomed so large in my thinking that it feels like it has been around much longer. The very first PeerJ Preprints went up on April 4, 2013, just about a month and a half after the first papers in PeerJ. At that time it felt like things were moving very quickly, and that the landscape of scholarly publishing might be totally different in just a few years. Looking back now, it’s disappointing how little has changed. Oh, sure, there are more OA options now — even more kinds of OA options, and things like PCI Paleo and Qeios feel genuinely envelope-pushing — but the big barrier-based publishers are still dug in like ticks, and very few journals have fled from those publishers to re-establish themselves elsewhere. APCs are ubiquitous now, and mostly unjustified and ruinously expensive. Honestly, the biggest changes in my practice are that I use preprint servers to make my conference talks available, and I use SciHub instead of interlibrary loan.

But I didn’t sit down to write this post so I could grumble about the system like an old hippie. I’ve learned some things in the past few years, about what actually works in scholarly publishing (at least for me), and about my preferences in some areas, which turn out to be not what I expected. I’ll focus on just two areas today, peer review, and preprints.

How I Stopped Worrying and Learned to Love Peer Review

Surprise #1: I’m not totally against peer review. I realize that the way it is implemented in many places is deeply flawed, and that it’s no guarantee of the quality of a paper, but I also recognize its value. This is not where I was 8 years ago; at the time, I was pretty much in agreement with Mike’s post from November, 2012, “Well, that about wraps it up for peer-review”. But then in 2014 I became an academic editor at PeerJ. And as I gained first-hand experience from the other side of the editorial desk, I realized a few things:

  • Editors have broad remits in terms of subject areas, and without the benefit of peer reviews by people who specialize in areas other than my own, I’m not fit to handle papers on topics other than Early Cretaceous North American sauropods, skeletal pneumaticity, and human lower extremity anatomy.
  • Even at PeerJ, which only judges papers based on scientific soundness, not on perceived importance, it can be hard to tell where the boundary is. I’ve had to reject a few manuscripts at PeerJ, and I would not have felt confident about doing that without the advice of peer reviewers. Even with no perceived importance criterion, there is definitely a lower bound on what counts as a publishable observation. If you find a mammoth toe bone in Nebraska, or a tyrannosaur tooth in Montana, there should probably be something more interesting to say about it, beyond the bare fact of its existence, if it’s going to be the subject of a whole paper.
  • In contentious fields, it can be valuable to get a diversity of opinions. And sometimes, frankly, I need to figure out if the author is a loony, or if it’s actually Reviewer #2 that’s off the rails. Although I think PeerJ generally attracts fairly serious authors, a handful of things that get submitted are just garbage. From what I hear, that’s the case at almost every journal. But it’s not always obvious what’s garbage, what’s unexciting but methodologically sound, and what’s seemingly daring but also methodologically sound. Feedback from reviewers helps me make those calls. Bottom line, I do think the community benefits from having pre-publication filters in place.
  • Finally, I think editors have a responsibility to help authors improve their work, and reviewers catch a lot of stuff that I would miss. And occasionally I catch something that the reviewers missed. We are collectively smarter and more helpful than any of us would be in isolation, and it’s hard to see that as anything other than a good thing.

The moral here probably boils down to, “white guy stops bloviating about Topic X when he gains actual experience”, which doesn’t look super-flattering for me, but that’s okay.

You may have noticed that my pro-peer-review comments are rather navel-gaze-ly focused on the needs of editors. But who needs editors? Why not chuck the whole system? Set up an outlet called Just Publish Everything, and let fly? My answer is that my time in the editorial trenches has convinced me that such a system will silt up with garbage papers, and as a researcher I already have a hard enough time keeping up with all of the emerging science that I need to. From both perspectives, I want there to be some kind of net to keep out the trash. It doesn’t have to be a tall net, or strung very tight, but I’d rather have something than nothing.

What would I change about peer review? Since it launched, PeerJ has let reviewers either review anonymously, or sign their reviews, and it has let authors decide whether or not to publish the reviews alongside the paper. Those were both pretty daring steps at the time, but if I could I’d turn both of those into mandates rather than options. Sunlight is the best disinfectant, and I think almost all of the abuses of the peer review system would evaporate if reviewers had to sign their reviews, and all reviews were published alongside the papers. There will always be a-holes in the world, and some of them are so pathological that they can’t rein in their bad behavior, but if the system forced them to do the bad stuff in the open, we’d all know who they are and we could avoid them.

Femur of Apatosaurus and right humerus of the Brachiosaurus altithorax holotype on wooden pedestal (exhibit) with labels and 6 foot ruler for scale, Geology specimen, Field Columbian Museum, 1905. (Photo by Charles Carpenter/Field Museum Library/Getty Images)

Quo Vadis, Preprints?

Maybe the advent of preprints was more drawn out than I know, but to me it felt like preprints went from being Not a Thing, Really, in 2012, to being ubiquitous in 2013. And, I thought at the time, possibly transformative. They felt like something genuinely new, and when Mike and I posted our Barosaurus preprint and got substantive, unsolicited review comments in just a day or two, that was pretty awesome. Which is why I did not expect…

Surprise #2: I don’t have much use for preprints, at least as they were originally intended. When I first confessed this to Mike, in a Gchat, he wrote, “You don’t have a distaste for preprints. You love them.” And if you just looked at the number of preprints I’ve created, you might get that impression. But the vast majority of my preprints are conference talks, and using a preprint server was just the simplest way to get the abstract and the slide deck up where people could find them. In terms of preprints as early versions of papers that I expect to submit soon, only two really count, neither more recent than 2015. (I’m not counting Mike’s preprint of our vertebral orientation paper from 2019; he’s first author, and I didn’t mind that he posted a preprint, but neither is it something I’d have done if the manuscript was mine alone.)

My thoughts here are almost entirely shaped by what happened with our Barosaurus preprint. We put it up on PeerJ Preprints back in 2013, we got some useful feedback right away, and…we did nothing for a long time. Finally in 2016 we revised the manuscript and got it formally submitted. I think we both expected that since the preprint had already been “reviewed” by commenters, and we’d revised it accordingly, that formal peer review would be very smooth. It was not. And the upshot is that only now, in 2021, are we finally talking about dealing with those reviews and getting the manuscript resubmitted. We haven’t actually done this, mind, we’re just talking about planning to make a start on it. (Non-committal enough for ya?)

Why has it taken us so long to deal with this one paper? We’re certainly capable — the two of us got four papers out in 2013, each of them on a different topic and each of them substantial. So why can’t we climb Mount Barosaurus? I think a big part of it is that we know the world is not waiting for our results, because our results are already out in the world. We’re the only ones being hurt by our inaction — we’re denying ourselves the credit and the respect that go along with having a paper finally and formally published in a peer-reviewed journal. But we can comfort ourselves with the thought that if someone needs our observations to make progress on their own project, we’re not holding them up. Just having the preprint out there has stolen some of our motivation to get the paper done and out, apparently enough to keep us from doing it at all.

Mike pointed out that according to Google Scholar, our Barosaurus preprint has been cited five times to date, once in its original version and four times in its revised version. But to me, the fact that the Baro manuscript has been cited five times is a fail, because all of my peer-reviewed papers from 2014–2016, which have been out for less time, have been cited more. So I read that as people not wanting to cite it. And who can blame them? Even I thought it would be supplanted by the formally-published, peer-reviewed paper within a few weeks or months.

Mike then pointed me to his 2015 post, “Four different reasons to post preprints”, and asked how many of those arguments still worked for me now. Number 2 is good, posting material that would otherwise never see the light of day — it’s basically what I did when I put my dissertation on arXiv. Ditto for 4, which is posting conference presentations. I’m not moved by either 1 or 3. Number 3 is getting something out to the community as quickly as possible, just because you want to, and number 1 is getting feedback as quickly as possible. The reason that neither of those move me is that they’re solved to my satisfaction by existing peer-reviewed outlets. I don’t know of any journals that let reviewers take 2-4 months to review a paper anymore. I don’t know how much credit for the acceleration should go to PeerJ, which asks for reviews in 10 to 14 days, but surely some. And I don’t usually have a high enough opinion of my own work to think that the community will suffer if it takes a few months for a paper to come out through the traditional process.

(If it seems like I’m painting Mike as relentlessly pro-preprint, it’s not my intent. Rather, I’d dropped a surprising piece of news on him, and he was strategically probing to determine the contours of my new and unexpected stance. Then I left the conversation to come write this post while the ideas were all fresh in my head. I hope to find out what he thinks about this stuff in the comments, or ideally in a follow-up post.)

Back to task: at least for me, a preprint of a manuscript I’m going to submit anyway is a mechanism to get extra reviews I don’t want*, and to lull myself into feeling like the work is done when it’s not. I don’t anticipate that I will ever again put up a preprint for one of my own manuscripts if there’s a plausible path to traditional publication.

* That sounds awful. To people who have left helpful comments on my preprints: I’m grateful, sincerely. But not so grateful that I want to do the peer review process a second time for zero credit. I didn’t know that when I used to file preprints of manuscripts, but I know it now, and the easiest way for me to not make more work for both of us is to not file preprints of things I’m planning to submit somewhere anyway.

So much for my preprints; what about those of other people? Time for another not-super-flattering confession: I don’t read other people’s preprints. Heck, I don’t have time to keep up with the peer-reviewed literature, and I have always been convinced by Mike’s dictum, “The real value of peer-review is not as a mark of correctness, but of seriousness” (from this 2014 post). If other people want me to part with my precious time to engage with their work, they can darn well get it through peer review. And — boomerang thought — that attitude degrades my respect for my own preprint manuscripts. I wouldn’t pay attention to them if someone else had written them, so I don’t really expect anyone else to pay attention to the ones that I’ve posted. In fact, it’s extremely flattering that they get read and cited at all, because by my own criteria, they don’t deserve it.

I have to stress how surprising I find this conclusion, that I regard my own preprints as useless at best, and simultaneously extra-work-making and motivation-eroding at worst, for me, and insufficiently serious to be worthy of other people’s time, for everyone else. It’s certainly not where I expected to end up in the heady days of 2013. But back then I had opinions, and now I have experience, and that has made all the difference.

The comment thread is open. What do you think? Better still, what’s your experience?

We’ve noted many times over the years how inconsistent pneumatic features are in sauropod vertebrae. Fossae and foramina vary between individuals of the same species, and along the spinal column, and even between the sides of individual vertebrae. Here’s an example that we touched on in Wedel and Taylor (2013), but which is seen in all its glory here:

Taylor and Wedel (2021: Figure 5). Giraffatitan brancai tail MB.R.5000, part of the mounted skeleton at the Museum für Naturkunde Berlin. Caudal vertebrae 24–26 in left lateral view. While caudal 26 has no pneumatic features, caudal 25 has two distinct pneumatic fossae, likely excavated around two distinct vascular foramina carrying an artery and a vein. Caudal 24 is more shallowly excavated than 25, but may also exhibit two separate fossae.

But bone is usually the least variable material in the vertebrate body. Muscles vary more, nerves more again, and blood vessels most of all. So why are the vertebrae of sauropods so much more variable than other bones?

Our new paper, published today (Taylor and Wedel 2021) proposes an answer! Please read it for the details, but here’s the summary:

  • Early in ontogeny, the blood supply to vertebrae comes from arteries that initially served the spinal cord, penetrating the bone of the neural canal.
  • Later in ontogeny, additional arteries penetrate the centra, leaving vascular foramina (small holes carrying blood vessels).
  • This hand-off does not always run to completion, due to the variability of blood vessels.
  • In extant birds, when pneumatic diverticula enter the bone they do so via vascular foramina, alongside blood vessels.
  • The same was probably true in sauropods.
  • So in vertebrae that got all their blood supply from vascular foramina in the neural canal, diverticula were unable to enter the centra from the outside.
  • So those centra were never pneumatized from the outside, and no externally visible pneumatic cavities were formed.

Somehow that pretty straightforward argument ended up running to eleven pages. I guess that’s what you get when you reference your thoughts thoroughly, illustrate them in detail, and discuss the implications. But the heart of the paper is that little bullet-list.

Taylor and Wedel (2021: Figure 6). Domestic duck Anas platyrhynchos, dorsal vertebrae 2–7 in left lateral view. Note that the two anteriormost vertebrae (D2 and D3) each have a shallow pneumatic fossa penetrated by numerous small foramina.

(What is the relevance of these duck dorsals? You will need to read the discussion in the paper to find out!)

Our choice of publication venue

The world moves fast. It’s strange to think that only eleven years ago my Brachiosaurus revision (Taylor 2009) was in the Journal of Vertebrate Paleontology, a journal that now feels very retro. Since then, Matt and I have both published several times in PeerJ, which we love. More recently, we’ve been posting preprints of our papers — and indeed I have three papers stalled in peer-review revisions that are all available as preprints (two Taylor and Wedels and one sole-authored). But this time we’re pushing on even further into the Shiny Digital Future.

We’ve published at Qeios. (It’s pronounced “chaos”, but the site doesn’t tell you that; I discovered it on Twitter.) If you’ve not heard of it — I was only very vaguely aware of it myself until this evening — it runs on the same model as the better-known F1000 Research, with this very important difference: it’s free. Also, it looks rather slicker.

That model is: publish first, then filter. This is the opposite of the traditional scholarly publishing flow, where you filter first — by peer reviewers erecting a series of obstacles to getting your work out — and only after negotiating that course do you get to see your work published. At Qeios, you go right ahead and publish: it’s available right off the bat, but clearly marked as awaiting peer-review:

And then it undergoes review. Who reviews it? Anyone! Ideally, of course, people with some expertise in the relevant fields. We can then post any number of revised versions in response to the reviews — each revision having its own DOI and being fixed and permanent.

How will this work out? We don’t know. It is, in part, an experiment. What will make it work — what will impute credibility to our paper — is good, solid reviews. So if you have any relevant expertise, we do invite you to get over there and write a review.

And finally …

Matt noted that I first sent him the link to the Qeios site at 7:44 pm my time. I think that was the first time he’d heard of it. He and I had plenty of back and forth on where to publish this paper before I pushed on and did it at Qeios. And I tweeted that our paper was available for review at 8:44 — one hour exactly after Matt learned that the venue existed. Now here we are at 12:04 my time, three hours and 20 minutes later, and it’s already been viewed 126 times and downloaded 60 times. I think that’s pretty awesome.

References

  • Taylor, Michael P. 2009. A re-evaluation of Brachiosaurus altithorax Riggs 1903 (Dinosauria, Sauropoda) and its generic separation from Giraffatitan brancai (Janensch 1914). Journal of Vertebrate Paleontology 29(3):787–806. [PDF]
  • Taylor, Michael P., and Mathew J. Wedel. 2021. Why is vertebral pneumaticity in sauropod dinosaurs so variable? Qeios 1G6J3Q. doi: 10.32388/1G6J3Q [PDF]
  • Wedel, Mathew J., and Michael P. Taylor. 2013b. Caudal pneumaticity and pneumatic hiatuses in the sauropod dinosaurs Giraffatitan and Apatosaurus. PLOS ONE 8(10):e78213. 14 pages. doi: 10.1371/journal.pone.0078213 [PDF]

Cool URIs don’t change

November 26, 2020

It’s now 22 years since Tim Berners-Lee, inventor of the World Wide Web, wrote the classic document Cool URIs don’t change [1]. Its core message is simple, and the title summarises it. Once an organization brings a URI into existence, it should keep it working forever. If the document at that URI moves, then the old URI should become a redirect to the new. This really is Web 101 — absolute basics.

So imagine my irritation when I went to point a friend to Matt’s and my 2013 paper on whether neural-spine bifurcation is an ontogenetic character (spoiler: no), only to find that the paper no longer exists.

Wedel and Taylor (2013b: figure 15). An isolated cervical of cf. Diplodocus MOR 790 8-10-96-204 (A) compared to D. carnegii CM 84/94 C5 (B), C9 (C), and C12 (D), all scaled to the same centrum length. Actual centrum lengths are 280 mm, 372 mm, 525 mm, and 627 mm for A-D respectively. MOR 790 8-10-96-204 modified from Woodruff & Fowler (2012: figure 2B), reversed left to right for ease of comparison; D. carnegii vertebrae from Hatcher (1901: plate 3).

Well — it’s not quite that bad. I was able to go to the website’s home page, navigate to the relevant volume and issue, and find the new location of our paper. So it does still exist, and I was able to update my online list of publications accordingly.

But seriously — this is a really bad thing to do. How many other links might be out there to our paper? All of them are now broken. Every time someone out there follows a link to a PalArch paper — maybe wondering whether that journal would be a good match for their own work — they are going to run into a 404 that says “We can’t run our website properly and can’t be trusted with your work”.

“But Mike, we need to re-organise our site, and —” Ut! No. Let’s allow Sir Tim to explain:

We just reorganized our website to make it better.

Do you really feel that the old URIs cannot be kept running? If so, you chose them very badly. Think of your new ones so that you will be able to keep them running after the next redesign.

Well, we found we had to move the files…

This is one of the lamest excuses. A lot of people don’t know that servers such as Apache give you a lot of control over a flexible relationship between the URI of an object and where a file which represents it actually is in a file system. Think of the URI space as an abstract space, perfectly organized. Then, make a mapping onto whatever reality you actually use to implement it. Then, tell your server.

If you are a responsible organization, then one of the things you are responsible for is ensuring that you don’t break inbound links. If you want to reorganize, fine — but add the redirects.
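Sir Tim’s “abstract URI space” advice is easy to make concrete: the server keeps a permanent mapping from retired URIs to current ones, and answers with a 301 for anything that has moved, so every inbound link keeps working. (In Apache this is a one-line `Redirect permanent` directive from mod_alias.) Here is a minimal sketch in Python; the paths are invented for illustration, not PalArch’s real URLs.

```python
# Sketch of the "abstract URI space" idea: the URI a reader bookmarks is
# permanent; a mapping table records where each moved document lives now.
# (These paths are made up for illustration, not any journal's real URLs.)

OLD_TO_NEW = {
    "/journal/vol10/paper3.pdf": "/archive/2013/vol10/paper3.pdf",
}

def resolve(uri):
    """Return (status, location): 301 for a URI that has moved, 200 otherwise."""
    if uri in OLD_TO_NEW:
        # Permanent redirect: inbound links survive the reorganization.
        return 301, OLD_TO_NEW[uri]
    return 200, uri

print(resolve("/journal/vol10/paper3.pdf"))  # (301, '/archive/2013/vol10/paper3.pdf')
print(resolve("/about"))                     # (200, '/about')
```

The point is that the mapping table is tiny, lives with the server configuration, and costs nothing to maintain compared with the goodwill lost to a 404.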

And look, I’m sorry, I really don’t want to pick on PalArch, which is an important journal. Our field really needs diamond OA journals: that is, venues where vertebrate paleontology articles are free to read and also free to authors. It’s a community-run journal that is not skimming money out of academia for shareholders, and Matt’s and my experience with their editorial handling was nothing but good. I recommend them, and will probably publish there again (despite my current irritation). But seriously, folks.

And by the way, there are much worse offenders than PalArch. Remember Aetogate, the plagiarism-and-claim-jumping scandal in New Mexico that the SVP comprehensively fudged its investigation of? The documents that the SVP Ethics Committee produced, such as they were, were posted on the SVP website in early 2008, and my blog-post linked to them. By July, they had moved, and I updated my links. By July 2013, they had moved again, and I updated my links again. By October 2015 they had moved for a third time: I both updated my links, and made my own copy in case they vanished. Sure enough, by February 2019 they had gone again — either moved for a fourth time or just quietly discarded. This is atrocious stewardship by the flagship society of our discipline, and they should be heartily ashamed that in 2020, anyone who wants to know what they concluded about the Aetogate affair has to go and find their documents on a third-party blog.

Seriously, people! We need to up our game on this!

Cool URIs don’t change.

[1] Why is this about URIs instead of URLs? In the end, no reason. Technically, URIs are a broader category than URLs, and include URNs. But since no-one anywhere in the universe has ever used a URN, in practice URL and URI are synonymous; and since TBL wrote his article in 1998, “URL” has clearly won the battle for hearts and minds and “URI” has diminished and gone into the West. If you like, mentally retitle the article “Cool URLs don’t change”.
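(For the curious, the distinction is visible in any standard parser: a URN is just a URI whose scheme is urn, and tools treat it like any other URI. A quick sketch using Python’s urllib.parse; the ISBN is an arbitrary example.)

```python
# URNs are URIs with the "urn" scheme; Python's urllib.parse accepts both.
from urllib.parse import urlparse

url = urlparse("https://www.w3.org/Provider/Style/URI")  # a garden-variety URL
urn = urlparse("urn:isbn:0451450523")                    # a URN (an example ISBN)

print(url.scheme, url.netloc)  # https www.w3.org
print(urn.scheme, urn.path)    # urn isbn:0451450523
```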

We’re currently in open access week, and one of the things I’ve noticed has been a rash of tweets of the form “I support #OpenAccess because …”. Here is a random collection.

We support #OpenAccess because #OpenScience needs good infrastructures.
— @ZB_MED

We support #OpenAccess because we believe that research results made possible by public funds should be accessible to everyone.
— @TIBHannover

We support #openaccess because it is a powerful means to opening #knowledge to everyone, no matter the structural support of the recipient.
— @openaccessnet

I support #openaccess because it offers the chance to use published research optimally for the benefit of all – using free licenses. And open access has a really nice community.
— @tullney

We support #openaccess because it makes possible “the world-wide electronic distribution of the peer-reviewed literature and free and unrestricted access to it by all scientists, scholars, teachers, students, and other curious minds.”
— @opensciencebern

I support #openaccess because it is key to academic and scientific dialogue.
— @silkebellanger

I support #openaccess because I really believe that it contributes to a better world and does not cost more money but simply coordination.
— @chgutknecht

I support #openaccess because even as a researcher at a well-equipped university you often do not have (official) access to the required scientific literature (e.g. from another discipline).
— @dhuerlimann

I support #openaccess because it “accelerates research and all the goods that depend on research, such as new medicines, useful technologies, solved problems, informed decisions, improved policies, and beautiful understanding.”
— @petersuber

I support #openaccess because no one can predict who will want to use what piece of research when, and we should make sure there aren’t legacy restrictions in the way. We can do better.
— @researchremix

I support #OpenAccess because it is simply silly to spend years working hard to create new knowledge, then hide it in vaults where only a privileged few can see it.
— @MikeTaylor

I love that there are so many different reasons to support open access, from the most practical to the most fundamentally ethical. I love that the reach of open access is now so increased that even senior Elsevier staff like Head of communications P@ul Abrahams have the open-access-@-sign in their Twitter names. I love that cancelling Big Deals is no longer news — so many universities have done it, often with the help of organizations like Unsub who have a lot of experience in figuring out the financial implications.

It’s ridiculous that open access was ever a fight. But it was; and the thing is, it’s a fight that we’re winning.

In the last post, I catalogued some of the reasons why Scientific Reports, in its cargo-cult attempts to ape print journals such as its stablemate Nature, is an objectively bad journal that removes value from the papers submitted to it: the unnatural shortening that relegates important material into supplementary information, the downplaying of methods, the tiny figures that ram unrelated illustrations into compound images, the pointless abbreviating of author names and journal titles.

This is particularly odd when you consider the prices of the obvious alternative megajournals:

So to have your paper published in Scientific Reports costs 10% more than in PLOS ONE, or 56% more than in PeerJ; and results in an objectively worse product that slices the paper up and dumps chunks of it in the back lot, compresses and combines the illustrations, and messes up the narrative.

So why would anyone choose to publish in it?

Well, the answer is depressingly obvious. As a colleague once expressed it to me: “until I have a more stable job I’ll need the highest IFs I can pull off to secure a position somewhere”.

It’s as simple as that. PeerJ’s impact factor at the time of writing is 2.353; PLOS ONE’s is 2.776; that of Scientific Reports is 4.525. And so, in the idiotic world we live in, it’s better for an author’s career to pay more for a worse version of their article in Scientific Reports than to pay less for a better version in PeerJ or PLOS ONE. Because it looks better to have got into Scientific Reports.

BUT WAIT A MINUTE. These three journals are all “megajournals”. They all have the exact same editorial criterion, which is that they accept any paper that is scientifically sound. They make no judgement about novelty, perceived importance or likely significance of the work. They are all completely up front about this. It’s how they work.

In other words, “getting into” Scientific Reports instead of PeerJ says absolutely nothing about the quality of your work, only that you paid a bigger APC.

Can we agree it’s insane that our system rewards researchers for paying a bigger APC to get a less scientifically useful version of their work?

Let me say in closing that I intend absolutely no criticism of Daniel Vidal or his co-authors for placing their Spinophorosaurus posture paper in Scientific Reports. He is playing the ball where it lies. We live, apparently, in a world where spending an extra $675 and accepting a scientifically worse result is good for your career. I can’t criticise Daniel for doing what it takes to get on in that world.

The situation is in every respect analogous to the following: before you attend a job interview, you are told by a respected senior colleague that your chances of getting the post are higher if you are wearing designer clothing. So you take $675 and buy a super-expensive shirt with a prominent label. If you get the job, you’ll consider it a bargain.

But you will never have much respect for the search committee that judged you on such idiotic criteria.

As I was figuring out what I thought about the new paper on sauropod posture (Vidal et al. 2020) I found the paper uncommonly difficult to parse. And I quickly came to realise that this was not due to any failure on the authors’ part, but on the journal it was published in: Nature’s Scientific Reports.

A catalogue of pointless whining

A big part of the problem is that the journal inexplicably insists on moving important parts of the manuscript out of the main paper and into supplementary information. So for example, as I read the paper, I didn’t really know what Vidal et al. meant by describing a sacrum as wedged: did it mean non-parallel anterior and posterior articular surfaces, or just that those surfaces are not at right angles to the long axis of the sacrum? It turns out to be the former, but I only found that out by reading the supplementary information:

The term describes marked trapezoidal shape in the centrum of a platycoelous vertebrae in lateral view or in the rims of a condyle-cotyle (procoelous or opisthocoelous) centrum type.

This crucial information is nowhere in the paper itself: you could read the whole thing and not understand what the core point of the paper is due to not understanding the key piece of terminology.

And the relegation of important material to second-class, unformatted, maybe un-reviewed supplementary information doesn’t end there, by a long way. The SI includes crucial information, and a lot of it:

  • A terminology section of which “wedged vertebrae” is just one of ten sub-sections, including a crucial discussion of different interpretations of what ONP means.
  • All the information about the actual specimens the work is based on.
  • All the meat of the methods, including how the specimens were digitized, retro-deformed and digitally separated.
  • How the missing forelimbs, so important to the posture, were interpreted.
  • How the virtual skeleton was assembled.
  • How the range of motion of the neck was assessed.
  • Comparisons of the sacra of different sauropods.

And lots more. All this stuff is essential to properly understanding the work that was done and the conclusions that were reached.

And there’s more: as well as the supplementary information, which contains six supplementary figures and three supplementary tables, there is an additional supplementary supplementary table, which could quite reasonably have gone into the supplementary information.

In a similar vein, even within the highly compressed actual paper, the Materials and Methods are hidden away at the back, after the Results, Discussion and Conclusion — as though they are something to be ashamed of; or, at best, an unwelcome necessity that can’t quite be omitted altogether, but need not be on display.

Then we have the disappointingly small illustrations: even the “full size” version of the crucial Figure 1 (which contains both the full skeleton and callout illustrations of key bones) is only 1000×871 pixels. (That’s why the illustration of the sacrum that I pulled out of the paper for the previous post was so inadequate.)

Compare that with, for example, the 3750×3098 Figure 1 of my own recent Xenoposeidon paper in PeerJ (Taylor 2018) — that has more than thirteen times as much visual information. And the thing is, you can bet that Vidal et al. submitted their illustration in much higher resolution than 1000×871. The journal scaled it down to that size. In 2020. That’s just crazy.
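(If you want to check that “thirteen times” claim, it is just the ratio of the raw pixel counts of the two figures:)

```python
# Raw pixel counts of the two figures compared in the text.
peerj_figure = 3750 * 3098    # Taylor (2018: figure 1), as published by PeerJ
scirep_figure = 1000 * 871    # Vidal et al. (2020: figure 1), as published

ratio = peerj_figure / scirep_figure
print(round(ratio, 1))  # 13.3
```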

And to make things even worse, unrelated images are shoved into multi-part illustrations. Consider the ridiculousness of figure 2:

Vidal et al. (2020: figure 2). The verticalization of sauropod feeding envelopes. (A) Increased neck range of motion in Spinophorosaurus in the dorso-ventral plane, with the first dorsal vertebra as the vertex and 0° marking the ground. Poses shown: (1) maximum dorsiflexion; (2) highest vertical reach of the head (7.16 m from the ground), with the neck 90° deflected; (3) alert pose sensu Taylor, Wedel and Naish [13]; (4) osteological neutral pose sensu Stevens [14]; (5) lowest vertical reach of the head (0.72 m from the ground at 0°), with the head as close to the ground as possible without flexing the appendicular elements; (6) maximum ventriflexion. Blue indicates the arc described between maximum and minimum head heights. Grey indicates the arc described between maximum dorsiflexion and ventriflexion. (B) Bivariate plot comparing femur/humerus proportion with sacrum angle. The proportions of humerus and femur are compared as a ratio of femur maximum length/humerus maximum length. Sacrum angle measures the angle the presacral vertebral series are deflected from the caudal series by sacrum geometry in osteologically neutral pose. Measurements and taxa in Table 1. Scale = 1000 mm.

It’s perfectly clear that parts A and B of this figure have nothing to do with each other. It would be far more sensible for them to appear as two separate figures — which would allow part B enough space to convey its point much more clearly. (And would save us from a disconcertingly inflated caption.)

And there are other, less important irritants. Authors’ given names not divulged, only initials. I happen to know that D. Vidal is Daniel, and that J. L. Sanz is José Luis Sanz; but I have no idea what the P in P. Mocho, the A in A. Aberasturi or the F in F. Ortega stand for. Journal names in the bibliography are abbreviated, in confusing and sometimes ludicrous ways: is there really any point in abbreviating Palaeogeography, Palaeoclimatology, Palaeoecology to Palaeogeogr. Palaeoclimatol. Palaeoecol.?

The common theme

All of these problems — the unnatural shortening that relegates important material into supplementary information, the downplaying of methods, the tiny figures that ram unrelated illustrations into compound images, even the abbreviating of author names and journal titles — have this in common: that they are aping how Science ’n’ Nature appear in print.

They present a sort of cargo cult: a superstitious belief that extreme space pressures (such as print journals legitimately wrestle with) are somehow an indicator of quality, and an assumption that copying the form of prestigious journals will make the content equally revered.

And this is simply idiotic. Scientific Reports is an open-access web-only journal that has no print edition. It has no rational reason to compress space like a print journal does. In omitting the “aniel” from “Daniel Vidal” it is saving nothing. All it’s doing is landing itself with the limitations of print journals in exchange for nothing. Nothing at all.

Why does this matter?

This squeezing of a web-based journal into a print-sized pot matters because it’s apparent that a tremendous amount of brainwork has gone into Vidal et al.’s research; but much of that is obscured by the glam-chasing presentation of Scientific Reports. It reduces a Pinter play to a soap-opera episode. The work deserved better; and so do readers.

References

 

No, not his new Brachiosaurus humerus — his photograph of the Chicago Brachiosaurus mount, which he cut out and cleaned up seven years ago:

This image has been on quite a journey. Since Matt published this cleaned-up photo, and furnished it under the Creative Commons Attribution (CC BY) licence, it has been adopted as the lead image of Wikipedia’s Brachiosaurus page [archived]:

Consequently (I assume) it has now become Google’s top hit for brachiosaurus skeleton:

Last Saturday, Fiona and I went to Birdland, a birds-only zoo in the Cotswolds, about an hour away from where we live. The admission price also includes “Jurassic Journey”, a walking tour of a dozen or so not-very-good dinosaur models. In an interpretive centre in this area, I found this Brachiosaurus skeletal reconstruction stencilled on the wall:

I immediately knew it was the Chicago mount due to the combination of Giraffatitan anterior dorsals and Brachiosaurus posterior dorsals; but I found it more hauntingly familiar than that. A quick hunt turned up Matt’s seven-year-old post, and when I told Matt about my discovery he filled me in on its use in Wikipedia.

So this is 99% of a good story: we’re delighted that this work is out there, and has resulted in a much better Brachiosaurus image at Birdland than the rather sad-looking Stegosaurus next to it. The only slight disappointment is that I couldn’t find any sign of credit, which they really should have included given that Matt put the image out under CC BY rather than in the public domain.

But as Matt said: “Even though I didn’t get credited, I’m always chuffed to see my stuff out in the world.” So true.