In the last post, I catalogued some of the reasons why Scientific Reports, in its cargo-cult attempts to ape print journals such as its stablemate Nature, is an objectively bad journal that removes value from the papers submitted to it: the unnatural shortening that relegates important material into supplementary information, the downplaying of methods, the tiny figures that ram unrelated illustrations into compound images, the pointless abbreviating of author names and journal titles.

This is particularly odd when you consider the prices of the obvious alternative megajournals.

So to have your paper published in Scientific Reports costs 10% more than in PLOS ONE, or 56% more than in PeerJ; and results in an objectively worse product that slices the paper up and dumps chunks of it in the back lot, compresses and combines the illustrations, and messes up the narrative.

So why would anyone choose to publish in it?

Well, the answer is depressingly obvious. As a colleague once expressed it to me: “Until I have a more stable job I’ll need the highest IFs I can pull off to secure a position somewhere.”

It’s as simple as that. PeerJ’s impact factor at the time of writing is 2.353; PLOS ONE’s is 2.776; Scientific Reports’ is 4.525. And so, in the idiotic world we live in, it’s better for an author’s career to pay more for a worse version of his article in Scientific Reports than to pay less for a better version in PeerJ or PLOS ONE. Because it looks better to have got into Scientific Reports.

BUT WAIT A MINUTE. These three journals are all “megajournals”. They all have exactly the same editorial criterion, which is that they accept any paper that is scientifically sound. They make no judgement about novelty, perceived importance or likely significance of the work. They are all completely up front about this. It’s how they work.

In other words, “getting into” Scientific Reports instead of PeerJ says absolutely nothing about the quality of your work, only that you paid a bigger APC.

Can we agree it’s insane that our system rewards researchers for paying a bigger APC to get a less scientifically useful version of their work?

Let me say in closing that I intend absolutely no criticism of Daniel Vidal or his co-authors for placing their Spinophorosaurus posture paper in Scientific Reports. He is playing the ball where it lies. We live, apparently, in a world where spending an extra $675 and accepting a scientifically worse result is good for your career. I can’t criticise Daniel for doing what it takes to get on in that world.

The situation is in every respect analogous to the following: before you attend a job interview, you are told by a respected senior colleague that your chances of getting the post are higher if you wear designer clothing. So you take $675 and buy a super-expensive shirt with a prominent label. If you get the job, you’ll consider it a bargain.

But you will never have much respect for the search committee that judged you on such idiotic criteria.

As I was figuring out what I thought about the new paper on sauropod posture (Vidal et al. 2020), I found the paper uncommonly difficult to parse. And I quickly came to realise that this was due not to any failure on the authors’ part, but to the journal it was published in: Nature’s Scientific Reports.

A catalogue of pointless whining

A big part of the problem is that the journal inexplicably insists on moving important parts of the manuscript out of the main paper and into supplementary information. So for example, as I read the paper, I didn’t really know what Vidal et al. meant by describing a sacrum as wedged: did it mean non-parallel anterior and posterior articular surfaces, or just that those surfaces are not at right angles to the long axis of the sacrum? It turns out to be the former, but I only found that out by reading the supplementary information:

The term describes marked trapezoidal shape in the centrum of a platycoelous vertebrae in lateral view or in the rims of a condyle-cotyle (procoelous or opisthocoelous) centrum type.

This crucial information is nowhere in the paper itself: you could read the whole thing and still miss the core point of the paper, for want of a key piece of terminology.

And the relegation of important material to second-class, unformatted, maybe un-reviewed supplementary information doesn’t end there, not by a long way. The SI includes crucial information, and a lot of it:

  • A terminology section of which “wedged vertebrae” is just one of ten sub-sections, including a crucial discussion of different interpretations of what ONP (osteological neutral pose) means.
  • All the information about the actual specimens the work is based on.
  • All the meat of the methods, including how the specimens were digitized, retro-deformed and digitally separated.
  • How the missing forelimbs, so important to the posture, were interpreted.
  • How the virtual skeleton was assembled.
  • How the range of motion of the neck was assessed.
  • Comparisons of the sacra of different sauropods.

And lots more. All this stuff is essential to properly understanding the work that was done and the conclusions that were reached.

And there’s more: as well as the supplementary information, which contains six supplementary figures and three supplementary tables, there is an additional supplementary supplementary table, which could quite reasonably have gone into the supplementary information.

In a similar vein, even within the highly compressed actual paper, the Materials and Methods are hidden away at the back, after the Results, Discussion and Conclusion — as though they are something to be ashamed of; or, at best, an unwelcome necessity that can’t quite be omitted altogether, but need not be on display.

Then we have the disappointingly small illustrations: even the “full size” version of the crucial Figure 1 (which contains both the full skeleton and callout illustrations of key bones) is only 1000×871 pixels. (That’s why the illustration of the sacrum that I pulled out of the paper for the previous post was so inadequate.)

Compare that with, for example, the 3750×3098 Figure 1 of my own recent Xenoposeidon paper in PeerJ (Taylor 2018) — that has more than thirteen times as much visual information. And the thing is, you can bet that Vidal et al. submitted their illustration in much higher resolution than 1000×871. The journal scaled it down to that size. In 2020. That’s just crazy.

And to make things even worse, unrelated images are shoved into multi-part illustrations. Consider the ridiculousness of figure 2:

Vidal et al. (2020: figure 2). The verticalization of sauropod feeding envelopes. (A) Increased neck range of motion in Spinophorosaurus in the dorso-ventral plane, with the first dorsal vertebra as the vertex and 0° marking the ground. Poses shown: (1) maximum dorsiflexion; (2) highest vertical reach of the head (7.16 m from the ground), with the neck 90° deflected; (3) alert pose sensu Taylor, Wedel and Naish [13]; (4) osteological neutral pose sensu Stevens [14]; (5) lowest vertical reach of the head (0.72 m from the ground at 0°), with the head as close to the ground without flexing the appendicular elements; (6) maximum ventriflexion. Blue indicates the arc described between maximum and minimum head heights. Grey indicates the arc described between maximum dorsiflexion and ventriflexion. (B) Bivariant plot comparing femur/humerus proportion with sacrum angle. The proportion of humerus and femur are compared as a ratio of femur maximum length/humerus maximum length. Sacrum angle measures the angle the presacral vertebral series are deflected from the caudal series by sacrum geometry in osteologically neutral pose. Measurements and taxa on Table 1. Scale = 1000 mm.

It’s perfectly clear that parts A and B of this figure have nothing to do with each other. It would be far more sensible for them to appear as two separate figures — which would allow part B enough space to convey its point much more clearly. (And it would save us from a disconcertingly inflated caption.)

And there are other, less important irritants. Authors’ given names not divulged, only initials. I happen to know that D. Vidal is Daniel, and that J. L. Sanz is José Luis Sanz; but I have no idea what the P in P. Mocho, the A in A. Aberasturi or the F in F. Ortega stand for. Journal names in the bibliography are abbreviated, in confusing and sometimes ludicrous ways: is there really any point in abbreviating Palaeogeography Palaeoclimatology Palaeoecology to Palaeogeogr. Palaeoclimatol. Palaeoecol?

The common theme

All of these problems — the unnatural shortening that relegates important material into supplementary information, the downplaying of methods, the tiny figures that ram unrelated illustrations into compound images, even the abbreviating of author names and journal titles — have this in common: they are aping how Science ‘n’ Nature appear in print.

They represent a sort of cargo cult: a superstitious belief that extreme space pressures (such as print journals legitimately wrestle with) are somehow an indicator of quality; an assumption that copying the form of prestigious journals will make the content equally revered.

And this is simply idiotic. Scientific Reports is an open-access web-only journal that has no print edition. It has no rational reason to compress space like a print journal does. In omitting the “aniel” from “Daniel Vidal” it is saving nothing. All it’s doing is landing itself with the limitations of print journals in exchange for nothing. Nothing at all.

Why does this matter?

This squeezing of a web-based journal into a print-sized pot matters because it’s apparent that a tremendous amount of brainwork has gone into Vidal et al.’s research; but much of that is obscured by the glam-chasing presentation of Scientific Reports. It reduces a Pinter play to a soap-opera episode. The work deserved better; and so do readers.

References