In the last post, I catalogued some of the reasons why Scientific Reports, in its cargo-cult attempts to ape print journals such as its stablemate Nature, is an objectively bad journal that removes value from the papers submitted to it: the unnatural shortening that relegates important material into supplementary information, the downplaying of methods, the tiny figures that ram unrelated illustrations into compound images, the pointless abbreviating of author names and journal titles.

This is particularly odd when you consider the prices of the obvious alternative megajournals:

So to have your paper published in Scientific Reports costs 10% more than in PLOS ONE, or 56% more than in PeerJ; and results in an objectively worse product that slices the paper up and dumps chunks of it in the back lot, compresses and combines the illustrations, and messes up the narrative.
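To make those percentages concrete, here is the arithmetic they imply, using the $675 PeerJ-to-Scientific-Reports difference quoted below. (The resulting APCs are approximate back-calculations from the stated percentages, not quoted prices.)

    Scientific Reports APC ≈ 1.56 × PeerJ APC
    Scientific Reports APC − PeerJ APC ≈ $675
    so 0.56 × PeerJ APC ≈ $675, giving PeerJ ≈ $1,200 and Scientific Reports ≈ $1,880
    and since Scientific Reports is 10% more than PLOS ONE, PLOS ONE ≈ $1,700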

So why would anyone choose to publish in it?

Well, the answer is depressingly obvious. As a colleague once expressed it to me, “until I have a more stable job I’ll need the highest IFs I can pull off to secure a position somewhere”.

It’s as simple as that. PeerJ‘s impact factor at the time of writing is 2.353; PLOS ONE‘s is 2.776; that of Scientific Reports is 4.525. And so, in the idiotic world we live in, it’s better for an author’s career to pay more for a worse version of his article in Scientific Reports than it is to pay less for a better version in PeerJ or PLOS ONE. Because it looks better to have got into Scientific Reports.

BUT WAIT A MINUTE. These three journals are all “megajournals”. They all have exactly the same editorial criterion: they accept any paper that is scientifically sound. They make no judgement about novelty, perceived importance or likely significance of the work. They are all completely up front about this. It’s how they work.

In other words, “getting into” Scientific Reports instead of PeerJ says absolutely nothing about the quality of your work, only that you paid a bigger APC.

Can we agree it’s insane that our system rewards researchers for paying a bigger APC to get a less scientifically useful version of their work?

Let me say in closing that I intend absolutely no criticism of Daniel Vidal or his co-authors for placing their Spinophorosaurus posture paper in Scientific Reports. He is playing the ball where it lies. We live, apparently, in a world where spending an extra $675 and accepting a scientifically worse result is good for your career. I can’t criticise Daniel for doing what it takes to get on in that world.

The situation is in every respect analogous to the following: before you attend a job interview, you are told by a respected senior colleague that your chances of getting the post are higher if you are wearing designer clothing. So you take $675 and buy a super-expensive shirt with a prominent label. If you get the job, you’ll consider it a bargain.

But you will never have much respect for the search committee that judged you on such idiotic criteria.

As I was figuring out what I thought about the new paper on sauropod posture (Vidal et al. 2020) I found the paper uncommonly difficult to parse. And I quickly came to realise that this was not due to any failure on the authors’ part, but to the journal it was published in: Nature’s Scientific Reports.

A catalogue of pointless whining

A big part of the problem is that the journal inexplicably insists on moving important parts of the manuscript out of the main paper and into supplementary information. So for example, as I read the paper, I didn’t really know what Vidal et al. meant by describing a sacrum as wedged: did it mean non-parallel anterior and posterior articular surfaces, or just that those surfaces are not at right angles to the long axis of the sacrum? It turns out to be the former, but I only found that out by reading the supplementary information:

The term describes marked trapezoidal shape in the centrum of a platycoelous vertebrae in lateral view or in the rims of a condyle-cotyle (procoelous or opisthocoelous) centrum type.

This crucial information is nowhere in the paper itself: you could read the whole thing and still not understand the core point of the paper, simply for want of a key piece of terminology.

And the relegation of important material to second-class, unformatted, maybe un-reviewed supplementary information doesn’t end there, by a long way. The SI includes crucial information, and a lot of it:

  • A terminology section of which “wedged vertebrae” is just one of ten sub-sections, including a crucial discussion of different interpretations of what ONP (osteological neutral pose) means.
  • All the information about the actual specimens the work is based on.
  • All the meat of the methods, including how the specimens were digitized, retro-deformed and digitally separated.
  • How the missing forelimbs, so important to the posture, were interpreted.
  • How the virtual skeleton was assembled.
  • How the range of motion of the neck was assessed.
  • Comparisons of the sacra of different sauropods.

And lots more. All this stuff is essential to properly understanding the work that was done and the conclusions that were reached.

And there’s more: as well as the supplementary information, which contains six supplementary figures and three supplementary tables, there is an additional supplementary supplementary table, which could quite reasonably have gone into the supplementary information.

In a similar vein, even within the highly compressed actual paper, the Materials and Methods are hidden away at the back, after the Results, Discussion and Conclusion — as though they are something to be ashamed of; or, at best, an unwelcome necessity that can’t quite be omitted altogether, but need not be on display.

Then we have the disappointingly small illustrations: even the “full size” version of the crucial Figure 1 (which contains both the full skeleton and callout illustrations of key bones) is only 1000×871 pixels. (That’s why the illustration of the sacrum that I pulled out of the paper for the previous post was so inadequate.)

Compare that with, for example, the 3750×3098 Figure 1 of my own recent Xenoposeidon paper in PeerJ (Taylor 2018) — that has more than thirteen times as much visual information. And the thing is, you can bet that Vidal et al. submitted their illustration in much higher resolution than 1000×871. The journal scaled it down to that size. In 2020. That’s just crazy.
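For anyone who wants to check that “thirteen times” multiple, the arithmetic is simple pixel-counting:

    3750 × 3098 = 11,617,500 pixels
    1000 × 871 = 871,000 pixels
    11,617,500 / 871,000 ≈ 13.3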

And to make things even worse, unrelated images are shoved into multi-part illustrations. Consider the ridiculousness of figure 2:

Vidal et al. (2020: figure 2). The verticalization of sauropod feeding envelopes. (A) Increased neck range of motion in Spinophorosaurus in the dorso-ventral plane, with the first dorsal vertebra as the vertex and 0° marking the ground. Poses shown: (1) maximum dorsiflexion; (2) highest vertical reach of the head (7.16 m from the ground), with the neck 90° deflected; (3) alert pose sensu Taylor, Wedel and Naish [13]; (4) osteological neutral pose sensu Stevens [14]; (5) lowest vertical reach of the head (0.72 m from the ground at 0°), with the head as close to the ground without flexing the appendicular elements; (6) maximum ventriflexion. Blue indicates the arc described between maximum and minimum head heights. Grey indicates the arc described between maximum dorsiflexion and ventriflexion. (B) Bivariant plot comparing femur/humerus proportion with sacrum angle. The proportion of humerus and femur are compared as a ratio of femur maximum length/humerus maximum length. Sacrum angle measures the angle the presacral vertebral series are deflected from the caudal series by sacrum geometry in osteologically neutral pose. Measurements and taxa on Table 1. Scale = 1000 mm.

It’s perfectly clear that parts A and B of this figure have nothing to do with each other. It would be far more sensible for them to appear as two separate figures — which would allow part B enough space to convey its point much more clearly. (And it would save us from a disconcertingly inflated caption.)

And there are other, less important irritants. Authors’ given names not divulged, only initials. I happen to know that D. Vidal is Daniel, and that J. L. Sanz is José Luis Sanz; but I have no idea what the P in P. Mocho, the A in A. Aberasturi or the F in F. Ortega stand for. Journal names in the bibliography are abbreviated, in confusing and sometimes ludicrous ways: is there really any point in abbreviating Palaeogeography Palaeoclimatology Palaeoecology to Palaeogeogr. Palaeoclimatol. Palaeoecol?

The common theme

All of these problems — the unnatural shortening that relegates important material into supplementary information, the downplaying of methods, the tiny figures that ram unrelated illustrations into compound images, even the abbreviating of author names and journal titles — have this in common: that they are aping how Science ‘n’ Nature appear in print.

They amount to a sort of cargo cult: a superstitious belief that extreme space pressures (such as print journals legitimately wrestle with) are somehow an indicator of quality, and that copying the form of prestigious journals will make the content equally revered.

And this is simply idiotic. Scientific Reports is an open-access web-only journal that has no print edition. It has no rational reason to compress space like a print journal does. In omitting the “aniel” from “Daniel Vidal” it is saving nothing. All it’s doing is landing itself with the limitations of print journals in exchange for nothing. Nothing at all.

Why does this matter?

This squeezing of a web-based journal into a print-sized pot matters because it’s apparent that a tremendous amount of brainwork has gone into Vidal et al.’s research; but much of that is obscured by the glam-chasing presentation of Scientific Reports. It reduces a Pinter play to a soap-opera episode. The work deserved better; and so do readers.

References

Taylor, Michael P. 2018. Xenoposeidon is the earliest known rebbachisaurid sauropod dinosaur. PeerJ 6:e5212. doi:10.7717/peerj.5212

Vidal, D., P. Mocho, A. Aberasturi, J. L. Sanz and F. Ortega. 2020. High browsing skeletal adaptations in Spinophorosaurus reveal an evolutionary innovation in sauropod dinosaurs. Scientific Reports 10:6638.

Diverticulum, diverticula

November 4, 2018

This is not ‘Nam. This is Latin. There are rules.

The term for a small growth off an organ or body is diverticulum, singular, or diverticula, plural. There are no diverticulae or, God forbid, diverticuli, no matter what you might read in some papers. Diverticuli is a word – it’s the genitive form of diverticulum. But I’ve never seen it used that way in an anatomy or paleo paper. Diverticuli and diverticulae as alt-plurals for diverticulum are abominations that must be stomped out with extreme prejudice. If you want to get cute with alternative spellings, Wiktionary says you can use deverticulum. Wiktionary does not warn you that you will be mocked for doing so, but it is true nonetheless.

Stop jacking up straightforward anatomical terms, authors who should know better.

Here’s a swan. Unlike diverticuli and diverticulae, this unlikely morphology is real.

 

It’s common to come across abstracts like this one, from an interesting paper on how a paper’s revision history influences how often it gets cited (Rigby, Cox and Julian 2018):

Journal peer review lies at the heart of academic quality control. This article explores the journal peer review process and seeks to examine how the reviewing process might itself contribute to papers, leading them to be more highly cited and to achieve greater recognition. Our work builds on previous observations and views expressed in the literature about (a) the role of actors involved in the research and publication process that suggest that peer review is inherent in the research process and (b) on the contribution reviewers themselves might make to the content and increased citation of papers. Using data from the journal peer review process of a single journal in the Social Sciences field (Business, Management and Accounting), we examine the effects of peer review on papers submitted to that journal including the effect upon citation, a novel step in the study of the outcome of peer review. Our detailed analysis suggests, contrary to initial assumptions, that it is not the time taken to revise papers but the actual number of revisions that leads to greater recognition for papers in terms of citation impact. Our study provides evidence, albeit limited to the case of a single journal, that the peer review process may constitute a form of knowledge production and is not the simple correction of errors contained in submitted papers.

This tells us that a larger number of revisions leads to (or at least is correlated with) an increased citation-count. Interesting!

Immediately, I have two questions, and I bet you do, too:

1. What is the size of the effect?
2. How robust is it?

If their evidence says that each additional round of peer-review yields a dozen additional citations, I might be prepared to revise my growing conviction that multiple rounds of peer review are essentially a waste of time. If it says that each round yields 0.0001 additional citations, I won’t. And if the effect is statistically insignificant, I’ll ignore it completely.

But the abstract doesn’t tell me those simple and fundamental facts, which means the abstract is essentially useless. Unless the authors’ goal for the abstract was for it to be an advertisement for the paper — but that’s not what an abstract is for.

In the old days, authors didn’t write abstracts for their own papers. These were provided after the event — sometimes after publication — by third parties, as a service for those who did not have time to read the whole paper but were interested in its findings. The goal of an abstract is to act as a summary of the paper, a surrogate that a reader can absorb instead of the whole paper, and which summarises the main findings. (I find it interesting that in some fields, the term “précis” or “synopsis” is used: both are more explicit.)

Please, let’s all recognise the painful truth that most people who read abstracts of our papers will not go on to read the full manuscripts. Let’s write our abstracts for those short-on-time people, so they go away with a clear and correct understanding of what our findings were and how strongly they are supported.

References

Rigby, J., D. Cox and K. Julian. 2018. Journal peer review: a bar or bridge? An analysis of a paper’s revision history and turnaround time, and the effect on citation. Scientometrics 114:1087–1105. doi:10.1007/s11192-017-2630-5

 

Step 1: Include the Share-Alike provision in your Creative Commons license, as in the mysteriously popular CC BY-SA and CC BY-NC-SA.

Step 2: Listen to the crickets. You’re done. Congratulations! No-one will ever use your silhouette in a scientific paper, and they probably won’t use your stuff in talks or posters either. Luxuriate in your obscurity and wasted effort.

Pachyrhinosaurus canadensis by Andrew A. Farke, CC BY 3.0, courtesy of PhyloPic.org.

Background

PhyloPic is the incredibly useful thing that Mike Keesey made where makers upload silhouettes of organisms and then people can use them in papers, posters, talks, on t-shirts, bumper stickers, and so on.

At least, they can if the image license allows it. And tons of them don’t, because people include the stupid Non-Commercial (NC) and even stupider Share-Alike (SA) provisions in their image licenses. (Need a refresher on what those are? See the tutorial on licenses.)

Why are these things dumb? Well, you could make a case for NC, but it will still probably kill most potential uses of your images. Most journals are run by companies — well, most are run by incredibly rapacious corporations that extract insane profits from the collective suckerhood that is academia — and using such an image in a for-profit journal would break the Non-Commercial clause. Even open-access journals are a bit murky.

But Share-Alike is way, way worse. What it means is that any derivative works that use material released under CC BY-SA have to be released under that license as well. Share-Alike came to us from the world of software, where it actually has some important uses, which Mike will expand upon in the next post. But when it comes to PhyloPic or pretty much any other quasi-academic arena, including the Share-Alike provision is misguided.

As of this writing, PhyloPic has two silhouettes of Panphagia. I can actually show you this one, because it doesn’t have the Share-Alike license attached. The other one is inaccessible. Image by Ricardo N. Martinez and Oscar A. Alcober, CC BY 3.0, courtesy of PhyloPic.org.

Why not Share-Alike?

Why is Share-Alike so dumb for PhyloPic? It’s a viral license that in this context accomplishes nothing for the creator. Because the downstream material must also be CC BY-SA (at a minimum; possibly something more restrictive still, like CC BY-NC-SA), almost any conceivable use is prevented:

  • People can’t use the images in barrier-based journals, because the contents of those journals are copyrighted.
  • People can’t use the images in almost all OA journals, because they’re CC BY, and authors can’t just impose a more restrictive license on them willy-nilly.
  • People can’t use the images in their talks or posters, unless they want to make their talks and posters CC BY-SA. Even people who do release their talks and posters out into the wild are probably going to use CC BY if they use anything; they care about being cited, not about forcing downstream users to adopt a pointlessly restrictive license.
  • People probably can’t use the images on t-shirts or bumper stickers; at least, I have a hard time imagining how a physical object could meet the terms of CC BY-SA, unless it’s being given away for free. And even if one could, most downstream creators probably won’t want the headache — they’ll grab a similar image released under a less restrictive license and move on.
  • I can’t even blog the CC BY-SA images because everything we put on this blog is CC BY (except where noted by a handful of more restrictive museum image use policies), and it would be more than a little ironic to make this one post CC BY-SA, which it would have to be if it included CC BY-SA images.

You may think I’m exaggerating the problem. I’m not. If you look at the Aquilops paper (Farke et al. 2014), you’ll see a lot of ceratopsian silhouettes drawn by Andy Farke. We were making progress on the paper and when it came time to finish the illustrations, most of the silhouettes we needed had the Share-Alike provision, which made them useless to us. So Andy drew his own. And while he was doing that, I took some of my old sauropod drawings and converted them to silhouettes and uploaded them. Both of us used CC BY, because all we care about is getting cited. And now people are using — and citing! — Andy’s and my drawings in preference to others, some arguably better (at least for the sauropods), that have pointlessly restrictive licenses.

So we have this ridiculous situation where a ton of great images on PhyloPic are essentially unusable, because people put them up under a license that sounds cool but actually either outright blocks or at least has a chilling effect on almost any conceivable use.

Is this a good silhouette of Camarasaurus? Maybe, maybe not. But that’s beside the point: this is currently the only silhouette of Camarasaurus on PhyloPic that you can actually use. By Mathew Wedel, CC BY 3.0, courtesy of PhyloPic.org.

What I do about this

Here’s my take: I care about one thing and one thing only, which is credit. All I need is CC BY. If someone wants to take my stuff and put it in a product and charge a profit, I say go for it — because legally every copy of that product has to have my name on it somewhere, credited as the creator of the image. I may not be making any money off that product, but I’m at least getting exposure. If I go CC BY-NC, then I also don’t get any money, and now I don’t even get that exposure. Why would I hack my own foot off like that? And I don’t use CC BY-SA because I don’t write software, so it has only downsides to offer me.

Now, there are certainly artists in the world with sufficient talent to sell t-shirts and prints. But even for them I’m skeptical that CC BY-NC has much to offer for their PhyloPic silhouettes. I know we’re all nuts around here for monochrome filled outlines of dead animals, but let’s be real, they’re a niche market at best for clothing and lifestyle goods. Personally I’d rather get the citations than prevent someone in Birmingham or Bangkok from selling cladogram t-shirts with tiny copies of my drawings, and I think that would still be true even if I was a professional artist.

What you should do about this

I suspect that a lot of people reading this post are dinosaur enthusiasts. If you are, and you’d like to get your name into published scientific work (whether you pursue writing and publishing yourself or not), get drawin’, and upload those babies using CC BY. Make sure it is your own original work, not just a skin thrown over someone else’s skeletal recon, and don’t spam PhyloPic with garbage. But if you can execute a technical drawing of a critter, there’s a good chance it will be used and cited. Not only because there are still holes in PhyloPic’s coverage, but because so many otherwise great images on PhyloPic are locked up behind restrictive licenses. To pick an example nearly at random, PhyloPic has two silhouettes of Pentaceratops, and both of them are useless because of the Share-Alike provision in their licenses. You have an opportunity here. Don’t tarry.

If you already uploaded stuff to PhyloPic using CC BY-SA for whatever reason (it sounded cool, Joe Chill murdered your folks, you didn’t realize that it was the academic-reuse equivalent of radioactive syphilis), change it or replace it. Because all it is doing right now is driving PhyloPic users to other people’s work. Really, honestly, all you are doing is wasting your time by uploading this stuff, and wasting the time of PhyloPic users who have to hover over your pictures to find out that they’re inaccessible.

You don’t get any credit if no-one ever uses your stuff. Or, more precisely, you get 100% of a pie that doesn’t exist. That’s dumb. Stop doing it.

Reference

Farke, A.A., Maxwell, W.D., Cifelli, R.L., and Wedel, M.J. 2014. A ceratopsian dinosaur from the Lower Cretaceous of Western North America, and the biogeography of Neoceratopsia. PLoS ONE 9(12): e112055. doi:10.1371/journal.pone.0112055

I’ve been on Twitter since April 2011 — nearly six years. A few weeks ago, for the first time, something I tweeted broke the thousand-retweets barrier. And I am really unhappy about it. For two reasons.

First, it’s not my own content — it’s a screen-shot of Table 1 from Edwards and Roy (2017):

Edwards and Roy (2017: table 1).

And second, it’s so darned depressing.

The problem is a well-known one, and indeed one we have discussed here before: as soon as you try to measure how well people are doing, they will switch to optimising for whatever you’re measuring, rather than putting their best efforts into actually doing good work.

In fact, this phenomenon is so very well known and understood that it’s been given at least three different names by different people:

  • Goodhart’s Law is most succinct: “When a measure becomes a target, it ceases to be a good measure.”
  • Campbell’s Law is the most explicit: “The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.”
  • The Cobra Effect refers to the way that measures taken to improve a situation can directly make it worse.

As I say, this is well known. There’s even a term for it in social theory: reflexivity. And yet we persist in doing idiot things that can only possibly have this result:

  • Assessing school-teachers on the improvement their kids show in tests between the start and end of the year (which obviously results in their doing all they can to depress the start-of-year tests).
  • Assessing researchers by the number of their papers (which can only result in work being sliced into minimal publishable units).
  • Assessing them — heaven help us — on the impact factors of the journals their papers appear in (which feeds the brand-name fetish that is crippling scholarly communication).
  • Assessing researchers on whether their experiments are “successful”, i.e. whether they find statistically significant results (which inevitably results in p-hacking and HARKing).

What’s the solution, then?

I’ve been reading the excellent blog of economist Tim Harford for a while. That arose from reading his even more excellent book The Undercover Economist (Harford 2007), which gave me a crash-course in the basics of how economies work, how markets help, how they can go wrong, and much more. I really can’t say enough good things about this book: it’s one of those that I feel everyone should read, because the issues are so important and pervasive, and Harford’s explanations are so clear.

In a recent post, Why central bankers shouldn’t have skin in the game, he makes this point:

The basic principle for any incentive scheme is this: can you measure everything that matters? If you can’t, then high-powered financial incentives will simply produce short-sightedness, narrow-mindedness or outright fraud. If a job is complex, multifaceted and involves subtle trade-offs, the best approach is to hire good people, pay them the going rate and tell them to do the job to the best of their ability.

I think that last part is pretty much how academia used to be run a few decades ago. Now I don’t want to get all misty-eyed and rose-tinted and nostalgic — especially since I wasn’t even involved in academia back then, and don’t know from experience what it was like. But could it be … could it possibly be … that the best way to get good research and publications out of scholars is to hire good people, pay them the going rate and tell them to do the job to the best of their ability?

[Read on to Why do we manage academia so badly?]

References

Edwards, Marc A. and Siddhartha Roy. 2017. Academic research in the 21st century: maintaining scientific integrity in a climate of perverse incentives and hypercompetition. Environmental Engineering Science 34(1):51-61. doi:10.1089/ees.2016.0223

Harford, Tim. 2007. The Undercover Economist. Abacus.

Bonus

Here is a nicely formatted full-page version of the Edwards and Roy table, for you to print out and stick on all the walls of your university. My thanks to David Roberts for preparing it.

I have before me the reviews for a submission of mine, and the handling editor has provided an additional stipulation:

Authority and date should be provided for each species-level taxon at first mention. Please ensure that the nominal authority is also included in the reference list.

In other words, the first time I mention Diplodocus, I should say “Diplodocus Marsh 1878”; and I should add the corresponding reference to my bibliography.

Marsh (1878: plate VIII in part). The only illustration of Diplodocus material in the paper that named the genus.

What do we think about this?

I used to do this religiously in my early papers, just because it was the done thing. But then I started to think about it. To my mind, it used to make a certain amount of sense 30 years ago. But surely in 2016, if anyone wants to know about the taxonomic history of Diplodocus, they’re going to go straight to Wikipedia?

I’m also not sure what the value is in providing the minimal taxonomic-authority information rather than, say, morphological information. Anyone who wants to know what Diplodocus is would be much better off going to Hatcher 1901, so wouldn’t we serve readers better if we referred to “Diplodocus (Hatcher 1901)”?

Now that I come to think of it, I included “Giving the taxonomic authority after first use of each formal name” in my list of Idiot things that we do in our papers out of sheer habit three and a half years ago.

Should I just shrug and do this pointless busywork to satisfy the handling editor? Or should I simply refuse to waste my time adding information that will be of no use to anyone?

References

  • Hatcher, John Bell. 1901. Diplodocus (Marsh): its osteology, taxonomy and probable habits, with a restoration of the skeleton. Memoirs of the Carnegie Museum 1:1-63 and plates I-XIII.
  • Marsh, O. C. 1878. Principal characters of American Jurassic dinosaurs, Part I. American Journal of Science, series 3 16:411-416.