Now that Matt and I have blogged various thoughts about how to orient vertebrae (part 1, part 2, relevant digression 1, relevant digression 2, part 3) and presented a talk on the subject at the 1st Palaeontological Virtual Congress, it’s time for us to strike while the iron is hot and write the paper.

Figure A. NHMUK PV R2095, the holotype dorsal vertebra of Xenoposeidon proneneukos in left lateral view. A. In the canonical orientation that has been used in illustrations in published papers (Taylor and Naish 2007, Taylor 2018b), in blog-posts and on posters and mugs. B. Rotated 15° “backwards” (i.e. clockwise, with the dorsal portion displaced caudally), yielding a sub-vertical anterior margin in accordance with the recommendation of Mannion (2018b). In both parts, the blue line indicates the horizontal axis, the green line indicates the vertical axis, and the red line indicates the slope of the neural arch as in Taylor (2018b: figure 3B, part 2). In part A, the slope (i.e. the angle between the red and green lines) is 35°; in part B, it is 20°.

We’re doing it totally in the open, on GitHub. You can always see the most recent version of the manuscript at https://github.com/MikeTaylor/palaeo-vo/blob/master/vo-manuscript.md and you can also review the history of its composition if you like — from trivial changes like substituting a true em-dash for a double hyphen, to significant additions like writing the introduction.

More than that, you can contribute! If you think there’s a mistake, or something missing that should be included, or if you just have a suggestion, you can file an issue on the project’s bug-tracker. If you’re feeling confident, you can go further and directly edit the manuscript. The result will be a tracked change (a pull request) that we’ll be notified of, and which we can accept into, or reject from, the master copy.
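If you would rather script it than click around the web interface, here is a minimal sketch of filing an issue through GitHub’s REST API, using Python’s requests library; the token, title and body below are placeholders you would replace with your own.

```python
# A minimal sketch of filing an issue programmatically via GitHub's REST API.
# Assumes you have a personal access token allowed to create issues on the repo;
# the title and body are placeholders, not suggestions for real issues.
import requests

REPO = "MikeTaylor/palaeo-vo"          # the manuscript repository
TOKEN = "YOUR_PERSONAL_ACCESS_TOKEN"   # placeholder: substitute your own token

response = requests.post(
    f"https://api.github.com/repos/{REPO}/issues",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Accept": "application/vnd.github+json",
    },
    json={
        "title": "Typo in the introduction",
        "body": "The second paragraph says 'vertebra' where 'vertebrae' is meant.",
    },
)
response.raise_for_status()
print("Filed issue:", response.json()["html_url"])
```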

We hope, by making all this visible online, to demythologise the process of writing a paper. In a sense, there is no magic to it: you just start writing, do a section at a time, revise as you go, and eventually you’re done. It’s much like writing anything else. (Doing the referencing can make it much slower than regular writing, though!)

By the way, you may wonder why the illustration above is “Figure A” rather than “Figure 1”. In all my in-progress manuscripts, I just assign letters to each illustration as I add it, not worrying about ordering. Only when the manuscript is ready to be submitted do I take the order that the illustrations occur in (A, D, G, H, B, I, E, F, for example, with C having been dropped along the way) and replace them with consecutive numbers. So I save myself a lot of tedious and error-prone renumbering every time that, in the process of composition, I insert an illustration anywhere before the last existing one. This is really helpful when there are a lot of illustrations — as there tend to be in our papers, since they’re all in online-only open-access venues with no arbitrary limits. For example, our four co-authored papers from 2013 had a total of 69 illustrations (11 in Taylor and Wedel 2013a, 25 in Wedel and Taylor 2013a, 23 in Taylor and Wedel 2013b and 10 in Wedel and Taylor 2013b).
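For what it’s worth, that final letters-to-numbers pass is easy to script. Here is a minimal sketch in Python; it assumes figure references in the text take the form “Figure A”, “Figure B” and so on, and that the final ordering and the manuscript filename are supplied by hand.

```python
# A minimal sketch of the letters-to-numbers renumbering step.
# Assumes figure references in the manuscript look like "Figure A", "Figure B", etc.
import re

final_order = ["A", "D", "G", "H", "B", "I", "E", "F"]  # C was dropped along the way
letter_to_number = {letter: i + 1 for i, letter in enumerate(final_order)}

def renumber(text: str) -> str:
    """Replace every 'Figure <letter>' with its consecutive number, leaving
    any letter not in the final ordering untouched."""
    return re.sub(
        r"Figure ([A-Z])\b",
        lambda m: f"Figure {letter_to_number.get(m.group(1), m.group(1))}",
        text,
    )

with open("vo-manuscript.md") as f:          # assumed filename
    manuscript = f.read()

with open("vo-manuscript-numbered.md", "w") as f:
    f.write(renumber(manuscript))
```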



I know, I know — you never believed this day would come. And who could blame you? Nearly thirteen years after my 2005 SVPCA talk, “Sweet Seventy-Five and Never Been Kissed”, I am finally kicking the Archbishop descriptive work into gear. And I’m doing it in the open!

In the past, I’ve written my academic works in LibreOffice, submitted them for peer-review, and only allowed the world to see them after they’ve been revised, accepted and published. More recently, I’ve been using preprints to make my submitted drafts public before peer review. But there’s no compelling reason not to go more open than that, so I’ll be writing this paper out in the open, in a public GitHub repository that anyone can access. That also means anyone can file issues if they think there’s something wrong or missing, and anyone can submit pull-requests if they have a correction to contribute.

I’ll be writing this paper in GitHub Flavoured Markdown so that it displays correctly right in the browser, and so that contributed patches can be reviewed and merged easily. That will make tables a bit more cumbersome, but it should be manageable.

Anyway, feel free to follow progress at https://github.com/MikeTaylor/palaeo-archbishop

The very very skeletal manuscript is at https://github.com/MikeTaylor/palaeo-archbishop/blob/master/archbishop-manuscript.md

It’s common to come across abstracts like this one, from an interesting paper on how a paper’s revision history influences how often it gets cited (Rigby, Cox and Julian 2018):

Journal peer review lies at the heart of academic quality control. This article explores the journal peer review process and seeks to examine how the reviewing process might itself contribute to papers, leading them to be more highly cited and to achieve greater recognition. Our work builds on previous observations and views expressed in the literature about (a) the role of actors involved in the research and publication process that suggest that peer review is inherent in the research process and (b) on the contribution reviewers themselves might make to the content and increased citation of papers. Using data from the journal peer review process of a single journal in the Social Sciences field (Business, Management and Accounting), we examine the effects of peer review on papers submitted to that journal including the effect upon citation, a novel step in the study of the outcome of peer review. Our detailed analysis suggests, contrary to initial assumptions, that it is not the time taken to revise papers but the actual number of revisions that leads to greater recognition for papers in terms of citation impact. Our study provides evidence, albeit limited to the case of a single journal, that the peer review process may constitute a form of knowledge production and is not the simple correction of errors contained in submitted papers.

This tells us that a larger number of revisions leads to (or at least is correlated with) an increased citation-count. Interesting!

Immediately, I have two questions, and I bet you do, too:

1. What is the size of the effect?
2. How robust is it?

If their evidence says that each additional round of peer-review yields a dozen additional citations, I might be prepared to revise my growing conviction that multiple rounds of peer review are essentially a waste of time. If it says that each round yields 0.0001 additional citations, I won’t. And if the effect is not statistically significant, I’ll ignore it completely.

But the abstract doesn’t tell me those simple and fundamental facts, which means the abstract is essentially useless. Unless the authors’ goal for the abstract was for it to be an advertisement for the paper — but that’s not what an abstract is for.

In the old days, authors didn’t write abstracts for their own papers. These were provided after the event — sometimes after publication — by third parties, as a service for those who did not have time to read the whole paper but were interested in its findings. The goal of an abstract is to act as a summary of the paper, a surrogate that a reader can absorb instead of the whole paper, and which summarises the main findings. (I find it interesting that in some fields, the term “précis” or “synopsis” is used: both are more explicit.)

Please, let’s all recognise the painful truth that most people who read abstracts of our papers will not go on to read the full manuscripts. Let’s write our abstracts for those short-on-time people, so they go away with a clear and correct understanding of what our findings were and how strongly they are supported.

References

Rigby, J., D. Cox and K. Julian. 2018. Journal peer review: a bar or bridge? An analysis of a paper’s revision history and turnaround time, and the effect on citation. Scientometrics 114:1087–1105. doi:10.1007/s11192-017-2630-5


If you don’t get to give a talk at a meeting, you get bumped down to a poster. That’s what’s happened to Matt, Darren and me at this year’s SVPCA, which is coming up next week. My poster is about a weird specimen that Matt and I have been informally calling “Biconcavoposeidon” (which I remind you is not a formal taxonomic name).

Here it is, for those of you who won’t be at the meeting (or who just want a preview):

But wait — there’s more. The poster is now also formally published (Taylor and Wedel 2017) as part of the PeerJ preprint containing the conference abstract. It has a DOI and everything. I’m happy enough about it that I’m now citing it in my CV.

Do scientific posters usually get published? Well, no. But why not? I can’t offhand think of a single example of a published poster, though there must be some out there. They are, after all, legitimate research artifacts, and typically contain more information than published abstracts. So I’m happy to violate that norm.

Folks: it’s 2017. Publish your posters.

References

  • Taylor, Michael P., and Mathew J. Wedel. 2017. A unique Morrison-Formation sauropod specimen with biconcave dorsal vertebrae. p. 78 in: Abstract Volume: The 65th Symposium on Vertebrate Palaeontology and Comparative Anatomy & The 26th Symposium on Palaeontological Preparation and Conservation. University of Birmingham: 12th–15th September 2017. 79 pp. PeerJ preprint 3144v2. doi:10.7287/peerj.preprints.3144v2/supp-1

Lots of discussion online lately about unpaid peer reviews and whether this indicates a “degraded sense of community” in academia, improper commoditization of the unwritten responsibilities of academics, or a sign that we should rethink incentives in academia. (NB: that’s my galloping sound-bite-ization of those three posts, which you should go read in full.)

Part of this “reviewers don’t get paid” thing is good, because it indicates that academics broadly are waking up to how badly they’ve been had by commercial publishers. It’s part of that necessary anger that Scott Aaronson wrote about back when. But I can also understand why people are pushing back and saying, “Oh, if you don’t review you’re not supporting the academic community that (in part) makes your career possible. We should all pitch in and do the work.” Until recently, there was no way to separate those two strands: in doing peer reviews (and editing, etc.), one was both supporting the community as a good citizen, and also, unavoidably, helping commercial publishers line their pockets. But now that previously single path has bifurcated (no, not that way). Now it’s possible to be a good citizen for the community by editing and reviewing for OA journals, and stick it to the barrier-based publishers by not editing and reviewing for them (here’s how to politely decline, and see more discussion here).

Here’s how jacked the situation is: if you edit or review for a barrier-based publisher whose journals you also subscribe to or otherwise pay for, then in effect you are paying them for the privilege of reviewing. Put like that, it sounds insane. In any normal transaction, I give you X and you give me Y in return, because we’ve jointly agreed that these things are of roughly equal worth. In barrier-based publishing, academics give publishers (1) their papers, which publishers then exert copyright over, (2) their effort as editors and reviewers, and (3) their money, in subscriptions or other access fees, individually or collectively as institutions. And publishers sell the work back to us, retaining the copyright, and reap massive profits. There is no part of that sequence where academics – and indeed humanity at large – are getting the upside of the deal. The publishers are running the table on us, because for a long time, there were no other options. That’s not true anymore.

In his post on community, Zen Faulkes wrote, “I think people are refusing to do reviews in part because they don’t feel connected to the academic community.” Possibly. But maybe people are refusing to do reviews because they’re tired of being had. Has anyone done any work that would allow us to test those hypotheses? If so, I’d love to hear about it in the comments.

TL;DR: The separation of community goals and corporate profits shouldn’t be a fine theoretical point of discussion. It should be what we lead with. Yes, I will support the academic community. No, I won’t donate my time and effort to rapacious barrier-based publishers. It’s possible to achieve both of those things at once. And we should.

This morning, I was invited to review a paper — one very relevant to my interests — for a non-open-access journal owned by one of the large commercial barrier-based publishers. This has happened to me several times now; and I declined, as I have done ever since 2011.

I know this path is not for everyone. But for anybody who feels similarly to how I do but can’t quite think what to say to the handling editor and corresponding author, here are the messages that I sent to both.

First, to the handling editor (who in this case also happened to be the Editor-in-Chief):

Dear EDITOR NAME,

I’m writing to apologise for turning down your request that I review NAME OF PAPER. The reason is that I am wholly committed to the free availability of all scholarly research to everyone, and I cannot in good conscience give my time and expertise to a paper that is destined to end up behind PUBLISHER’s paywall.

I know this can sound very self-righteous — I am sorry if it appears that way. I also recognise that there is serious collateral damage from limiting my reviewing efforts to open-access journals. My judgement is that, in the long term, that regrettable damage is a price worth paying, and I laid out my reasons a few years ago in this blog post: https://svpow.com/2011/10/17/collateral-damage-of-the-non-open-reviewing-boycott/

I hope you will understand my reasons for pushing hard towards an open-access future for all our scholarship; and I even hope that you might reconsider the time you yourself dedicate to PUBLISHER’s journal, and wonder whether it might be more fruitfully spent in helping an open-access palaeontology journal to improve its profile and reputation.

Yours, with best wishes,

Mike.

Then, to the corresponding author, a similar message:

Dear AUTHOR NAME,

I was invited by JOURNAL to review your new manuscript NAME OF PAPER. I’m writing to apologise for turning down that request, and to explain why I did so.

The reason is that I am wholly committed to the free availability of all scholarly research to everyone, and I cannot in good conscience give my time and expertise to a paper that is destined to end up behind PUBLISHER’s paywall.

I know this can sound very self-righteous — I am sorry if it appears that way. I also recognise that there is serious collateral damage from limiting my reviewing efforts to open-access journals. My judgement is that, in the long term, that regrettable damage is a price worth paying, and I laid out my reasons a few years ago in this blog post: https://svpow.com/2011/10/17/collateral-damage-of-the-non-open-reviewing-boycott/

I hope you will understand my reasons for pushing hard towards an open-access future for all our scholarship; and I even hope that you might consider withdrawing your work from JOURNAL, and instead submitting to one of the many fine open-access journals in our field. (Examples: Palaeontologia Electronica, Acta Palaeontologica Polonica, PLOS ONE, PeerJ, PalArch’s Journal of Vertebrate Paleontology, Royal Society Open Science.)

Yours, with apologies for the inconvenience and my best wishes,

Mike.

Anyone is welcome to use these messages as templates or inspiration if they are useful. Absolutely no rights reserved.

It’s baffled me for years that there is no open graph of scholarly citations — a set of machine-readable statements that (for example) Taylor et al. 2009 cites Stevens and Parrish 1999, which cites Alexander 1985 and Hatcher 1901.

With such a graph, you would be able to answer questions like “what subsequent publications have cited my 2005 paper but not my 2007 paper?” and of course “Has paper X been rebutted in print, or do I need to do it?”
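To make that concrete, here is a toy sketch of such a graph as a plain Python data structure, with the “cites X but not Y” query from above; the handful of edges shown are just the examples from the previous paragraph, not real harvested data.

```python
# A toy sketch of an open citation graph as machine-readable statements:
# each entry maps a citing paper to the set of papers it cites.
# The edges shown are only the examples mentioned in the text above.
citations = {
    "Taylor et al. 2009": {"Stevens and Parrish 1999"},
    "Stevens and Parrish 1999": {"Alexander 1985", "Hatcher 1901"},
}

def citers_of(paper: str) -> set[str]:
    """All papers in the graph that cite the given paper."""
    return {citing for citing, cited in citations.items() if paper in cited}

def cites_x_but_not_y(x: str, y: str) -> set[str]:
    """Papers that cite x but not y -- e.g. 'my 2005 paper but not my 2007 paper'."""
    return citers_of(x) - citers_of(y)

print(citers_of("Stevens and Parrish 1999"))                # {'Taylor et al. 2009'}
print(cites_x_but_not_y("Alexander 1985", "Hatcher 1901"))  # empty in this tiny example
```

A real open citation graph would key entries by DOI rather than by informal citation strings, but the queries would be exactly this simple.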

At a more basic level, it’s ridiculous that every one of us maintains our own citation database for our own work. It makes no sense that there isn’t a single, global, universally accessible citation database which all of us can draw from for our bibliographies.

Today we welcome the Initiative for Open Citations (I4OC), which is going to fix that. I’m delighted that someone is stepping up to the plate. It’s been a critical missing piece of scholarly infrastructure.

As far as I can see, I4OC is starting out by encouraging publishers to sign up for CrossRef’s existing Cited-by service. This is a great way to capture citation information going forward; but I hope they also have plans for back-filling the last few centuries’ citations. There are a lot of ways this could be done, but one would be crowdsourcing contributions. They have good people involved, so I’m optimistic that they’ll get on this.
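For what it’s worth, the reference lists that publishers have already deposited openly can be pulled out of CrossRef’s public REST API today. Here is a minimal sketch; the DOI is that of Rigby et al. (2018), cited elsewhere on this page, and whether any references come back depends entirely on whether the publisher has made its deposits open.

```python
# A minimal sketch of pulling a paper's deposited reference list from CrossRef's
# public REST API. The "reference" field is absent or empty if the publisher has
# not deposited references openly -- exactly what I4OC is pushing to change.
import requests

doi = "10.1007/s11192-017-2630-5"   # Rigby, Cox and Julian 2018

work = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30).json()["message"]
references = work.get("reference", [])

print(f"{work['title'][0]}: {len(references)} open references")
for ref in references[:5]:
    # Each deposited reference is itself machine-readable; a DOI is present where one was matched.
    print(" -", ref.get("DOI") or ref.get("unstructured", "??"))
```

A real back-fill would of course need to do this at scale, and to cope with the very large number of works whose reference lists are still closed.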

By the way, this kind of thing — machine-readable data — is one area where preprints genuinely lose out compared to publisher-mediated versions of articles. Publishers on the whole don’t do nearly enough to earn their very high fees, but one very real contribution they do make is the process that is still, for historical reasons, known as “typesetting” — transforming a human-readable manuscript into a machine-readable one from which useful data can be extracted. I wonder whether preprint repositories of the future will have ways to match this function?