Short post today. Go and read this paper: Academic urban legends (Rekdal 2014). It’s open access, and an easy and fascinating read. It unfolds a tale of good intentions gone wrong, a chain of failure that illustrates a single crucial point of academic behaviour: read what you cite.

References

Rekdal, Ole Bjørn. 2014. Academic urban legends. Social Studies of Science 44(4):638-654. doi: 10.1177/0306312714535679

 

Regulars will remember that nearly two years ago, I reviewed a paper for the Royal Society’s journal Biology Letters, recommended acceptance with only trivial changes (as did both other reviewers) and was astonished to see that it was rejected outright. There was an invitation to resubmit, with wording that made it clear that the resubmission would be treated as a brand new manuscript; but when the “resubmission” was made, it was accepted almost immediately without being sent to reviewers at all — proving that it was in fact a minor revision.

What’s worse, the published version gives the dates “Received August 21, 2012. Accepted September 13, 2012”, for a submission-to-acceptance time of just 23 days. But my review was done before August 21. This is a clear falsification of the true time taken to process the manuscript, a misrepresentation unworthy of the Royal Society, and one which provoked Matt and me to declare that we would no longer provide peer review for the Society until they fixed this.

By the way, we should be clear that the Royal Society is not the only publisher that does this. For example, one commenter had had the same experience with Molecular Ecology. Misreporting the submission/revision cycle like this works to publishers’ benefit in two ways: it makes them look faster than they really are, and makes the rejection rate look higher (which a lot of people still use as a proxy for prestige).

To the Society’s credit, they were quick to get in touch, and I had what at the time seemed like a fruitful conversation with Dr Stuart Taylor, their Commercial Director. The result was that they made some changes:

  • Editors now have the additional decision option of ‘revise’. This provides a middle way between ‘reject and resubmit’ and ‘accept with minor revisions’. [It’s hard to believe this didn’t exist before, but I guess it’s so.]
  • The Society now publicises ‘first decision’ times rather than ‘first acceptance’ times on their website.

As I noted at the time, while this is definitely progress, it doesn’t (yet) fix the problem.

A few days ago, I checked whether things had improved by looking at a recent article, and was disappointed to see that they had not. I posted two tweets about it.

Again, I want to acknowledge that the Royal Society is taking this seriously: less than a week later I heard from Phil Hurst at the Society:

I was rather surprised to read your recent tweets about us not fixing this bug. I thought it was resolved to your satisfaction.

I replied:

Because newly published articles still give only two dates (submitted and accepted), it’s impossible to tell whether the “submitted” date is that of the original submission (which would be honest) or that of the revision that follows a “reject and resubmit” verdict, styled “a new submission” even though it’s not.

Also: if the journals are still issuing “reject and resubmit” and then accepting the supposed new submissions without sending them out for peer-review (I can’t tell whether this is the case) then that is also wrong.

Sorry to be so hard to satisfy :-) I hope you will see and agree that it comes from a desire to have the world’s oldest scientific society also be one that leads the way in transparency and honesty.

And Phil’s response (which I quote with his kind permission):

I feel the changes we have made provide transparency.

Now that the Editors have the ‘revise’ option, this revision time is now incorporated in the published acceptance times. If on the other hand the ‘reject and resubmit’ option is selected, the paper has clearly been rejected and the author may or may not re-submit. Clearly if a paper had been rejected from another journal and then submitted to us, we would not include the time spent at that journal, so I feel our position is logical.

We only advertise the average ‘receipt to first decision’ time. As stated previously, we feel this is more meaningful as it gives prospective authors an indication of the time, irrespective of decision.

After all that recapitulation, I am finally in a position to lay out the problems, as I perceive them, with how things currently stand.

  1. Even in recently published articles, only two dates are given: “Received May 13, 2014. Accepted July 8, 2014”. It’s impossible to tell whether the first of those dates is that of the original submission, or the “new submission” that is really a minor revision following a reject-and-resubmit verdict.
  2. It’s also impossible to tell what the “receipt to first decision” time in the journal’s statistics measures. Is “receipt” the date of the original submission or of the resubmission?
  3. We don’t know what the journals’ rejection rates mean. Do they include the rejections of articles that are in fact published a couple of weeks later?

So we have editorials like this one from 2012 that trumpet a rejection rate of 78% (as though wasting the time of 78% of their authors is something to be proud of), but we have no idea what that number represents. Maybe they reject all articles initially, then accept 44% of them immediately on resubmission, and call that a 22% acceptance rate (if every rejected paper is resubmitted, each paper counts as two submissions, so a 44% acceptance rate on the second pass shows up as 22% overall). We just can’t tell.
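To make that arithmetic concrete, here is a minimal sketch in Python. The numbers are hypothetical, since the journal’s real figures are exactly what we can’t see:

```python
# Hypothetical numbers: how "reject and resubmit" can inflate a rejection rate.
initial_submissions = 100    # every paper "rejected" on the first pass
resubmissions = 100          # assume every author resubmits
accepted = 44                # 44% of resubmissions accepted immediately

total_submissions = initial_submissions + resubmissions   # counted as 200 submissions
reported_acceptance = accepted / total_submissions        # 0.22
reported_rejection = 1 - reported_acceptance              # 0.78

print(f"Reported acceptance rate: {reported_acceptance:.0%}")  # 22%
print(f"Reported rejection rate:  {reported_rejection:.0%}")   # 78%
```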

All of this uncertainty comes from the same root cause: the use of “reject and resubmit” to mean “accept with minor revisions”.

What can the Royal Society do to fix this? Here is one approach:

  1. Each article should report three dates instead of two: the date of initial submission, the date of resubmission, and the date of acceptance. Omitting the date of initial submission is actively misleading.
  2. For each of the statistics they report, add prose that is completely clear about what is being measured. In particular, be clear about what “receipt” means.

But a much better, simpler, and more honest approach is just to stop issuing “reject and resubmit” verdicts for minor revisions. Then all these problems go away.

“Minor revisions” should mean “we expect the editor to be able to make a final decision based on the changes you make”.

“Major revisions” should mean “we expect to send the revised manuscript back out to the reviewers, so they can judge whether you’ve made the necessary changes”.

And “reject and resubmit” should mean “this paper is rejected. If you want to completely retool it and resubmit, feel free”. It is completely inappropriate to accept a resubmitted paper without sending it out to peer review: doing so unambiguously gives the lie to the claim in the decision letter that “The resubmission will be treated as a new manuscript”.

Come on, Royal Society. You’ve been publishing science since 1665. Three hundred and forty-nine years should be long enough to figure out what “reject” means. You’re better than this.

And once the Royal Society gets this fixed, it will become much easier to persuade other publishers who’ve been indulging in this shady practice to mend their ways, too.

[Illustration talk slides 47–50]

That last one really hurts. Here’s the original image, which should have gone in the paper with the interpretive trace next to it rather than on top of it:

[Sauroposeidon C6–C7 scout]


I was astonished yesterday to read “Understanding and addressing research misconduct”, written by Linda Lavelle, Elsevier’s General Counsel, and apparently a specialist in publication ethics:

While uncredited text constitutes copyright infringement (plagiarism) in most cases, it is not copyright infringement to use the ideas of another. The amount of text that constitutes plagiarism versus ‘fair use’ is also uncertain — under the copyright law, this is a multi-prong test.

So here (right in the first paragraph of Lavelle’s article) we see copyright infringement equated with plagiarism. And then, for good measure, the confusion is hammered home when fair use (a defence against accusations of copyright violation) is depicted as a defence against accusations of plagiarism.

This is flatly wrong. Plagiarism and copyright violation are not the same thing. Not even close.

First, plagiarism is a violation of academic norms but not illegal; copyright violation is illegal, but in truth pretty ubiquitous in academia. (Where did you get that PDF?)

Second, plagiarism is an offence against the author, while copyright violation is an offence against the copyright holder. In traditional academic publishing, they are usually not the same person, due to the ubiquity of copyright transfer agreements (CTAs).

Third, plagiarism applies when ideas are copied, whereas copyright violation occurs only when a specific fixed expression (e.g. sequence of words) is copied.

Fourth, avoiding plagiarism is about properly apportioning intellectual credit, whereas copyright is about maintaining revenue streams.

Let’s consider four cases (with good outcomes in green and bad ones in red):

  1. I copy big chunks of Jeff Wilson’s (2002) sauropod phylogeny paper (which is copyright the Linnean Society of London) and paste them into my own new paper without attribution. This is both plagiarism against Wilson and copyright violation against the Linnean Society.
  2. I copy big chunks of Wilson’s paper and paste them into mine, attributing them to him. This is not plagiarism, but it is copyright violation against the Linnean Society.
  3. I copy big chunks of Riggs’s (1904) Brachiosaurus monograph (which is out of copyright and in the public domain) into my own new paper without attribution. This is plagiarism against Riggs, but not copyright violation.
  4. I copy big chunks of Riggs’s paper and paste them into mine with attribution. This is neither plagiarism nor copyright violation.

Plagiarism is about the failure to properly attribute the authorship of copied material (whether copies of ideas or of text or images). Copyright violation is about failure to pay for the use of the material.

Which of the two issues you care more about will depend on whether you’re in a situation where intellectual credit or money is more important — in other words, whether you’re an author or a copyright holder. For this reason, researchers tend to care deeply when someone plagiarises their work but to be perfectly happy for people to violate copyright by distributing copies of their papers. Whereas publishers, who have no authorship contribution to defend, care deeply about copyright violation.

One of the great things about the Creative Commons Attribution Licence (CC BY) is that it effectively makes plagiarism illegal. It requires that attribution be maintained as a condition of the licence; so if attribution is absent, the licence does not apply, which means the plagiariser’s use of the work is not covered by it. And that means it’s copyright violation. It’s a neat bit of legal ju-jitsu.

References

  • Riggs, Elmer S. 1904. Structure and relationships of opisthocoelian dinosaurs. Part II, the Brachiosauridae. Field Columbian Museum, Geological Series 2:229-247, plus plates LXXI-LXXV.
  • Wilson, Jeffrey A. 2002. Sauropod dinosaur phylogeny: critique and cladistic analysis. Zoological Journal of the Linnean Society 136:217-276.

What is an ad-hominem attack?

September 4, 2013

I recently handled the revisions on a paper that hopefully will be in press very soon. One of the review comments was “Be very careful not to make ad hominem attacks”.

I was a bit surprised to see that — I wasn’t aware that I’d made any — so I went back over the manuscript, and sure enough, there were no ad homs in there.

There was criticism, though, and I think that’s what the reviewer meant.

Folks, “ad hominem” has a specific meaning. An “ad hominem attack” doesn’t just mean criticising something strongly, it means criticising the author rather than the work. The phrase is Latin for “to the man”. Here’s a pair of examples:

  • “This paper by Wedel is terrible, because the data don’t support the conclusion” — not ad hominem.
  • “Wedel is a terrible scientist, so this paper can’t be trusted” – ad hominem.

What’s wrong with ad hominem criticism? Simply, it’s irrelevant to evaluation of the paper being reviewed. It doesn’t matter (to me as a scientist) whether Wedel strangles small defenceless animals for pleasure in his spare time; what matters is the quality of his work.

Note that ad hominems can also be positive — and they are just as useless there. Here’s another pair of examples:

  • “I recommend publication of Naish’s paper because his work is explained carefully and in detail” — not ad hominem.
  • “I recommend publication of Naish’s paper because he is a careful and detailed worker” — ad hominem.

It makes no difference whether Naish is a careful and detailed worker, or if he always buys his wife flowers on their anniversary, or even if he has a track record of careful and detailed work. What matters is whether this paper, the one I’m reviewing, is good. That’s all.

As it happens, the very first peer review I ever received — for the paper that eventually became Taylor and Naish (2005) on diplodocoid phylogenetic nomenclature — contained a classic ad hominem, which I’ll go ahead and quote:

It seems to me perfectly reasonable to expect revisers of a major clade to have some prior experience/expertise in the group or in phylogenetic taxonomy before presenting what is intended to be the definitive phylogenetic taxonomy of that group. I do not wish to demean the capabilities of either author – certainly Naish’s “Dinosaurs of the Isle of Wight” is a praiseworthy and useful publication in my opinion – but I question whether he and Taylor can meet their own desiderata of presenting a revised nomenclature that balances elegance, consistency, and stability.

You see what’s happening here? The reviewer was not reviewing the paper, but the authors. There was no need for him or her to question whether we could meet our desiderata: he or she could just have read the manuscript and found out.

(Happy ending: that paper was rejected at the journal we first sent it to, but published in revised form in PaleoBios, and bizarrely is my equal third most-cited paper. I never saw that coming.)

I just got this message from Rana Ashour of Paleontology Journal, an open-access journal published by Hindawi, who are generally felt to be a perfectly legitimate publisher:

Dear Dr. Taylor,

I am writing to invite you to submit an article to Paleontology Journal which is a peer-reviewed open access journal for original research articles as well as review articles in all areas of paleontology.

Paleontology Journal is published using an open access publication model, meaning that all interested readers are able to freely access the journal online without the need for a subscription, and authors retain the copyright of their work. All manuscripts that are submitted to the journal during June 2013 will not be subject to any page charges, color charges, or article processing charges.

[snip]

(Apart from anything else, the waiving of APCs pretty clearly indicates that this is not a scam journal.)

I replied:

Hi, Rana. Thanks for this invitation. I am supportive of Hindawi as a good-quality, low-cost open-access publisher. In particular I want Paleontology Journal to do well: it has at least one colleague of mine among its editors. I am particularly pleased to see that no APCs are payable on submissions made during June 2013.

But as a matter of principle I never respond to “academic spam”. Messages sent as bulk mailings to a broad group of potential authors are at best impolite, and at worst actively damage the reputation of the journal and its publisher — see point M on Jeffrey Beall’s Criteria for Determining Predatory Open-Access Publishers.

I urge you to use what influence you have to discontinue the use of spam to advertise Paleontology Journal. If the journal is good, it can be advertised by publicising the papers that appear in it.

Thanks,

Dr. Michael P. Taylor
Department of Earth Sciences
University of Bristol
ENGLAND

Let’s hope they go with it. I’d love them to build another low-cost, high-quality journal in the palaeontology OA space, to compete with Acta Palaeontologica Polonica, Palaeontologia Electronica, PalArch and of course PLOS ONE and PeerJ. But they won’t do it by spamming.

Whenever I write a complicated document, such as my submission to the Select Committee on open access, I get Matt to do an editing pass before I finalise it. That’s always worthwhile, but I have to be careful not to just blindly hit the Accept All Changes button.

