October 7, 2013
Suppose, hypothetically, that you worked for an organisation whose nominal goal is the advancement of science, but which has mutated into a highly profitable subscription-based publisher. And suppose you wanted to construct a study that showed the alternative — open-access publishing — is inferior.
What would you do?
You might decide that a good way to test publishers is by sending them an obviously flawed paper and seeing whether their peer-review weeds it out.
But you wouldn’t want to risk showing up subscription publishers. So the first thing you’d do is decide up front not to send your flawed paper to any subscription journals. You might justify this by saying something like “the turnaround time for traditional journals is usually months and sometimes more than a year. How could I ever pull off a representative sample?”.
Next, you’d need to choose a set of open-access journals to send it to. At this point, you would carefully avoid consulting the membership list of the Open Access Scholarly Publishers Association, since that list has specific criteria and members have to adhere to a code of conduct. You don’t want the good open-access journals — they won’t give you the result you want.
Instead, you would draw your list of publishers from the much broader Directory of Open Access Journals, since that started out as a catalogue rather than a whitelist. (That’s changing, and journals are now being cut from the list faster than they’re being added, but lots of old entries are still in place.)
Then, to help remove many of the publishers that are in the game only to advance research, you’d trim out all the journals that don’t levy an article processing charge.
But the resulting list might still have an inconveniently high proportion of quality journals. So you would bring down the quality by adding in known-bad publishers from Beall’s list of predatory open-access publishers.
Having established your sample, you’d then send the fake papers, wait for the journals’ responses, and gather your results.
To make sure you get a good, impressive result that will have a lot of “impact”, you might find it necessary to discard some inconvenient data points, omitting from the results some open-access journals that rejected the paper.
Now you have your results, it’s time to spin them. Use sweeping, unsupported generalisations like “Most of the players are murky. The identity and location of the journals’ editors, as well as the financial workings of their publishers, are often purposefully obscured.”
Suppose you have a quote from the scientist whose experiences triggered the whole project, and he said something inconvenient like “If [you] had targeted traditional, subscription-based journals, I strongly suspect you would get the same result”. Just rewrite it to say “if you had targeted the bottom tier of traditional, subscription-based journals”.
Now you have the results you want — but how will you ever get through peer-review, when your bias is so obvious? Simple: don’t submit your article for peer-review at all. Classify it as journalism, so you don’t need to go through review, nor to get ethical approval for the enormous amount of editors’ and reviewers’ time you’ve wasted — but publish it in a journal that’s known internationally for peer-reviewed research, so that uncritical journalists will leap to your favoured conclusion.
Last but not least, write a press-release that casts the whole study as being about the “Wild West” of Open-Access Publishing.
Everyone reading this will, I am sure, have recognised that I’m talking about John Bohannon’s “sting operation” in Science. Bohannon has a Ph.D. in molecular biology from Oxford University, so we would hope he’d know what actual science looks like, and that this study is not it.
Of course, the problem is that he does know what science looks like, and he’s made the “sting” operation look like it. It has that sciencey quality. It discusses methods. It has supplementary information. It talks a lot about peer-review, that staple of science. But none of that makes it science. It’s a maze of preordained outcomes, multiple levels of biased selection, cherry-picked data and spin-ridden conclusions. What it shows is: predatory journals are predatory. That’s not news.
Speculating about motives is always error-prone, of course, but it’s hard not to think that Science‘s goal in all this was to discredit open-access publishing — just as legacy publishers have been doing ever since they realised OA was real competition. If that was their goal, it’s misfired badly. It’s Science‘s credibility that’s been compromised.
Update (9 October)
Akbar Khan points out yet more problems with Bohannon’s work: mistakes in recording whether given journals were listed in DOAJ or on Beall’s list. As a result, the sample may be more, or less, biased than Bohannon reported.
October 3, 2013
An extraordinary study has come to light today, showing just how shoddy peer-review standards are at some journals.
Evidently fascinated by Science‘s eagerness to publish the fatally flawed Arsenic Life paper, John Bohannon conceived the idea of constructing a study so incredibly flawed that it didn’t even include a control. His plan was to see whether he could get it past the notoriously lax Science peer-review provided it appealed strongly enough to that journal’s desire for “impact” (defined as the ability to generate headlines) and pandered to its preconceptions (that its own publication model is the best one).
So Bohannon carried out the most flawed study he could imagine: submitting fake papers to open-access journals selected in part from Jeffrey Beall’s list of predatory publishers without sending any of his fake papers to subscription journals, noting that many of the journals accepted the papers, and drawing the flagrantly unsupported conclusion that open-access publishing is flawed.
It’s hard to know where Science can go from here. Having fallen for Bohannon’s sting, its credibility is shot to pieces. We can only assume that the AAAS will now be added to Beall’s list of predatory publishers.
Here are some other responses to the Science story:
- Michael Eisen: I confess, I wrote the Arsenic DNA paper to expose flaws in peer-review at subscription based journals
- Martin Eve: Flawed sting operation singles out open access journals (and his longer original version)
- Peter Suber: New “sting” of weak open-access journals
- The Library Loon: Which is it?
- Björn Brembs: Science Magazine Rejects Data, Publishes Anecdote
- Kausik Datta at SciLogs: What Science’s “Sting Operation” Reveals: Open Access Fiasco or Peer Review Hellhole?
- John Hawks: “Open access spam” and how journals sell scientific reputation
- Retraction Watch:
- OASPA: response to the recent article in Science entitled “Who’s Afraid of Peer Review?”
- Jeroen Bosman: Science Mag sting of OA journals: is it about Open Access or about peer review?
- Curt Rice: What Science — and the Gonzo Scientist — got wrong: open access will make research better (now also appearing at the Guardian)
- Michelle N. Meyer: The troubled peer-review system, the open-access wars, and the blurry line between human subjects research and investigative journalism
- Ernesto Priego: Who’s Afraid of Open Access?
- Marius Buliga: On John Bohannon article in Science
- DOAJ: response to the recent article in Science entitled “Who’s Afraid of Peer Review?”
- Zen Faulkes: Open access or vanity press, the Science “sting” edition
- Graham Steel: Glam Mag fucks up, news at eleven
- Heather Joseph (SPARC): Science Magazine’s Open Access “Sting”
- Lenny Teytelman: What hurts science – rejection of good or acceptance of bad?
- Fabiana Kubke: Science gone bad; or, the day after the sting
- Gunther Eysenbach: Unscientific spoof paper accepted by 157 “black sheep” open access journals – but the Bohannon study has severe flaws itself
- Jon Brock: This study lacked an appropriate control group: Two stars
- Me again, this time with the gloves off: Anti-tutorial: how to design and execute a really bad study
- Paul Basken (Chronicle of Higher Education): Critics Say Sting on Open-Access Journals Misses Larger Point
- Neurobonkers: Science’s Straw Man Sting
- The Winnower: The Real Peer Review: Post-Publication
- Sal Robinson: John Bohannon’s Open Access sting paper annoys many, scares the easily scared, accomplishes relatively little
- Peerage of Science: It’s gotta sting
- Peter Murray-Rust: The Bohannon “Sting”; Can we trust AAAS/Science or is this PRISM reemerging from the grave?
- Heather Morrison: Bohannon and Science: bogus articles and PR spin instead of peer review
- Barbara Fister (Inside Higher Ed): The Sting
- Jon Tennant (guesting at SciLogs): Peer Review Quality is Independent of Open Access
- Stuart Shieber: Lessons from the faux journal investigation
- DOAJ: Second response to the Bohannon article 2013-10-18
- Andreas Thoss: Peer review: how to distinguish the good from the bad?
September 24, 2013
I woke up this morning to find the third substantial review of our new Barosaurus preprint waiting for me.
That means that this paper has now accumulated as much useful feedback in the twenty-seven hours since I submitted it as any previous submission I’ve ever made.
It’s worth reviewing the timeline here:
- Monday 23rd September, 1:19 am: I completed the submission process.
- 7:03 am: the preprint was published. It took less than six hours.
- 10:52 am: received a careful, detailed review from Emanuel Tschopp. It took less than four hours from publication, and so of course less than ten from submission.
- About 5:00 pm: received a second review, this one from Mark Robinson. (I don’t know the exact time because PeerJ’s page doesn’t show an actual timestamp, just “21 hours ago”.)
- Tuesday 24th September, about 4:00 am: received a third review, this from ceratopsian-jockey and open-science guru Andy Farke.
Total time from submission to receiving three substantial reviews: about 27 hours.
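(For anyone who wants to check my arithmetic, here is a quick back-of-the-envelope calculation from the timestamps above, treating the third review’s approximate 4:00 am arrival as exact, which of course it isn’t.)

```python
from datetime import datetime

submitted = datetime(2013, 9, 23, 1, 19)      # Monday 23rd September, 1:19 am
third_review = datetime(2013, 9, 24, 4, 0)    # Tuesday 24th September, about 4:00 am

elapsed = third_review - submitted
print(round(elapsed.total_seconds() / 3600))  # 27 hours, near enough
```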
It’s worth contrasting that with the times taken to get from submission to the receipt of reviews — usually only two of them — when going through the traditional journal route. Here are a few of mine:
- Diplodocoid phylogenetic nomenclature at the Journal of Paleontology, 2004-5 (the first reviews I ever received): three months and 14 days.
- Revised version of the same paper at PaleoBios, 2005 (my first published paper): one month and 10 days.
- Xenoposeidon description at Palaeontology, 2006: three months and 19 days, although that included a delay as the handling editor sent it to a third, tie-breaking, reviewer.
- Brachiosaurus revision at the Journal of Vertebrate Paleontology, 2008: one month and 11 days.
- Sauropod neck anatomy (eventually to be published in a very different form in PeerJ) at Paleobiology: five months and two days.
- Trivial correction to the Brachiosaurus revision at the Journal of Vertebrate Paleontology, 2010: five months and 11 days, bizarrely for a half-page paper.
Despite the wide variations in submission-to-review time at these journals, it’s clear that you can expect to wait at least a month before getting any feedback at all on your submission at traditional journals. Even PeerJ took 19 days to get the reviews of our neck-anatomy paper back to us.
So I am now pretty much sold on the pre-printing route. As well as getting this early version of the paper out there so that other palaeontologists can benefit from it (and so that we can’t be pre-emptively plagiarised), issuing a preprint has meant that we’ve got really useful feedback very quickly.
I highly recommend this route.
By the way, in case anyone’s wondering, PeerJ Preprints is not only for manuscripts that are destined for PeerJ proper. They’re perfectly happy for you to use their service as a place to gather feedback for your work before submitting it elsewhere. So even if your work is destined for, say, JVP, there’s a lot to be gained by preprinting it first.
September 23, 2013
I was very pleased, on checking my email this morning, to see that my and Matt’s new paper, The neck of Barosaurus was not only longer but also wider than those of Diplodocus and other diplodocines, is now up as a PeerJ preprint!
I was pleased partly because of the very quick work on PeerJ’s part. I submitted the preprint at 1:22am last night, then went to bed. Almost immediately I got an automatic email from PeerJ saying:
Thank you for submitting your manuscript, “The neck of Barosaurus was not only longer but also wider than those of Diplodocus and other diplodocines” (#2013:09:838:0:0:CHECK:P) – it has now been received by PeerJ PrePrints.
Next, it will be checked by PeerJ staff, who will notify you if any alterations are required to the manuscript or accompanying files.
If the PrePrint successfully passes these checks, it will be made public.
You will receive notification by email at each stage of this process; you can also check the status of your manuscript at any time.
Lots to like here: the quickness of the response, the promise of automatic email updates, and the one-click link to check on progress (as opposed to the usual maze of Manuscript Central options to navigate).
Sure enough, a couple of hours later the next automatic email arrived, telling me that Matt had accepted PeerJ’s email invitation to be recognised as the co-author of the submission.
And one hour ago, just as I was crawling out of bed, I got the notification that the preprint is up. That simple.
I’m also pleased because we managed to get this baby written so quickly. It started life as our talk at SVPCA in Edinburgh (Taylor and Wedel 2013a), which we delivered 25 days ago having put it together mostly in a few days running up to the conference — so it’s zero to sixty in less than a month. Every year we promise ourselves that we’ll write up our talks, and we never seem to get around to it, but this year I started writing on the train back from Edinburgh. By the time I got home I had enough of a hunk of text to keep me working on it, and so we were able to push through in what, for us, is record time.
Now here’s what we’d like:
We want this paper’s time as a preprint to be time well spent — which means that we want to improve it. To do that, we need your reviews. Assuming we get some useful comments, we plan to release an updated version pretty soon; and after some number of iterations, we’ll submit the resulting paper as a full-fledged PeerJ paper.
So if you know anything about sauropods, about vertebrae, about deformation, about ecology, or even about grammar or punctuation, please do us a favour: read the preprint, then get over to its PeerJ page and leave your feedback. You’ll be helping us to improve the scientific record. We’ll acknowledge substantial comments in the final paper, but even the pickiest comments are appreciated.
Because we want to encourage this approach to bringing papers to publication, we’d ask you please not to post comments about the paper here on SV-POW!. Please post them on the PeerJ preprint page. We’re leaving comments here open for discussion of the preprinting process, but not the scientific content.
- Taylor, Michael P., and Mathew J. Wedel. 2013a. Barosaurus revisited: the concept of Barosaurus (Dinosauria: Sauropoda) is based on erroneously referred specimens. (Talk given as: Barosaurus revisited: the concept of Barosaurus (Dinosauria: Sauropoda) is not based on erroneously referred specimens.) pp. 37-38 in Stig Walsh, Nick Fraser, Stephen Brusatte, Jeff Liston and Vicen Carrió (eds.), Programme and Abstracts, 61st Symposium on Vertebrate Palaeontology and Comparative Anatomy, Edinburgh, UK, 27th-30th August 2013. 33 pp.
- Taylor, Michael P., and Mathew J. Wedel. 2013b. The neck of Barosaurus was not only longer but also wider than those of Diplodocus and other diplodocines. PeerJ PrePrints 1:e67v1 http://dx.doi.org/10.7287/peerj.preprints.67v1
September 20, 2013
I was astonished yesterday to read Understanding and addressing research misconduct, written by Linda Lavelle, Elsevier’s General Counsel, and apparently a specialist in publication ethics:
While uncredited text constitutes copyright infringement (plagiarism) in most cases, it is not copyright infringement to use the ideas of another. The amount of text that constitutes plagiarism versus ‘fair use’ is also uncertain — under the copyright law, this is a multi-prong test.
So here (right in the first paragraph of Lavelle’s article) we see copyright infringement equated with plagiarism. And then, for good measure, the confusion is hammered home when fair use (a defence against accusations of copyright violation) is depicted as a defence against accusations of plagiarism.
This is flatly wrong. Plagiarism and copyright violation are not the same thing. Not even close.
First, plagiarism is a violation of academic norms but not illegal; copyright violation is illegal, but in truth pretty ubiquitous in academia. (Where did you get that PDF?)
Second, plagiarism is an offence against the author, while copyright violation is an offence against the copyright holder. In traditional academic publishing, they are usually not the same person, due to the ubiquity of copyright transfer agreements (CTAs).
Third, plagiarism applies when ideas are copied, whereas copyright violation occurs only when a specific fixed expression (e.g. sequence of words) is copied.
Fourth, avoiding plagiarism is about properly apportioning intellectual credit, whereas copyright is about maintaining revenue streams.
Let’s consider four cases (with good outcomes in green and bad ones in red):
- I copy big chunks of Jeff Wilson’s (2002) sauropod phylogeny paper (which is copyright the Linnean Society of London) and paste them into my own new paper without attribution. This is both plagiarism against Wilson and copyright violation against the Linnean Society.
- I copy big chunks of Wilson’s paper and paste them into mine, attributing them to him. This is not plagiarism, but it is copyright violation against the Linnean Society.
- I copy big chunks of Riggs’s (1904) Brachiosaurus monograph (which is out of copyright and in the public domain) into my own new paper without attribution. This is plagiarism against Riggs, but not copyright violation.
- I copy big chunks of Riggs’s paper and paste them into mine with attribution. This is neither plagiarism nor copyright violation.
Plagiarism is about the failure to properly attribute the authorship of copied material (whether copies of ideas or of text or images). Copyright violation is about failure to pay for the use of the material.
Which of the two issues you care more about will depend on whether you’re in a situation where intellectual credit or money is more important — in other words, whether you’re an author or a copyright holder. For this reason, researchers tend to care deeply when someone plagiarises their work but to be perfectly happy for people to violate copyright by distributing copies of their papers. Whereas publishers, who have no authorship contribution to defend, care deeply about copyright violation.
One of the great things about the Creative Commons Attribution Licence (CC By) is that it effectively makes plagiarism illegal. It requires that attribution be maintained as a condition of the licence; so if attribution is absent, the licence does not pertain; which means the plagiariser’s use of the work is not covered by it. And that means it’s copyright violation. It’s a neat bit of legal ju-jitsu.
- Riggs, Elmer S. 1904. Structure and relationships of opisthocoelian dinosaurs. Part II, the Brachiosauridae. Field Columbian Museum, Geological Series 2:229-247, plus plates LXXI-LXXV.
- Wilson, Jeffrey A. 2002. Sauropod dinosaur phylogeny: critique and cladistic analysis. Zoological Journal of the Linnean Society 136:217-276.
September 12, 2013
Paul Jump’s coverage of open-access issues in Times Higher Education continues with today’s post discussing the fallout from the new BIS report. That report says:
The Finch group, composed of representatives from publishers, universities, funders and libraries [...] was charged with determining a route to open access to which all interested parties could sign up.
There’s your problem, right there. Barrier-based publishers want the opposite of what everyone else wants: to set the default to zero access. It’s fundamentally impossible to satisfy both researchers/students/doctors/businesses that want access, and publishers that want to deny them access.
The Finch Group — or BIS, if they can’t get it done — is going to have to grasp the nettle and accept that the UK’s solution on open access is going to make someone very unhappy. The only question is whether that Someone is going to be (A) barrier-based publishers, or (B) literally everyone else in the world.
September 10, 2013
I just read Mick Watson’s post Why I resigned as PLOS ONE academic editor on his blog opiniomics. Turns out his frustration with PLOS ONE is not to do with his editorial work but with the long silences he faced as an author at that journal when trying to get a bad decision appealed.
I can totally identify with that, though my most frustrating experiences along these lines have been with other journals. (Yes, Paleobiology, I’m looking at you.) So here’s what I wrote in response (lightly edited from the version that appeared as a comment on the original blog).
There’s one thing that PLOS ONE could and should do to mitigate this kind of frustration: communicate. And so should all other journals.
At every step in the appeal process — and indeed the initial review process — an automated email should be sent to the author. So for the initial submission:
- “Your paper has been assigned an academic editor.”
- “Your paper has been sent out to a reviewer.”
- “An invited reviewer has declined to review; we will try another.”
- “An invited reviewer failed to accept or decline within two weeks; we will try another.”
- “A review has been submitted.”
- “A reviewer has failed to submit his report within four weeks; we are making contact again to ask for a quick response.”
- “A reviewer has failed to submit his report within six weeks; we have dropped that reviewer from this process and will try another.”
- “All reviews are in; the editor is considering the decision.”
- Decision letter.
And for the appeal:
- “Your appeal has been noted and is under consideration.”
- “We have contacted the original handling editor.”
- “The original handling editor has responded.”
- “The original handling editor has failed to respond after four weeks; we are escalating to a senior editor.”
- [perhaps] go back into some or all of the submission process.
- Decision letter.
Most if not all of these stages in the process already have workflow logic in the manuscript-handling system. There is no reason not to send the poor author emails when they happen — it’s no extra work for the editor or reviewers.
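To show just how little is involved, here is a minimal sketch of the kind of hook I have in mind. It is not PLOS ONE’s actual system, or any real journal’s: the stage names, the notify_author helper and the mail-relay details are all made up for illustration.

```python
import smtplib
from email.message import EmailMessage

# Hypothetical stage names mapped to the one-line messages listed above.
STATUS_MESSAGES = {
    "editor_assigned":   "Your paper has been assigned an academic editor.",
    "reviewer_invited":  "Your paper has been sent out to a reviewer.",
    "reviewer_declined": "An invited reviewer has declined to review; we will try another.",
    "review_received":   "A review has been submitted.",
    "all_reviews_in":    "All reviews are in; the editor is considering the decision.",
}

def notify_author(author_email: str, manuscript_id: str, status: str) -> None:
    """Email the author a one-line update whenever the workflow state changes."""
    msg = EmailMessage()
    msg["Subject"] = f"Update on manuscript {manuscript_id}"
    msg["From"] = "no-reply@journal.example.org"
    msg["To"] = author_email
    msg.set_content(STATUS_MESSAGES.get(status, f"Your manuscript is now at stage: {status}"))
    with smtplib.SMTP("localhost") as smtp:  # whatever mail relay the system already uses
        smtp.send_message(msg)

# The manuscript-handling system would call this from its existing
# state-transition code, e.g.:
#   notify_author("author@example.com", "2013:09:838", "review_received")
```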
Speaking as the veteran of plenty of long-drawn-out silences from journals that I’ve submitted to, I know that getting these messages would have made a big difference to me.
September 9, 2013
You know how, every time you point out a problem to legacy publishers (like when they’re caught misrepresenting their open-access offerings), they explain that it’s very complicated and will take months to fix?
Here’s how that should work:
To summarise: I found a bug in the PeerJ system; I reported it in two tweets (total word-count: 32); 27 hours later, they had fixed it, and our article was showing the end-pages in its bibliography.
Are you watching, Elsevier? 27 hours.
Of course, we do realise that it’s much harder for you. PeerJ have all that manpower, those thousands of people working on their system, while you only have one or two techies, who have all sorts of other duties as well as finding bug-reports on Twitter and immediately fixing them. It’s always tough for the little guy, isn’t it?
July 9, 2013
Robin Osborne, professor of ancient history at King’s College, Cambridge, had an article in the Guardian yesterday entitled “Why open access makes no sense”. It was described by Peter Coles as “a spectacularly insular and arrogant argument”, by Peter Webster as an “Amazingly wrong-headed piece” and by Glyn Moody as “easily the most arrogant & dim-witted article I’ve ever read on OA”.
Here’s my response (posted as a comment on the original article):
At a time when the world as a whole is waking up to the open-access imperative, it breaks my heart to read this fusty, elitist, reactionary piece, in which Professor Osborne ends up arguing strongly for his own irrelevance. What a tragic lack of vision, and of ambition.
There is still a discussion to be had over what routes to take to universal open access, how quickly to move, and what other collateral changes need to be made (such as changing how research is evaluated for the purposes of job-searches and promotion). But Osborne’s entitled bleat is no part of that discussion. He has opted out.
The fundamental argument for providing open access to academic research is that research that is funded by the tax-payer should be available to the tax-payer.
That is not the fundamental argument for providing open access (although it’s certainly a compelling secondary one). The fundamental argument is that the job of a researcher is to create new knowledge and understanding; and that it’s insane to then take that new knowledge and understanding and lock it up where only a tiny proportion of the population can benefit from it. That’s true whether the research is funded publicly or by a private charity.
The problem is that the two situations are quite different. In the first case [academic research], I propose both the research questions and the dataset to which I apply them. In the second [commercial research] the company commissioning the work supplies the questions.
Osborne’s position here seems to be that because he is more privileged than a commercial researcher in one respect (being allowed to choose the subject of his research), he should also be more privileged in another (being allowed to restrict his results to an elite). How can such an attitude be explained? I find it quite baffling. Why would allowing researchers to choose their own subjects mean that funders would be happy to allow the results to be hidden from the world?
Publishing research is a pedagogical exercise, a way of teaching others
Yes. Which is precisely why there is no justification for withholding it from those others.
At the end of the day the paper published in a Gold open access journal becomes less widely read. [...] UK scholars who are obliged to publish in Gold open access journals will end up publishing in journals that are less international and, for all that access to them is cost-free, are less accessed in fact. UK research published through Gold open access will end up being ignored.
As a simple matter of statistics, this is flatly incorrect. Open-access papers are read, and cited, significantly more than paywalled papers. The meta-analysis of Swan (2010) surveyed 31 previous studies of the open-access citation advantage, and 27 of them found advantages of between 45% and 600%. I did a rough-and-ready calculation on the final table of that report, averaging the citation advantages given for each of ten academic fields (using the midpoints of ranges when given), and found that on average open-access articles are cited 176% more often than non-open articles (that is, 2.76 times as often).
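(In case the method isn’t clear, here is a sketch of that rough-and-ready calculation. The ranges below are made up for illustration; the actual per-field figures are in the final table of Swan 2010.)

```python
# Illustration only: the midpoint-averaging method described above,
# using MADE-UP ranges, not the actual figures from Swan (2010).
example_advantages = {        # per-field citation advantage, as (low%, high%) or a single %
    "field A": (45, 600),
    "field B": (50, 120),
    "field C": 170,
}

def midpoint(value):
    """Use the midpoint of a range when a range is given, the value itself otherwise."""
    if isinstance(value, tuple):
        low, high = value
        return (low + high) / 2
    return value

mean_advantage = sum(midpoint(v) for v in example_advantages.values()) / len(example_advantages)
print(f"average citation advantage: {mean_advantage:.0f}% "
      f"(i.e. {1 + mean_advantage / 100:.2f} times as many citations)")
```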
There can be no such thing as free access to academic research. Academic research is not something to which free access is possible.
… because saying it twice makes it more true.
Like it or not, the primary beneficiary of research funding is the researcher, who has managed to deepen their understanding by working on a particular dataset.
Just supposing this strange assertion is true (which I don’t at all accept), I’m left wondering what Osborne thinks the actual purpose of his research is. On what basis does he think our taxes should pay him to investigate questions which (as he himself reminds us) he has chosen as being of interest to him? Does he honestly believe that the state owes him not just a living, but a living doing the work that he chooses on the subject that he chooses with no benefit accruing to anyone but him?
No, it won’t do. We fund research so that we can all be enriched by the new knowledge, not just an entitled elite. Open access is not just an economic necessity, it’s a moral imperative.