December 6, 2013
Lots of researchers post PDFs of their own papers on their own web-sites. It’s always been so, because even though technically it’s in breach of the copyright transfer agreements that we blithely sign, everyone knows it’s right and proper. Preventing people from making their own work available would be insane, and the publisher that did it would be committing a PR gaffe of huge proportions.
Enter Elsevier, stage left. Bioinformatician Guy Leonard is just one of several people to have mentioned on Twitter this morning that Academia.edu took down their papers in response to a notice from Elsevier. Here’s a screengrab of the notification:
And here is the text (largely so search-engines can index it):
Unfortunately, we had to remove your paper, Resolving the question of trypanosome monophyly: a comparative genomics approach using whole genome data sets with low taxon sampling, due to a take-down notice from Elsevier.
Academia.edu is committed to enabling the transition to a world where there is open access to academic literature. Elsevier takes a different view, and is currently upping the ante in its opposition to academics sharing their own papers online.
Over the last year, more than 13,000 professors have signed a petition voicing displeasure at Elsevier’s business practices at www.thecostofknowledge.com. If you have any comments or thoughts, we would be glad to hear them.
The Academia.edu Team
(Kudos to the Academia.edu team, by the way, for saying it like it is: “upping the ante in its opposition to academics sharing their own papers online”. It would have been easy for them to give no opinion on this. Much better that they’ve nailed their colours to the mast.)
I was going to comment on Elsevier’s exceedingly short-sighted and mean-spirited manoeuvre, but happily the Twittersphere is on it already. Here are a few thoughts:
- David Winter wrote: Added value! Subs fees pay for lawyers to stop you sharing your work with colleagues…
- Rich FitzJohn speculated: I wonder what their long game is here; petty harassment like that makes me way less inclined to publish in an Elsevier journal.
- To which Rafael Maia responded: so silly…is it really worth it? its like they are proudly embracing being the dicks of academic publishing
- But Dr. Wrasse was more forthright.
This doesn’t directly affect me, of course, since I’ve had the good fortune not to have published in an Elsevier journal. But it’s another horrible example of how organisations that call themselves “publishers” do the exact opposite of publishing. The good people I know at Elsevier — people like Tom Reller, Alicia Wise and The Other Mike Taylor — must be completely baffled, and very frustrated, by this kind of thing.
Every time they start to persuade me that maybe – maybe – somewhere in the cold heart of legacy publishers, there lurks some real will to make a transition to actually serving the scholarly community, they do something like this. It’s like a sickness with them.
Do scholarly publishers really need to be reminded that “publish” means “make public”? Yes. Yes, they do. Apparently. Remember how I called legacy publishers “enemies of science” back at the start of 2012? Yup. Still true. And, astonishingly, as Rafael Maia noted, Elsevier seem determined to lead the way.
Have they learned nothing? Will they never?
October 7, 2013
Suppose, hypothetically, that you worked for an organisation whose nominal goal is the advancement of science, but which has mutated into a highly profitable subscription-based publisher. And suppose you wanted to construct a study that showed the alternative — open-access publishing — is inferior.
What would you do?
You might decide that a good way to test publishers is by sending them an obviously flawed paper and seeing whether their peer-review weeds it out.
But you wouldn’t want to risk showing up subscription publishers. So the first thing you’d do is decide up front not to send your flawed paper to any subscription journals. You might justify this by saying something like “the turnaround time for traditional journals is usually months and sometimes more than a year. How could I ever pull off a representative sample?”.
Next, you’d need to choose a set of open-access journals to send it to. At this point, you would carefully avoid consulting the membership list of the Open Access Scholarly Publishers Association, since that list has specific criteria and members have to adhere to a code of conduct. You don’t want the good open-access journals — they won’t give you the result you want.
Instead, you would draw your list of publishers from the much broader Directory of Open Access Journals, since that started out as a catalogue rather than a whitelist. (That’s changing, and journals are now being cut from the list faster than they’re being added, but lots of old entries are still in place.)
Then, to help remove many of the publishers that are in the game only to advance research, you’d trim out all the journals that don’t levy an article processing charge.
But the resulting list might still have an inconveniently high proportion of quality journals. So you would bring down the quality by adding in known-bad publishers from Beall’s list of predatory open-access publishers.
Having established your sample, you’d then send the fake papers, wait for the journals’ responses, and gather your results.
To make sure you get a good, impressive result that will have a lot of “impact”, you might find it necessary to discard some inconvenient data points, omitting from the results some open-access journals that rejected the paper.
Now you have your results, it’s time to spin them. Use sweeping, unsupported generalisations like “Most of the players are murky. The identity and location of the journals’ editors, as well as the financial workings of their publishers, are often purposefully obscured.”
Suppose you have a quote from the scientist whose experiences triggered the whole project, and he said something inconvenient like “If [you] had targeted traditional, subscription-based journals, I strongly suspect you would get the same result”. Just rewrite it to say “if you had targeted the bottom tier of traditional, subscription-based journals”.
Now you have the results you want — but how will you ever get through peer-review, when your bias is so obvious? Simple: don’t submit your article for peer-review at all. Classify it as journalism, so you don’t need to go through review, nor to get ethical approval for the enormous amount of editors’ and reviewers’ time you’ve wasted — but publish it in a journal that’s known internationally for peer-reviewed research, so that uncritical journalists will leap to your favoured conclusion.
Last but not least, write a press-release that casts the whole study as being about the “Wild West” of Open-Access Publishing.
Everyone reading this will, I am sure, have recognised that I’m talking about John Bohannon’s “sting operation” in Science. Bohannon has a Ph.D. in molecular biology from Oxford University, so we would hope he’d know what actual science looks like, and that this study is not it.
Of course, the problem is that he does know what science looks like, and he’s made the “sting” operation look like it. It has that sciencey quality. It discusses methods. It has supplementary information. It talks a lot about peer-review, that staple of science. But none of that makes it science. It’s a maze of preordained outcomes, multiple levels of biased selection, cherry-picked data and spin-ridden conclusions. What it shows is: predatory journals are predatory. That’s not news.
Speculating about motives is always error-prone, of course, but it’s hard not to think that Science‘s goal in all this was to discredit open-access publishing — just as legacy publishers have been doing ever since they realised OA was real competition. If that was their goal, it’s misfired badly. It’s Science‘s credibility that’s been compromised.
Update (9 October)
Akbar Khan points out yet more problems with Bohannon’s work: mistakes in recording whether given journals were listed in DOAJ or on Beall’s list. As a result, the sample may be more, or less, biased than Bohannon reported.
September 30, 2013
A few years ago, in my programming day-job, we had a customer who we were providing with software components and a bit of custom development. While this was going on, we had a sequence of meetings with them in which we pitched several possible system designs, explaining how we could help them use our components in various ways.
After this had been going on for a while, our contact at the customer had to take us to one side. He was gentle with us: “Look, you seem to have the idea that we’re looking for some kind of ongoing consultancy from you”, he said. “We’re really not. We like your tools, and we’re happy to pay for them, but that’s all we need from you. We’ll take it from there”.
And that’s what I think about whenever I read anything like this:
Elsevier is receiving an increasing number of content mining requests and we are developing solutions to meet customer needs. [...] We wish to understand our customers’ text mining requirements and as practically every content mining request has a different goal and there is not a common solution to provide this. Consequently we request that customers looking to mine our content should speak to their Elsevier Account Manager.
Even if we assume generously that this is a genuine attempt to be helpful and not just a land-grab, it’s WRONG WRONG WRONG WRONG WRONG.
No, Elsevier. Your customers’ text mining requirements are very, very simple. Every content mining request has exactly the same goal and there is a common solution to provide this. That solution is: get out of the way.
No-one needs Elsevier’s (or Wiley’s or Springer’s) help with text-mining. No-one wants them as partners. No-one needs their APIs. All anyone wants is to get hold of the papers. That’s all. The only role of the publisher in this process is not to impede it.
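To illustrate the point: once you simply have the papers as files, “content mining” needs nothing whatever from the publisher. Here is a minimal sketch that tallies how often terms appear across a folder of plain-text papers — the folder path and file layout are hypothetical, and a real project would use PDFs and a richer pipeline, but no publisher API or Account Manager appears anywhere.

```python
# Minimal illustration: once the papers are just files on disk,
# mining them needs no help from the publisher. This counts word
# frequencies across every .txt file in a (hypothetical) folder.

import re
from collections import Counter
from pathlib import Path

def term_counts(folder: str, pattern: str = r"[a-z]+") -> Counter:
    """Tally lower-cased word frequencies across all .txt files in `folder`."""
    counts = Counter()
    for path in Path(folder).glob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore").lower()
        counts.update(re.findall(pattern, text))
    return counts
```

That is the entire “common solution”: read the files, process the text. Everything else a publisher might offer is overhead.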
Publishers: your job is to publish (“make public”), then step aside and let the world make use of what you’ve published.
September 14, 2013
Suppose you’re working on a Wealden sauropod — for example, the disturbingly Camarasaurus-like isolated dorsal vertebra NHM R2523 — and for some reason you desperately want to publish your work in Cretaceous Research.
But it’s published by Elsevier, which means that if you’re committed to open access, you have to find an exorbitant $3300 for the APC. Since Elsevier’s profit margin is 37.3%, you know that $1230.90 of your APC is going to be sliced right off the top. I’ve heard it said (but don’t have a reference for this) that barrier-based publishers spend something like 40% of their costs on marketing subscriptions. So there goes another $827.64. And because legacy publishers have to spend a fortune on paywalls, authentication systems, lawyers, spin-doctors, lobbyists and the like, that could well account for, say, half of the remainder. If that’s correct, then only $620.73 of your APC — 19% of what you give them — is actually paying for publishing services such as copy-editing, typesetting, Web hosting and archiving.
You could be forgiven for thinking that’s not the best way to spend your $3300.
It would of course be much cheaper to publish in PLOS ONE, or PeerJ, or eLife, or F1000 Research, or one of the relevant BMC journals. But let’s suppose that your heart is set on Cretaceous Research.
I don’t know how common it is for people to find themselves in this situation, but I’m guessing it crops up more often than somewhat. Often enough, maybe, that the editors wish that the journal they run was published by someone other than Elsevier.
So my question is this: who “owns” journals? For example, we know JVP could move away from T&F if they wanted — at least, when its four-year contract expires — but could Cretaceous Research move from Elsevier? Do the editorial board “own” it? Or does Elsevier? If the CR editors hypothetically wanted to keep running their journal but as (say) an open access Ubiquity Press journal with a £250 APC, would they be forced to start The New Journal of Cretaceous Research, leaving the old one to wither with no editors?
And just to be clear: this isn’t a question about Cretaceous Research, Elsevier and Ubiquity. They’re just examples. It’s about the broader problem of who controls what journals, and what the people who actually run those journals can do about it.
September 12, 2013
Paul Jump’s coverage of open-access issues in Times Higher Education continues with today’s post discussing the fallout from the new BIS report. That report says:
The Finch group, composed of representatives from publishers, universities, funders and libraries [...] was charged with determining a route to open access to which all interested parties could sign up.
There’s your problem, right there. Barrier-based publishers want the opposite of what everyone else wants: to set the default to zero access. It’s fundamentally impossible to satisfy both researchers/students/doctors/businesses that want access, and publishers that want to deny them access.
The Finch Group — or BIS, if they can’t get it done — is going to have to grasp the nettle and accept that the UK’s solution on open access is going to make someone very unhappy. The only question is whether that Someone is going to be (A) barrier-based publishers, or (B) literally everyone else in the world.
September 10, 2013
I just read Mick Watson’s post Why I resigned as PLOS ONE academic editor on his blog opiniomics. Turns out his frustration with PLOS ONE is not to do with his editorial work but with the long silences he faced as an author at that journal when trying to get a bad decision appealed.
I can totally identify with that, though my most frustrating experiences along these lines have been with other journals. (Yes, Paleobiology, I’m looking at you.) So here’s what I wrote in response (lightly edited from the version that appeared as a comment on the original blog).
There’s one thing that PLOS ONE could and should do to mitigate this kind of frustration: communicate. And so should all other journals.
At every step in the appeal process — and indeed the initial review process — an automated email should be sent to the author. So for the initial submission:
- “Your paper has been assigned an academic editor.”
- “Your paper has been sent out to a reviewer.”
- “An invited reviewer has declined to review; we will try another.”
- “An invited reviewer failed to accept or decline within two weeks; we will try another.”
- “A review has been submitted.”
- “A reviewer has failed to submit his report within four weeks; we are making contact again to ask for a quick response.”
- “A reviewer has failed to submit his report within six weeks; we have dropped that reviewer from this process and will try another.”
- “All reviews are in; the editor is considering the decision.”
- Decision letter.
And for the appeal:
- “Your appeal has been noted and is under consideration.”
- “We have contacted the original handling editor.”
- “The original handling editor has responded.”
- “The original handling editor has failed to respond after four weeks; we are escalating to a senior editor.”
- [perhaps] go back into some or all of the submission process.
- Decision letter.
Most if not all of these stages in the process already have workflow logic in the manuscript-handling system. There is no reason not to send the poor author emails when they happen — it’s no extra work for the editor or reviewers.
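The mechanism being asked for is tiny: hook a templated email onto each workflow transition the system already performs. A minimal sketch of the idea follows — every name in it is hypothetical, since real manuscript-handling systems each have their own event model, but the shape is the same: a table of status messages and one hook called on every transition.

```python
# Sketch of automated author notifications: a template per workflow
# status, plus one hook the manuscript-handling system calls on every
# transition. All names here are hypothetical.

STATUS_TEMPLATES = {
    "editor_assigned": "Your paper has been assigned an academic editor.",
    "sent_to_reviewer": "Your paper has been sent out to a reviewer.",
    "reviewer_declined": "An invited reviewer has declined to review; "
                         "we will try another.",
    "review_received": "A review has been submitted.",
    "all_reviews_in": "All reviews are in; the editor is considering "
                      "the decision.",
}

def notify_author(author_email: str, status: str) -> str:
    """Build the status email for one transition (a real system
    would hand this to its mailer; here we just return the text)."""
    body = STATUS_TEMPLATES.get(status)
    if body is None:
        raise ValueError(f"no template for status {status!r}")
    return f"To: {author_email}\n\n{body}"

def on_status_change(manuscript: dict, new_status: str) -> str:
    """Call this from the existing workflow logic on every transition."""
    manuscript["status"] = new_status
    return notify_author(manuscript["author_email"], new_status)
```

Since the transitions already happen inside the system, the entire cost of this feature is the template table.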
Speaking as the veteran of plenty of long-drawn-out silences from journals that I’ve submitted to, I know that getting these messages would have made a big difference to me.
September 9, 2013
You know how every time you point out a problem to legacy publishers (like when they’re caught misrepresenting their open-access offerings) they explain that it’s very complicated and will take months to fix?
Here’s how that should work:
To summarise: I found a bug in the PeerJ system; I reported it in two tweets (total word-count: 32); 27 hours later, they had fixed it, and our article was showing the end-pages in its bibliography.
Are you watching, Elsevier? 27 hours.
Of course, we do realise that it’s much harder for you. PeerJ have all that manpower, those thousands of people working on their system, while you only have one or two techies, who have all sorts of other duties as well as finding bug-reports on Twitter and immediately fixing them. It’s always tough for the little guy, isn’t it?
August 19, 2013
July 25, 2013
Last October, we published a sequence of posts about misleading review/reject/resubmit practices by Royal Society journals (Dear Royal Society, please stop lying to us about publication times; We will no longer provide peer reviews for Royal Society journals until they adopt honest editorial policies; Biology Letters does trumpet its submission-to-acceptance time; Lying about submission times at other journals?; Discussing Biology Letters with the Royal Society). As noted in the last of these posts, the outcome was that I had what seemed to be a fruitful conversation with Stuart Taylor, Commercial Director of the society.
Then things went quiet for some time.
On 8 May this year, I emailed Stuart to ask what progress there had been. At his request Phil Hurst (Publisher, The Royal Society) emailed me back on 10 May as follows:
Stuart has asked me to update you on the changes we have made following your conversation last year.
We have reviewed editorial procedures on Biology Letters. Further to this, we now provide Editors with the additional decision option of ‘revise’. This provides a middle way between ‘reject and resubmit’ and ‘accept with minor revisions’. Editors use all three options and it is entirely at their discretion which they select. ‘Revised’ papers retain the original submission date and we account for this in our published acceptance times.
In addition, we now publicise ‘first decision’ times rather than ‘first acceptance’ times on our website. We feel this is more meaningful as it gives prospective authors an indication of the time, irrespective of decision.
The first thing to say is, it’s great to see some progress on this.
The second thing is, I must apologise for my terrible slowness in reporting back. Phil emailed me again on 17 June to remind me to post, and it’s still taken me more than another month.
The third thing is, while this is definitely progress, it doesn’t (yet) fix the problem. That’s for two reasons.
The first problem is that so long as there is a “reject and resubmit” option that does not involve a brand new round of review (like a true resubmission), there is still a loophole by which editors can massage the journals’ figures. Of course, there is nothing wrong with “reject and resubmit” per se, but it does have to result in the resubmission being treated as a brand new submission — it can’t be a fig-leaf for what are actually minor revisions, as in the paper that first made me aware of this practice.
So I would urge the Royal Society either to get rid of the R&R option completely, replacing it with a simple “reject”; or to establish firm, explicit, transparent rules about how such resubmissions are treated.
The second problem is with the reporting. It’s true that the home pages of both Proc. B and Biology Letters do now publicise “Average receipt to first decision time” rather than the misleading old “Average receipt to acceptance time”. This is good news. Proc. B (though for some reason not Biology Letters) even includes a link to an excellent and very explicit page that gives three times (receipt to first decision, receipt to online publication and final decision to online publication) for five journals, and explains exactly what they mean.
Unfortunately, individual articles still include only Received and Accepted dates. You can see examples in recent papers both at Proc. B and at Biology Letters. As far as I can tell, there is no way to determine whether the Received date is for the original submission, or (as I can’t help but suspect) the minor revision that is disguised as a resubmission.
The solution for this is very simple (and was raised when I first talked to Stuart Taylor back in October): just give three dates: Received, Revised and Accepted. Then everything is clear and above board, and there is no scope for anyone to suspect wrongdoing.
July 3, 2013
Christopher W. Schadt tells a distasteful story over on his blog, about how a PLOS ONE paper that he was a co-author on was republished as part of a non-PLOS printed volume that retails for $100. The editors and publishers of this volume neither asked the authors’ permission to do this (which is fair enough, it was published as CC By), nor even took the elementary courtesy of informing them. Worse, the reprinted copy in the book doesn’t have a reference to the original version in PLOS ONE.
It’s clear the editors of this book have (to put it mildly) been rather rude here. But what they’ve done is possibly legal and in accordance with the terms under which the article was originally published. The CC By licence requires attribution, and sure enough the work is attributed to the correct authors.
But does CC By require that the original publication also be credited? Not exactly. The terms of the licence say that the work can be reused subject to this condition:
Attribution — You must attribute the work in the manner specified by the author or licensor (but not in any way that suggests that they endorse you or your use of the work).
So the author could specify (and PLOS should probably specify in the published form of their articles) that the manner in which the work should be attributed requires not only authorship to be recognised but also the original publication in PLOS to be cited.
But the current PLOS wording on this is unfortunately a mess. Schadt’s article, like all PLOS ONE articles, says:
Copyright: © 2010 Reganold et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
This is probably intended to say that attribution must mention both author and source (i.e. citation of original publication). But what this wording actually does, wrongly, is state that this is what the CC BY licence intrinsically requires.
So PLOS have a bit of work to do to tidy this up. And they are not alone in this. PeerJ uses the exact same form of words, and BMC says something a bit different (“… provided the original work is properly cited”) which is also open to misinterpretation.
All three of these publishers, and probably many others using CC By, need to tighten their wording so that they don’t claim that CC By requires a full citation, but stipulate that in their use of CC By, providing a citation is part of what constitutes proper attribution.
Had PLOS ONE done that, then the reprinted version of the Reganold et al. paper would have been clearly not covered by the CC By licensing option, and so would have constituted copyright violation plain and simple. As it is, they’re clearly guilty but have some wiggle-room. (To be fair, representatives of the production company and publisher have been quick to apologise on Schadt’s blog.)