Dear eLife: please give us eLife ONE
April 22, 2019
Since the rather surprising appointment of Mike Eisen as the new Editor-in-Chief of eLife, I’ve found myself thinking about this journal again. At its inception in 2012, it was explicitly intended to be the open-access alternative that would "compete with publishing powerhouses such as Nature, Science and Cell".
A few years ago, in an article about eLife’s £25M additional cash injection from its three original funders, Nature News reported a doubling down on its original mission, citing an interview with founding Editor-in-Chief Randy Schekman:
But it won’t, he says, establish other open-access journals that accept more papers and have lower selectivity — a strategy that some organizations, such as the Public Library of Science, or PLoS, have turned to in an attempt to shore up finances. “We have no interest in creating other lesser journals with lower standards,” he says.
This makes no sense to me.
Four years ago, I was already saying “It’s a real shame that the eLife people have fallen into the impact-chasing trap and show no interest in running an eLife megajournal.” Now that eLife’s funding is running out and it’s having to introduce APCs, it makes even less sense to refuse to run a review-for-correctness-only journal alongside its flagship.
I do see why some people think it’s desirable to have an OA alternative to Science and Nature. But I can’t understand at all why they won’t add a second, non-selective journal — an eLIFE ONE, if you will — and automatically propagate articles to it that are judged “sound but dull” at eLIFE proper (or eLIFE Gold, as they may want to rename it). Way back in (I think) 2012, I spoke separately to Randy Schekman and executive director Mark Patterson about this: both of them were completely uninterested then, and it seems that’s still the case.
This is why Mike Eisen’s appointment is such a surprise. In a recent interview regarding this appointment, he commented “Our addiction to high-impact factor journals poisons hiring and funding decisions, and distorts the research process” — which I agree with 100%. But then why has he taken on a role in a journal that perpetuates that addiction?
We can only hope that he plans to change it from within, and that eLife ONE is lurking just beyond the horizon.
Appendix
This isn’t a new drum for me to be banging. Way back when eLife launched in 2012, I left a comment on its Reviewer’s Charter. Seven years on, all the comments seem to have vanished (I hate it when that happens), but happily for posterity I saved what I wrote. Here it is:
This charter is an excellent start. But I would like to see a Guideline Zero that lays out what the purposes (plural) of peer-review are. I see three very distinct purposes, and much of the frustration with peer-review comes from reviews that blur these.
1. Assessing whether the paper is sound, i.e. does it express a coherent argument that is backed up by data?
2. Assessing “importance” or “impact”, i.e. is the paper likely to be seen as a significant advance in its field, and to gather many citations?
3. Helping the author improve the paper by constructively suggesting changes that can be made without altering the fundamental nature of the paper.
All of these are important contributions, but they are quite separate and should be kept so. In category 3, for example, if a reviewer suggests that a sentence would be easier to parse if changed around, that should certainly not affect the gatekeeping decisions in categories 1 and 2.
PLoS ONE has shown the importance of separating categories 1 and 2. In that journal, and those that now emulate it, the “impact” criterion is explicitly ignored, and all good papers are published however important or unimportant they are considered to be. (This means, among other things, that replication studies are welcome.)
This inclusive strategy is not appropriate for all journals — I understand that eLife is actively aiming to reject most submissions on “impact” criteria, in the hope of attaining prestige similar to that of Science and Nature. But even in journals that evaluate for impact, it’s important to separate the two assessment criteria. One important practical implication of this separation is that a subsequent high-volume “eLife ONE” journal could publish all eLife submissions that passed criterion 1 but failed criterion 2 without the need for further review.
I don’t think there was ever a response to that comment back in the day (though I can’t be certain due to the comments’ having vanished). I hope that’s set to change under the new regime.
April 22, 2019 at 3:23 pm
Two things come to mind. First, if you asked us why Nature and Science are bad, we’d say primarily because they promote the lie that important science can be distinguished a priori, with all the ills that come with the idea of short, ‘high impact’ publications, and also because they are barrier-based. It’s weird that the folks at eLife are fixated on addressing only the second problem, but are fully on board with propagating the first.
Second, it’s pretty ironic that Nature has broadened its remit with Scientific Reports, but eLife refuses to do the same. Turns out that eLife is even more committed to the ‘high impact’ model than freaking Nature. I think that should give the editors of eLife some pause.
April 22, 2019 at 9:35 pm
The comments on the reviewers’ charter haven’t actually vanished, although they’re hard to find. You have to click on “ANNOTATIONS”, which opens them.
Other than launching a megajournal, which would probably work for eLife but isn’t really feasible for small societies, another thing selective journals could do would be to provide a private, shareable link to the rejection letter. That way, the editors’ and reviewers’ comments could be sent to another journal in a verifiable way, with no fear that authors were faking it. If the reviews said it was good but not important or novel enough, a less glamorous journal could accept it with no waste of reviewers’ time. This wouldn’t lock authors into a particular publisher.
April 22, 2019 at 9:46 pm
Thanks, Thomas, that is an excellent idea. (And thanks for the tip about Annotations.)
April 23, 2019 at 7:35 am
Just to point out that eLife conducted a peer review trial last year, which was designed to help them understand whether there are ways to improve the peer review and selection process that would in turn enable publication of more articles.
Trial announcement here: https://elifesciences.org/inside-elife/2905802e/peer-review-elife-trials-a-new-approach
First trial results here: https://elifesciences.org/inside-elife/262c4c71/peer-review-first-results-from-a-trial-at-elife
April 23, 2019 at 8:41 am
Thanks, Michael, that’s really encouraging.
April 23, 2019 at 11:00 am
Having read both the initial announcement of the trial and the first results, though, I don’t see that this initiative really addresses the specific problem I have with eLife, which is its deliberate choice to reject many perfectly sound papers. Its usual acceptance rate is about 15%; under the terms of the experiment this rose to 22%, but that is nowhere near enough. Megajournals like PLOS ONE and PeerJ all seem to be converging on about a two-in-three acceptance rate, where the other third is made up of papers that are not legitimate attempts to do science (such as advocacy), are unethical, or are simply incoherent. Assuming that eLife gets a similar proportion of legitimately unacceptable papers, it rejects about 33% of its submissions on those grounds; and since it rejects 85% overall, the remaining 52% must be rejected for “impact” reasons. That 52% comes out of the roughly 66% of submissions that are likely sound, which means eLife is accepting less than a quarter of its good submissions (about 15 out of every 66).
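For anyone who wants to check that arithmetic, here is a minimal back-of-the-envelope sketch in Python. The 15% acceptance rate and the two-in-three megajournal benchmark are the rough figures quoted above; everything else is simple subtraction and division, so treat the output as illustrative only, not as any kind of official eLife statistic.

# Back-of-the-envelope check of the acceptance-rate arithmetic above.
# Input figures are the rough ones quoted in this comment; the output
# is illustrative only, not an official eLife statistic.

elife_acceptance = 0.15   # eLife's usual acceptance rate (about 15%)
sound_fraction = 2 / 3    # megajournal benchmark: roughly two in three submissions are sound

rejected_total = 1 - elife_acceptance                     # 0.85
rejected_unsound = 1 - sound_fraction                     # about 0.33, assumed similar for eLife
rejected_on_impact = rejected_total - rejected_unsound    # about 0.52

accepted_share_of_sound = elife_acceptance / sound_fraction   # about 0.22

print(f"Rejected on 'impact' grounds: {rejected_on_impact:.0%} of all submissions")
print(f"Accepted: {accepted_share_of_sound:.0%} of the likely-sound submissions")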