Have we reached Peak Megajournal?

May 29, 2015

[I am using the term “megajournal” here to mean “journal that practices PLOS ONE-style peer-review for correctness only, ignoring guesses at possible impact”. It’s not a great term for this class of journals, but it seems to be becoming established as the default.]

Bo-Christer Björk’s (2015) new paper in PeerJ asks the question “Have the ‘mega-journals’ reached the limits to growth?”, and suggests that the answer may be yes. (Although, frustratingly, you can’t tell from the abstract that this is the conclusion.)

I was a bit disappointed that the paper didn’t include a graph showing its conclusion, and asked about this (thanks to PeerJ’s lightweight commenting system). Björk’s response acknowledged that a graph would have been helpful, and invited me to go ahead and make one, since the underlying data is freely available. So using OpenOffice’s cumbersome but adequate graphing facilities, I plotted the numbers from Björk’s table 3.
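For anyone who would rather script the plot than drive a spreadsheet, here is a minimal sketch of how the graph below could be reproduced in Python. To be clear, this is not what I actually did (that was OpenOffice), and the pandas/matplotlib approach, the CSV layout and the bjork_table3.csv filename are purely illustrative assumptions; the actual counts still have to be transcribed from Björk’s table 3.

```python
# Illustrative sketch only: assumes Björk's (2015) table 3 has been transcribed
# by hand into a CSV file, one row per journal and one column per year, e.g.
#
#   journal,2010,2011,2012,2013,2014,2015
#   PLOS ONE,...            <- fill in the actual counts from table 3
#
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("bjork_table3.csv", index_col="journal")  # hypothetical filename

fig, ax = plt.subplots(figsize=(8, 5))

# One line per megajournal.
for journal, counts in df.iterrows():
    ax.plot(counts.index, counts.values, marker="o", label=journal)

# And one line for the total across all megajournals.
ax.plot(df.columns, df.sum(axis=0), marker="s", linewidth=2, label="Total")

ax.set_xlabel("Year")
ax.set_ylabel("Articles published")
ax.set_title("Megajournal publication volumes (Björk 2015, table 3)")
ax.legend()
plt.tight_layout()
plt.show()
```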

[Figure: megajournal publication volumes, 2010–2015, for all megajournals in Björk’s table 3]

As we can see, the result for total megajournal publications upholds the conclusion that megajournals have peaked and started to decline. But PLOS ONE (the dark blue line) enormously dominates all the other megajournals, with Nature’s Scientific Reports the only other publication to even be meaningfully visible on the graph. Since Scientific Reports seems to be still in the exponential phase of its growth and everything else is too low-volume to register, what we’re really seeing here is just a decline in PLOS ONE volume.

It’s interesting to think about what the fall-off in PLOS ONE volume means, but it’s certainly not the same thing as megajournals having topped out.

What do we see when we expand the lower part of the graph by taking out PLOS ONE and Scientific Reports?

[Figure: megajournal publication volumes, 2010–2015, excluding PLOS ONE and Scientific Reports]

Here, the picture is more confused. The numbers are dominated by BMJ Open, which is still growing, but its growth has levelled off. SpringerPlus grew quickly, but seems to be falling away — perhaps reflecting an initial push, followed by author apathy for a megajournal run by a legacy publisher. AIP Advances (which I admit I’d not heard of) and SAGE Open both seem to have modest but healthy year-on-year growth. And of course PeerJ is growing fast, but it’s too young for us to have a meaningful sense of the trend.

What does it all mean?

The STM Report for 2015 (Ware and Mabe 2015) estimates that 2.5 million scholarly articles were published in English-language journals in 2014 (page 6). Björk’s data tells us that only 38 thousand of those were in megajournals — that’s less than 1/65th of all the articles. I find it very hard to believe that 1.5% of the total scholarly article market represents saturation for megajournals.

I suspect that what this study really shows us — and I’m sure the PLOS people would be the first to agree with this — is that we need a lot more megajournals out there than just PLOS ONE. Specifically:

  • It’s well established that pure-OA journals offer better value for their APCs than hybrid ones.
  • It’s at least strongly suspected (has there been a study?) that OA megajournals offer better value than selective OA journals.
  • We want to get the APCs of OA megajournals down.
  • PLOS ONE needs competition on price, to force down its increasingly unjustifiable APC of $1350.
  • It’s a real shame that the eLife people have fallen into the impact-chasing trap and show no interest in running an eLife megajournal.
  • I think the usually reliable Zen Faulkes is dead wrong when he writes off what he calls “Zune journals”.

So the establishment of new megajournals is very much a good thing, and their growth is to be encouraged. Many of the newer megajournals may well find (and I hate to admit this) that their submission rates increase when they’re handed their first impact factor, as happened with PLOS ONE.

Onward!

References

Björk, Bo-Christer. 2015. Have the “mega-journals” reached the limits to growth? PeerJ 3:e981. doi:10.7717/peerj.981

Ware, Mark, and Michael Mabe. 2015. The STM Report: an overview of scientific and scholarly journal publishing (fourth edition). International Association of Scientific, Technical and Medical Publishers.


19 Responses to “Have we reached Peak Megajournal?”


  1. I added arXiv to this comparison, given the recent discussion around what is and what is not green OA.


  2. I am impressed by your first graph–specifically the confident 2015 numbers. Are they based on extrapolation from January-April? If so, is that reasonable based on past patterns? (No argument with the post in general, just wondering about the projections.)

  3. Mike Taylor Says:

    See the caption to Björk’s table 3: “The figures for 2015 are the articles published in the first quarter of the year multiplied by four.”


  4. […] skull remains. The first post mentioned is very interesting and I suggest you go read it here! The second post is just as interesting and can be found […]


  5. Hi Mike – in relation to your comment re eLife, here are a few thoughts.

    The mega-journal approach is interesting and beneficial in all the ways that you highlight. But as you also point out there’s a lot still to find out. What’s the optimal subject breadth? What’s the optimal size? There’s some good work going on to explore these questions, but right now we have a different focus at eLife.

    We *are* chasing impact, but not in the way that most journals do. We share the view, with many people, that the incentive system in science is broken. So we’re trying to figure out how to encourage and recognise the kind of behaviours in science that we’d all like to see more of – transparency, integrity, sharing of resources, effective collaboration and so on.

    The eLife editors are selecting work that they judge will move their fields forward in an important way, because with that kind of science at the heart of eLife, we feel that this is the way we can have the greatest impact, in the sense of changing behaviour. Vivek Malhotra and Eve Marder (eLife editors) summarised their view on this in a recent editorial, entitled “The Pleasure of Publishing” – http://elifesciences.org/content/4/e05770. We want authors to tell their story honestly and completely. We don’t limit the number of papers we publish or their length, so that authors can describe their work in a way that will enhance the ability of others to build on it.

    We will also be increasing the options that researchers have to communicate new findings and ideas, and we will not be chasing impact in the sense of impact factors. Publishing is in transition and we need to be adaptable. Ultimately, we will judge the success of eLife by the extent to which we enhance science.

    We don’t see an advantage in taking a megajournal approach at this point by relaxing the selection criteria. We do, however, see an advantage in remaining selective, so that authors (who aren’t prepared to submit all their work to mega-journals) have an alternative venue to send what they consider to be their best work. That’s the opportunity that we see – both in terms of driving open access and in counteracting some of the more toxic aspects of journal publishing.


  6. A couple of thoughts –

    a) One thing to bear in mind is that all the megajournals (at least, all the successful ones) are focused on the sciences, so the market can’t be saturated yet – there’s a potential megajournal for the majority of authors, but not all of them.

    I am really keen to see what happens with Open Library of the Humanities, the best chance we have (so far) of a non-science megajournal. They have a fairly conservative funding model, and probably won’t be able to publish PLOS-like numbers even if they got the submissions… but if they get more than they can handle, it’ll stimulate more people to work on the problem. And because OA in the humanities is inextricably linked to the problem of funding OA without charging APCs, finding an economic model which works for this may help drive down APC costs for the sciences.

    b) “Many of the newer megajournals may well find (and I hate to admit this) that their submission rates increase when they’re handed their first impact factor, as happened with PLOS ONE.”

    I think this is entirely likely. While many people (rightly) disparage the impact factor as a measurement, the simple fact of having been rated for one is an easily identified marker of a minimum quality level, and so has a signalling value regardless of the actual number.

  7. Matt Wedel Says:

    While many people (rightly) disparage the impact factor as a measurement, the simple fact of having been rated for one is an easily identified marker of a minimum quality level

    This is eerily similar to Mike’s conclusion from a year and a half ago that the principal value of peer review is that it indicates the author’s seriousness – being willing to undergo peer review means they’re not just messing around.

    Just thinking out loud for a minute, let’s see where this goes.

    If Mike’s right about peer review, then the simple fact that a work has been reviewed may be more important than the specific recommendations made by the reviewers (more important to science in general, not more important in terms of improving that specific work). But we all know that the current peer review system is capricious. And we’ve heard from a couple of sources that even at the megajournals about a third of submissions are unpublishable. I wonder, if we secretly replaced the current peer-review system with an algorithm that blindly and randomly rejected 1/3 of all submissions, how much worse would things actually be?

    To be clear, I’m not saying random rejection would be better than the current system – I’m curious, if someone did it, how long it would take people to notice.

    Related: for peer review to serve as a signal of authorial seriousness, does there have to be some kind of threat of rejection? If we move to a more open, post-publication review model, will the threat of bad reviews still be enough to weed out the people who don’t want to subject their work to pre-publication review now? And what are we selecting for with pre-publication review, good scholarship or merely thick skin?

  8. Mike Taylor Says:

    I wonder, if we secretly replaced the current peer-review system with an algorithm that blindly and randomly rejected 1/3 of all submissions, how much worse would things actually be?

    It’s an amusing idea, but things would be much worse. The 30% or so of submissions that are rejected from PLOS ONE, Scientific Reports and PeerJ (they have very similar rates) really are, for the most part, as you said, unpublishable: either they are simply not science, or they breach ethical guidelines. I think it’s pretty much a cast-iron rule that 70% is as high as the acceptance rate of a serious scientific journal can get. (I seem to remember also hearing that this is the rate at many journals that don’t bill themselves as unselective, but in practice are.)


  9. Mike: The recent PLOS “submission strategy” paper for ecology journals (http://dx.doi.org/10.1371/journal.pone.0115451) has some interesting data on acceptance rates – of their 61 titles, 6 had an acceptance rate of 70% or more, and PLoS ONE only just missed it at 69%.

    The only one of the >70% titles I’m familiar with is Polar Record, where science is actually a relatively small part of the journal – it’s heavily oriented to history/social science. This may skew things somewhat. Alternatively, it might be that “regional” journals like this – regional in terms of scope, not audience – can have a higher acceptance rate because they get fewer completely frivolous submissions; the broader the scope, the broader the potential for vaguely related junk. I note Polar Research was in at 67% and Polar Biology was fairly high, as well.

    We do know (per comments at http://blog.dshr.org/2012/03/what-is-peer-review-for.html) that a sizable chunk of the papers rejected by PLoS ONE were later published in another journal, so the baseline “completely invalid” rate is probably below 30%.

  10. Mike Taylor Says:

    Well, Andrew, that last part shouldn’t be true: PLOS ONE’s policy is supposed to be “if it’s valid, we’ll publish it”. But we’ve all heard of instances where the system didn’t work as advertised. (Also, I suspect there are plenty of journals out there with a much lower threshold of what counts as valid than PLOS ONE does.)


  11. […] Are mega-journals the future of science?  A good commentary here.  And what motivates dinosaur […]




  13. Generally, we’re tempted to make early predictions on OA, but often find that we can barely look 2 years into the future with detail. Details do change the course of events.

    I agree: despite the unfortunate effects of Impact Factors on some behavior, for now and for some unknown time they remain a real author dynamic that affects the development of journals, mega or not.

    Relatively speaking, author choice determines the development of journals more than journals determine their own development. I don’t think megajournals have peaked. I think the ones in play right now have temporarily, at this moment, relatively lost some attraction for authors relative to other journals, but that’s a changing dynamic. Placing a journal in the line of strategic serendipity does make a difference, and much of that has yet to materialize for the OA ecology.

    In a way, I think the future is already here, we’re discussing it, but rather than concluding based on what we see right now, I’d say it has not been fully triggered yet. We’ll see something different when we reach the horizon.


  14. […] first link suggests that such journals may have saturated the market, but actually this result is overwhelmingly dominated by PLOS ONE, and the other megajournals look like they are still growing. (H/t Tyler […]


  15. […] In May 2015, Mike Taylor posted Have we reached Peak Megajournal? […]

