I was a bit shaken to read this short article, Submit It Again! Learning From Rejected Manuscripts (Campbell et al. 2022), recently posted on Mastodon by open-access legend Peter Suber.

For example:

Journals may reject manuscripts because the paper is not in the scope of the journal, because they recently published a similar article, because the formatting of the article is incorrect, or because the paper is not noteworthy. In addition, editors may reject a paper expecting authors to make their work more compelling.

Let’s pick this apart a bit.

“Because they recently published a similar article”? What is this nonsense? Does the Journal of Vertebrate Paleontology reject a paper on, say, ornithopod ontogeny because “we published something on ornithopod ontogeny a few months ago”? No, it doesn’t, because it’s a serious journal.

“Because the formatting of the article is incorrect”? What is this idiocy? If the formatting is incorrect, the job of the publisher is to correct it. That’s literally what they’re there for.

“Expecting authors to make their work more compelling”. This is code for sexing up the results, maybe dropping that inconvenient outlier, getting p below 0.05 … in short, fraud. The very last thing we need more of.

Elsewhere this paper suggests:

… adjusting an original research paper to a letter to the editor or shifting the focus to make the same content into a commentary or narrative essay.

Needless to say, this is putting the cart before the horse. Once we start prioritising what kind of content a journal would like to have ahead of what our work actually tells us, we’re not scientists any more.

Then there is this:

Most manuscripts can eventually find a home in a PubMed-indexed journal if the authors continually modify the manuscript to the specifications of the editors.

I’m not saying this is incorrect. I’m not even saying it’s not good advice. But I worry about the attitude that it communicates — that editors are capricious gods whose whims are to be satisfied. Editors should be, and good editors are, partners in the process of bringing a work to publication, not barriers.

Next up:

Studies confirming something already well known and supported might not be suitable for publication, but looking for a different perspective or a new angle to make it a new contribution to the literature may be useful.

In other words, if you run an experiment, however well you do the work and however well you write the paper, you should expect to have it rejected if the result doesn’t excite the editor. But if you can twist it into something that does excite the editor, you might be OK. Is this really how we want to encourage researchers to behave?

I’ve seen studies like this. I have seen projects that set out to determine how tibia shape correlates with lifestyle in felids, find out the rather important fact that there is no correlation, and instead report that Principal Component 1, which explains 4.2% of the morphological difference, sort of shows a slight grouping if you squint hard and don’t mind all your groups overlapping. (Note: all details changed to protect the guilty. I know nothing of felid tibiae.) I don’t wish to see more such reporting. I want to know what a study actually showed, not what an editor thought might be exciting.

But here is why I am so unhappy about this paper.

It’s that the authors seem so cheerful about all this. That they serenely accept it as a law of the universe that perfectly good papers can be rejected for the most spurious of reasons, and that the proper thing to do is smile broadly and take your ass to the next ass-kicking station.

It doesn’t seem to occur to them that there are other ways of doing scientific communication: ways that are constructive rather than adversarial, ways that aim to get at the truth rather than aiming at being discussed in a Malcolm Gladwell book[1], ways that make the best use of researchers’ work instead of discarding what is inconvenient.

Folks, we have to do better. Those of us in senior positions have to make sure we’re not teaching our students that the psychopathic systems we had to negotiate are a law of the universe.

References

Campbell, Kendall M., Judy C. Washington, Donna Baluchi and José E. Rodríguez. 2022. Submit It Again! Learning From Rejected Manuscripts. PRiMER. 6:42. doi:10.22454/PRiMER.2022.715584

Notes

  1. I offer the observation that any finding reported and discussed in a Malcolm Gladwell book seems to have about an 80% chance of being shown to be incorrect some time in the next ten years. In the social sciences, particularly, a good heuristic for guessing whether or not a given result is going to replicate is to ask: has it been in a Gladwell book?

 

It’s been a while since we checked in on our old friends Elsevier, Springer Nature and Wiley — collectively, the big legacy publishers who still dominate scholarly publishing. Like every publisher, they have realised which way the wind is blowing, and flipped their rhetoric to pro-open access — a far cry from the days when they were hiring PR “pit bulls” to smear open access.

These days, it’s clear that open access is winning. In fact, I’ll go further: open access has won and now we’re just mopping up the remaining pockets of resistance. We’ve had our D-Day. That doesn’t mean there isn’t still lots of work to get through before we arrive at our VE-Day, but it’s coming. And the legacy publishers, having recognised that the old journal-subscriptions gravy train is coasting to a halt, are keen to get big slices of the OA pie.

Does this change in strategy reflect a change of heart in these organizations?

Reader, it does not.

Just in the last few days, these three stories have come up:

Widespread outrage at the last of these has forced Wiley to back down and temporarily reinstate the missing textbooks, though only for the next eight months. It’s clear that courses which used these books will need to re-tool — hopefully by pivoting to open textbooks.

All of this tells us an unwelcome truth that we just need to accept: that the big publishers are still not our friends. We must make our decisions accordingly.

The peer-review cycle as it works at most established journals. Green lines show the positive path; red lines show the negative path; amber lines show the path of delay. Modified from Taylor and Wedel (in press: figure 1).

Many aspects of scholarly publishing are presently in flux. But for most journals the process of getting a paper published remains essentially the same as decades ago, the main change being that documents are sent electronically rather than by post.

It begins with the corresponding author of the paper submitting a manuscript — sometimes, though not often, in response to an invitation from a journal editor. The journal assigns a handling editor to the manuscript, and that editor decides whether the submission meets basic criteria: is it a genuine attempt at scholarship rather than an advertisement? Is it written clearly enough to be reviewed? Is it new work not already published elsewhere?

Assuming these checks are passed, the editor sends the manuscript out to potential reviewers. Since review is generally unpaid and qualified reviewers have many other commitments, review invitations may be declined, and the editor may have to send many requests before obtaining the two or three reviews that are typically used.

Each reviewer returns a report assessing the manuscript in several aspects (soundness, clarity, novelty, perhaps perceived impact) and recommending a verdict. The handling editor reads these reports and sends them to the author along with a verdict: this may be rejection, in which case the paper is not published (and the author may try again at a different journal); acceptance, in which case the paper is typeset and published; or more often a request for revisions along the lines suggested by the reviewers.

The corresponding author (with the co-authors) then prepares a revised version of the manuscript and a response letter, the latter explaining what changes have been made and which have not: the authors can push back on reviewer requests that they do not agree with. These documents are returned to the handling editor, who may either make a decision directly, or send the revised manuscript out for another round of peer review (either with the original reviewers or less often with new reviewers). This cycle continues as many times as necessary to arrive at either acceptance or rejection.

Last time, we looked at the difference between cost, value and price, and applied those concepts to simple markets like the one for chairs, and to the complex market that is scholarly publication. We finished with the observation that the price our community pays for the publication of a paper (about $3,333 on average) is about 3–7 times as much as it costs to publish ($500–$1,000).

How is this possible? One part of the answer is that the value of a published paper to the community is higher still: were it not so, no-one would be paying. But that can’t be the whole reason.

In an efficient market, competing providers of a good will each try to undercut each other until the prices they charge approach the cost. If, for example, Elsevier and Springer-Nature were competing in a healthy free market, they would each be charging prices around one third of what they are charging now, for fear of being outcompeted by their lower-priced competitor. (Half of those price-cuts would be absorbed just by decreasing the huge profit margins; the rest would have to come from streamlining business processes, in particular things like the costs of maintaining paywalls and the means of passing through them.)

So why doesn’t the Invisible Hand operate on scholarly publishers? Because they are not really in competition. Subscriptions are not substitutable goods because each published article is unique. If I need to read an article in an Elsevier journal then it’s no good my buying a lower-priced Springer-Nature subscription instead: it won’t give me access to the article I need.

(This is one of the reasons why the APC-based model — despite its very real drawbacks — is better than the subscription model: because the editorial-and-publication services offered by Elsevier and Springer-Nature are substitutable. If one offers the service for $3000 and the other for $2000, I can go to the better-value provider. And if some other publisher offers it for $1000 or $500, I can go there instead.)

The last few years have seen huge and welcome strides towards establishing open access as the dominant mode of publication for scholarly works, and currently output is split more or less 50/50 between paywalled and open. We can expect OA to dominate increasingly in future years. In many respects, the battle for OA is won: we’ve not got to VE Day yet, but the D-Day Landings have been accomplished.

Yet big-publisher APCs still sit in the $3,000–$5,000 range instead of converging on $500–$1,000. Why?

Björn Brembs has been writing for years about the fact that every market has a luxury segment: you can buy a perfectly functional wristwatch for $10, yet people spend thousands on high-end watches. He’s long been concerned that if scholarly publishing goes APC-only, then people will be queuing up to pay the €9,500 APC for Nature in what would become a straightforward pay-for-prestige deal. And he’s right: given the outstandingly stupid way we evaluate researchers for jobs, promotion and tenure, lots of people will pay a 10x markup for the “I was published in Nature” badge even though Nature papers are an objectively bad way to communicate research.

But it feels like something stranger is happening here. It’s almost as though the whole darned market is a luxury segment. The average APC funded by the Wellcome Trust in 2018/19 was £2,410 — currently about $3,300. Which is almost exactly the average article cost of $3,333 that we calculated earlier. What’s happening is that the big publishers have landed on APCs at rates that preserve the previous level of income. That is understandable on their part; but what I want to know is: why are we still paying them? Why are all Wellcome’s grantees not walking away from Elsevier and Springer-Nature, and publishing in much cheaper alternatives?

Why, in other words, are market forces not operating here?

I can think of three reasons why researchers prefer to spend $3000 instead of $1000:

  1. It could be that they are genuinely getting a three-times-better service from the big publishers. I mention this purely for completeness, as no evidence supports the hypothesis. There seems to be absolutely no correlation between price and quality of service.
  2. Researchers are coasting on sheer inertia, continuing to submit to the journals they used to submit to back in the bad old days of subscriptions. I am not entirely without sympathy for this: there is comfort in familiarity, and convenience in knowing a journal’s flavour, expectations and editorial board. But are those things worth a 200% markup?
  3. Researchers are buying prestige — or at least what they perceive as prestige. (In reality, I am not convinced that papers in non-exceptional Elsevier or Springer-Nature journals are at all thought of as more prestigious than those in cheaper but better born-OA journals. But for this to happen, it only needs people to think the old journals are more prestigious; it doesn’t need them to be right.)

But underlying all these reasons to go to a more expensive publisher is one very important reason not to bother going to a cheaper publisher: researchers are spending other people’s money. No wonder they don’t care about the extra few thousand pounds.

How can funders fix this, and get APCs down to levels that approximate publishing cost? I see at least three possibilities.

First, they could stop paying APCs for their grantees. Instead, they could add a fixed sum onto all grants they make — $1,500, say — and leave it up to the researchers whether to spend more on a legacy publisher (supplementing the $1,500 from other sources of their own) or to spend less on a cheaper born-OA publisher and redistribute the excess elsewhere.

Second, funders could simply publish the papers themselves. To be fair, several big funders are doing this now, so we have Wellcome Open Research, Gates Open Research, etc. But doesn’t it seem a bit silly to silo research according to what body awarded the grant that funded it? And what about authors who don’t have a grant from one of these bodies, or indeed any grant at all?

That’s why I think the third solution is best. I would like to see funders stop paying APCs and stop building their own publishing solutions, and instead collaborate to build and maintain a global publishing solution that all researchers could use irrespective of grant-recipient status. I have much to say on what such a solution should look like, but that is for another time.

We have a tendency to be sloppy about language in everyday usage, so that words like “cost”, “value” and “price” are used more or less interchangeably. But economists will tell you that the words have distinct meanings, and picking them apart is crucial to understanding economic transactions. Suppose I am a carpenter and I make chairs:

  • The cost of the chair is what it costs me to make it: raw materials, overheads, my own time, etc.
  • The value of the chair is what it’s worth to you: how much it adds to your lifestyle.
  • The price of the chair is how much you actually pay me for it.

In a functioning market, the value is more than the cost. Say it costs me £60 to make the chair, and it’s worth £100 to you. Then there is a £40 range in which the price could fall and we would both come out of the deal ahead. If you buy the chair for £75, then I have made £15 more than what it cost me to make, so I am happy; and you got it for £25 less than it was worth to you, so you’re happy, too.

(If the value is less than the cost, then there is no happy outcome. The best I can do is dump the product on the market at below cost, in the hope of making back at least some of my outlay.)
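The surplus-splitting logic of the chair example can be written out in a few lines — a trivial sketch, using the £60/£100/£75 figures above:

```python
# Cost / value / price from the chair example (figures in GBP).
cost, value = 60, 100   # what it costs me to make it; what it's worth to you
price = 75              # the price we agree on

seller_surplus = price - cost   # 15: I make this much over my costs
buyer_surplus = value - price   # 25: you pay this much less than it's worth to you

# Both parties come out ahead exactly when cost < price < value.
assert cost < price < value
print(seller_surplus, buyer_surplus)   # 15 25
```

Any price in the £60–£100 range leaves both surpluses positive; the haggling is only over how the £40 gap is split.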

So far, so good.

Now let’s think about scientific publications.

There is a growing consensus that the cost of converting a scientific manuscript into a published paper — peer-reviewed, typeset, made machine-readable, references extracted, archived, indexed, sustainably hosted — is on the order of $500–$1,000.

The value of a published paper to the world is incredibly hard to estimate, but let’s for now just say that it’s high. (We’ll see evidence of this in a moment.)

The price of a published paper is easier to calculate. According to the 2018 edition of the STM Report (which seems to be the most recent one available), “The annual revenues generated from English-language STM journal publishing are estimated at about $10 billion in 2017 […] collectively publishing over 3 million articles a year” (p5). So, bundling together subscription revenues, APCs, offsets deals and what have you, the average revenue accruing from a paper is $10,000,000,000/3,000,000 = $10,000/3 = $3,333.
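As a sanity check, that arithmetic runs like this (the $10 billion and 3 million figures are the STM Report’s; the $500–$1,000 cost range is the estimate quoted above):

```python
# Average revenue per article, from the 2018 STM Report figures.
annual_revenue_usd = 10_000_000_000   # ~$10B English-language STM journal revenue (2017)
articles_per_year = 3_000_000         # ~3M articles published per year

average_price = annual_revenue_usd / articles_per_year
print(f"average revenue per article: ${average_price:,.0f}")   # $3,333

# How many times the estimated $500-$1,000 publication cost is that?
for cost in (1000, 500):
    print(f"price/cost at ${cost}: {average_price / cost:.1f}x")   # 3.3x, 6.7x
```

Which is where the “3–7 times” ratio in the next paragraph comes from.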

(Given that these prices are paid, we can be confident that the value is at least as much, i.e. somewhere north of $3,333 — which is why I was happy earlier to characterise the value as “high”.)

Why is it possible for the price of a paper to be 3–7 times as high as its cost? One part of the answer is that the value is higher still. Were it not so, no-one would be paying. But that can’t be the whole reason.

Tune in next time to find out the exciting reason why the price of scholarly publishing is so much higher than the cost!

Today should be a day of rejoicing, as it brings us a new sauropod: Arackar licanantay Rubilar-Rogers et al. 2021, a small titanosaur from Chile.

It’s not, though. Because not only is this paper behind a paywall in Elsevier’s journal Cretaceous Research, but the paywalled paper is what they term a “pre-proof” — a fact advertised in a tiny font half way down the page rather than in giant red letters at the top.

“Pre-proof” is not a term in common usage. What does it mean? It turns out to be an unformatted, double-spaced, and line-numbered manuscript. In other words, this is an AAM (author’s accepted manuscript) of the kind that the authors could have deposited in their institutional repository for anyone to read for free.

But wait — there’s more! By way of “added value”, Elsevier have slapped a big intrusive “journal pre-proof” watermark across the middle of every single page, to make it even less readable than a double-spaced line-numbered manuscript already is:

Sample page from “pre-proof” of Rubilar-Rogers et al. 2021. Reproduced under the Fair Dealing doctrine as non-commercial research, criticism / review / quotation, and news reporting (sections 29, 30, 178 of the Copyright, Designs and Patents Act 1988). Get back in your box, Elsevier copyright lawyers.

If you want to see this for yourself, Elsevier will let you download it for $37.95:

Yeah. Thirty-seven dollars and 95 cents.

And now, the punchline. You may be wondering why, when a new sauropod has been announced, I didn’t lead with a nice image of one of the vertebrae? After all, are we not Sauropod Vertebra Picture of the Week?

It’s because there are no images of the vertebrae. There are no images of any of the fossil material. In fact, there are no images at all.

Yes. This “pre-proof” omits all twelve illustrations. (We know there are twelve images, because all the figure captions appear at the end.) I have no idea what Arackar licanantay looks like. None at all. And for this, let’s just remind ourselves again, they charge $37.95.

I want to be careful throwing words like “fraud” around, but what I will say is that this is behaviour unbecoming of a major and once-respected multinational corporation. If a publisher’s behaviour ever merited the label “predatory”, surely this is it.

 

References

  • Rubilar-Rogers, David, Alexander O. Vargas, Bernardo González Riga, Sergio Soto-Acuña, Jhonatan Alarcón-Muñoz, José Iriarte-Díaz, Carlos Arévalo and Carolina S. Gutstein. 2021. Arackar licanantay gen. et sp. nov. a new lithostrotian (Dinosauria, Sauropoda) from the Upper Cretaceous of the Atacama Region, northern Chile. Cretaceous Research 104802 (pre-proof). doi:10.1016/j.cretres.2021.104802

 

Here’s an odd thing. Over and over again, when a researcher is mistreated by a journal or publisher, we see them telling their story but redacting the name of the journal or publisher involved. Here are a couple of recent examples.

First, Daniel A. González-Padilla’s experience with a journal engaging in flagrant citation-pumping, but which he declines to name:

Interesting highlight after rejecting a paper I submitted.
Is this even legal/ethical?
EDITOR-IN-CHIEF’S COMMENT REGARDING THE INCLUSION OF REFERENCES TO ARTICLES IN [REDACTED]
Please note that if you wish to submit a manuscript to [REDACTED] in future, we would prefer that you cite at least TWO articles published in our journal WITHIN THE LAST TWO YEARS. This is a policy adopted by several journals in the urology field. Your current article contains only ONE reference to recent articles in [REDACTED].

We know from a subsequent tweet that the journal is published by Springer Nature, but we don’t know the name of the journal itself.

And here is Waheed Imran’s experience of editorial dereliction:

I submitted my manuscript to a journal back in September 2017, and it is rejected by the journal on September 6, 2020. The reason of rejection is “reviewers declined to review”, they just told me this after 3 years, this is how we live with rejections. @AcademicChatter
@PhDForum

My question is: why, in such situations, do we protect the journals in question? In this case, I wrote to Waheed urging him to name the journal, and he replied saying that he will do so once an investigation is complete. But I find myself wondering why we have this tendency to protect guilty journals in the first place.

Thing is, I’ve done this myself. For example, back in 2012, I wrote about having a paper rejected from “a mid-to-low ranked palaeo journal” for what I considered (and still consider) spurious reasons. Why didn’t I name the journal? I’m not really sure. (It was Palaeontologia Electronica, BTW.)

In cases like my unhelpful peer-review, it’s not really a big deal either way. In cases like those mentioned in the tweets above, it’s a much bigger issue, because those (unlike PE) are journals to avoid. Whichever journal sat on a submission for three years before rejecting it because it couldn’t find reviewers is not one that other researchers should waste their time on in the future — but how can they avoid it if they don’t know what journal it is?

So what’s going on? Why do we have this widespread tendency to protect the guilty?

Update (13 September 2021)

One year later, Waheed confirms that the journal in question not only did not satisfactorily resolve his complaint, it didn’t even respond to his message. At this stage, there really is no point in protecting the journal that has behaved so badly, so Waheed outed it: it’s Scientia Iranica. Avoid.

As I was figuring out what I thought about the new paper on sauropod posture (Vidal et al. 2020), I found the paper uncommonly difficult to parse. And I quickly came to realise that this was due not to any failure on the authors’ part, but to the journal it was published in: Nature’s Scientific Reports.

A catalogue of pointless whining

A big part of the problem is that the journal inexplicably insists on moving important parts of the manuscript out of the main paper and into supplementary information. So for example, as I read the paper, I didn’t really know what Vidal et al. meant by describing a sacrum as wedged: did it mean non-parallel anterior and posterior articular surfaces, or just that those surfaces are not at right angles to the long axis of the sacrum? It turns out to be the former, but I only found that out by reading the supplementary information:

The term describes marked trapezoidal shape in the centrum of a platycoelous vertebrae in lateral view or in the rims of a condyle-cotyle (procoelous or opisthocoelous) centrum type.

This crucial information is nowhere in the paper itself: you could read the whole thing and still not understand the paper’s core point, for want of a single key piece of terminology.

And the relegation of important material to second-class, unformatted, maybe un-reviewed supplementary information doesn’t end there, by a long way. The SI includes crucial information, and a lot of it:

  • A terminology section of which “wedged vertebrae” is just one of ten sub-sections, including a crucial discussion of different interpretations of what ONP means.
  • All the information about the actual specimens the work is based on.
  • All the meat of the methods, including how the specimens were digitized, retro-deformed and digitally separated.
  • How the missing forelimbs, so important to the posture, were interpreted.
  • How the virtual skeleton was assembled.
  • How the range of motion of the neck was assessed.
  • Comparisons of the sacra of different sauropods.

And lots more. All this stuff is essential to properly understanding the work that was done and the conclusions that were reached.

And there’s more: as well as the supplementary information, which contains six supplementary figures and three supplementary tables, there is an additional supplementary supplementary table, which could quite reasonably have gone into the supplementary information.

In a similar vein, even within the highly compressed actual paper, the Materials and Methods are hidden away at the back, after the Results, Discussion and Conclusion — as though they are something to be ashamed of; or, at best, an unwelcome necessity that can’t quite be omitted altogether, but need not be on display.

Then we have the disappointingly small illustrations: even the “full size” version of the crucial Figure 1 (which contains both the full skeleton and callout illustrations of key bones) is only 1000×871 pixels. (That’s why the illustration of the sacrum that I pulled out of the paper for the previous post was so inadequate.)

Compare that with, for example, the 3750×3098 Figure 1 of my own recent Xenoposeidon paper in PeerJ (Taylor 2018) — that has more than thirteen times as much visual information. And the thing is, you can bet that Vidal et al. submitted their illustration in much higher resolution than 1000×871. The journal scaled it down to that size. In 2020. That’s just crazy.

And to make things even worse, unrelated images are shoved into multi-part illustrations. Consider the ridiculousness of figure 2:

Vidal et al. (2020: figure 2). The verticalization of sauropod feeding envelopes. (A) Increased neck range of motion in Spinophorosaurus in the dorso-ventral plane, with the first dorsal vertebra as the vertex and 0° marking the ground. Poses shown: (1) maximum dorsiflexion; (2) highest vertical reach of the head (7.16 m from the ground), with the neck 90° deflected; (3) alert pose sensu Taylor, Wedel and Naish; (4) osteological neutral pose sensu Stevens; (5) lowest vertical reach of the head (0.72 m from the ground at 0°), with the head as close to the ground without flexing the appendicular elements; (6) maximum ventriflexion. Blue indicates the arc described between maximum and minimum head heights. Grey indicates the arc described between maximum dorsiflexion and ventriflexion. (B) Bivariant plot comparing femur/humerus proportion with sacrum angle. The proportion of humerus and femur are compared as a ratio of femur maximum length/humerus maximum length. Sacrum angle measures the angle the presacral vertebral series are deflected from the caudal series by sacrum geometry in osteologically neutral pose. Measurements and taxa on Table 1. Scale = 1000 mm.

It’s perfectly clear that parts A and B of this figure have nothing to do with each other. It would be far more sensible for them to appear as two separate figures — which would allow part B enough space to convey its point much more clearly. (And would save us from a disconcertingly inflated caption).

And there are other, less important irritants. Authors’ given names not divulged, only initials. I happen to know that D. Vidal is Daniel, and that J. L. Sanz is José Luis Sanz; but I have no idea what the P in P. Mocho, the A in A. Aberasturi or the F in F. Ortega stand for. Journal names in the bibliography are abbreviated, in confusing and sometimes ludicrous ways: is there really any point in abbreviating Palaeogeography Palaeoclimatology Palaeoecology to Palaeogeogr. Palaeoclimatol. Palaeoecol?

The common theme

All of these problems — the unnatural shortening that relegates important material into supplementary information, the downplaying of methods, the tiny figures that ram unrelated illustrations into compound images, even the abbreviating of author names and journal titles — have this in common: that they are aping how Science ‘n’ Nature appear in print.

They present a sort of cargo cult: a superstitious belief that extreme space pressures (such as print journals legitimately wrestle with) are somehow an indicator of quality. The assumption that copying the form of prestigious journals will mean that the content is equally revered.

And this is simply idiotic. Scientific Reports is an open-access web-only journal that has no print edition. It has no rational reason to compress space like a print journal does. In omitting the “aniel” from “Daniel Vidal” it is saving nothing. All it’s doing is landing itself with the limitations of print journals in exchange for nothing. Nothing at all.

Why does this matter?

This squeezing of a web-based journal into a print-sized pot matters because it’s apparent that a tremendous amount of brainwork has gone into Vidal et al.’s research; but much of that is obscured by the glam-chasing presentation of Scientific Reports. It reduces a Pinter play to a soap-opera episode. The work deserved better; and so do readers.

References

 

Robin N. Kok asked an interesting question on Twitter:

For all the free money researchers throw at them, they might as well be shareholders. Maybe someone could model a scenario where all the APC money is spent on RELX shares instead, and see how long it takes until researchers own a majority share of RELX.

Well, Elsevier is part of the RELX group, which has a total market capitalisation of £33.5 billion. We can’t know directly how much of that value is in Elsevier, since it’s not traded independently. But according to page 124 of their 2017 annual report (the most recent one available), the “Scientific, Technical and Medical” part of RELX (i.e. Elsevier) is responsible for £2,478M of the total £7,355M revenue (33.7%), and for £913M of the £2,284M profit (40.0%). On the basis that a company’s value is largely its ability to make a profit, let’s use the 40% figure, and estimate that Elsevier is worth £13.4 billion.

(Side-comment: ouch.)

According to the Wellcome Trust’s 2016/17 analysis of its open access spend, the average APC for Elsevier articles was £3,049 (average across pure-OA journals and hybrid articles).

On that basis, it would take 4,395,000 APCs to buy Elsevier. How long would that take to do? To work that out, we first need to know how many APC-funded articles they publish each year.

From page 14 of the same annual report, Elsevier published “over 430,000 articles” in a year. But most of those will have been in subscription journals. The same page says “Subscription sales generated 72% of revenue, transactional sales 26% and advertising 2%”, so assuming that transactional sales means APCs and that per-article revenue was roughly equal for subscription and open-access articles, that means 26% of their articles — a total of 111,800.

At 111,800 APCs per year, it would take a little over 39 years to accumulate the 4,395,000 APCs we’d need to buy Elsevier outright.

That’s no good — it’s too slow.

What if we also cancelled all our subscriptions, and put those funds towards the buy-out, too? That’s actually a much simpler calculation. Total Elsevier revenue was £2,478M. Discard the 2% that’s due to advertising, and £2,428M was from subscriptions and APCs. If we saved that much for just five and a half years, we’d have saved enough to buy the whole company.

That’s a surprisingly short time, isn’t it?

(In practice of course it would be much faster: the share-price would drop precipitously as we cancelled all subscriptions and stopped paying APCs, instantly cutting revenue to one fiftieth of what it was before. But we’ll ignore that effect for our present purposes.)
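For anyone who wants to check or vary the figures, the whole back-of-envelope calculation above can be reproduced in a few lines. This is just a sketch of the arithmetic, using the numbers quoted from the RELX 2017 annual report and Wellcome’s 2016/17 APC analysis; it is not a serious financial model.

```python
# Back-of-envelope reproduction of the Elsevier buy-out arithmetic.
# All input figures are taken from the post itself.

elsevier_value = 13.4e9       # £: 40% of RELX's £33.5bn market cap
avg_apc = 3049                # £: mean Elsevier APC (Wellcome 2016/17)
articles_per_year = 430_000   # Elsevier annual output (annual report, p.14)
apc_share = 0.26              # "transactional sales" share of revenue

# Scenario 1: divert only APC money.
apcs_needed = elsevier_value / avg_apc
apc_articles_per_year = articles_per_year * apc_share
years_apc_only = apcs_needed / apc_articles_per_year

# Scenario 2: divert subscription spend as well
# (98% of £2,478M total revenue, discarding the 2% advertising).
annual_spend = 2478e6 * 0.98
years_all_spend = elsevier_value / annual_spend

print(f"APCs needed to buy Elsevier: {apcs_needed:,.0f}")
print(f"APC money only:   {years_apc_only:.1f} years")
print(f"APCs + subs spend: {years_all_spend:.1f} years")
```

Running this gives roughly 4.4 million APCs, about 39 years on APC money alone, and about five and a half years if subscription spend is diverted too — matching the figures above.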

 

The opening remarks by the hosts of conferences are usually highly forgettable, a courtesy platform offered to a high-ranking academic who has nothing to say about the conference’s subject. NOT THIS TIME!

This is the opening address of APE 2018, the Academic Publishing in Europe conference. The remarks are by Martin Grötschel, who as well as being president of the host institution, the Berlin Brandenburg Academy of Sciences and Humanities, is a 25-year veteran of open-access campaigning, and a member of the German DEAL negotiating team.

Here are some choice quotes:

1m50s: “I have always been aware of the significant imbalance and the fundamental divisions of the academic publication market. Being in the DEAL negotiation team, this became even more apparent …”

2m04s: “On the side of the scientists there is an atomistic market where, up to now and unfortunately, many of the actors play without having any clue about the economic consequences of their activities.”

2m22s: “In Germany and a few other countries where buyer alliances have been organised, they are, as expected, immediately accused of forming monopolies and they are taken to court — fortunately, without success, and with the result of strengthening the alliances.”

2m38s: “On the publishers’ side there is a very small number of huge publication enterprises with very smart marketing people. They totally dominate the market, produce grotesque profits, and amazingly manage to pretend to be the Good Samaritans of the sciences.”

2m27s: “And there are the tiny [publishers …] tentatively observed by many delegates of the big players, who are letting them play the game, ready to swallow them if an opportunity comes up.”

3m18s: “When you, the small publishers, discuss with the representatives of the big guys, these are most likely very friendly to you. But […] when it comes to discussing system changes, when the arguments get tight, the smiles disappear and the greed begins to gleam.”

3m42s: “You will hear in words, and not implicitly, that the small academic publishers are considered to be just round-off errors, tolerated for another while, irrelevant for the world-wide scientific publishing market, and having no influence at all.”

4m00s: “One big publisher stated: if your country stops subscribing to our journals, science in your country will be set back significantly. I responded […] it is interesting to hear such a threat from a producer of envelopes who does not have any idea of the contents.”

4m39s: “Will the small publishers side with the intentions of the scholars? Or will you try to copy the move towards becoming a packaging industry that exploits the volunteer work of scientists and results financed by public funding?”

5m55s: “I do know, though, that the major publishers are verbally agreeing [to low-cost Gold #OpenAccess], but not acting in this direction at all, simply to maintain their huge profit margins.”

6m06s: “In a market economy, no-one can argue against profit maximisation [of barrier-based scholarly publishers]. But one is also allowed to act against it. The danger may be really disruptive, instead of smooth moves in the development of the academic publishing market.”

6m42s: “You may not have enjoyed my somewhat unusual words of welcome, but I do hope that you enjoy this year’s APE conference.”

It’s just beautiful to hear someone in such a senior position, given such a platform, using it to say so very clearly what we’re all thinking. (And as a side-note: I’m constantly amazed that so many advocates are so clear, emphatic and rhetorically powerful in their second, or sometimes third, language. Humbling.)

As RLUK’s David Prosser noted: “I bet this wasn’t what the conference organisers were expecting. A fabulous, hard-hitting polemic on big publishers #OA.”

 

 


Note. This post is adapted from a thread of tweets that I posted excerpting the video.