Look on my works, ye mighty, and despair!

DSCN0476

[Giraffatitan brancai paralectotype MB.R.2181 (formerly HMN S II), mounted skeleton in left anteroventrolateral view. Presacral vertebrae sculpted, skull scaled and 3D-printed from specimen T1. Round the decay of that colossal wreck, boundless and bare, the lone and level sands stretch far away.]

[This is a guest-post by Richard Poynder, a long-time observer and analyst of academic publishing now perhaps best known for the very detailed posts on his Open and Shut blog. It was originally part of a much longer post on that blog, the introduction to an interview with the publisher MDPI. I’m pleased to reproduce it here with Richard’s kind permission — Mike.]


In light of the current lack of information available to enable us to adequately judge the activities of scholarly publishers, or to evaluate the rigour of the publication process that research papers undergo, should not both scholarly publishers and the research community be committing themselves to much greater transparency than we see today?

For instance, should not open peer review now be the norm? Should not the reviews and the names of reviewers be routinely published alongside papers? Should not the eligibility criteria and application procedures for obtaining APC waivers be routinely published on a journal’s web site, along with regularly updated data on how many waivers are being granted? Should not publishers be willing to declare the nature and extent of the unsolicited email campaigns they engage in, in order to recruit submissions?

Should not the full details of “big deals” and hybrid OA “offsetting agreements” be made publicly available? Should not publishers be more transparent about why they charge what they charge for APCs? Should not publishers be more transparent about their revenues and profits? For instance, should not privately owned publishers make their accounts available online (even where there is no legal obligation to do so), and should not public companies provide more detailed information about the money they earn from publicly-funded research and exactly how it was earned? And should not publishers whose revenue comes primarily from the public purse be entirely open about who owns the company, and where it is based?

Should not the research community refuse to deal with publishers unwilling to do all the above? Did not US Supreme Court Justice Louis D. Brandeis have a point when he said, “Sunlight is said to be the best of disinfectants; electric light the most efficient policeman”?

 

Peggy Sue's Diner-saurs - London with sauropod

A couple of weekends ago, London and I went camping and stargazing at Afton Canyon, a nice dark spot about 40 miles east of Barstow. On the way home, we took the exit off I-15 at Ghost Town Road, initially because we wanted to visit the old Calico Ghost Town. But then we saw big metal dinosaurs south of the highway, and that’s how we came to Peggy Sue’s Diner and in particular the Diner-saur Park.

Peggy Sue's Diner-saurs - spinosaur

The Diner-saur Park is out behind the diner and admission is free. There are pools with red-eared sliders, paved walkways, grass, trees, a small gift shop, and dinosaurs. Here’s a Spinosaurus. Curiously popular in the Mojave Desert, those spinosaurs.

Peggy Sue's Diner-saurs - stegosaur

Ornithischians are represented by two stegosaurs, this big metal one and a smaller concrete one under a tree.

Peggy Sue's Diner-saurs - turtles

The turtles are entertaining. They paddle around placidly and crawl out to bask on the banks of the pools, and on little islands in the centers.

Peggy Sue's Diner-saurs - sign

The gift shop is tiny and the selection of paleo paraphernalia is not going to blow away any hard-core dinophiles. But it is not without its charm. And, hey, when you find a dinosaur gift shop in the middle of nowhere, you don’t quibble about size. London got some little plastic turtles and I got some cheap and horribly inaccurate plastic dinosaur skeletons to make a NecroDinoMechaLaser Squad for our Dinosaur Island D&D campaign.

Now, about that sauropod. The identification sign on the side of the gift shop notwithstanding, this is not a Brachiosaurus. With the short forelimbs and big back end, this is clearly a diplodocid. The neck is too skinny for Apatosaurus or the newly-resurrected Brontosaurus, and too long for Diplodocus. I lean toward Barosaurus, although I noticed in going back through these photos that with the mostly-straight, roughly-45-degree-angle neck, it is doing a good impression of the Supersaurus from my 2012 dinosaur nerve paper. Compare this:

Peggy Sue's Diner-saurs - sauropod 1

to this:

Wedel RLN fig1 - revised

If I had noticed it sooner, I would have maneuvered for a better, more comparable shot.

Guess I’ll just have to go back.

Reference

Wedel, M.J. 2012. A monument of inefficiency: the presumed course of the recurrent laryngeal nerve in sauropod dinosaurs. Acta Palaeontologica Polonica 57(2):251-256.

When a paper goes for peer-review at PLOS ONE, the reviewers are told not to make any judgement about how important or sexy or “impacty” the paper is — to judge it only on methodological soundness. All papers that are judged sound are to be published, without making guesses about which will and won’t improve the journal’s reputation through being influential down the line. (Such guesses are hopelessly inaccurate anyway.)

When PLOS ONE was new, this approach drew scorn from established publishers, but now those publishers all have their own journals that use similar editorial criteria (Nature’s Scientific Reports, AAAS’s Science Advances, Elsevier’s first attempt, Elsevier’s second attempt, the Royal Society’s Royal Society Open Science). Those editorial criteria have proved their worth.

But what are we going to call this style of peer-review?

It’s not a new problem. I discussed it with David Crotty three years ago without reaching any very satisfactory conclusion. But three years have not really helped us much as we try to agree on a term for this increasingly important and prevalent model.

What are the options on the table?

PLOS ONE-style peer-review. It’s a cumbersome term, and it privileges PLOS ONE when that is now far from the only journal to use this approach to peer-review (and may not even have been first).

Peer-review Lite. A snide term coined by people who wanted PLOS ONE to fail. It’s not a good description, and it carries baggage.

Scientific peer-review. This one came up in the discussion with David Crotty, but it’s not really acceptable because it would leave us still needing a term for what the Open Library of Humanities does.

Objective peer-review. This is the term that was used at the Royal Society meeting at the start of this week — the idea being that you review objectively for the quality of the research, but don’t make a subjective judgement of its importance. Several people didn’t like this on the grounds that even the “objective” half is inevitably subjective.

Any others that I missed?

I don’t have a good solution to propose to this problem; but I think it’s getting more urgent that we do solve it. We need a simple, unambiguous, universally understood term for a model of peer-review that is becoming increasingly pervasive and may well end up as the dominant form of peer-review.

Plough in — comments are open!

Update, 6pm

Liz Wager asked a very similar question four years ago, over on the BMJ blog: what to call the journals that use this approach to peer-review. Terms that she mentions include:

  • “bias to publish” (from BioMed Central)
  • “non-selective” (her own coinage, which she doesn’t like)
  • “bumboat” (I can’t explain this one, you’ll have to read the article)
  • “author-driven” or “author-focused” publication (AFP for short)
  • “search-located” (which she coins, then dismisses as tautologous)
  • “unconventional” or “non-traditional” (discarded as disparaging)
  • “non-discriminatory”, “impartial” or “unprejudiced”
  • “general” (dismissed as a non-starter)
  • “broad-spectrum” (inapplicable to specialised journals)

And then in the comments various people proposed:

  • “below the fold” journals
  • “omnivorous” (I quite like that one)
  • “alternative”
  • “Voldemortian journals”, which I don’t understand at all.
  • “Unfiltered”, contrasted with “filtered”
  • “inclusive”, contrasted with “exclusive” (I quite like this, too)
  • “high volume low hassle”

But there’s no conclusion or preferred term.

Copied from an email exchange.

Mike:

Did we know about the Royal Society’s PLOS ONE-clone?
http://rsos.royalsocietypublishing.org/about

I am in favour of this. I might well send them my next paper while the universal waiver is still in place.

Matt:

Did not know about it. Their post-waiver APC is insane. How can they possibly justify $1600?

Mike:

Well, I am obviously not a big fan of a $1600 APC; but it’s not a great deal more than PLOS ONE, and much less than PLOS Biology/Medicine.

But I think we’re converging on the idea that you can make a living running journals that charge $500 — see Ubiquity Press at http://www.ubiquitypress.com/site/publish/ – so I think anyone charging more than that has to explain why. In the case of the Royal Society, I assume it’s to fund their other activities; I am assured that I could get a waiver anyway, since I lack funding.

But are you saying you definitely won’t publish there even during the $0 phase?

Matt (with Mike’s previous post quoted):

Well, I am obviously not a big fan of a $1600 APC; but it’s not a great deal more than PLOS ONE

Right, but the direction of change should be down, not up.

and much less than PLOS Biology/Medicine.

Well, is this new journal supposed to be a PLOS ONE clone or a PLOS Biology clone? If the former, a lower APC is more desirable. And even talking in such terms is conceding that “prestige” outlets should get to charge more, which does not sit well with me.

But I think we’re converging on the idea that you can make a living running journals that charge $500 – see Ubiquity Press at http://www.ubiquitypress.com/site/publish/ – so I think anyone charging more than that has to explain why. In the case of the Royal Society, I assume it’s to fund their other activities;

I would like to have that demonstrated rather than assumed; I’d like to know the extra dough is actually going to support science rather than enrich shareholders. I’m not very optimistic.

I am assured that I could get a waiver anyway, since I lack funding.

Sure. But just because you could dodge that hammer doesn’t legitimize their swinging it.

But are you saying you definitely won’t publish there even during the $0 phase?

Probably not, for two reasons. First, I don’t want to put on airs, but I know that where we publish does influence other people’s thoughts on these things. PeerJ got at least a small legitimacy bump in paleo because we were in it right out of the gate and singing its praises [Or so I’ve heard, from more than one source. – MJW]. I don’t want to lend my endorsement to an outfit that is charging an unjustifiably high APC. Definitely not if the extra money is going to shareholders, and possibly not even if all of it is going to science. A $1600 APC only looks non-insane because the real bastards are charging even more. If we were in a PeerJ/Ubiquity world where APCs were all $500 or less, and a new journal came along that said, “Hey, you can publish with us and donate $1100 to our cause every time!” I’d say “Screw you!” and I assume most other folks would as well. So even if all the extra money is going to a good cause, they’re still promulgating the idea that APCs over $500 are justified. I can’t get behind that.
Second, will they give me everything PeerJ does? Because they are charging a hell of a lot more. Even if I get a waiver, if they’re not going to take care of me as well as PeerJ, screw ’em. The bar has been raised. Are they actually adding value relative to the new, post-PeerJ baseline, or are they in fact launching a journal with 2005 functionality in 2015? I should applaud them for belatedly getting on board?
In short, my authorship is theirs to earn, and so far I haven’t seen anything that makes me think they’ll earn it.

Mike:

Yes, APCs should be pushing downwards all the time now. I agree that the Royal Society coming in at a level above PLOS ONE doesn’t look good — indeed PLOS ONE’s own $1350 is also looking increasingly unfashionable in the light of (A) Ubiquity providing essentially the same service for 37% of the price, and (B) the fact that PLOS now runs at an operating surplus of 27%. To my mind, it’s well past time that PLOS ONE found a way to wind its APC down — really, down into triple figures ($999 would do), though even a nominal reduction of say $50 would send a good message.

You’re absolutely right that Royal Society Open Science is, by design, a PLOS ONE rather than a PLOS Biology: it reviews on correctness alone, not on guesswork about likely impact. So, yes, it’s PLOS ONE’s price-point that’s the correct comparison here.

Where you’re mistaken, though, is in assuming that the Royal Society has shareholders who might be skimming off the cream from the APC. There are none: the Society has nothing else to spend publishing profits on but furthering its scientific mission. (Of course, it doesn’t follow from this that it ought to be seeking to make a profit from publishing at all. It has other sources of income, and presently only 8% of its income is from publishing profits.)

But I hear you on the message sent by acquiescing to a $1600-APC journal, even if that APC is waived. We both want to shift towards a world where there are no journals that charge that kind of money — or at least, that if they do, it’s because they’re the kind of “selective” journal that thinks there’s something praiseworthy about rejecting most scientifically sound submissions. Journals of that kind don’t concern me one way or another, because I just don’t play that game.

The REF (Research Excellence Framework) is a time-consuming exercise that UK universities have to go through every few years to assess and demonstrate the value of their research to the government; the way funding is allocated between universities is largely dependent on the results of the REF. The exercise is widely resented, in part because the processes of preparing and reviewing the submissions are so time-consuming.

Dorothy Bishop has noted that results of REF assessments correlate strongly with departmental H-indexes (and suggested that we could save on the cost of future REFs by just using that H-index instead of the time-consuming peer-review process).

But it’s also been shown that H-index is strongly correlated with the simple number of publications. A seductive but naive conclusion would be: “we could just count publications for the next REF!”

But of course if we simply allocated research funding across universities on the basis of how many papers they produce, they would — they would have to — respond by simply racing to write more and more papers. Our already overloaded ability to assimilate new information would be further flooded. It’s a perverse incentive.

So this is a classic example of a very important general dictum: measure the thing you’re actually interested in, not a proxy. I don’t know if this dictum has been given a name yet, but it ought to be.

Measuring the thing we’re interested in is often difficult and time-consuming. But since only the thing we measure will ever be optimised for, we must measure the thing we want optimised — in this case, quality of research rather than quantity. That would still be true even if the correlation between REF assessments and departmental H-indexes were absolutely perfect, because that correlation is an accident (in the philosophical sense): changing the circumstances will break it.
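To make that “accidental correlation” point concrete, here is a toy simulation (pure illustration, with entirely made-up numbers, not real REF data): while nobody is being judged on raw paper count, the count tracks the underlying quality reasonably well; as soon as the count itself is what gets rewarded and departments start padding it, the correlation falls apart.

```python
import random

random.seed(42)

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def simulate(n_departments=200, proxy_is_target=False):
    """Toy model: each department has a latent research 'quality'.
    Paper count is quality plus noise; if the count itself is rewarded,
    departments pad it regardless of quality."""
    quality, papers = [], []
    for _ in range(n_departments):
        q = random.gauss(50, 10)                # latent quality (arbitrary units)
        count = 2 * q + random.gauss(0, 10)     # papers loosely track quality
        if proxy_is_target:
            count += random.uniform(0, 300)     # salami-slicing, padding, etc.
        quality.append(q)
        papers.append(count)
    return pearson(quality, papers)

print("quality vs papers, proxy not targeted:", round(simulate(), 2))
print("quality vs papers, proxy targeted:    ", round(simulate(proxy_is_target=True), 2))
```

The numbers are arbitrary; the only point is that a correlation which holds while nobody is gaming the proxy is not guaranteed to survive once the proxy becomes the thing being rewarded.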

No doubt all this reasoning is very familiar and painfully basic to people who have been working on the problem of metrics in assessment for years; to them, I apologise. For everyone else, I hope this comment provides some grains of insight.

[I originally wrote this post as a comment on the Royal Society blog post “How should research and impact be assessed?”, but I’m still waiting for it to be approved, so here it is.]

 

I’m at the Royal Society today and tomorrow as part of the Future of Scholarly Scientific Communication conference. Here’s the programme.

I’m making some notes for my own benefit, and I thought I might as well do them in the form of a blog-post, which I will continuously update, in case anyone else is interested.

I stupidly didn’t make notes on the first two speakers, but let’s pick up from the third:

Deborah Shorley, ex-librarian of Imperial College London

Started out by saying that she feels her opinion, as a librarian, is irrelevant, because librarians are becoming irrelevant. A pretty incendiary opening!

Important observations:

“Scientific communication in itself doesn’t matter; what matters is that good science be communicated well.”

And regarding the model of giving papers to publishers gratis, then paying them for the privilege of reading them:

“I can’t think of any other area where such a dopey business model pertains.”

(On which, see Scott Aaronson’s brilliant take on this in his review of The Access Principle — the article that first woke me up to the importance of open access.)

Shorley wants to bring publishing skills back in-house, to the universities and their libraries, and do it all themselves. As far as I can make out, she simply sees no need for specialist publishers. (Note: I do not necessarily endorse all these views.)

“If we don’t seize the opportunity, market forces will prevail. And market forces in this case are not pretty.”

Robert Parker, ex-head of publishing, Royal Society of Chemistry

Feels that society publishers allowed themselves to be overtaken by commercial publishers. Notes that when he started working for the RSC’s publishing arm, it was “positively Dickensian”, using technology that would mostly have been familiar to Gutenberg. Failure to engage with authors and with technology allowed the commercial publishers to get ahead — something that is only now being redressed.

He’s talking an awful lot about the impact factors of their various journals.

My overall impression is that his perspective is much less radical than that of Deborah Shorley, wanting learned-society publishers to be better able to compete with the commercial publishers.

Gary Evoniuk, policy director at GlaxoSmithKline

GSK submits 300-400 scientific studies for publication each year.

Although the rise of online-only journals means there is no good reason to not publish any finding, they still find that negative results are harder to get published.

“The paper journal, and the paper article, will soon be dead. This makes me a little bit sad.”

He goes further and wonders whether we need journal articles at all. When actual results are often available long before the article, is the context and interpretation that it provides valuable enough to be worth all the effort that’s expended on it? [My answer: yes — Ed.]

Discussion now follows. I probably won’t attempt to blog it (not least because I will want to participate). Better check out the Twitter stream.

Nigel Shadbolt, Open Data Institute

Begins by reflecting on a meeting ten years ago, convened at Southampton by Stevan Harnad, on … the future of scholarly scientific communication.

Still optimistic about the Semantic Web, as I guess we more or less have to be. [At least, about many separate small-sw semantic webs — Ed.] We’re starting to see regular search-engines like Google taking advantage of available machine-readable data to return better results.

Archiving data is important, of course; but it’s also going to be increasingly important to archive algorithms. GitHub is a useful prototype of this.

David Lambert, president/CEO, Internet2

Given how the digital revolution has transformed so many fields (shopping, auctions, newspapers, movies), why has scholarly communication been so slow to follow? [Because the incumbents with a vested interest in keeping things as they are have disproportionate influence, due to their monopoly ownership of content and brands — Ed.]

Current publication models are not good at handling data. So we have to build a new model to handle data. In which case, why not build a new model to handle everything?

New “born-digital” researchers are influenced by the models of social networks: that is going to push them towards SN-like approaches of communicating more stuff, more often, in smaller units. This is going to affect how scholarly communication is done.

Along with this goes an increasing level of comfort with collaboration. [I’m not sure I see that — Ed.]

Bonus section: tweets from Stephen Curry

He posted these during the previous talk. Very important:

Ritu Dhand, Nature

[A disappointing and unconvincing apologia for the continuing existence and importance of traditional publishers, and especially Nature. You would think that they, and they alone, guard the gates of academia from the barbarians. *sigh*. — Ed.]

Lunch

Georgina Mace, UCL

[A defence of classical peer-review. Largely an overview of how peer-review is supposed to work.]

“It’s not perfect, it has its challenges, but it’s not broken yet.”

Richard Smith, ex-editor of BMJ

[An attack on classical peer-review.]

“Peer review is faith-, not evidence-based; ineffective; a lottery; slow; expensive; wasteful; ineffective; easily abused; biased; doesn’t detect fraud; irrelevant.

Apart from that, it’s perfect.”

He doesn’t want to reform peer-review, he wants to get rid of it. Publish, let the world decide. That’s the real peer-review.

He cites studies supporting his assertions. Cochrane review concluded there is no evidence that peer-review is effective. The Ioannidis paper shows that most published findings are false.

Someone should be recording this talk. It’s solid gold.

Annual cost of peer-review is $1.9 billion.

[There is much, much more. I can’t get it down quickly enough.]

Georgina Mace’s rebuttal

… amounts to contradicting Richard Smith’s evidence-supported statements, but she provides no evidence in support of her position.

Richard Smith’s counter-counter rebuttal

… cites a bunch more studies. This is solid. Solid.

For those who missed out, see Smith’s equally brutal paper Classical peer review: an empty gun. I find his conclusion (that we should just dump peer-review) emotionally hard to accept, but extremely compelling based on actual, you know, evidence.

Fascinating to hear the level of denial in the room. People really, really want to keep believing in peer-review, in spite of evidence. I understand that impulse, but I think it’s unbecoming in scientists.

The challenge for peer-review advocates is: produce evidence that it has value. No-one has responded to that.

Richard Sever, Cold Spring Harbor Laboratory Press

Richard presents the bioRxiv preprint server. Turns out it’s pronounced “bio-archive”, not “bye-orx-ive”.

Nothing in this talk will be new to regular SV-POW! readers, but he makes good, compelling points in favour of preprinting (which we of course agree with!)

Elizabeth Marincola, CEO, PLOS

PLOS is taking steps towards improving peer-review:

  • Use of article-level metrics
  • Moves towards open review
  • Move toward papers evolving over time, not being frozen at the point of publication
  • Better recognition of different kinds of contribution to papers
  • Intention to make submitted papers available to view before peer-review has been carried out, subject only to checks on ethical and technical standards: they aim to make papers available in “a matter of days”.

She notes that much of this is not original: elements of these approaches are in F1000 Research, bioRxiv, etc.

Jan Velterop, science publisher with just about everyone at some point

“I’m basically with Richard Smith when it comes to abolishing peer review, but I have a feeling it won’t happen in the next few weeks.”

The situation of publishers:

“Academia throws money at you. What do you do? You pick it up.”

Velterop gets a BIG laugh for this:

“Does peer-review benefit science? I think it does; and it also benefits many other journals.”

He quotes a Scholarly Kitchen blog-post[citation needed] as putting the cost of technical preparation at PubMed Central — translating an MS-Word manuscript into valid JATS XML — at $47. So why do we pay $3000 APCs? Surely the peer-review phase doesn’t cost $2953?

Update: here is that Scholarly Kitchen article.

Velterop’s plan is to streamline the review-and-publish process as follows:

  • Author writes manuscript.
  • She solicits reviews from two experts, using her own knowledge of the field to determine who is suitably skilled.
  • They eventually sign off (perhaps after multiple rounds of revisions).
  • The author submits the manuscript, along with the endorsements.
  • The editor checks with the endorsers that they really have given endorsement.
  • The article is posted.

Bam, done!

And at that point in the proceedings, my battery was running dangerously low. I typed a tweet: “low battery may finally force me to shut up! #RSSC”, but literally between typing that and hitting the Tweet button, my laptop shut down. So that’s it for day 1. I’ll do a separate post for the second and final day.