This post is a response to Copyright from the lens of a lawyer (and poet), posted a couple of days ago by Elsevier’s General Counsel, Mark Seeley. Yes, I am a slave to SIWOTI syndrome. No, I shouldn’t be wasting my time responding to this. Yes, I ought to be working on that exciting new manuscript that we SV-POW!er Rangers have up and running. But but but … I can’t just let this go.


Copyright from the lens of a lawyer (and poet) is a defence of Elsevier’s practice of having copyright encumber scientific publishing. I tried to read it in the name of fairness. It didn’t go well. The very first sentence is wrong:

It is often said that copyright law is about a balance of interests and communities, creators and users, and ultimately society as a whole.

No. Copyright is not a balance between competing interests; it’s a bargain that society makes. We, the people, give up some rights in exchange for incentivising creative people to make new work, because that new work is of value to society. To quote the US constitution’s helpful clause, copyrights exist “To promote the Progress of Science and useful Arts” — not for authors, but for wider society. And certainly not for publishers who coerce authors to donate copyright!

(To be fair to Seeley, he did hedge by writing “It is often said that copyright law is about a balance”. That is technically true. It is often said; it’s just wrong.)

Well, that’s three paragraphs on the first sentence of Elsevier’s defence of copyright. I suppose I’d better move on.

The STM journal publishing sector is constantly adjusting to find the right balance between researcher needs and the journal business model, as refracted through copyright.

Wrong wrong wrong. We don’t look for a balance between researchers’ needs (i.e. science) and the journal business model. Journals are there to serve science. That’s what they’re for.

Then we have the quote from Mark Fischer:

I submit that society benefits when the best creative spirits can be full-time creators and not part-timers doing whatever else (other than writing, composing, painting, etc.) they have to do to pay the rent.

This may be true. But it is totally irrelevant to scholarly copyright. That should hardly need pointing out, but here it is for those hard of thinking. Scholars make no money from the copyright in the work they do, because (under the Elsevier model) they hand that copyright over to the publisher. Their living comes in the form of grants and salaries, not royalties.

Ready for the next one?

The alternatives to a copyright-based market for published works and other creative works are based on near-medieval concepts of patronage, government subsidy […]

Woah! Governments subsidising research and publication is “near-medieval”? And there we were thinking it was by far the most widespread model. Silly us. We were all near-medieval all this time.

Someone please tell me this is a joke.

Moving swiftly on …

Loud advocates for “copyright reform” suggest that the copyright industries have too much power […] My comparatively contrarian view is that this ignores the enormous creative efforts and societal benefits that arise from authoring and producing the original creative work in the first place: works that identify and enable key scientific discoveries, medical treatments, profound insights, and emotionally powerful narratives and musical experiences.

Wait, wait. Are we now saying that … uh, the only reason we get scientific discoveries and medical treatments is because … er … because of copyright? Is that it? That can’t be it. Can it?

Copyright has no role in enabling this. None.

In fact, it’s worse than that. The only role of copyright in modern scholarly publishing is to prevent societal benefits arising from scientific and medical research.

The article then wanders off into an (admittedly interesting) history of Seeley’s background as a poet, and as a publisher of literary magazines. The conclusion of this section is:

Of course creators and scientists want visibility […] At the very least, they’d like to see some benefit and support from their work. Copyright law is a way of helping make that happen.

This article continues to baffle. The argument, if you want to dignify it with that name, seems to be:

  • poets like copyright
  • => we copyright other people’s science
  • => … profit!

Well, that was incoherent. But never mind: finally we come to part of the article that makes sense:

  • There is the “idea-expression” dichotomy — that copyright protects expression but not the fundamental ideas expressed in a copyright work.

This is correct, of course. That shouldn’t be cause for comment, coming from a copyright lawyer, but the point needs to be made because the last time an Elsevier lawyer blogged, she confused plagiarism with copyright violation. So in that respect, this new blog is a step forward.

But then the article takes a sudden left turn:

The question of the appropriateness of copyright, or “authors’ rights,” in the academic field, particularly with respect to research journal articles, is sometimes controversial. In a way quite similar to poets, avant-garde literary writers and, for that matter, legal scholars, research academics do not rely directly on income from their journal article publishing.

Er, wait, what? So you admit that scholarly authors do not benefit from copyright in their articles? We all agree, then, do we? Then … what was the first half of the article supposed to be about?

And in light of this, what on earth are we to make of this:

There is sometimes a simplistic “repugnance” about the core publishing concept that journal publishers request rights from authors and in return sell or license those rights to journal subscribers or article purchasers.

Seeley got that much right! (Apart from the mystifyingly snide use of “simplistic” and the inexplicable scare-quotes.) The question is why he considers this remotely surprising. Why would anyone not find such a system repugnant? (That was a rhetorical question, but here’s the answer anyway: because they make a massive profit from it. That is the only reason.)

Well, we’re into the final stretch. The last paragraph:

Some of the criticism of the involvement of commercial publishing and academic research is simply prejudice, in my view;

Yes. Some of us are irrationally prejudiced against a system in which researchers laboriously create new knowledge, only for it to be locked up behind a paywall. It’s like the irrational prejudice some coal-miners have against the idea of the coal they dig up being immediately buried again.

And finally, this:

Some members of the academic community […] base their criticism on idealism.

Isn’t that odd? I have never understood why some people consider “idealism” to be a criticism. I accept it as high praise. People who are not idealists have nothing to base their pragmatism on. They are pragmatic, sure, but to what end?

So what are we left with? What is Seeley’s article actually about? It’s very hard to pick out a coherent thread. If there is one, it seems to be this: copyright is helpful for some artists, so it follows that scholarly authors should donate their copyright to for-profit publishers. That is a consequence that, to my mind, does not follow particularly naturally from the hypothesis.

[Today’s live-blog is brought to you by Yvonne Nobis, science librarian at Cambridge, UK. Thanks, Yvonne! — Mike.]

Session 1 — The Journal Article: is the end in sight?

Slightly late start due to trains!

Just arrived to hear Aileen Fyfe (University of St Andrews) saying that something similar to journal articles will be needed for ‘quite some time’.

Steven Hall, IOP

The article still fulfils its primary role — the registration, dissemination, certification and archiving of scholarly information. The journal article still provides a fixed point — and researchers still see the article as a critical part of research — although it is now evolving into something much more fluid.

Steve then outlined some of the initiatives that IOP have implemented. Examples include the development of thesauri — every article is ‘semantically fingerprinted’. No particular claims are made for IOP innovation — some are broad industry initiatives — but they demonstrate how the journal article has evolved.

(Personal bias: as a librarian I like the IOP journal and ebook offering!) IOP have worked with RIN on a study of researcher behaviour in the physical sciences — researching the impact of new technology on researchers. Primary conclusion: researchers in the physical sciences are conservative and, oddly, see the journal article as the most important method of communicating research. (This seems at odds with their use of arXiv?)


Mike Brady discusses the ‘floribunda’ of the 19th century scholarly publishing environment.

Sally Shuttleworth (Oxford) questions the move from the gentleman scholar to the publishing machinery of the 21st century, and wonders whether there will be a resurgence due to citizen science.

Tim Smith (CERN) proposes that change is being technologically driven.

Stuart Taylor (Royal Society publishing) agrees with Steve that there is a disconnect between reality and outlandish speculations about what should be in place, and the ‘bells and whistles’ that publishers are adding into the mix that go unused.

Cameron Neylon: what the web gives us is the ability to separate content from display — and this gives us a huge opportunity — and many of us in this room did predict the death of the article several years ago … (This was premature!)

Hermann Hauser makes the valid point that it is well-nigh impossible for a researcher now to understand the breadth of a whole field.

Ginny Barbour raises the question of incentives (the article still being the accepted de facto standard). The point was also raised that perhaps this meeting should be repeated with an audience 30 years younger…

No panel comment on this point; however, I fear what many would say is that this meeting represents the apex of a pyramid, where these discussions have occurred for years in other conferences (for example, the various Science Online and FORCE meetings) and have driven both innovation (novel publishing models) and the creation of tools.

I asked (predictably enough) about use of arXiv — slightly surprised at the response to the RIN study.

Steve Hall: ‘science publishers are service providers’ — if scientific communities become clear about what they want, we can provide such services — but coherent thinking needs to underwrite this. Steve also questions the incentives put in place for researchers to publish in certain high impact journals and how this is damaging.

David Colquhoun raises the issue of perverse incentives for judging researchers, including altmetrics.

Steve Hall: arXiv won’t allow publishers on their governing bodies — and, interestingly, librarians (take note!) should be engaging with the storage of the data!

Aileen, in conclusion, asks how the plurality of modes of communication we had in the 18th and 19th centuries got closed down to purely journals. The issue of learned societies and their relationship with commercial agencies is often a cause for concern…

Session 2 — How might scientists communicate in the future?

Mike Brady

the role of the speakers is to catalyse discussion amongst ourselves…

Anita de Waard (Elsevier)

350 years ago, science was an individual enterprise; although there are now many large collaborations, much scientific discussion is still at a peer-to-peer level.

How do we unify the needs of the collective and individual scientists?

We need to create the systems of knowledge management that work for scientists, publishers and librarians.

Quotes John Perry Barlow: ‘Let us endeavour to build systems that allow a kid in Mali who wants to learn about proteomics to not be overwhelmed by the irrelevant and the untrue’ (It would be cruel to mention various issues with the Journal of Proteomics last year…)

The problem is that the paper is the overarching modus operandi. Citations to data are often citations to pictures. We need better ways of citing and connecting knowledge. ‘Papers are stories that persuade with data’, says Anita. She argues we need better ways of citing claims, and of constructing chains of evidence that can be traced to their source.

For this we need tools and to build habits of citing evidence into all aspects of our educational system (starting at kindergarten)!

Another problem is that data cannot be found or integrated (this, to my view, is something that the academic community should be tackling, not out-sourcing, which is the way I see this going…)

An understanding needs to evolve that science is a collective endeavour.

Anita is now covering scientific software (‘scientific software sucks’ is the quote attributed to Ben Goldacre yesterday) — it compares unfavourably to Amazon … I’m not sure how true this is.

Anita is very dismissive of scientific software as inadequate — though often code is written for a particular purpose. (My view is that this is not something that can easily be commercially outsourced — high-energy physics, anyone?)

Mark Hahnel, FigShare

(FigShare was built as a way for Mark to curate/publish his own research.)

Mark opens with data-mandate policies from different funders (at Cambridge we are already feeling the effect of these) — especially EPSRC’s: all digital outputs from funded research must now be made available.

Mark talks around the Open Academic Tidal Wave (sorry, not a great link, but the only one I can find — thanks, Lou Woodley): we are at level 4 of this.

Mark surveyed publishers about what they see as the future of publishing in 2020 — they replied ‘version control on papers, data incorporated within the article’. But the technology is there already — he uses the example of F1000 Research.


Mike Brady: It’s as well Imelda Marcos was not a scientist — following on from Anita’s claims that software for buying shoes is more fit for purpose than scientific software!

Hermann Hauser: willing to fund things that help with an ‘evidence engine’ to avoid repeats of the MMR fiasco!

David Colquhoun: science is not the same as buying shoes! Refreshingly cynical.

Wendy Hall stresses the importance of linking information — every publisher should have a semantically linked website (and on the science of buying shoes).

Comment from the floor: Getting more data into repositories may not be exciting but is essential. Mark agrees — once the data is there you can do things with it, such as building apps to extract what you need.

Richard Sever (Cold Spring Harbor Press) with a great quote: “The best way to store genomic data is in DNA.”

Mike Taylor: when we discuss how data is associated with papers, we must ensure that this is ‘open’ — including the APIs — to avoid repeating the ‘walled garden of silos’ in which we find ourselves now.

Question of electronic access in the future (Dave Garner) — how do we future-proof science? Very valid — we can’t access material from 1980s floppy disks!

Anita: data is entwined with software, and we need to preserve these executable components. Issues return again to citation, data citation and incentives, which have been a pervasive theme over the last couple of days.

Cameron Neylon: we need to move to a situation where we can publish data itself, and this can be an incremental process, not the current binary ‘publish or not publish’ situation (which of course comes back to incentives).

In summary, Mark questions timescales, and Anita wonders how the Royal Society can bring these topics to the world.

Time for lunch, and now over to Matthew Dovey to continue this afternoon (alongside Steven Hall, another of my former colleagues)!

I’ll try to live-blog the first day of part 2 of the Royal Society’s Future of Scholarly Scientific Communication meeting, as I did for the first day of part 1. We’ll see how it goes.

Here’s the schedule for today and tomorrow.

Session 1: the reproducibility problem

Chair: Alex Halliday, vice-president of the Royal Society

Introduction to reproducibility. What it means, how to achieve it, what role funding organisations and publishers might play.

For an introduction/overview, see #FSSC – The role of openness and publishers in reproducible research.

Michele Dougherty, planetary scientist

It’s very humbling being at this meeting, when it’s so full of people who have done astonishing things. For example, Dougherty discovered an atmosphere around one of Saturn’s moons by an innovative use of magnetic field data. So many awesome people.

Her work is largely to do with very long-term projects involving planetary probes, e.g. the Cassini-Huygens probe. It’s going to be interesting to know what can be said about reproducibility of experiments that take decades and cost billions.

“The best science output you can obtain is as a result of collaboration with lots of different teams.”

Application of reproducibility here is about making the data from the probes available to the scientific community — and the general public — so that the results of analyses can be reproduced. So this is not experimental replication.

Such data often has a proprietary period (essentially an embargo) before its public release, partly because it’s taken 20 years to obtain and the team that did this should get the first crack at it. But it all has to be made publicly available.

Dorothy Bishop, chair of Academy of Medical Sciences group on replicability

The Royal Society is very much not the first to be talking about replicability — these discussions have been going on for years.

About 50% of studies in Bishop’s field are capable of replication. Numbers are even worse in some fields. Replications of drug trials are particularly important, as false results kill people.

Journals cause awful problems with impact-chasing: e.g. high-impact journals will publish sexy-looking autism studies with tiny samples, which no reputable medical journal would publish.

Statistical illiteracy is very widespread. Authors can give the impression of being statistically aware but in a superficial way.

Too much HARKing going on (Hypothesising After Results Known — searching a dataset for anything that looks statistically significant in the shallow p < 0.05 sense).
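To see concretely why HARKing produces junk, here is a minimal simulation (my own illustration, not anything from Bishop’s talk) in which every variable is pure noise, yet trawling the dataset still turns up “significant” results at p < 0.05:

```python
# A toy HARKing demo (illustrative sketch only): 40 measures of pure noise,
# tested post hoc against random group labels. Some will cross p < 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_subjects, n_measures = 30, 40

group = rng.integers(0, 2, n_subjects)            # arbitrary group labels
data = rng.normal(size=(n_subjects, n_measures))  # no real effects anywhere

hits = [i for i in range(n_measures)
        if stats.ttest_ind(data[group == 0, i],
                           data[group == 1, i]).pvalue < 0.05]

# At alpha = 0.05 we expect roughly 2 spurious "findings" out of 40 tests.
print(f"{len(hits)} of {n_measures} null measures came out 'significant'")
```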

“It’s just assumed that people doing research, know what they are doing. Often that’s just not the case.”

Many more criticisms of how the journal system encourages bad research — they’re coming much faster than I can type them. This is a storming talk; I wish a recording would be made available.

Employers are also to blame for prioritising expensive research proposals (= large grants) over good ones.

All of this causes non-replicable science.

Floor discussion

Lots of great stuff here that I just can’t capture, sorry. Best follow the tweet stream for the fast-moving stuff.

One highlight: Pat Brown thinks it’s not necessarily a problem if lots of statistically underpowered studies are performed, so long as they’re recognised as such. Dorothy Bishop politely but emphatically disagrees: they waste resources, and produce results that are not merely useless but actively wrong and harmful.

David Colquhoun comments from the floor: while physical sciences consider “significant results” to be five sigmas (p < 0.000001), biomed is satisfied with slightly less than two sigmas (p < 0.05), which really should be interpreted only as “worth another look”.
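For anyone who wants to check those thresholds, the sigma-to-p conversion is a one-liner against the standard normal distribution. This sketch (mine, not Colquhoun’s) uses two-sided tails:

```python
# Convert sigma thresholds to two-sided p-values.
from scipy.stats import norm

for sigmas in (2, 5):
    p = 2 * norm.sf(sigmas)  # sf = survival function, the upper-tail probability
    print(f"{sigmas} sigma -> p = {p:.2g}")

# Output:
#   2 sigma -> p = 0.046   (roughly biomed's p < 0.05 criterion)
#   5 sigma -> p = 5.7e-07 (the particle-physics discovery threshold)
```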

Dorothy Bishop on publishing data, and authors’ reluctance to do so: “It should be accepted as a cultural norm that mistakes in data do happen, rather than shaming people who make data open.”

Coffee break

Nothing to report :-)

Session 2: what can be done to improve reproducibility?

Iain Hrynaszkiewicz, head of data, Nature

In an analysis of retractions of papers in PubMed Central, 2/3 were due to fraud and 20% due to error.

Access to methods and data is a prerequisite for replicability.

Pre-registration, sharing of data, reporting guidelines all help.

“Open access is important, but it’s only part of the solution. Openness is a means to an end.”

Hrynaszkiewicz says text-miners are a small minority of researchers. [That is true now, but I and others are confident this will change rapidly as the legal and technical barriers are removed: it has to, since automated reading is the only real solution to the problem of keeping up with an exponentially growing literature. — Ed.]

Floor discussion

When a paper goes for peer-review at PLOS ONE, the reviewers are told not to make any judgement about how important or sexy or “impacty” the paper is — to judge it only on methodological soundness. All papers that are judged sound are to be published without making guesses about which will and won’t improve the journal’s reputation through being influential down the line. (Such guesses are hopelessly inaccurate anyway.)

When PLOS ONE was new, this approach drew scorn from established publishers, but now those publishers all have their own journals that use similar editorial criteria (Nature’s Scientific Reports, AAAS’s Science Advances, Elsevier’s first attempt, Elsevier’s second attempt, the Royal Society’s Royal Society Open Science). Those editorial criteria have proved their worth.

But what are we going to call this style of peer-review?

It’s not a new problem. I discussed it with David Crotty three years ago without reaching any very satisfactory conclusion. But three years have not really helped us much as we try to agree on a term for this increasingly important and prevalent model.

What are the options on the table?

PLOS ONE-style peer-review. It’s a cumbersome term, and it privileges PLOS ONE when that is now far from the only journal to use this approach to peer-review (and may not even have been first).

Peer-review Lite. A snide term coined by people who wanted PLOS ONE to fail. It’s not a good description, and it carries baggage.

Scientific peer-review. This one came up in the discussion with David Crotty, but it’s not really acceptable because it would leave us still needing a term for what the Open Library of Humanities does.

Objective peer-review. This is the term that was used at the Royal Society meeting at the start of this week — the idea being that you review objectively for the quality of the research, but don’t make a subjective judgement of its importance. Several people didn’t like this on the grounds that even the “objective” half is inevitably subjective.

Any others that I missed?

I don’t have a good solution to propose to this problem; but I think it’s getting more urgent that we do solve it. We have to have a simple, unambiguous, universally understood term for a model of peer-review that is becoming increasingly pervasive and may well end up as the dominant form of peer-review.

Plough in — comments are open!

Update, 6pm

Liz Wager asked a very similar question four years ago, over on the BMJ blog: what to call the journals that use this approach to peer-review. Terms that she mentions include:

  • “bias to publish” (from BioMed Central)
  • “non-selective” (her own coinage, which she doesn’t like)
  • “bumboat” (I can’t explain this one, you’ll have to read the article)
  • “author-driven” or “author-focused” publication (AFP for short)
  • “search-located” (which she coins, then dismisses as tautologous)
  • “unconventional” or “non-traditional” (discarded as disparaging)
  • “non-discriminatory”, “impartial” or “unprejudiced”
  • “general” (dismissed as a non-starter)
  • “broad-spectrum” (inapplicable to specialised journals)

And then in the comments various people proposed:

  • “below the fold” journals
  • “omnivorous” (I quite like that one)
  • “alternative”
  • “Voldermortian journals”, which I don’t understand at all.
  • “Unfiltered”, contrasted with “filtered”
  • “inclusive”, contrasted with “exclusive” (I quite like this, too)
  • “high volume low hassle”

But there’s no conclusion or preferred term.

The REF (Research Excellence Framework) is a time-consuming exercise that UK universities have to go through every few years to assess and demonstrate the value of their research to the government; the way funding is allocated between universities is largely dependent on the results of the REF. The exercise is widely resented, in part because the processes of preparing and reviewing the submissions are so time-consuming.

Dorothy Bishop has noted that results of REF assessments correlate strongly with departmental H-indexes (and suggested that we could save on the cost of future REFs by just using that H-index instead of the time-consuming peer-review process).
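For readers who haven’t met the metric: an H-index is the largest h such that h papers have at least h citations each. A minimal sketch, purely for concreteness (a departmental H-index is computed the same way over the department’s combined papers):

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h papers have at least h citations each."""
    cited = sorted(citations, reverse=True)
    h = 0
    while h < len(cited) and cited[h] >= h + 1:
        h += 1
    return h

print(h_index([10, 8, 5, 4, 3, 1]))  # 4: four papers each have >= 4 citations
```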

But it’s also been shown that H-index is strongly correlated with the simple number of publications. A seductive but naive conclusion would be: “we could just count publications for the next REF!”

But of course if we simply allocated research funding across universities on the basis of how many papers they produce, they would — they would have to — respond by simply racing to write more and more papers. Our already overloaded ability to assimilate new information would be further flooded. It’s a perverse incentive.

So this is a classic example of a very important general dictum: measure the thing you’re actually interested in, not a proxy. I don’t know if this dictum has been given a name yet, but it ought to be.

Measuring the thing we’re interested in is often difficult and time-consuming. But since only the thing we measure will ever be optimised for, we must measure the thing we want optimised — in this case, quality of research rather than quantity. That would still be true even if the correlation between REF assessments and departmental H-indexes were absolutely perfect, because that correlation is an accident (in the philosophical sense); changing the circumstances will break it.
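To make the accidental-correlation point concrete, here is a toy simulation (invented numbers, purely illustrative) in which paper count tracks research quality nicely until counting papers becomes the incentive, at which point gaming decouples the proxy from the thing we care about:

```python
# Goodhart-style toy model: a proxy (paper count) correlates with the target
# (research quality) only until the proxy itself is what gets rewarded.
import numpy as np

rng = np.random.default_rng(0)
quality = rng.normal(50, 10, 200)          # the thing we actually care about

# Today: output loosely tracks quality, so the proxy looks like a good measure.
papers = quality / 5 + rng.normal(0, 1, 200)
print(np.corrcoef(quality, papers)[0, 1])  # about 0.9

# Once "count papers" is the funding rule, salami-slicing inflates counts
# independently of quality, and the correlation collapses.
gaming = rng.exponential(10, 200)          # effort spent gaming, not on science
print(np.corrcoef(quality, papers + gaming)[0, 1])  # much weaker, about 0.2
```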

No doubt all this reasoning is very familiar and painfully basic to people who have been working on the problem of metrics in assessment for years; to them, I apologise. For everyone else, I hope this comment provides some grains of insight.

[I originally wrote the initial form of this post as a comment on the Royal Society blog how should research and impact be assessed?, but I’m still waiting for it to be approved, so here it is.]


I’m at the Royal Society today and tomorrow as part of the Future of Scholarly Scientific Communication conference. Here’s the programme.

I’m making some notes for my own benefit, and I thought I might as well do them in the form of a blog-post, which I will continuously update, in case anyone else is interested.

I stupidly didn’t make notes on the first two speakers, but let’s pick up from the third:

Deborah Shorley, ex-librarian of Imperial College London

Started out by saying that she feels her opinion, as a librarian, is irrelevant, because librarians are becoming irrelevant. A pretty incendiary opening!

Important observations:

“Scientific communication in itself doesn’t matter; what matters is that good science be communicated well.”

And regarding the model of giving papers to publishers gratis, then paying them for the privilege of reading them:

“I can’t think of any other area where such a dopey business model pertains.”

(On which, see Scott Aaronson’s brilliant take on this in his review of The Access Principle — the article that first woke me up to the importance of open access.)

Shorley wants to bring publishing skills back in-house, to the universities and their libraries, and do it all themselves. As far as I can make out, she simply sees no need for specialist publishers. (Note: I do not necessarily endorse all these views.)

“If we don’t seize the opportunity, market forces will prevail. And market forces in this case are not pretty.”

Robert Parker, ex-head of publishing, Royal Society of Chemistry

Feels that society publishers allowed themselves to be overtaken by commercial publishers. Notes that when he started working for the RSC’s publishing arm, it was “positively Dickensian”, using technology that would mostly have been familiar to Gutenberg. Failure to engage with authors and with technology allowed the commercial publishers to get ahead — something that is only now being redressed.

He’s talking an awful lot about the impact factors of their various journals.

My overall impression is that his perspective is much less radical than that of Deborah Shorley, wanting learned-society publishers to be better able to compete with the commercial publishers.

Gary Evoniuk, policy director at GlaxoSmithKline

GSK submits 300-400 scientific studies for publication each year.

Although the rise of online-only journals means there is no good reason to not publish any finding, they still find that negative results are harder to get published.

“The paper journal, and the paper article, will soon be dead. This makes me a little bit sad.”

He goes further and wonders whether we need journal articles at all. When actual results are often available long before the article, is the context and interpretation that the article provides valuable enough to be worth all the effort that’s expended on it? [My answer: yes — Ed.]

Discussion now follows. I probably won’t attempt to blog it (not least because I will want to participate). Better check out the twitter stream.

Nigel Shadbolt, Open Data Institute

Begins by reflecting on a meeting ten years ago, convened at Southampton by Stevan Harnad, on … the future of scholarly scientific communication.

Still optimistic about the Semantic Web, as I guess we more or less have to be. [At least, about many separate small-sw semantic webs — Ed.] We’re starting to see regular search-engines like Google taking advantage of available machine-readable data to return better results.

Archiving data is important, of course; but it’s also going to be increasingly important to archive algorithms. GitHub is a useful prototype of this.

David Lambert, president/CEO, Internet2

Given how the digital revolution has transformed so many fields (shopping, auctions, newspapers, movies), why has scholarly communication been so slow to follow? [Because the incumbents with a vested interest in keeping things as they are have disproportionate influence due to their monopoly ownership of content and brands — Ed.]

Current publication models are not good at handling data. So we have to build a new model to handle data. In which case, why not build a new model to handle everything?

New “born-digital” researchers are influenced by the models of social networks: that is going to push them towards SN-like approaches of communicating more stuff, more often, in smaller units. This is going to affect how scholarly communication is done.

Along with this goes an increasing level of comfort with collaboration. [I’m not sure I see that — Ed.]

Bonus section: tweets from Stephen Curry

He posted these during the previous talk. Very important:

Ritu Dhand, Nature

[A disappointing and unconvincing apologia for the continuing existence and importance of traditional publishers, and especially Nature. You would think that they, and they alone, guard the gates of academia from the barbarians. *sigh*. — Ed.]


Georgina Mace, UCL

[A defence of classical peer-review. Largely an overview of how peer-review is supposed to work.]

“It’s not perfect, it has its challenges, but it’s not broken yet.”

Richard Smith, ex-editor of BMJ

[An attack on classical peer-review.]

“Peer review is faith-, not evidence-based; ineffective; a lottery; slow; expensive; wasteful; ineffective; easily abused; biased; doesn’t detect fraud; irrelevant.

Apart from that, it’s perfect.”

He doesn’t want to reform peer-review, he wants to get rid of it. Publish, let the world decide. That’s the real peer-review.

He cites studies supporting his assertions. A Cochrane review concluded there is no evidence that peer-review is effective. The Ioannidis paper shows that most published findings are false.

Someone should be recording this talk. It’s solid gold.

Annual cost of peer-review is $1.9 billion.

[There is much, much more. I can’t get it down quickly enough.]

Georgina Mace’s rebuttal

… amounts to contradicting Richard Smith’s evidence-supported statements, but she provides no evidence in support of her position.

Richard Smith’s counter-counter rebuttal

… cites a bunch more studies. This is solid. Solid.

For those who missed out, see Smith’s equally brutal paper Classical peer review: an empty gun. I find his conclusion (that we should just dump peer-review) emotionally hard to accept, but extremely compelling based on actual, you know, evidence.

Fascinating to hear the level of denial in the room. People really, really want to keep believing in peer-review, in spite of evidence. I understand that impulse, but I think it’s unbecoming in scientists.

The challenge for peer-review advocates is: produce evidence that it has value. No-one has responded to that.

Richard Sever, Cold Spring Harbor Press

Richard presents the bioRxiv preprint server. Turns out it’s pronounced “bio-archive”, not “bye-orx-ive”.

Nothing in this talk will be new to regular SV-POW! readers, but he makes good, compelling points in favour of preprinting (which we of course agree with!)

Elizabeth Marincola, CEO, PLOS

PLOS is taking steps towards improving peer-review:

  • Use of article-level metrics
  • Moves towards open review
  • Move toward papers evolving over time, not being frozen at the point of publication
  • Better recognition of different kinds of contribution to papers
  • Intention to make submitted papers available to view before peer-review has been carried out, subject only to checks on ethical and technical standards: they aim to make papers available in “a matter of days”.

She notes that much of this is not original: elements of these approaches are in F1000 Research, bioRxiv, etc.

Jan Velterop, science publisher with everyone at some point.

“I’m basically with Richard Smith when it comes to abolishing peer review, but I have a feeling it won’t happen in the next few weeks.”

The situation of publishers:

“Academia throws money at you. What do you do? You pick it up.”

Velterop gets a BIG laugh for this:

“Does peer-review benefit science? I think it does; and it also benefits many other journals.”

He quotes a Scholarly Kitchen blog-post[citation needed] as putting the cost of technical preparation at PubMed Central — translating from an MS-Word manuscript to valid JATS XML — at $47. So why do we pay $3000 APCs? Surely the peer-review phase doesn’t cost $2953?

Update: here is that Scholarly Kitchen article.

Velterop’s plan is to streamline the review-and-publish process as follows:

  • Author writes manuscript.
  • She solicits reviews from two experts, using her own knowledge of the field to determine who is suitably skilled.
  • They eventually sign off (perhaps after multiple rounds of revisions)
  • The author submits the manuscript, along with the endorsements.
  • The editor checks with the endorsers that they really have given endorsement.
  • The article is posted.

Bam, done!

And at that point in the proceedings, my battery was running dangerously low. I typed a tweet: “low battery may finally force me to shut up! #RSSC”, but literally between typing it and hitting the Tweet button, my laptop shut down. So that’s it for day 1. I’ll do a separate post for the second and final day.

There’s been some concern over Scientific Reports‘ new scheme whereby authors submitting manuscripts can pay $750 to have them peer-reviewed more quickly. Some members of the editorial board have quit over this development, feeling that it’s unfair to authors who can’t pay. Myself, I feel it at least shows admirable audacity — NPG has found a way to monetise its own lethargy, which is surely what capitalism is all about.

The real problem with this scheme is that $750 is an awful lot to gamble, as a sort of “pre-APC”, at a point when you don’t know whether your article is actually going to be published or not. If the peer-review returns an unfavourable verdict it’s just money down the drain.

So I welcome today’s announcement that, for only a slightly higher payment of a round $1000, it’s now possible to bypass peer-review completely, and move directly to publication. This seems like a much fairer deal for authors, and of course it streamlines the publication process yet further. Now authors can obtain the prestigious Nature Publishing Group imprint in a matter of a couple of days.

Onward and upward!

