While Mike’s been off having fun at the Royal Society, this has been happening:

Lots of feathers flying right now over the situation at the Medical Journal of Australia (MJA). The short, short version is that AMPCo, the company that publishes MJA, made plans to outsource production of the journal, and apparently some sub-editing and administrative functions as well, to Elsevier. MJA’s editor-in-chief, Professor Stephen Leeder, raised concerns about the journal getting involved with one of the most ethically problematic publishing companies in existence. And also about this having been done without consultation.

He was sacked for his trouble.

After Leeder was pushed out, his job was offered to MJA’s deputy editor, Tania Janusic. She declined, and resigned from the journal, as did 19 of the 20 members of the journal’s editorial advisory committee. (Some accounts say 18. Anyway, 90%+ of the committee is gone.)

When we first discussed the situation via email, Mike wrote, “My take is that at the present stage of the OA transition, editorial board resignations from journals controlled by predatory legacy publishers are about the most important visible steps that can be taken. Very good news for the world, even though it must be a mighty pain for the people involved.”

Yes. I feel pretty bad for the people involved, but I’m hugely supportive of what they’re doing.

I don’t know what we can do to materially contribute here, beyond amplifying the signal and lending our public support to Leeder, Janusic, and the 19 editors who resigned. That’s a courageous thing to do, but no-one should have to do it. The sooner we move to a world where scientific results and other forms of scholarly publication are freely available to all, instead of under the monopolistic control of a handful of exploitative, hugely profitable corporations, the better.

A short list of links, nowhere near exhaustive, if you’d like to read more:

UPDATE: In the first comment below, Alex Holcombe pointed us to this post written by Leeder himself, explaining the reasoning and consequences of his decision.

Also, dunno how I forgot this – if you haven’t already, you might be interested in signing the Cost of Knowledge boycott against Elsevier. Here’s the link.

[Today’s live-blog is brought to you by Yvonne Nobis, science librarian at Cambridge, UK. Thanks, Yvonne! — Mike.]

Session 1 — The Journal Article: is the end in sight?

Slightly late start due to trains!

Just arrived to hear Aileen Fyfe (University of St Andrews) saying that something similar to journal articles will be needed for ‘quite some time’.

Steven Hall, IOP.

The article still fulfils its primary roles — the registration, dissemination, certification and archiving of scholarly information. The journal article still provides a fixed point, and researchers still see the article as a critical part of research — although it is now evolving into something much more fluid.

Steve then outlined some of the initiatives that IOP has implemented. Examples include the development of thesauri — every article is ‘semantically fingerprinted’. No particular claims are made for IOP innovation — some are broad industry initiatives — but they demonstrate how the journal article has evolved.

(Personal bias: as a librarian I like the IOP journal and ebook offering!) IOP have worked with RIN on a study of researcher behaviour in the physical sciences, examining the impact of new technology on researchers. Primary conclusion: researchers in the physical sciences are conservative, and oddly see the journal article as the most important method of communicating research. (This seems at odds with use of arXiv?)


Mike Brady discusses the ‘floribunda’ of the 19th century scholarly publishing environment.

Sally Shuttleworth (Oxford) questions the move from the gentleman scholar to the publishing machinery of the 21st century and wonders if there will be a resurgence due to citizen science?

Tim Smith (CERN) proposes that change is being technologically driven.

Stuart Taylor (Royal Society publishing) agrees with Steve that there is a disconnect between reality and outlandish speculations about what should be in place, and notes the ‘bells and whistles’ that publishers are adding into the mix that are not used.

Cameron Neylon: what the web gives us is the ability to separate content from display — and this gives us a huge opportunity — and many of us in this room did predict the death of the article several years ago… (This was premature!)

Herman Hauser makes the valid point that it is well nigh impossible for a researcher now to understand the breadth of a whole field.

Ginny Barbour raises the question of incentives (the article still being the accepted de facto standard). The point was also raised that perhaps this meeting should be repeated with an audience 30 years younger…

No panel comment on this point; however, I suspect many would say that this meeting represents the apex of a pyramid: these discussions have occurred for years at other conferences (for example, the various Science Online and FORCE meetings) and have driven both innovation (novel publishing models) and the creation of tools.

I asked (predictably enough) about use of arXiv — slightly surprised at the response to the RIN study.

Steve Hall: ‘science publishers are service providers’ — if scientific communities become clear about what they want, we can provide such services — but coherent thinking needs to underwrite this. Steve also questions the incentives put in place for researchers to publish in certain high impact journals and how this is damaging.

David Colquhoun raises the issue of perverse incentives for judging researchers, including altmetrics.

Steve Hall: arXiv won’t allow publishers on its governing bodies — and, interestingly, librarians (take note!) should be engaging with the storage of the data!

Aileen, in conclusion, asks how the plurality of modes of communication we had in the 18th and 19th centuries got closed down to purely journals. The issue of learned societies and their relationship with commercial agencies is often a cause for concern…

Session 2 — How might scientists communicate in the future?

Mike Brady

the role of the speakers is to catalyse discussion amongst ourselves…

Anita de Waard (Elsevier)

350 years ago science was an individual enterprise; although there are now many large collaborations, much scientific discussion is still at a peer-to-peer level.

How do we unify the needs of the collective and individual scientists?

We need to create the systems of knowledge management that work for scientists, publishers and librarians.

Quotes John Perry Barlow: ‘Let us endeavour to build systems that allow a kid in Mali who wants to learn about proteomics to not be overwhelmed by the irrelevant and the untrue’ (It would be cruel to mention various issues with the Journal of Proteomics last year…)

The problem is that the paper is the overarching modus operandi. Citations to data are often citations to pictures. We need better ways of citing and connecting knowledge. ‘Papers are stories that persuade with data’, says Anita. She argues we need better ways of citing claims, and of constructing chains of evidence that can be traced to their source.

For this we need tools and to build habits of citing evidence into all aspects of our educational system (starting at kindergarten)!

Another problem is that data cannot be found or integrated. (This, to my view, is something that the academic community should be tackling, not outsourcing — which is the way I see this going…)

An understanding needs to evolve that science is a collective endeavour.

Anita is now covering scientific software (‘scientific software sucks’ is the quote attributed to Ben Goldacre yesterday) — it compares unfavourably to Amazon’s… not sure how true this is?

Anita is very dismissive of scientific software as inadequate — but often code is written for a particular purpose. (My view is that this is not something that can easily be commercially outsourced — high-energy physics, anyone?)

Mark Hahnel, FigShare

(FigShare was built as a way for Mark to curate/publish his own research.)

Mark opens with policies from different funders (at Cambridge we are feeling the effect of these already) for data mandates — especially EPSRC: all digital outputs from funded research now must be made available.

Mark talks about the Open Academic Tidal Wave — sorry, not a great link, but the only one I can find (thanks, Lou Woodley) — and we are at level 4 of this.

Mark surveyed publishers about what they see as the future of publishing in 2020 — and they replied ‘version control on papers, data incorporated within the article’ — but the technology is there already, as the example of F1000 Research shows.


Mike Brady: it’s just as well Imelda Marcos was not a scientist — following on from Anita’s claim that software for buying shoes is more fit for purpose than scientific software!

Herman Hauser: willing to fund things that help with an ‘evidence engine’ to avoid repeats of the MMR fiasco!

David Colquhoun: science is not the same as buying shoes! Refreshingly cynical.

Wendy Hall stresses the importance of linking information — every publisher should have a semantically linked website (and on the science of buying shoes).

Comment from the floor: Getting more data into repositories may not be exciting but is essential. Mark agrees — once the data is there you can do things with it, such as building apps to extract what you need.

Richard Sever (Cold Spring Harbor Laboratory Press) with a great quote: “The best way to store genomic data is in DNA.”

Mike Taylor: when we discuss how data is associated with papers, we must ensure that this is ‘open’, including the APIs, to avoid repeating the ‘walled garden of silos’ in which we find ourselves now.

Question of electronic access in the future (Dave Garner) — how do we future-proof science? Very valid — we can’t access material from 1980s floppy disks!

Anita: data is entwined with software, and we need to preserve these executable components. The discussion returns to citation, data citation, and incentives, which have been a pervasive theme over the last couple of days.

Cameron Neylon: we need to move to a situation where we can publish data itself, and this can be an incremental process, not the current binary ‘publish or not publish’ situation (which of course comes back to incentives).

In summary, Mark questions timescales, and Anita wonders how the Royal Society can bring these topics to the world?

Time for lunch, and now over to Matthew Dovey to continue this afternoon (alongside Steven Hall another of my former colleagues)!

I’ll try to live-blog the first day of part 2 of the Royal Society’s Future of Scholarly Scientific Communication meeting, as I did for the first day of part 1. We’ll see how it goes.

Here’s the schedule for today and tomorrow.

Session 1: the reproducibility problem

Chair: Alex Halliday, vice-president of the Royal Society

Introduction to reproducibility. What it means, how to achieve it, what role funding organisations and publishers might play.

For an introduction/overview, see #FSSC – The role of openness and publishers in reproducible research.

Michele Dougherty, planetary scientist

It’s very humbling being at this meeting, when it’s so full of people who have done astonishing things. For example, Dougherty discovered an atmosphere around one of Saturn’s moons by an innovative use of magnetic field data. So many awesome people.

Her work largely involves very long-term projects based on planetary probes, e.g. the Cassini–Huygens probe. It will be interesting to hear what can be said about reproducibility of experiments that take decades and cost billions.

“The best science output you can obtain is as a result of collaboration with lots of different teams.”

Application of reproducibility here is about making the data from the probes available to the scientific community — and the general public — so that the result of analysis can be reproduced. So not experimental replication.

Such data often has a proprietary period (essentially an embargo) before its public release, partly because it’s taken 20 years to obtain and the team that did this should get the first crack at it. But it all has to be made publicly available.

Dorothy Bishop, chair of Academy of Medical Sciences group on replicability

The Royal Society is very much not the first to be talking about replicability — these discussions have been going on for years.

About 50% of studies in Bishop’s field are capable of replication. Numbers are even worse in some fields. Replication of drug trials is particularly important, as false results kill people.

Journals cause awful problems with impact-chasing: e.g. high-impact journals will publish sexy-looking autism studies with tiny samples, which no reputable medical journal would publish.

Statistical illiteracy is very widespread. Authors can give the impression of being statistically aware but in a superficial way.

Too much HARKing going on (Hypothesising After Results Known — searching a dataset for anything that looks statistically significant in the shallow p < 0.05 sense.)
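The danger of HARKing can be illustrated with a tiny simulation (my own sketch, not from the talk): under the null hypothesis p-values are uniformly distributed, so trawling a dataset for any one of, say, 20 independent comparisons to cross p &lt; 0.05 produces a spurious “discovery” most of the time.

```python
import random

random.seed(42)

TRIALS = 20_000   # simulated datasets with no real effect at all
TESTS = 20        # comparisons trawled per dataset

# Under the null hypothesis, each test's p-value is uniform on [0, 1].
# A dataset yields a false "discovery" if ANY of its tests dips below 0.05.
false_discoveries = sum(
    min(random.random() for _ in range(TESTS)) < 0.05
    for _ in range(TRIALS)
)

rate = false_discoveries / TRIALS
print(f"Chance of at least one 'significant' result: {rate:.2f}")
# Analytically: 1 - 0.95**20 ≈ 0.64
```

So roughly two times in three, a researcher who hunts across 20 null comparisons will find something “significant” to hypothesise about after the fact.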

“It’s just assumed that people doing research, know what they are doing. Often that’s just not the case.”

Many more criticisms of how the journal system encourages bad research. They’re coming much faster than I can type them. This is a storming talk; I wish a recording would be made available.

Employers are also to blame for prioritising expensive research proposals (= large grants) over good ones.

All of this causes non-replicable science.

Floor discussion

Lots of great stuff here that I just can’t capture, sorry. Best follow the tweet stream for the fast-moving stuff.

One highlight: Pat Brown thinks it’s not necessarily a problem if lots of statistically underpowered studies are performed, so long as they’re recognised as such. Dorothy Bishop politely but emphatically disagrees: they waste resources, and produce results that are not merely useless but actively wrong and harmful.
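Bishop’s point that underpowered studies are actively harmful, not just wasteful, can be sketched numerically (my own illustration, with made-up numbers, not from the discussion): when a small study does happen to reach significance, the estimated effect is necessarily exaggerated — the so-called winner’s curse.

```python
import math
import random

random.seed(1)

TRUE_EFFECT = 0.2       # small true effect, in standard-deviation units
N = 10                  # tiny sample size
SE = 1 / math.sqrt(N)   # standard error of the sample mean (sigma = 1)
THRESHOLD = 1.645 * SE  # one-sided z-test at p < 0.05

# Simulate many small studies; keep only the "significant" ones.
significant = []
for _ in range(50_000):
    estimate = random.gauss(TRUE_EFFECT, SE)  # the study's observed effect
    if estimate > THRESHOLD:
        significant.append(estimate)

power = len(significant) / 50_000
inflation = (sum(significant) / len(significant)) / TRUE_EFFECT
print(f"power ≈ {power:.2f}; significant estimates exaggerate the effect ~{inflation:.1f}x")
```

With these numbers the study detects the effect only about one time in six, and the studies that do “succeed” report an effect severalfold larger than the truth — exactly the kind of actively wrong result Bishop warns about.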

David Colquhoun comments from the floor: while the physical sciences consider “significant results” to be five sigmas (p < 0.000001), biomed is satisfied with slightly less than two sigmas (p < 0.05), which really should be interpreted only as “worth another look”.
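For readers not used to translating between the two conventions, the “sigma” thresholds mentioned here convert to p-values via the complementary error function (a quick sketch using only the Python standard library; whether you halve for a one-tailed test depends on the field’s convention — particle physics quotes one-tailed figures):

```python
from math import erfc, sqrt

def p_value(sigma: float, two_tailed: bool = True) -> float:
    """Probability of a result at least `sigma` standard deviations
    from the mean of a normal distribution."""
    p = erfc(sigma / sqrt(2))  # two-tailed tail probability
    return p if two_tailed else p / 2

print(f"{p_value(1.96):.3f}")      # ≈ 0.050   (biomed's 'significant')
print(f"{p_value(5, False):.1e}")  # ≈ 2.9e-07 (particle physics' five sigma)
```

Either way, five sigma is some hundred thousand times more stringent than p &lt; 0.05.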

Dorothy Bishop on publishing data, and authors’ reluctance to do so: “It should be accepted as a cultural norm that mistakes in data do happen, rather than shaming people who make data open.”

Coffee break

Nothing to report :-)

Session 2: what can be done to improve reproducibility?

Iain Hrynaszkiewicz, head of data, Nature

In an analysis of retractions of papers in PubMed Central, 2/3 were due to fraud and 20% due to error.

Access to methods and data is a prerequisite for replicability.

Pre-registration, sharing of data, reporting guidelines all help.

“Open access is important, but it’s only part of the solution. Openness is a means to an end.”

Hrynaszkiewicz says text-miners are a small minority of researchers. [That is true now, but I and others are confident this will change rapidly as the legal and technical barriers are removed: it has to, since automated reading is the only real solution to the problem of keeping up with an exponentially growing literature. — Ed.]

Floor discussion