Nature on choosing to publish in open-access journals
March 28, 2013
There’s a good, balanced piece by Stephen Pincock in the new Nature, on the question of whether early-career researchers should publish their work in open-access journals. It seems to be free to read, so take a look at Publishing: Open to possibilities.
I mention it not only because it’s a subject dear to my heart, but also because the article mentions and quotes me. (Regarding “I got quite a lot of criticism from people I respect a lot”, most of those criticisms are in the comments on this post.)
But I also feel obliged to respond to a couple of points in the article, and since it doesn’t seem to have comments enabled, a short post here seems to be appropriate.
First, there’s this quote from Rob Brooks:
Impact factors still pretty much rule. A lot of people — grant committees, administrators and even referees — can’t assess quality. All they can do is count or pseudo-quantify. They count the number of papers you’ve got and count the impact factors of the papers and make a little metric, rather than just reading the papers.
My response: are there really referees who can’t assess quality? Do we really have situations where you submit a paper for peer-review, and the referees evaluate its quality — and recommend acceptance or rejection — not on the basis of the quality of the science, but on the impact factors of other journals you’ve published in?
If that’s true, then those referees should get out of science, now. Or, no — wait — it’s too late for that. They are already out of science. But they should stop pretending to be scientists and go work in McDonald’s.
By contrast, Robert Kiley of the Wellcome Trust is a beacon of sanity:
Many funders are looking beyond a journal’s brand name. “If you come to Wellcome for a grant,” he says, “we make it clear that funding decisions are based on the intrinsic merit of the work, and not the title of the journal in which an author’s work is published.” Kiley points to the policies of the UK programme for assessing research quality, the Research Excellence Framework, which stated in July 2012 that no grant-review sub-panel “will make any use of journal impact factors, rankings, lists or the perceived standing of publishers in assessing the quality of research outputs”.
Pincock then discusses the open-access citation advantage:
Whether [open access] translates into higher citation rates is up for debate. In 2010, a meta-analysis found 27 studies showing that open-access articles had more citations than papers behind paywalls — up to 600% more, depending on the field — and four that found no open-access advantage.
I would not describe that as “up for debate”. I would describe it as “has been analysed in detail and the jury is in”. As noted previously here on SV-POW! and in my submission to the House of Commons, Swan’s data show that, on average, open-access articles are cited 2.76 times as often as non-open-access articles.
The most misleading part of the article, though, is this you-get-what-you-pay-for assertion:
According to Carl Bergstrom, an evolutionary biologist at the University of Washington in Seattle, there tends to be a positive correlation between an open-access journal’s fees and its score in a system he co-developed that rates journals according to the number of citations they receive, with citations from highly ranked journals weighted more heavily.
However, the actual results show at best an extremely weak correlation, with very wide confidence intervals. (I find it baffling that the page doesn’t give numbers for these important measures.) Someone wanting to summarise these findings in a few words would do better to state that there is essentially no correlation between influence and price.
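(For readers unfamiliar with the kind of ranking Bergstrom describes, the core idea — citations from highly ranked journals count for more — is essentially eigenvector centrality on a citation graph. Here is a minimal toy sketch of that idea in Python; the journal names and citation counts are invented, and this is not the actual Eigenfactor computation, which adds refinements such as excluding self-citations.)

```python
import numpy as np

# Toy citation matrix: C[i, j] = citations from journal i to journal j.
# Three hypothetical journals; the numbers are made up for illustration.
C = np.array([
    [0., 3., 1.],
    [2., 0., 1.],
    [4., 2., 0.],
])

# Row-normalise: each journal spreads its citation "weight" over the
# journals it cites.
P = C / C.sum(axis=1, keepdims=True)

# Power iteration: a journal's score is the weighted sum of the scores
# of the journals citing it, so citations from highly ranked journals
# count for more.
score = np.ones(len(C)) / len(C)
for _ in range(100):
    new = score @ P
    new /= new.sum()          # keep scores normalised to sum to 1
    if np.allclose(new, score):
        break
    score = new

print(np.round(score, 3))     # final influence scores, one per journal
```

The point of the dispute above is not the method itself but the claimed correlation between such influence scores and journal fees.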
Apart from these caveats, the article is good, and presents multiple perspectives with little bias — to Nature’s credit. It’s well worth reading.
March 28, 2013 at 9:20 am
I guess that the “referees” Rob Brooks refers to are grant referees rather than article referees. I think he is right that, to assess the “quality of the PI” (often one of the things they are asked to do), many grant referees look at impact factors of journals, citation counts and so on, rather than reading some of the PI’s work.
In fact, EPSRC (the UK research council for engineering and physical sciences) essentially directs them to do this. Look at the guidance for fellowships that EPSRC produced:
http://www.epsrc.ac.uk/skills/fellows/Pages/howtoapply.aspx
It says:
“The publication list forms an important part of your track record. You should include a paragraph at the beginning of your publication list to indicate which journals and conferences are highly rated in your field, highlighting where they occur in your own list.”
and
“Please place an asterisk beside any papers of which you were the lead author and highlight in italics the most significant papers (up to a maximum of ten.) Please include the numbers of citations for selected publications, if they are relevant within your area of research.”
March 28, 2013 at 11:54 am
Also Mike, I wrote a separate article in Nature on this question of whether ‘you get what you pay for’. (I was unaware of the ground that Pincock’s article would cover, but the two touch on similar studies). I created another graphic which uses some of the eigenfactor project work. http://www.nature.com/news/open-access-the-true-cost-of-science-publishing-1.12676
March 28, 2013 at 12:21 pm
How depressing to read EPSRC’s instructions to fellowship applicants. Let’s hope RCUK’s new guidelines will lead to formulation of an assessment policy that is better focused on the quality of applicants’ work.
March 28, 2013 at 12:42 pm
[…] thanks for Richard Van Noorden for drawing my attention to his new piece Open access: The true cost of science publishing in Nature. I wrote a detailed […]