February 9, 2014
Stop what you’re doing and go read Cameron Neylon’s blog. Specifically, read his new post, Improving on “Access to Research”.
Regular readers of SV-POW! might legitimately complain that my so-called advocacy consists mostly of whining about how rubbish things are. If you find that wearying (and I won’t blame you if you do), then read Cameron instead: he goes beyond critiquing what is, and sees what could be. Here is a key quote from that new post:
I did this on a rainy Saturday afternoon because I could, because it helped me learn a few things, and because it was fun. I’m one of tens or hundreds of thousands who could have done this, who might apply those skills to cleaning up the geocoding of species in research articles, or extracting chemical names, or phylogenetic trees, or finding new ways to understand the networks of influence in the research literature. I’m not going to ask for permission, I’m not going to go out of my way to get access, and I’m not going to build something I’m not allowed to share. A few dedicated individuals will tackle the permissions issues and the politics. The rest will just move on to the next interesting, and more accessible, puzzle.
Right! Open access is not about reducing subscription costs to libraries, or about slicing away the absurd profits of the legacy publishers, or about a change to business models. It’s about doing new and exciting things that simply weren’t possible before.
February 3, 2014
From the files of J. K. Rowling.
Dear Ms. Rowling,
Thank you for submitting your manuscript Harry Potter and the Half-Blood Prince. We will be happy to consider it for publication. However, we have some concerns about the excessive length of this manuscript. We usually handle works of 5-20 pages, sometimes as much as 30 pages. Your 1337-page manuscript exceeds these limits, and requires some trimming.
We suggest that this rather wide-ranging work could usefully be split into a number of smaller, more tightly focussed, papers. In particular, we feel that the “magic” theme is not appropriate for our venue, and should be excised from the current submission.
Assuming you are happy to make these changes, we will be pleased to work with you on this project.
Esteemed Joenne Kay Rowling,
We are delightful to recieve your manuscript Harry Potter and the Half-Blood Prince and we look forword to publish it in our highly prestigious International Journal of Story Peer Reviewed which in 2013 is awarded an impact factor of 0.024.
Before we can progression this mutually benefit work, we require you to send a cheque for $5,000 US Dollars to the above address.
Dear J.R.R. Rowling,
We are in receipt of your manuscript Harry Potter and the Half-Blood Prince. Unfortunately, after a discussion with the editorial board, we concluded that it is insufficiently novel to warrant publication in our journal, which is one of the leading venues in its field. Although your work is well executed, it does not represent a significant advance in scholarship.
That is not to say that minor studies such as yours are of no value, however! Have you considered one of the smaller society journals?
Dear Dr. Rowling
Your submission Harry Potter and the Half-Blood Prince has passed initial editorial checks and will now be sent to two peer-reviewers. We will contact you when we have their reports and are able to make a decision.
Dear Dr. Rowling
Re: Harry Potter and the Half-Blood Prince.
We agree that eighteen months is too long for a manuscript to spend in review. On making inquiries, we find that we are unfortunately no longer able to contact the editor who was handling your submission.
We have appointed a new handling editor, who will send your submission to two new reviewers. We will contact you as soon as the new editor has made a decision.
Dear Dr. Rowling
Re: Harry Potter and the Half-Blood Prince.
Your complaint is quite justified. We will chase the reviewers.
Dear Dr. Rowling
I am pleased to say that the reviewers have returned their reports on your submission Harry Potter and the Half-Blood Prince and we are able to make an editorial decision, which is ACCEPT WITH MAJOR REVISION.
Reviewer 1 felt that the core point of your contribution could be made much more succinctly, and recommended that you remove the characters of Ron, Hermione, Draco, Hagrid and Snape. I concur with his assessment that the final version will be tighter and stronger for these cuts, and am confident that you can make them in a way that does not compromise the plot.
Reviewer 2 was positive overall, but did not like being surprised by the ending, and felt that it should have been outlined in the abstract. She also felt that citation of earlier works including Lewis (1950, 1951, 1952, 1953, 1954, 1955, 1956) and Pullman (1995, 1997, 2000) would be appropriate, and noted an over-use of constructions such as “… said Hermione, warningly”.
Dear Dr. Rowling
Thank you for your revised manuscript of Harry Potter and the Half-Blood Prince, which it is our pleasure to accept. We now ask you to sign the attached copyright transfer form, so we can proceed with publication.
Dear Dr. Rowling
I am sorry that you are unhappy about this, but transfer of copyright is our standard procedure, and we must insist on it as a prerequisite for publication. None of our other authors have complained.
Dear Dr. Rowling
Thank you for the signed copyright transfer form.
In answer to your query, no, we do not pay royalties.
Dear Dr. Rowling
Sadly, no, we are unable to make an exception in the matter of royalties.
Dear Dr. Rowling
Your book has now been formatted. We attach a proof PDF. Please read this very carefully as this is the last chance to spot errors.
You will readily appreciate that publishing is an expensive business. In order to remain competitive we have had to reduce costs, and as a result we are no longer able to offer proof-reading or copy-editing. Therefore you are responsible for ensuring the copy is clean.
At this stage, changes should be kept as small as possible, otherwise a charge may be incurred for re-typesetting.
Dear Dr. Rowling
Many thanks for returning the corrected proofs of Harry Potter and the Half-Blood Prince. We will proceed with publication.
Now that the final length of your contribution is known, we are able to assess page charges. At 607 pages, this work exceeds our standard twenty free pages by 587. At $140 US per page, this comes to $82,180. We would be grateful if you would forward us a cheque for this amount at your convenience.
Dear Dr. Rowling
Thank you for your prompt payment of the page charges. We agree that these are regrettable, but sadly they are part of the reality of the publishing business.
We are delighted to inform you that Harry Potter and the Half-Blood Prince is now published online, and has been assigned the DOI 10.123.45678.
We thank you for working on this fine contribution with us, and hope you will consider us for your future publications.
Dear Dr. Rowling
You are correct, your book is not freely downloadable. As we explained earlier in this correspondence, publishing is an expensive business. We recover our substantial costs by means of subscriptions and paid downloads.
In our experience, those with the most need to read your book will probably have institutional access. As for those who do not: if your readers are as keen as you say, they will no doubt find the customary download fee of $37.95 more than reasonable. Alternatively, readers can rent online access at the convenient price of $9.95 per 24 hours.
Dear Dr. Rowling
I am sorry that you feel the need to take that tone. I must reiterate, as already stated, that the revenues from download charges are not sufficient for us to be able to pay royalties. The $37.95 goes to cover our own costs.
If you wish for your book to be available as “open access”, then you may take advantage of our Freedom Through Slavery option. This will attract a further charge of $3,000, which can be paid by cheque as previously.
Your attitude is really quite difficult to understand. All of this was quite clearly set out on our web-site, and should have been understood by you before you made your submission.
As stated in the copyright transfer form that you signed, you do not retain the right to post freely downloadable copies of your work, since you are no longer the copyright holder.
We must ask you not to contact your handling editor directly. He was quite shaken by your latest outburst. If you feel you must write to us again, we must ask you to moderate your language, which is quite unsuitable for a lady. Meanwhile, we remind you that our publishing agreement follows industry best practice. It’s too late to complain about it now.
Dear Pyramid Web-Hosting,
We write on behalf of our client, Ancient Monolith Scholarly Publishing, who we assert are the copyright holders of Harry Potter and the Half-Blood Prince. It has come to our attention that a copy of this copyrighted work has been posted on a site hosted by you at the URL below.
This letter is official notification under the provisions of Section 512(c) of the Digital Millennium Copyright Act (“DMCA”) to effect removal of the above-reported infringement. We request that you immediately issue a cancellation message as specified in RFC 1036 for the specified posting and prevent the infringer, Ms. J. K. Rowling, from posting the infringing material to your servers in the future. Please be advised that law requires you, as a service provider, to “expeditiously remove or disable access to” the infringing material upon receiving this notice. Noncompliance may result in a loss of immunity for liability under the DMCA.
Please send us at the address above a prompt response indicating the actions you have taken to resolve this matter.
Examination of Ms. Rowling’s personal effects established that she had written most of a seventh book, Harry Potter and the Deathly Hallows. However, Rowling never sought to publish this final book in the series.
January 21, 2014
Regular readers will remember Jennifer Raff’s guest post on the PeerJ blog, How To Become Good At Peer-Review; and my response to it, Three points of disagreement. Today I read a very different take on this piece by Chorasimilarity, who is a frequent commenter here at SV-POW!: Two pieces of all too obvious propaganda.
Chorasimilarity starts by taking the original piece to task — fairly, I think — for its opening statement that “peer review is at the heart of the scientific method”. It’s true that the scientific method is something rather different. But as I argued in Science is enforced humility, peer-review is part of the scaffolding that prevents individual scientists from running away with their own ideas, unchecked by consensus wisdom.
Chorasimilarity then goes on to make a stronger criticism of peer-review:
Peer review is an idea based on authority, not on science [...] the quote mentions that “one’s research must survive the scrutiny of experts before it is presented to the larger scientific community as worthy of serious consideration”, which would be just sad, dinosaurish speaking, if it would come from an old person who did not understood that today there is, or there should be, free access to information.
As we’ve discussed here before, having been through peer review certainly does not mean we can trust a published paper. People do sometimes talk as though this is the case, and it’s an absolute fallacy that we should be quick to rebut whenever we encounter it.
But it does have a weaker, yet still non-negligible, value.
The real value of peer-review is not as a mark of correctness, but as a mark of seriousness. Back in the original SV-POW! series on peer-review (Where peer-review went wrong, Some more of peer-review’s greatest mistakes, What is this peer-review process anyway?, Well, that about wraps it up for peer-review), I likened peer-review to hazing:
The best analogy for our current system of pre-publication peer-review is that it’s a hazing ritual. It doesn’t exist because of any intrinsic value it has, and it certainly isn’t there for the benefit of the recipient. It’s basically a way to draw a line between In and Out. Something for the inductee to endure as a way of proving he’s made of the Right Stuff.
So: the principal value of peer-review is that it provides an opportunity for authors to demonstrate that they are prepared to undergo peer-review.
When I first wrote that, I wrote it in a spirit of cynicism, frustrated that so much of the effort that goes into the process is thrown away and that the results are so arbitrary. Those are real and serious complaints, but I’ve since come around to the idea that peer-review is useful in a more modest way: the hazing aspect lets it clear a much lower bar. Being prepared to undergo peer-review really is a mark of seriousness.
I would imagine that everyone involved in dinosaur research occasionally gets unsolicited emails from cranks and from as-yet unpublished amateurs. One of the most reliable ways to distinguish the two groups is this: serious amateurs are trying to figure out how to get their work into peer-review, while cranks are either actively avoiding it or not even aware of it. That’s why the web is full of sites like Dinosaur Home, with all its fine pictures of pebbles, which can continue on their merry way free of scrutiny.
I do think that the benefits of traditional peer-review are usually greatly overstated and the costs (both direct and indirect) underestimated. But I’m coming down on the side that its barrier-to-cranks effect might just tip the balance in favour of retaining it.
Think your work has scientific value? Good. Prove it, by showing it to professionals. If you won’t do that, then the rest of us don’t need to expend mental energy on taking you seriously.
January 20, 2014
Jennifer Raff wrote a useful guest post on the PeerJ Blog: How To Become Good At Peer-Review. Most of its advice is excellent, and I’d heartily recommend it to anyone starting out on reviewing. But there are three points where I disagree with it. Here are the three things Jennifer said, and my counter-points.
1. Communicating with authors
“Don’t communicate with the authors about their manuscript. All thoughts and comments on it should only go to the editor.”
This may be different in different academic fields, but I’ve been contacted by reviewers of my material, and contacted the authors of papers I’m reviewing, too. Palaeo may be less formal in this respect than fields such as medical research. It’s often useful, for example, to get the authors to send higher resolution versions of the specimen photographs than the downscaled ones the journal passes on; or to get the manuscript in a read-write format that lets you more easily add notes and corrections. Most importantly, I’ve sometimes had to send my marked-up copy of the manuscript directly to the corresponding author because the journal’s automated system has no way to attach it to the formal response.
Perhaps the idea that you shouldn’t communicate with authors comes from confidentiality concerns. But I know who the authors are. (There are no palaeo journals that do double-blind reviewing, and it would be impossible anyway in a field small enough that you pretty much know who everyone is and what they work on.) And since I never review anonymously, I don’t mind them knowing who I am while I am still doing the review.
In the end, one of the main goals of peer-review — I would say the main goal — is to help the authors make their work the best it can be. Often contacting them directly is the more effective way to do that.
2. Novelty
“Ask yourself whether the questions the authors are addressing are really advancing the field in a meaningful way. This does not mean that an article has to be completely novel, but it does mean that the work contributes to the sum of knowledge in the field and does not, for example, simply repeat well known results.”
I only agree with this for certain values of “well known”. In experimental sciences, replication is hugely important, and it’s one of the worst consequences of the prestige-obsessed journal system that it’s so hard to get a replication published. You could almost say that an experimental result that’s only been published once is worthless.
Equally important, or maybe even more important than replication, is the failed replication. When Doyen et al. (2012) tried and failed to replicate the findings of Bargh et al. (1996) on psychological priming, it was an important check on the influence of an article that has been cited more than 2,500 times. Bargh himself was not happy about it, but to quote a much-loved SV-POW! maxim due to Tom Holtz, ”Sorry if that makes some people feel bad, but I’m not in the ‘make people feel good’ business; I’m a scientist.”
So a reviewer should only complain about lack of novelty if the experiment has already been replicated several times. (There’s no value in a research paper showing that large and small cannonballs fall at the same speed from the top of the leaning tower of Pisa.)
3. Changing the subject
“Can you think of a better way to address the research questions than what the authors did?” … “You have every right to ask the authors to do a different experiment.”
Ugh. I just hate this. There is literally nothing I detest more in a review than “You should have written this different paper instead”. Please reviewers, review what’s in front of you, not what you would have done instead.
If you think of another approach that you think is promising, by all means suggest it as a followup project. But please in the name of all that we hold dear, don’t let it be a roadblock that delays this work from being published.
January 13, 2014
So, I’ve been thinking a lot about this interesting situation with Elsevier, which David Tempest’s remarks at the Oxford Evolution or Revolution debate highlighted: they can’t afford (literally or figuratively) to tell us how much they charge different institutions for the same stuff.
And I had this thought, which Mike tweeted:
When simply telling the truth can blow up your business model, you need a new business model.
Mash that up with “information wants to be free” and “if all else fails, someone will show up to liberate it”, and you get this:
When a single person of good conscience can blow up your business model simply by telling the truth, you need a new business model.
If we’ve learned anything in the past few years, it is that humans are the weak link in any campaign of secrecy.
We know that all of the big barrier-based publishers have these bundling deals with libraries, and that no-one on either side is allowed to say what the terms of those deals are. But there must be a lot of people with access to that information. And at least some of them must know how much libraries are getting screwed, precisely because they have access to that information. Seems unlikely that information will stay secret forever.
So, should we be expecting a Snowden-type leak from one or another barrier-based publisher? It doesn’t have to be Elsevier, but I think if it happens they’re the most likely target, because they are so single-minded about cultivating the ill-will of the people they allegedly serve (most recently with this and this). Sometimes I wonder if the other barrier-based publishers are getting too much of a free pass precisely because Elsevier is so good at tossing grenades and then jumping on them.
Corollary: barrier-based publishers, what are you doing to prepare for such a leak? “More secrecy” and “harsher penalties” will probably not work out well in the long run. But do feel free to keep scoring own goals if you must.
December 13, 2013
It’s now widely understood among researchers that the impact factor (IF) is a statistically illiterate measure of the quality of a paper. Unfortunately, it’s not yet universally understood among administrators, who in many places continue to judge authors on the impact factors of the journals they publish in. They presumably do this on the assumption that impact factor is a proxy for, or predictor of, citation count, which in turn is assumed to correlate with influence.
As shown by Lozano et al. (2012), the correlation between IF and citations is in fact very weak — r2 is about 0.2 — and has been progressively weakening since the dawn of the Internet era and the consequent decoupling of papers from the physical journal that they appear in. This is a counter-intuitive finding: given that the impact factor is calculated from citation counts you’d expect it to correlate much more strongly. But the enormous skew of citation rates towards a few big winners renders the average used by the IF meaningless.
To bring this home, I plotted my own personal impact-factor/citation-count graph. I used Google Scholar’s citation counts of my articles, which recognises 17 of my papers; then I looked up the impact factors of the venues they appeared in, plotted citation count against impact factor, and calculated a best-fit line through my data-points. Here’s the result (taken from a slide in my Berlin 11 satellite conference talk):
I was delighted to see that the regression slope is actually negative: in my case at least, the higher the impact factor of the venue I publish in, the fewer citations I get.
There are a few things worth unpacking on that graph.
First, note the proud cluster on the left margin: publications in venues with impact factor zero (i.e. no impact factor at all). These include papers in new journals like PeerJ, in perfectly respectable established journals like PaleoBios, edited-volume chapters, papers in conference proceedings, and an arXiv preprint.
My most-cited paper, by some distance, is Head and neck posture in sauropod dinosaurs inferred from extant animals (Taylor et al. 2009, a collaboration between all three SV-POW!sketeers). That appeared in Acta Palaeontologica Polonica, a very well-respected journal in the palaeontology community but which has a modest impact factor of 1.58.
My next most-cited paper, the Brachiosaurus revision (Taylor 2009), is in the Journal of Vertebrate Palaeontology – unquestionably the flagship journal of our discipline, despite its also unspectacular impact factor of 2.21. (For what it’s worth, I seem to recall it was about half that when my paper came out.)
In fact, none of my publications have appeared in venues with an impact factor greater than 2.21, with one trifling exception. That is what Andy Farke, Matt and I ironically refer to as our Nature monograph (Farke et al. 2009). It’s a 250-word letter to the editor on the subject of the Open Dinosaur Project. (It’s a subject that we now find profoundly embarrassing given how dreadfully slowly the project has progressed.)
Google Scholar says that our Nature note has been cited just once. But the truth is even better: that one citation is in fact from an in-prep manuscript that Google has dug up prematurely — one that we ourselves put on Google Docs, as part of the slooow progress of the Open Dinosaur Project. Remove that, and our Nature note has been cited exactly zero times. I am very proud of that record, and will try to preserve it by persuading Andy and Matt to remove the citation from the in-prep paper before we submit. (And please, folks: don’t spoil my record by citing it in your own work!)
What does all this mean? Admittedly, not much. It’s anecdote rather than data, and I’m posting it more because it amuses me than because it’s particularly persuasive. In fact if you remove the anomalous data point that is our Nature monograph, the slope becomes positive — although it’s basically meaningless, given that all my publications cluster in the 0–2.21 range. But then that’s the point: pretty much any data based on impact factors is meaningless.
- Farke, Andrew A., Michael P. Taylor and Mathew J. Wedel. 2009. Sharing: public databases combat mistrust and secrecy. Nature 461:1053.
- Lozano, George A., Vincent Larivière and Yves Gingras. 2012. The weakening relationship between the impact factor and papers’ citations in the digital age. Journal of the American Society for Information Science and Technology 63(11):2140-2145. doi:10.1002/asi.22731 [arXiv preprint]
- Taylor, Michael P. 2009. A re-evaluation of Brachiosaurus altithorax Riggs 1903 (Dinosauria, Sauropoda) and its generic separation from Giraffatitan brancai (Janensch 1914). Journal of Vertebrate Paleontology 29(3):787-806.
- Taylor, Michael P., Mathew J. Wedel and Darren Naish. 2009. Head and neck posture in sauropod dinosaurs inferred from extant animals. Acta Palaeontologica Polonica 54(2):213-230.
November 19, 2013
Yesterday I was at the Berlin 11 satellite conference for students and early-career researchers. It was a privilege to be part of a stellar line-up of speakers, including the likes of SPARC’s Heather Joseph, PLOS’s Cameron Neylon, and eLIFE’s Mark Patterson. But even more than these, there were two people who impressed me so much that I had to give in to my fannish tendencies and have photos taken with them. Here they are.
This is Jack Andraka, who at the age of fifteen invented a new test for pancreatic cancer that is 168 times faster, 1/26000 as expensive and 400 times more sensitive than the current diagnostic tests, and only takes five minutes to run. Of course he’s grown up a bit since then — he’s sixteen now.
Right at the moment Jack’s not getting much science done because he’s sprinting from meeting to meeting. He came to us in Berlin literally straight from an audience with the Pope. He’s met Barack Obama in the Oval Office. And one of the main burdens of his talk is that he’s not such an outlier as he appears: there are lots of other brilliant kids out there who are capable of doing similarly groundbreaking work — if only they could get access to the published papers they need. (Jack was lucky: his parents are indulgent, and spent thousands of dollars on paywalled papers for him.)
Someone on Twitter noted that every single photo of Jack seems to show him, and the people he’s with, in thumbs-up pose. It’s true: and that is his infectious positivity at work. It’s energising as well as inspiring to be around him.
(Read Jack’s guest post at PLOS on Why Science Journal Paywalls Have to Go)
Here’s the other photo:
This is Bernard Rentier, who is rector of the University of Liège. To put it bluntly, he is the boss of the whole darned university — an academic of the very senior variety that I never meet; and of the vintage that, to put it kindly, can have a tendency to be rather conservative in approach, and cautious about open access.
With Bernard, not a bit of it. He has instituted a superb open-access policy at Liège — one that is now being taken up as a model for the whole of Belgium. Whenever members of the Liège faculty apply for anything — office space, promotions, grants, tenure — their case is evaluated by taking into account only publications that have been deposited in the university’s open-access repository, ORBi.
Needless to say, the compliance rate is superb — essentially 100% since the policy came in. As a result, Liège’s work is more widely used, cited, reused, replicated, rebutted and generally put to work. The world benefits, and the university benefits.
Bernard is a spectacular example of someone in a position of great power using that power for good. Meanwhile, at the other end of the scale, Jack is someone who — one would have thought — had no power at all. But in part because of work made available through the influence of people like Bernard, it turned out he had the power to make a medical breakthrough.
I came away from the satellite meeting very excited — in fact, by nearly all the presentations and discussions, but most especially by the range represented by Jack and Bernard. People at both ends of their careers; both of them not only promoting open access, but also doing wonderful things with it.
There’s no case against open access, and there never has been. But shifting the inertia of long-established traditions and protocols requires enormous activation energy. With advocates like Jack and Bernard, we’re generating that energy.
Onward and upward!
October 22, 2013
It shouldn’t come as a huge surprise to regular readers that PeerJ is Matt’s and my favourite journal. Reasons include its super-fast turnaround, beautiful formatting that doesn’t look like a facsimile of 1980s printed journals, and its responsiveness to authors and readers. But the top reason is undoubtedly its openness: not only are the articles open access, but the peer-review process is also (optionally) open, and of course PeerJ preprints are inherently open science.
The subject of the new paper (Farke et al. 2013) is a baby Parasaurolophus, and despite being a stinkin’ ornithopod it’s a fascinating specimen for a lot of reasons. For one thing, it’s the most complete known Parasaurolophus. For another, its young age enables new insights into hadrosaur ontogeny. It’s really nicely preserved, with soft-tissue preservation of both the skin and the beak. The most important aspect of the preservation may be that CT scanning shows the cranial airways clearly:
This makes it possible for the new specimen to show us the ontogenetic trajectory of Parasaurolophus – specifically to see how its distinctive tubular crest grew.
But none of this goodness is the reason that we at SV-POW! Towers are excited about this paper. The special sauce is the ground-breaking degree of openness in how the specimen is presented. Not only is the paper itself open access (with its 28 beautiful illustrations correspondingly open, and available in high-resolution versions); best of all, the CT scan data, surface models and segmentation data are freely available on FigShare. That’s all the 3d data that the team produced: everything they used in writing the paper is free for us all. We can use it to verify or falsify their conclusions; we can use it to make new mechanical models; we can use it to make replicas of the bones on 3d printers. In short: we can do science on this specimen, to a degree that’s never been possible with any previously published dinosaur.
This is great, and it shows a generosity of spirit from Andy Farke and his co-authors.
But more than that: I think it’s a great career move. Not so long ago, I might have answered the question “should we release our data?” with a snarky answer: “it depends on why you have a science career: to advance science, or to advance your career”. I don’t see it that way any more. By giving away their data, Farke’s team are certainly not precluding using it themselves as the basis for more papers — and if others use it in their work, then Farke et al. will get cited more. Everyone wins.
Open it up, folks. Do work worthy of giants, and then let others stand freely on your shoulders. They won’t weigh you down; if anything, they’ll lift you up.
Farke, Andrew A., Derek J. Chok, Annisa Herrero, Brandon Scolieri, and Sarah Werning. 2013. Ontogeny in the tube-crested dinosaur Parasaurolophus (Hadrosauridae) and heterochrony in hadrosaurids. PeerJ 1:e182. http://dx.doi.org/10.7717/peerj.182
October 7, 2013
Suppose, hypothetically, that you worked for an organisation whose nominal goal is the advancement of science, but which has mutated into a highly profitable subscription-based publisher. And suppose you wanted to construct a study that showed the alternative — open-access publishing — is inferior.
What would you do?
You might decide that a good way to test publishers is by sending them an obviously flawed paper and seeing whether their peer-review weeds it out.
But you wouldn’t want to risk showing up subscription publishers. So the first thing you’d do is decide up front not to send your flawed paper to any subscription journals. You might justify this by saying something like “the turnaround time for traditional journals is usually months and sometimes more than a year. How could I ever pull off a representative sample?”
Next, you’d need to choose a set of open-access journals to send it to. At this point, you would carefully avoid consulting the membership list of the Open Access Scholarly Publishers Association, since that list has specific criteria and members have to adhere to a code of conduct. You don’t want the good open-access journals — they won’t give you the result you want.
Instead, you would draw your list of publishers from the much broader Directory of Open Access Journals, since that started out as a catalogue rather than a whitelist. (That’s changing, and journals are now being cut from the list faster than they’re being added, but lots of old entries are still in place.)
Then, to help remove many of the publishers that are in the game only to advance research, you’d trim out all the journals that don’t levy an article processing charge.
But the resulting list might still have an inconveniently high proportion of quality journals. So you would bring down the quality by adding in known-bad publishers from Beall’s list of predatory open-access publishers.
Having established your sample, you’d then send the fake papers, wait for the journals’ responses, and gather your results.
To make sure you get a good, impressive result that will have a lot of “impact”, you might find it necessary to discard some inconvenient data points, omitting from the results some open-access journals that rejected the paper.
Now that you have your results, it’s time to spin them. Use sweeping, unsupported generalisations like “Most of the players are murky. The identity and location of the journals’ editors, as well as the financial workings of their publishers, are often purposefully obscured.”
Suppose you have a quote from the scientist whose experiences triggered the whole project, and he said something inconvenient like “If [you] had targeted traditional, subscription-based journals, I strongly suspect you would get the same result”. Just rewrite it to say “if you had targeted the bottom tier of traditional, subscription-based journals”.
Now you have the results you want — but how will you ever get through peer-review, when your bias is so obvious? Simple: don’t submit your article for peer-review at all. Classify it as journalism, so you don’t need to go through review, nor to get ethical approval for the enormous amount of editors’ and reviewers’ time you’ve wasted — but publish it in a journal that’s known internationally for peer-reviewed research, so that uncritical journalists will leap to your favoured conclusion.
Last but not least, write a press-release that casts the whole study as being about the “Wild West” of Open-Access Publishing.
Everyone reading this will, I am sure, have recognised that I’m talking about John Bohannon’s “sting operation” in Science. Bohannon has a Ph.D. in molecular biology from Oxford University, so we would hope he’d know what actual science looks like, and that this study is not it.
Of course, the problem is that he does know what science looks like, and he’s made the “sting” operation look like it. It has that sciencey quality. It discusses methods. It has supplementary information. It talks a lot about peer-review, that staple of science. But none of that makes it science. It’s a maze of preordained outcomes, multiple levels of biased selection, cherry-picked data and spin-ridden conclusions. What it shows is: predatory journals are predatory. That’s not news.
Speculating about motives is always error-prone, of course, but it’s hard not to think that Science‘s goal in all this was to discredit open-access publishing — just as legacy publishers have been doing ever since they realised OA was real competition. If that was their goal, it’s misfired badly. It’s Science‘s credibility that’s been compromised.
Update (9 October)
Akbar Khan points out yet more problems with Bohannon’s work: mistakes in attributing where given journals were listed (DOAJ or Beall’s list). As a result, the sample may be more, or less, biased than Bohannon reported.
October 3, 2013
An extraordinary study has come to light today, showing just how shoddy peer-review standards are at some journals.
Evidently fascinated by Science‘s eagerness to publish the fatally flawed Arsenic Life paper, John Bohannon conceived the idea of constructing a study so incredibly flawed that it didn’t even include a control. His plan was to see whether he could get it past the notoriously lax Science peer-review provided it appealed strongly enough to that journal’s desire for “impact” (defined as the ability to generate headlines) and pandered to its preconceptions (that its own publication model is the best one).
So Bohannon carried out the most flawed study he could imagine: submitting fake papers to open-access journals selected in part from Jeffrey Beall’s list of predatory publishers without sending any of his fake papers to subscription journals, noting that many of the journals accepted the papers, and drawing the flagrantly unsupported conclusion that open-access publishing is flawed.
It’s hard to know where Science can go from here. Having fallen for Bohannon’s sting, its credibility is shot to pieces. We can only assume that the AAAS will now be added to Beall’s list of predatory publishers.
Here are some other responses to the Science story:
- Michael Eisen: I confess, I wrote the Arsenic DNA paper to expose flaws in peer-review at subscription based journals
- Martin Eve: Flawed sting operation singles out open access journals (and his longer original version)
- Peter Suber: New “sting” of weak open-access journals
- The Library Loon: Which is it?
- Björn Brembs: Science Magazine Rejects Data, Publishes Anecdote
- Kausik Datta at SciLogs: What Science’s “Sting Operation” Reveals: Open Access Fiasco or Peer Review Hellhole?
- John Hawks: “Open access spam” and how journals sell scientific reputation
- Retraction Watch:
- OASPA: response to the recent article in Science entitled “Who’s Afraid of Peer Review?”
- Jeroen Bosman: Science Mag sting of OA journals: is it about Open Access or about peer review?
- Curt Rice: What Science — and the Gonzo Scientist — got wrong: open access will make research better (now also appearing at the Guardian)
- Michelle N. Meyer: The troubled peer-review system, the open-access wars, and the blurry line between human subjects research and investigative journalism
- Ernesto Priego: Who’s Afraid of Open Access?
- Marius Buliga: On John Bohannon article in Science
- DOAJ: response to the recent article in Science entitled “Who’s Afraid of Peer Review?”
- Zen Faulkes: Open access or vanity press, the Science “sting” edition
- Graham Steel: Glam Mag fucks up, news at eleven
- Heather Joseph (SPARC): Science Magazine’s Open Access “Sting”
- Lenny Teytelman: What hurts science – rejection of good or acceptance of bad?
- Fabiana Kubke: Science gone bad; or, the day after the sting
- Gunther Eysenbach: Unscientific spoof paper accepted by 157 “black sheep” open access journals – but the Bohannon study has severe flaws itself
- Jon Brock: This study lacked an appropriate control group: Two stars
- Me again, this time with the gloves off: Anti-tutorial: how to design and execute a really bad study
- Paul Basken (Chronicle of Higher Education): Critics Say Sting on Open-Access Journals Misses Larger Point
- Neurobonkers: Science’s Straw Man Sting
- The Winnower: The Real Peer Review: Post-Publication
- Sal Robinson: John Bohannon’s Open Access sting paper annoys many, scares the easily scared, accomplishes relatively little
- Peerage of Science: It’s gotta sting
- Peter Murray-Rust: The Bohannon “Sting”; Can we trust AAAS/Science or is this PRISM reemerging from the grave?
- Heather Morrison: Bohannon and Science: bogus articles and PR spin instead of peer review
- Barbara Fister (Inside Higher Ed): The Sting
- Jon Tennant (guesting at SciLogs): Peer Review Quality is Independent of Open Access
- Stuart Shieber: Lessons from the faux journal investigation
- DOAJ: Second response to the Bohannon article (2013-10-18)
- Andreas Thoss: Peer review: how to distinguish the good from the bad?