Suppose, hypothetically, that you worked for an organisation whose nominal goal is the advancement of science, but which has mutated into a highly profitable subscription-based publisher. And suppose you wanted to construct a study that showed the alternative — open-access publishing — is inferior.

What would you do?

You might decide that a good way to test publishers is by sending them an obviously flawed paper and seeing whether their peer-review weeds it out.

But you wouldn’t want to risk showing up subscription publishers. So the first thing you’d do is decide up front not to send your flawed paper to any subscription journals. You might justify this by saying something like “the turnaround time for traditional journals is usually months and sometimes more than a year. How could I ever pull off a representative sample?”.

Next, you’d need to choose a set of open-access journals to send it to. At this point, you would carefully avoid consulting the membership list of the Open Access Scholarly Publishers Association, since that list has specific criteria and members have to adhere to a code of conduct. You don’t want the good open-access journals — they won’t give you the result you want.

Instead, you would draw your list of publishers from the much broader Directory of Open Access Journals, since that started out as a catalogue rather than a whitelist. (That’s changing, and journals are now being cut from the list faster than they’re being added, but lots of old entries are still in place.)

Then, to help remove many of the publishers that are in the game only to advance research, you’d trim out all the journals that don’t levy an article processing charge.

But the resulting list might still have an inconveniently high proportion of quality journals. So you would bring down the quality by adding in known-bad publishers from Beall’s list of predatory open-access publishers.

Having established your sample, you’d then send the fake papers, wait for the journals’ responses, and gather your results.

To make sure you get a good, impressive result that will have a lot of “impact”, you might find it necessary to discard some inconvenient data points, omitting from the results some open-access journals that rejected the paper.

Now you have your results, it’s time to spin them. Use sweeping, unsupported generalisations like “Most of the players are murky. The identity and location of the journals’ editors, as well as the financial workings of their publishers, are often purposefully obscured.”

Suppose you have a quote from the scientist whose experiences triggered the whole project, and he said something inconvenient like “If [you] had targeted traditional, subscription-based journals, I strongly suspect you would get the same result”. Just rewrite it to say “if you had targeted the bottom tier of traditional, subscription-based journals”.

Now you have the results you want — but how will you ever get through peer-review, when your bias is so obvious? Simple: don’t submit your article for peer-review at all. Classify it as journalism, so you don’t need to go through review, nor to get ethical approval for the enormous amount of editors’ and reviewers’ time you’ve wasted — but publish it in a journal that’s known internationally for peer-reviewed research, so that uncritical journalists will leap to your favoured conclusion.

Last but not least, write a press-release that casts the whole study as being about the “Wild West” of Open-Access Publishing.

Everyone reading this will, I am sure, have recognised that I’m talking about John Bohannon’s “sting operation” in Science. Bohannon has a Ph.D. in molecular biology from Oxford University, so we would hope he’d know what actual science looks like, and that this study is not it.

Of course, the problem is that he does know what science looks like, and he’s made the “sting” operation look like it. It has that sciencey quality. It discusses methods. It has supplementary information. It talks a lot about peer-review, that staple of science. But none of that makes it science. It’s a maze of preordained outcomes, multiple levels of biased selection, cherry-picked data and spin-ridden conclusions. What it shows is: predatory journals are predatory. That’s not news.

Speculating about motives is always error-prone, of course, but it’s hard not to think that Science’s goal in all this was to discredit open-access publishing — just as legacy publishers have been doing ever since they realised OA was real competition. If that was their goal, it’s misfired badly. It’s Science’s credibility that’s been compromised.

Update (9 October)

Akbar Khan points out yet more problems with Bohannon’s work: mistakes in recording which list — DOAJ or Beall’s — given journals were drawn from. As a result, the sample may be more, or less, biased than Bohannon reported.

 

 

 

An extraordinary study has come to light today, showing just how shoddy peer-review standards are at some journals.

Evidently fascinated by Science’s eagerness to publish the fatally flawed Arsenic Life paper, John Bohannon conceived the idea of constructing a study so incredibly flawed that it didn’t even include a control. His plan was to see whether he could get it past the notoriously lax Science peer-review provided it appealed strongly enough to that journal’s desire for “impact” (defined as the ability to generate headlines) and pandered to its preconceptions (that its own publication model is the best one).

So Bohannon carried out the most flawed study he could imagine: submitting fake papers to open-access journals selected in part from Jeffrey Beall’s list of predatory publishers without sending any of his fake papers to subscription journals, noting that many of the journals accepted the papers, and drawing the flagrantly unsupported conclusion that open-access publishing is flawed.

Incredibly, Science not only published this study, but made it the lead story of today’s issue.

It’s hard to know where Science can go from here. Having fallen for Bohannon’s sting, its credibility is shot to pieces. We can only assume that the AAAS will now be added to Beall’s list of predatory publishers.

Rolling updates

Here are some other responses to the Science story:

Yesterday I announced that our new paper on Barosaurus was up as a PeerJ preprint and invited feedback.

I woke up this morning to find its third substantial review waiting for me.

That means that this paper has now accumulated as much useful feedback in the twenty-seven hours since I submitted it as any previous submission I’ve ever made.


Taylor and Wedel (2013b: figure 7). Barosaurus lentus holotype YPM 429, Vertebra S (C?12). Left column from top to bottom: dorsal, right lateral and ventral views; right column: anterior view. Inset shows displaced fragment of broken prezygapophysis. Note the narrow span across the parapophyses in ventral view, and the lack of damage to the ventral surface of the centrum which would indicate transverse crushing.

It’s worth reviewing the timeline here:

  • Monday 23rd September, 1:19 am: I completed the submission process.
  • 7:03 am: the preprint was published. It took less than six hours.
  • 10:52 am: received a careful, detailed review from Emanuel Tschopp. It took less than four hours from publication, and so of course less than ten from submission.
  • About 5:00 pm: received a second review, this one from Mark Robinson. (I don’t know the exact time because PeerJ’s page doesn’t show an actual timestamp, just “21 hours ago”.)
  • Tuesday 24th September, about 4:00 am: received a third review, this from ceratopsian-jockey and open-science guru Andy Farke.

Total time from submission to receiving three substantial reviews: about 27 hours.
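
For what it’s worth, the arithmetic checks out. Here is a trivial sanity check (purely illustrative: the times are those from the timeline above, and the year is assumed to be 2013):

    from datetime import datetime

    # Timestamps from the timeline above (year assumed for illustration).
    submitted = datetime(2013, 9, 23, 1, 19)    # Monday 23rd September, 1:19 am
    third_review = datetime(2013, 9, 24, 4, 0)  # Tuesday 24th September, about 4:00 am

    elapsed_hours = (third_review - submitted).total_seconds() / 3600
    print(round(elapsed_hours, 1))  # 26.7 -- i.e. about 27 hours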

It’s worth contrasting that with the times taken to get from submission to the receipt of reviews — usually only two of them — when going through the traditional journal route. Here are a few of mine:

  • Diplodocoid phylogenetic nomenclature at the Journal of Paleontology, 2004-5 (the first reviews I ever received): three months and 14 days.
  • Revised version of the same paper at PaleoBios, 2005 (my first published paper): one month and 10 days.
  • Xenoposeidon description at Palaeontology, 2006: three months and 19 days, although that included a delay as the handling editor sent it to a third, tie-breaking, reviewer.
  • Brachiosaurus revision at the Journal of Vertebrate Paleontology, 2008: one month and 11 days.
  • Sauropod neck anatomy (eventually to be published in a very different form in PeerJ) at Paleobiology: five months and two days.
  • Trivial correction to the Brachiosaurus revision at the Journal of Vertebrate Paleontology, 2010: five months and 11 days, bizarrely for a half-page paper.

Despite the wide variation in submission-to-review times, it’s clear that at traditional journals you can expect to wait at least a month before getting any feedback at all on your submission. Even PeerJ took 19 days to get the reviews of our neck-anatomy paper back to us.

So I am now pretty much sold on the pre-printing route. As well as getting this early version of the paper out there so that other palaeontologists can benefit from it (and so that we can’t be pre-emptively plagiarised), issuing a preprint has meant that we’ve got really useful feedback very quickly.

I highly recommend this route.

By the way, in case anyone’s wondering, PeerJ Preprints is not only for manuscripts that are destined for PeerJ proper. They’re perfectly happy for you to use their service as a place to gather feedback for your work before submitting it elsewhere. So even if your work is destined for, say, JVP, there’s a lot to be gained by preprinting it first.

I just read Mick Watson’s post Why I resigned as PLOS ONE academic editor on his blog opiniomics. Turns out his frustration with PLOS ONE is not to do with his editorial work but with the long silences he faced as an author at that journal when trying to get a bad decision appealed.

I can totally identify with that, though my most frustrating experiences along these lines have been with other journals (yes, Paleobiology, I’m looking at you). So here’s what I wrote in response (lightly edited from the version that appeared as a comment on the original blog).

There’s one thing that PLOS ONE could and should do to mitigate this kind of frustration: communicate. And so should all other journals.

At every step in the appeal process — and indeed the initial review process — an automated email should be sent to the author. So for the initial submission:

  1. “Your paper has been assigned an academic editor.”
  2. “Your paper has been sent out to a reviewer.”
  3. “An invited reviewer has declined to review; we will try another.”
  4. “An invited reviewer failed to accept or decline within two weeks; we will try another.”
  5. “A review has been submitted.”
  6. “A reviewer has failed to submit his report within four weeks; we are making contact again to ask for a quick response.”
  7. “A reviewer has failed to submit his report within six weeks; we have dropped that reviewer from this process and will try another.”
  8. “All reviews are in; the editor is considering the decision.”
  9. Decision letter.

And for the appeal:

  1. “Your appeal has been noted and is under consideration.”
  2. “We have contacted the original handling editor.”
  3. “The original handling editor has responded.”
  4. “The original handling editor has failed to respond after four weeks; we are escalating to a senior editor.”
  5. [perhaps] go back into some or all of the submission process.
  6. Decision letter.

Most if not all of these stages in the process already have workflow logic in the manuscript-handling system. There is no reason not to send the poor author emails when they happen — it’s no extra work for the editor or reviewers.
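
To make the point concrete, here is a minimal sketch of what such a notification hook might look like (in Python; everything here — the status codes, the notify_author function, the addresses — is invented for illustration, and it assumes the manuscript-handling system can call out to a local mail relay whenever a submission changes state):

    import smtplib
    from email.message import EmailMessage

    # Hypothetical status codes and the one-line messages sent to the author.
    STATUS_MESSAGES = {
        "editor_assigned":   "Your paper has been assigned an academic editor.",
        "sent_to_reviewer":  "Your paper has been sent out to a reviewer.",
        "reviewer_declined": "An invited reviewer has declined to review; we will try another.",
        "review_received":   "A review has been submitted.",
        "all_reviews_in":    "All reviews are in; the editor is considering the decision.",
    }

    def notify_author(author_email, manuscript_id, new_status):
        """Email the author a one-line update whenever the workflow status changes."""
        msg = EmailMessage()
        msg["Subject"] = "Update on manuscript %s" % manuscript_id
        msg["From"] = "no-reply@journal.example"      # placeholder address
        msg["To"] = author_email
        msg.set_content(STATUS_MESSAGES.get(new_status, "Status changed to: " + new_status))
        with smtplib.SMTP("localhost") as smtp:       # assumes a local mail relay
            smtp.send_message(msg)

    # The existing workflow logic would simply call, for example:
    # notify_author("author@university.example", "PONE-D-13-00001", "sent_to_reviewer")

Each of the steps listed above would then cost nothing more than a one-line call at the point where the status already changes.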

Speaking as the veteran of plenty of long-drawn-out silences from journals that I’ve submitted to, I know that getting these messages would have made a big difference to me.

Last October, we published a sequence of posts about misleading review/reject/resubmit practices by Royal Society journals (Dear Royal Society, please stop lying to us about publication times; We will no longer provide peer reviews for Royal Society journals until they adopt honest editorial policies; Biology Letters does trumpet its submission-to-acceptance time; Lying about submission times at other journals?; Discussing Biology Letters with the Royal Society). As noted in the last of these posts, the outcome was that I had what seemed to be a fruitful conversation with Stuart Taylor, Commercial Director of the society.

Then things went quiet for some time.

On 8 May this year, I emailed Stuart to ask what progress there had been. At his request Phil Hurst (Publisher, The Royal Society) emailed me back on 10 May as follows:

Dear Mike

Stuart has asked me to update you on the changes we have made following your conversation last year.

We have reviewed editorial procedures on Biology Letters. Further to this, we now provide Editors with the additional decision option of ‘revise’. This provides a middle way between ‘reject and resubmit’ and ‘accept with minor revisions’. Editors use all three options and it is entirely at their discretion which they select. ‘Revised’ papers retain the original submission date and we account for this in our published acceptance times.

In addition, we now publicise ‘first decision’ times rather than ‘first acceptance’ times on our website. We feel this is more meaningful as it gives prospective authors an indication of the time, irrespective of decision.

The first thing to say is, it’s great to see some progress on this.

The second thing is, I must apologise for my terrible slowness in reporting back. Phil emailed me again on 17 June to remind me to post, and it’s still taken me more than another month.

The third thing is, while this is definitely progress, it doesn’t (yet) fix the problem. That’s for two reasons.

The first problem is that so long as there is a “reject and resubmit” option that does not involve a brand new round of review (like a true resubmission), there is still a loophole by which editors can massage the journals’ figures. Of course, there is nothing wrong with “reject and resubmit” per se, but it does have to result in the resubmission being treated as a brand new submission — it can’t be a fig-leaf for what are actually minor revisions, as in the paper that first made me aware of this practice.

So I would urge the Royal Society either to get rid of the R&R option completely, replacing it with a simple “reject”; or to establish firm, explicit, transparent rules about how such resubmissions are treated.

The second problem is with the reporting. It’s true that the home pages of both Proc. B and Biology Letters do now publicise “Average receipt to first decision time” rather than the misleading old “Average receipt to acceptance time”. This is good news. Proc. B (though for some reason not Biology Letters) even includes a link to an excellent and very explicit page that gives three times (receipt to first decision, receipt to online publication and final decision to online publication) for five journals, and explains exactly what they mean.

Unfortunately, individual articles still include only Received and Accepted dates. You can see examples in recent papers both at Proc. B and at Biology Letters. As far as I can tell, there is no way to determine whether the Received date is for the original submission, or (as I can’t help but suspect) the minor revision that is disguised as a resubmission.

The solution for this is very simple (and was raised when I first talked to Stuart Taylor back in October): just give three dates: Received, Revised and Accepted. Then everything is clear and above board, and there is no scope for anyone to suspect wrongdoing.
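
By way of illustration only (the field names and dates below are invented, not any publisher’s actual schema), once all three dates are on record the honest figures fall straight out:

    from datetime import date

    # Hypothetical article history with all three dates published.
    history = {
        "received": date(2012, 10, 1),   # original submission
        "revised":  date(2013, 1, 15),   # revised version / resubmission received
        "accepted": date(2013, 1, 20),
    }

    print("Submission to acceptance:", (history["accepted"] - history["received"]).days, "days")
    print("Revision to acceptance:  ", (history["accepted"] - history["revised"]).days, "days")

With all three dates published, there is no way to pass off the second number as the first.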

Here at SV-POW!, we are an equal-opportunity criticiser of publishers: Springer, PLOS, Elsevier, the Royal Society, Nature, we don’t care. We call problems as we see them, where we see them. Here is one that has lingered for far too long. PLOS ONE’s journal information page says:

Too often a journal’s decision to publish a paper is dominated by what the Editor/s think is interesting and will gain greater readership — both of which are subjective judgments and lead to decisions which are frustrating and delay the publication of your work. PLOS ONE will rigorously peer-review your submissions and publish all papers that are judged to be technically sound.

Which is as we would expect it to be. But their reviewer guidelines page gives more detail as follows (emphasis added):

[Academic Editors] can employ a variety of methods, alone or in combination, to reach a decision in which they are confident:

  • They can conduct the peer review themselves, based on their own knowledge and experience
  • They can take further advice through discussion with other members of the editorial board
  • They can solicit reports from further referees

As has been noted in comments on this blog, this first form, in which the editor makes the decision alone, is “unlike any other first-tier academic journal”. When I submitted my own manuscript to PLOS ONE a few weeks ago, I did it in the expectation that it would be reviewed in the usual way, by two experts chosen by the editor, who would then use those reviews in conjunction with her own expertise to make a decision. I’d hate to think it would go down the easier track, and so not be accorded the recognition that a properly peer-reviewed article gets. (Merely discussing with other editors would also not constitute proper peer-review in many people’s eyes, so only the third track is really the whole deal.)

The problem here is not a widespread one. Back when we first discussed this in any detail, about 13% of PLOS ONE papers slipped through on the editor-only inside lane. But more recent figures (based on the 1,837 manuscripts that received a decision between 1st July and 30th September 2010) say that only 4.2% of articles take this track. Evidently the process was by then in decline; it’s a shame we don’t have more recent numbers.

But the real issue here is lack of transparency. Four and a half years ago, Matt said “I really wish they’d just state the review track for each article–i.e., solo editor approved, multiple editor approved, or externally reviewed [...] I also hope that authors are allowed to preferentially request ‘tougher’ review tracks”.

It seems that still isn’t done. Looking at this article, which at the time of writing is the most recent one published by PLOS ONE, there is a little “PEER REVIEWED” logo up at the top, but no detail of which track was taken. PLOS themselves evidently take the line that all three tracks constitute peer-review, as “Academic Editors are not employees [...] they are external peer reviewers”.

So I call on PLOS ONE to either:

A. eliminate the non-traditional peer-review tracks, or

B1. allow submitting authors to specify that they want the traditional track, and

B2. specify explicitly on each published paper which track was taken.

“The benefit of published work is that if they have passed the muster of peer review future researchers can have faith in the results”, writes a commenter at The Economist. Such statements are commonplace.

I couldn’t disagree more. Nothing is more fatal to the scientific endeavour than having “faith” in a previously published result — as the string of failed replications in oncology and in social psychology is showing. See also the trivial but crucial spreadsheet error in the economics paper that underlies many austerity policies.

Studies have shown that peer-reviewers on average spend about 2-3 hours in evaluating a paper that’s been sent their way. There is simply no way for even an expert to judge in that time whether a paper is correct: the best they can do is say “this looks legitimate, the authors seem to have gone about things the right way”.

Now that is a useful thing to be able to say, for sure. Peer review is important as a stamp of serious intent. But it’s a long way from a mark of reliability, and enormous damage is done by the widespread assumption that it means more than it does.

Remember: “has passed peer review” only really means “two experts have looked at this for a couple of hours, and didn’t see anything obviously wrong in it”.

 

 

Note. I initially wrote this as a comment on a pretty good article about open access at The Economist. That article is not perfect, but it’s essentially correct, and it makes me happy that these issues are now mainstream enough that it’s no longer a surprise when they’re covered by as mainstream an outlet as The Economist.

I was really excited to get an invitation to the evolution-or-revolution debate in Oxford, partly for historical reasons. I thought the Oxford Union was where C. S. Lewis, J. R. R. Tolkien and their friends held various debates. Sadly, it turns out I was mistaken, and it was merely the stomping ground for a bunch of lame politicians.

But anyway … It was a great experience — not only for the chance to meet online friends for the first time and make a strong opening statement, but also to hear important ideas batted back and forth — not only between the eight panel members (four on each team) but also with the audience.


The debating teams. From left to right: EVOLUTION: David Tempest (Elsevier), Graham Taylor (ex Publishers’ Association), Jason Wilde (Nature) and Cameron Neylon (PLOS). CHAIR: Simon Benjamin. REVOLUTION: Mike Taylor (University of Bristol), Jason Hoyt (PeerJ), Amelia Andersdotter (Swedish Pirate Party MEP) and Paul Wicks (Patientslikeme).

Apparently, video of the debate (and of all the talks) will shortly be available. Until then, here is a brief tour of some highlights.

Opening statements

First, we each had four minutes or so to make an opening statement. It was my privilege to go first, and I used essentially the essay from the last post — though in an effort to avoid bloke-reading-from-a-sheet-of-paper syndrome I allowed myself to drift a bit — not really to good effect. One addition was a mention of the steering-a-supertanker analogy.

Cameron Neylon then spoke for evolution, referring to a poem about South American revolutions entitled “Only the beards have changed” — warning that throwing out an old order can result in a new one that is essentially unchanged.

Jason Hoyt gave a short speech about how PeerJ is practically addressing some of the major failures of the prevailing system: slowness, secrecy surrounding review, and enormous overcharging. Those guys aren’t waiting for a revolution, they’re hosting one.

Jason Wilde, like Cameron, emphasised that revolutions historically have a habit of leaving things no better than they found them — to be fair, a point that I have also made at times. I was pleasantly surprised by how much of his statement I agreed with, and look forward to seeing it again when video comes out.

Amelia Andersdotter gave unquestionably the most impassioned, and bluntest, speech — which I had to admit warmed my heart with its clear-sightedness and honesty. She made the point that a revolution has already happened, and not to our advantage, as publishers have seized control of science and driven restrictive IP laws. Amelia’s contention is that the necessary revolution will be easier to achieve without publishers than with their help, and she would happily do away with them all. Tough stuff.

Graham Taylor’s contribution made quite a contrast. At its core lay the statement “science needs publishing, and publishing needs publishers”. The first half of that statement is unarguable. The second half does not follow, and its truth remains to be demonstrated. And of course even if it is true, it wouldn’t follow that we need the publishers we have now. (By the way, despite my history of eviscerating Taylor in print, he was very pleasant in person, and evidently didn’t bear a grudge.)

Paul Wicks’s opening line to the evolutioneers was “I’m here from the Internet to negotiate the terms of your surrender”. He laid out an essentially unanswerable case for access to research as a foundation of advances in health science. If I remember correctly, his opening statement got the biggest round of applause — and rightly so.

Finally in this first phase of the debate, David Tempest was left with the unenviable task of defending Elsevier’s actions as evolutionary rather than reactionary. Rather to my surprise, he adopted the unflattering (but apposite) metaphor of a supertanker heading for the rocks, but said that Elsevier have been engineering tugs to change its direction. (Is Mendeley meant to be one of those tugs?) Well, I wasn’t persuaded — but then I am increasingly of the opinion that the supertanker is not such a great analogy anyway, since the tanker doesn’t disgorge its cargo of poisonous filth until it hits the rocks.

The opening statement.

Discussion

The discussion period was based on five questions, each of which was initially addressed by a member of each team, then thrown open to the floor — at least, that was the intention, but it was pretty flexible. The questions:

  • Does the public need access to academic publications?
  • Are mandates good for science? Can we still have a journal “quality ladder”?
  • In light of content-mining, do we need a new attitude to copyright?
  • Will OA lead to higher or lower standards? Will it undermine peer-review?
  • What system do we want to see in ten years?

I don’t now remember what was said in response to which question, and of course they overlapped a lot. So here are some highlights from this period, in no particular order.

The most applauded observation was Paul Wicks’s: that publications which earn professors their promotions are not the end goal of science. It’s all too easy to forget this (especially if you are an academic seeking promotion). We think of publications as being for other researchers; but they’re not, they’re for the world.

The biggest laugh was for Jason Hoyt’s comment on the simplest way to achieve universal access to Elsevier’s content: let them go out of business, and LOCKSS will take care of it. (Sadly, I’m not sure it’s that simple.)

In response to one of the questions, Jason Wilde noted that at both Nature’s Scientific Reports and at PLOS ONE — both of which review for technical correctness only, not for novelty or importance — the rejection rate is about 40%. (I heard informally from Jason Hoyt that the rate at PeerJ is similar, based on its so-far small sample.) Interesting that the rate seems so consistent, and distressing that so much of what gets submitted to journals is evidently just no darned good.

But the best moment was provoked by David Tempest’s mention of transparency in pricing. Stephen Curry, from the floor, asked Tempest to justify his librarian’s not being allowed to tell him what Imperial’s Elsevier subscriptions cost, due to a confidentiality agreement. Tempest gave an extraordinary response, in which excess verbiage was unable to conceal the core point “We do this to prevent prices from falling”. His explanation finished “otherwise prices would go down and down and down”, to which the eloquent Dr. Curry shrugged bemusedly. A big laugh, but also a lot of real anger.

Votes

At some stage near the end, the chair asked for a show-of-hands vote on whether the best approach to pursue is Gold or Green open access — not just as a long-term goal, but as the immediate short-term approach. The vote was about three to one in favour of Gold. (This was from a very mixed audience containing researchers, librarians and publishers in I would guess fairly equal numbers, and a fair few startup founders.)

At the end of the whole event, a vote was taken on who had “won” the debate. “Revolution” came out ahead by a factor of two or three, which was gratifying; but I don’t know how much that was because of the quality of the debating, and how much it was because that’s what people already thought. (I hope the latter.)

And finally …

At the dinner afterwards, the organisers had arranged for bottles of wine to be available at cost price (£7), on the basis that you just take a bottle when you want it, and later on they’ll come round and collect the money. A system very open to abuse, but it turned out that the open-access crowd paid for one more bottle than they drank.

So a happy ending.

Acknowledgements

The photos above were provided by Simon Bayly and Victoria Watson. My memories of the debate were supplemented by helpful tweets from Simon Bayly (again), Anna Sharman (and again), Victoria Watson (again and again and again), Bryan Vickery, Jonathan Webb (and again) and Andrew Miller.

Is there any justification for any of these practices other than tradition?

  • Choosing titles that deliberately omit new taxon names.
  • Slicing the manuscript to fit an arbitrary length limit.
  • Squeezing the narrative into a fixed set of sections (Introduction, Methods, Results, Conclusion).
  • Discarding or combining illustrations to avoid exceeding an arbitrary count.
  • Flattening illustrations to monochrome.
  • Using passive instead of active voice (especially in singular: “we did this” may be acceptable but not “I did this” for some reason).
  • Giving the taxonomic authority after first use of each formal name.
  • Listing institutional abbreviations at the end of the Introduction section, several pages into the paper.
  • Using initials for names in the acknowledgements.
  • Refusing to cite in-prep papers, dissertations and blogs (while accepting pers. comm.)
  • Using numbered citations instead of Author+Date.
  • Using journal abbreviations such as “J. Vertebr. Paleontol.” in the references.
  • Formatting references.
  • Having references at all, rather than links.
  • Putting figure captions and tables at the end of the manuscript instead of where they occur.
  • Arbitrarily relegating parts of the manuscript to Supplementary information.
  • Submitting images in TIFF format (even for born-as-JPEG photos).
  • Double-spacing manuscripts.
  • Writing cover letters for submissions.
  • Throwing away reviews once they’ve been handled.
  • Allowing the final product to go behind a paywall.

Did I miss any?


Schachner et al. (2013: Figure 13): Diagrammatic representations of the crocodilian (A) and avian (B) lungs in left lateral view with colors identifying proposed homologous characters within the bronchial tree and air sac system of both groups. The image of the bird is modified from Duncker (1971). Abbreviations: AAS, abdominal air sac; CAS, cervical air sac; CRTS, cranial thoracic air sac; CSS, caudal sac-like structure; CTS, caudal thoracic air sac; d, dorsobronchi; GL, gas-exchanging lung; HS, horizontal septum; IAS, interclavicular air sac; L, laterobronchi; NGL, non-gas-exchanging lung; ObS, oblique septum; P, parabronchi; Pb, primary bronchus; Tr, trachea; v, ventrobronchi.

Gah! No time, no time. I am overdue on some things, so this is a short pointer post, not the thorough breakdown this paper deserves. The short, short version: Schachner et al. (2013) is out in PeerJ, describing airflow in the lungs of Nile crocs, and showing how surprisingly birdlike croc lungs actually are. If you’re reading this, you’re probably aware of the papers by Colleen Farmer and Kent Sanders a couple of years ago describing unidirectional airflow in alligator lungs. Hang on to your hat, because this new work is even more surprising.

I care about this not only because dinosaurian respiration is near and dear to my heart but also because I was a reviewer on this paper, and I am extremely happy to say that Schachner et al. elected to publish the review history alongside the finished paper. I am also pleasantly surprised, because as you’ll see when you read the reviews and responses, the process was a little…tense. But it all worked out well in the end, with a beautiful, solid paper by Schachner et al., and a totally transparent review process available for the world to see. Kudos to Emma, John, and Colleen on a fantastic, important paper, and for opting for maximal transparency in publishing!

UPDATE the next morning: Today’s PeerJ Blog post is an interview with lead author Emma Schachner, where it emerges that open review was one of the major selling points of PeerJ for her:

Once I was made aware of the transparent peer review process, along with the fact that the journal is both open access and very inexpensive to publish in, I was completely sold. [...] The review process was fantastic. It was transparent and fast. The open review system allowed for direct communication between the authors and reviewers, generating a more refined final manuscript. I think that having open reviews is a great first step towards fixing the peer review system.

That post also links to this one, so now the link cycle is complete.

Reference

Schachner, E.R., Hutchinson, J.R., and Farmer, C.G. 2013. Pulmonary anatomy in the Nile crocodile and the evolution of unidirectional airflow in Archosauria. PeerJ 1:e60. http://dx.doi.org/10.7717/peerj.60
