August 30, 2012
Gah, so much interesting stuff going on and I simply have No. Time. To. Blog.
But I’m making an exception for PeerJ, a new OA journal that is coming online later this year. Like PLoS ONE, it will be an all-subject journal that will publish stuff based on scientific solidity rather than perceived sexiness, but so openly and so cheaply that it might make PLoS ONE look like an Elsevier product (I jest–put away your pitchforks, PLoS fans. [Elsevier fans, you can keep yours--gotta accessorize those forked tails somehow!]). They’ve got a special on right now: discounted lifetime memberships. The rates go up Sept. 1, which you’ll notice is not far off. You can read about the different membership plans here. I just purchased what they refer to as the Investigator Plan, and which I call the max bling membership.
Is this overkill? Quite likely. The Enhanced Plan would give me two pubs per year, which is, so far, as much as I would need at ALL scholarly outlets, let alone just one. And I don’t plan on sending work only to PeerJ once it’s up and running, although I will probably send them stuff often.
I went for the max bling membership mainly so I never have to think about it again. I don’t know if PeerJ will succeed like PLoS ONE or go extinct like you-know-who. I don’t know if I’ll ever publish more than two papers a year, period, let alone whether I’ll want to send more than two a year to PeerJ. And now I don’t have to worry about those things, or about upgrading my membership down the line, or about anything, really. I’m all in. Ninety bucks* to never have to make any PeerJ-related membership decisions for the rest of my life is a freakin’ steal. I wish I could pay off university committees the same way.
* The difference between the Enhanced and Investigator plans.
I’m sure we’ll have more to say about PeerJ, about how we think its genuinely post-scarcity and radically open approach to scientific publishing is, in fact, the wave of the future, but that will have to wait for other posts. In the meantime, I have shedloads of things to do, like interview prospective students and revise my human anatomy lectures and write my SVPCA talk. Happily, worrying about my PeerJ membership is not one of those things. Nor will it be. Ever.
August 22, 2012
No time for anything new, so here’s a post built from parts of other, older posts.
The fourth sacral centrum of Haplocanthosaurus CM 879, in left and right lateral view. This is part of the original color version of Wedel (2009: figure 8), from this page. (Yes, I know I need to get around to posting the full-color versions of those figures. It’s on my To Do list.)
Note the big invasive fossa on the right side of the centrum. The left side is waisted (narrower at the middle than the ends) like most vertebrae of most animals, but has no distinct fossa on the lateral face of the centrum. What’s up with that? Here’s an explanation from an old post (about another sauropod) that still fits:
Now, this asymmetry is also weird, but it’s expected weirdness. Pneumaticity seems to just be inherently variable, whether we’re talking about human sinuses or the facial air sacs of whales or the vertebrae of chickens. It appears that the form of pneumatic features is entirely determined by local tissue interactions, with little or no genetic control of the specific form. Think of it this way: genes prescribe certain developmental events, and those events bring tissues into contact–such as pneumatic epithelium and bone. The morphology of the bone arises out of that interaction, and each interaction of bone and pneumatic epithelium has the potential to produce something new. In this case, the diverticula on the left side of the vertebral column come from the lungs or air sacs on the left, and those on the right side come from the lungs or air sacs on the right, so it’s really two sets of diverticula contacting the bone independently. The wonder, then, is not that pneumatic bones are so variable, but that we see any regularities at all.
August 15, 2012
You haven’t heard from me much lately because I’ve been busy teaching anatomy. Still, I get to help people dissect for a living, so I can’t complain.
Further bulletins as events warrant.
August 7, 2012
Let me begin with a digression. (Hey, we may as well start as we mean to go on.)
Citations in scientific writing are used for two very different reasons, but because the two cases have the same form we often confuse them. We may cite a work as an authority, to lend its weight to our own assertion, as in “Diplodocus carnegii seems to have had fifteen cervical vertebrae (Hatcher 1901)”; or we may cite a work to give it credit for an observation or concept, as in “… using the extant phylogenetic bracket (Witmer 1995)”.
The conflation of these two very different modes of citation causes some difficulty because, while many authors would never cite (say) a blog post as an authority, most would feel some obligation to cite it in order to give credit. You might not want to cite this SV-POW! post as authority for a length of 49 m for Amphicoelias fragillimus; but you would hardly use the sacrum illustration from this post in your own work without crediting the source.
So the fact that citations do two rather different jobs causes confusion.
When it comes to peer-review, things are worse: not only is it trying to do two different things at once (filtering manuscripts, and improving the ones it retains) but the filtering itself has two components — deciding (A) whether a manuscript is good science, and (B) whether it’s “a good fit” for the journal.
What does “a good fit” mean? Anything or nothing, unfortunately. In the case of Science ‘n’ Nature, of course, it means “is about a new feathered theropod”. For Paleobiology, it seems to mean “has lots of graphs and no specimen photos”. In the case of other journals, it’s much less predictable, and can often, it seems, come down to the whim of the individual reviewer. In many cases, infuriatingly, it can be a matter of whether the reviewer’s guess is that a work is “important” enough for the journal — something that you can never tell in advance, but which only becomes apparent in the years following publication.
As a result, a perfectly good piece of work — one which passes the peer-review process’s “is it good science?” filter with flying colours — can still get rejected, and indeed can be bounced around from journal to journal until the author loses interest, leaves the field or dies; or, of course, until the paper is accepted somewhere.
(The already disheartening process of shopping a paper around multiple journals until it finds a home is made utterly soul-crushing by the completely different formats that the different journals require their submissions in. But that’s a completely different rant.)
This is the Gordian knot that PLoS ONE set out to cut by simply declaring that all scientifically satisfactory work is a good fit. Reviewers for PLoS ONE are explicitly told not to make judgements about the probable impact of papers, and only judge whether the science is good. In this way, it’s left to the rest of world to evaluate the importance of the work — just as in fact it always does anyway (through citation, blog discussion, formal responses, and so on).
But eliminating the “good fit” criterion still leaves PLoS ONE reviewers with two jobs: judging whether a manuscript is scientifically sound, and helping the author to improve it. I find myself wondering whether there might be a way to decouple these functions, too.
Perhaps not: after all, they are somewhat intertwingled. The question “is this scientifically sound?” does not always receive a yes-or-no answer. The answer might be “yes, provided that the author’s conclusions are corroborated by a phylogenetic analysis”, for example.
I don’t have any good answers to propose here. (Not yet, anyway.) At this stage, I am just trying to think clearly about the problem, not to come up with solutions. But I do think we can see what’s going on with more clarity if our minds can separate out all the different roles that reviewers play. To my mind, dumping “impact assessment” from the review process is PLoS ONE’s greatest achievement. If we can pick things apart yet further, there may well be even greater gains to be had.
I guess we should be asking ourselves this: what, when it comes right down to it, is peer-review for? Back in the day, the filtering aspect was crucial because paper printing and distribution meant that there was a strict limit on how many papers could be published. That’s not true any more, so that aspect of peer-review is no longer relevant. But what else has the Internet changed? What can we streamline?
August 6, 2012
Last time I argued that traditional pre-publication peer-review isn’t necessarily worth the heavy burden it imposes. I guess no-one who’s been involved in the review process — as an author, editor or reviewer — will deny that it imposes significant costs, both in the time of all the participants, and in the delay in getting new work to press. Where I expected more pushback was in the claim that the benefits are not great.
Remember that the benefits usually claimed for peer-review are in two broad categories: that it improves the quality of what gets published, and that it filters out what’s not good enough to be published at all. I’ll save the second of these claims for next time. This time I want to look at the first.
The immediate catalyst is the two brief reviews with which a manuscript of mine (with Matt as co-author) was rejected two days ago from a mid-to-low ranked palaeo journal. I’m going to quote two sentences that rankle from one of the anonymous reviewers:
The manuscript reads as a long “story” instead of a scientific manuscript. Material and methods, results, and interpretation are unfortunately not clearly separated.
This, in the eyes of the reviewer, was one of the two deficiencies of the manuscript that led him or her to recommend rejection. (The other was that the manuscript is “too long”.) But as I’ve said repeatedly here and elsewhere, all good writing is story-telling. Scientific ideas, like all others, need to be presented as stories in order to sink into our brains. So what’s happened here is that, so far as I’m concerned, the reviewer has praised our manuscript; but he or she thinks it’s a criticism.
It’s not clear what we should do in response to such a review. To get our paper published in this journal, we could restructure the paper. We could draw every observation, comparison, discussion and illustration out of its current position in a single, flowing argument; and instead cram them all, out of their natural order, into the “scientific” structure that the reviewer evidently prefers. But I’m not willing to do this, because our judgement is that this would reduce the value of the resulting paper — making it harder to follow the argument. And I hate to do work with negative net value.
In this case, it’s even worse: the initial draft of this paper was in a much more conventional “scientific” structure. In this form, we submitted it to a different journal, whence it was rejected for completely different reasons which I won’t go into. Before submitting the new version, we re-tooled the paper to tell its story in the order that makes most sense. So the reviewer who recommended that the new version be rejected was (albeit unknowingly) requiring us to revert to an older and inferior version of the manuscript.
So this is a case where reviewers’ comments really don’t help at all. (To be fair to the reviewer in question, he or she did also say a lot of positive and encouraging things. But they are really just decoration on the big, fat REJECT.)
And sadly this kind of thing is not too unusual. Other reviews I’ve been sent have (A) demanded that we add a phylogenetic analysis even though it could not possibly tell us anything; (B) demanded that we remove a phylogenetic analysis; (C) rejected a paper due to consistently misunderstanding our anatomical nomenclature; (D) rejected my manuscript because they’d never heard of me (ad hominem review); and, my all-time most hated, (E) basically told me to write a completely different paper instead.
(E) in particular is a disaster and makes me just want to throw my hands up in the air. Written a paper that analyses a diversity database to draw conclusions about changing clade sizes through time? Too bad, the reviewer wants you to write a literature review on how you assembled the database instead! Written a paper showing that two species are not conspecific? Tough luck, the reviewer wants you to write a paper on the meaning of “genus” instead! I just loathe this. I am increasingly of the opinion that the best response may be “I’ve written that manuscript too, it’s in review elsewhere” or similar.
So those kinds of reviews are a complete waste of time and energy.
Where is the value, then? Well, I’ve mentioned Jerry Harris several times before as someone whose reviews are full of detailed, helpful comments that really do improve papers. I aspire to review like Jerry when it falls to me to provide this service, and I hope I can be as constructive as he is. But really, this is a tiny minority. Most reviews that avoid the traps I mentioned above don’t have much of substance to say. Some “reviews” I’ve received have been only a few sentences long: in such cases it’s hard to believe that the reviewers have actually read the manuscript, and very hard to accept that they have life-or-death power over its fate.
I’ve been wary of writing this post because of the fear that it reads like a catalogue of whining about how hard-done-by I am. That’s not the intention: I want to talk about widespread problems with peer-review, and I wanted to make them concrete by giving real examples. And of course the examples available to me are the reviews my own work has received. Let me note once again that I have made it through the peer-review gauntlet many times, and that I’m therefore criticising the system from within. I’m not a malcontent claiming there’s a conspiracy to keep my work out of journals. I am a small-time but real publishing author who’s sick of jumping through hoops.
Next time: what peer-review is really for.
August 5, 2012
[Note: this post is by Mike. Matt hasn't seen it, may not agree with it, and would probably have advised me not to post it if I'd asked him.]
The magic is going out of my love-affair with peer-review. When we started seeing each other, back in 2004, I was completely on board with the idea that peer-review improves the quality of the scientific record in two ways: by keeping bad science from getting published, and by improving the good science that does get published. Eight years on, I am not convinced that either of those things is as true as I initially thought, and I’m increasingly aware of the monumental amount of time and effort it soaks up. Do the advantages outweigh the disadvantages? Five years ago I would have unhesitatingly said yes. A year ago, I’d have been unsure. Now, I am pretty much convinced that peer-review — at least as we practice it — is an expensive hangover from a former age, and does more harm than good.
What’s that? Evidence, you say? There’s plenty. We all remember Arsenic Life: pre-publication peer-review didn’t protect us from that. We all know of papers in our own fields that should never have been published — for a small sample, google “failure of peer-review” in the Dinosaur Mailing List archives. Peer-review doesn’t protect us from these and they have to be sorted out after publication by criticism, rebuttal and refinement. In other words, by the usual processes of science.
So pre-publication peer-review is not getting the job done as a filter. What about its role in improving papers that do get published? This does happen, for sure; but speaking as a veteran of 30 submissions, my experience has been that no more than half of my reviews have had anything constructive to suggest at all, and most of those that have improved my manuscripts have done so only in pretty trivial ways. If I add up all the time I’ve spent handling and responding to reviewer comments and balance it up against the improvement in the papers, my honest judgement is that it’s not been worth it. The improvement in the published papers is certainly not worth as much as all the extra science I could have been making instead of jumping through hoops.
And that of course is ignoring the long delays that peer-review imposes even in the best-case scenario of resubmit-with-revisions, and the much longer ones that result when reviews result in your having to reformat a manuscript and start the whole wretched process again at another journal.
All of this is much to my surprise, having been a staunch advocate of peer-review until relatively recently. But here’s where I’ve landed up, despite myself: I think the best analogy for our current system of pre-publication peer-review is that it’s a hazing ritual. It doesn’t exist because of any intrinsic value it has, and it certainly isn’t there for the benefit of the recipient. It’s basically a way to draw a line between In and Out. Something for the inductee to endure as a way of proving he’s made of the Right Stuff.
So: the principal value of peer-review is that it provides an opportunity for authors to demonstrate that they are prepared to undergo peer-review.
It’s a way of separating the men from the boys. (And, yes, the women from the girls, though I can’t help thinking there’s something very stereotypically male about our confrontational and oppositional review system.)
Finally, I should address one more thing: is this just whining from an outsider who can’t get in and thinks it’s a conspiracy? I don’t think so. I’ve run the peer-review gauntlet quite a few times now — my publications are into double figures, which doesn’t make me a seasoned professional but does show that I am pretty serious. In other words, I am inside the system that I’m criticising.
For full disclosure, I should make it clear that I am writing this a day after having had a paper rejected by reviewers. So if you like, you can write it off as the bitter ramblings of a resentful man. But the truth is, while this was the immediate trigger for writing this post, the feeling has been building up for a while.