Here at SV-POW!, we are equal-opportunity criticisers of publishers: Springer, PLOS, Elsevier, the Royal Society, Nature, we don’t care. We call problems as we see them, where we see them. Here is one that has lingered for far too long. PLOS ONE’s journal information page says:
Too often a journal’s decision to publish a paper is dominated by what the Editor/s think is interesting and will gain greater readership — both of which are subjective judgments and lead to decisions which are frustrating and delay the publication of your work. PLOS ONE will rigorously peer-review your submissions and publish all papers that are judged to be technically sound.
Which is as we would expect it to be. But their reviewer guidelines page gives more detail as follows (emphasis added):
[Academic Editors] can employ a variety of methods, alone or in combination, to reach a decision in which they are confident:
- They can conduct the peer review themselves, based on their own knowledge and experience
- They can take further advice through discussion with other members of the editorial board
- They can solicit reports from further referees
As has been noted in comments on this blog, this first form, in which the editor makes the decision alone, is “unlike any other first-tier academic journal”. When I submitted my own manuscript to PLOS ONE a few weeks ago, I did it in the expectation that it would be reviewed in the usual way, by two experts chosen by the editor, who would then use those reviews in conjunction with her own expertise to make a decision. I’d hate to think it would go down the easier track, and so not be accorded the recognition that a properly peer-reviewed article gets. (Merely discussing with other editors would also not constitute proper peer-review in many people’s eyes, so only the third track is really the whole deal.)
The problem here is not a widespread one. Back when we first discussed this in any detail, about 13% of PLOS ONE papers slipped through on the editor-only inside lane. But more recent figures (based on the 1,837 manuscripts that received a decision between 1st July and 30th September 2010) say that only 4.2% of articles take this track. Evidently the process was by then in decline; it’s a shame we don’t have more recent numbers.
But the real issue here is lack of transparency. Four and a half years ago, Matt said “I really wish they’d just state the review track for each article–i.e., solo editor approved, multiple editor approved, or externally reviewed [...] I also hope that authors are allowed to preferentially request ‘tougher’ review tracks”.
It seems that still isn’t done. Looking at this article, which at the time of writing is the most recent one published by PLOS ONE, there is a little “PEER REVIEWED” logo up at the top, but no detail of which track was taken. PLOS themselves evidently take the line that all three tracks constitute peer-review, as “Academic Editors are not employees [...] they are external peer reviewers”.
So I call on PLOS ONE to either:
A. Eliminate the non-traditional peer-review tracks, or
B1. Allow submitting authors to specify they want the traditional track, and
B2. Specify explicitly on each published paper which track was taken.
“The benefit of published work is that if they have passed the muster of peer review future researchers can have faith in the results”, writes a commenter at The Economist. Such statements are commonplace.
I couldn’t disagree more. Nothing is more fatal to the scientific endeavour than having “faith” in a previously published result — as the string of failed replications in oncology and in social psychology is showing. See also the trivial but crucial spreadsheet error in the economics paper that underlies many austerity policies.
Studies have shown that peer-reviewers spend, on average, about 2-3 hours evaluating a paper that’s been sent their way. There is simply no way for even an expert to judge in that time whether a paper is correct: the best they can do is say “this looks legitimate, the authors seem to have gone about things the right way”.
Now that is a useful thing to be able to say, for sure. Peer review is important as a stamp of serious intent. But it’s a long way from a mark of reliability, and enormous damage is done by the widespread assumption that it means more than it does.
Remember: “has passed peer review” only really means “two experts have looked at this for a couple of hours, and didn’t see anything obviously wrong in it”.
Note. I initially wrote this as a comment on a pretty good article about open access at The Economist. That article is not perfect, but it’s essentially correct, and it makes me happy that these issues are now mainstream enough that it’s no longer a surprise when they’re covered by as mainstream an outlet as The Economist.
April 13, 2013
I was really excited to get an invitation to the evolution-or-revolution debate in Oxford, partly for historical reasons. I thought the Oxford Union was where C. S. Lewis, J. R. R. Tolkien and their friends held various debates. Sadly, it turns out I was mistaken, and it was merely the stomping ground for a bunch of lame politicians.
But anyway … It was a great experience — not only for the chance to meet online friends for the first time and make a strong opening statement, but also to hear important ideas batted back and forth — not only between the eight panel members (four on each team) but also with the audience.
Apparently, video of the debate (and of all the talks) will shortly be available. Until then, here is a brief tour of some highlights.
First, we each had four minutes or so to make an opening statement. It was my privilege to go first, and I used essentially the essay from the last post — though in an effort to avoid bloke-reading-from-a-sheet-of-paper syndrome I allowed myself to drift a bit — not really to good effect. One addition was a mention of the steering-a-supertanker analogy.
Cameron Neylon then spoke for evolution, referring to a poem about South American revolutions entitled “Only the beards have changed” — warning that throwing out an old order can result in a new one that is essentially unchanged.
Jason Hoyt gave a short speech about how PeerJ is practically addressing some of the major failures of the prevailing system: slowness, secrecy surrounding review, and enormous overcharging. Those guys aren’t waiting for a revolution, they’re hosting one.
Jason Wilde, like Cameron, emphasised that revolutions historically have a habit of leaving things no better than they found them — to be fair, a point that I have also made at times. I was pleasantly surprised by how much of his statement I agreed with, and look forward to seeing it again when video comes out.
Amelia Andersdotter gave unquestionably the most impassioned, and bluntest, speech — which I had to admit warmed my heart with its clear-sightedness and honesty. She made the point that a revolution has already happened, and not to our advantage, as publishers have seized control of science and driven restrictive IP laws. Amelia’s contention is that the necessary revolution will be easier to achieve without publishers than with their help, and she would happily do away with them all. Tough stuff.
Graham Taylor’s contribution made quite a contrast. At its core lay the statement “science needs publishing, and publishing needs publishers”. The first half of that statement is unarguable. The second half does not follow, and its truth remains to be demonstrated. And of course even if it is true, it wouldn’t follow that we need the publishers we have now. (By the way, despite my history of eviscerating Taylor in print, he was very pleasant in person, and evidently didn’t bear a grudge.)
Paul Wicks’s opening line to the evolutioneers was “I’m here from the Internet to negotiate the terms of your surrender”. He laid out an essentially unanswerable case for access to research as a foundation of advances in health science. If I remember correctly, his opening statement got the biggest round of applause — and rightly so.
Finally in this first phase of the debate, David Tempest was left with the unenviable task of defending Elsevier’s actions as evolutionary rather than reactionary. Rather to my surprise, he adopted the unflattering (but apposite) metaphor of a supertanker heading for the rocks, but said that Elsevier have been engineering tugs to change its direction. (Is Mendeley meant to be one of those tugs?) Well, I wasn’t persuaded — but then I am increasingly of the opinion that the supertanker is not such a great analogy anyway, since the tanker doesn’t disgorge its cargo of poisonous filth until it hits the rocks.
The discussion period was based on five questions, each of which was initially addressed by a member of each team, then thrown open to the floor — at least, that was the intention, but it was pretty flexible. The questions:
- Does the public need access to academic publications?
- Are mandates good for science? Can we still have a journal “quality ladder”?
- In light of content-mining, do we need a new attitude to copyright?
- Will OA lead to higher or lower standards? Will it undermine peer-review?
- What system do we want to see in ten years?
I don’t now remember what was said in response to which question, and of course they overlapped a lot. So here are some highlights from this period, in no particular order.
The most applauded observation was Paul Wicks’s: that the publications which win professors promotions are not the end goal of science. It’s all too easy to forget this (especially if you are an academic seeking promotion). We think of publications as being for other researchers; but they’re not, they’re for the world.
The biggest laugh was for Jason Hoyt’s comment on the simplest way to achieve universal access to Elsevier’s content: let them go out of business, and LOCKSS will take care of it. (Sadly, I’m not sure it’s that simple.)
In response to one of the questions, Jason Wilde noted that at both Nature’s Scientific Reports and PLOS ONE — both of which review for technical correctness only, not for novelty or importance — the rejection rate is about 40%. (I heard informally from Jason Hoyt that the rate at PeerJ is similar, based on its so-far small sample.) It’s interesting that the rate seems so consistent, and distressing that so much of what gets submitted to journals is evidently just no darned good.
But the best moment was provoked by David Tempest’s mention of transparency in pricing. Stephen Curry, from the floor, asked Tempest to justify his librarian’s not being allowed to tell him what Imperial’s Elsevier subscriptions cost, due to a confidentiality agreement. Tempest gave an extraordinary response, in which excess verbiage was unable to conceal the core point “We do this to prevent prices from falling”. His explanation finished “otherwise prices would go down and down and down”, to which the eloquent Dr. Curry shrugged bemusedly. A big laugh, but also a lot of real anger.
At some stage near the end, the chair asked for a show-of-hands vote on whether the best approach to pursue is Gold or Green open access — not just as a long-term goal, but as the immediate short-term approach. The vote was about three to one in favour of Gold. (This was from a very mixed audience containing researchers, librarians and publishers in I would guess fairly equal numbers, and a fair few startup founders.)
At the end of the whole event, a vote was taken on who had “won” the debate. “Revolution” came out ahead by a factor of two or three, which was gratifying; but I don’t know how much that was because of the quality of the debating, and how much it was because that’s what people already thought. (I hope the latter.)
And finally …
At the dinner afterwards, the organisers had arranged for bottles of wine to be available at cost price (£7), on the basis that you just take a bottle when you want it, and later on they’ll come round and collect the money. A system very open to abuse, but it turned out that the open-access crowd paid for one more bottle than they drank.
So a happy ending.
The photos above were provided by Simon Bayly and Victoria Watson. My memories of the debate were supplemented by helpful tweets from Simon Bayly (again), Anna Sharman (and again), Victoria Watson (again and again and again), Bryan Vickery, Jonathan Webb (and again) and Andrew Miller.
April 10, 2013
Is there any justification for any of these practices other than tradition?
- Choosing titles that deliberately omit new taxon names.
- Slicing the manuscript to fit an arbitrary length limit.
- Squeezing the narrative into a fixed set of sections (Introduction, Methods, Results, Conclusion).
- Discarding or combining illustrations to avoid exceeding an arbitrary count.
- Flattening illustrations to monochrome.
- Using passive instead of active voice (especially in singular: “we did this” may be acceptable but not “I did this” for some reason).
- Giving the taxonomic authority after first use of each formal name.
- Listing institutional abbreviations at end of the Introduction section, several pages into the paper.
- Using initials for names in the acknowledgements.
- Refusing to cite in-prep papers, dissertations and blogs (while accepting pers. comm.).
- Using numbered citations instead of Author+Date.
- Using journal abbreviations such as “J. Vertebr. Paleontol.” in the references.
- Formatting references.
- Having references at all, rather than links.
- Putting figure captions and tables at the end of the manuscript instead of where they occur.
- Arbitrarily relegating parts of the manuscript to Supplementary information.
- Submitting images in TIFF format (even for born-as-JPEG photos).
- Double-spacing manuscripts.
- Writing cover letters for submissions.
- Throwing away reviews once they’ve been handled.
- Allowing the final product to go behind a paywall.
Did I miss any?
April 3, 2013
Gah! No time, no time. I am overdue on some things, so this is a short pointer post, not the thorough breakdown this paper deserves. The short, short version: Schachner et al. (2013) is out in PeerJ, describing airflow in the lungs of Nile crocs, and showing how surprisingly birdlike croc lungs actually are. If you’re reading this, you’re probably aware of the papers by Colleen Farmer and Kent Sanders a couple of years ago describing unidirectional airflow in alligator lungs. Hang on to your hat, because this new work is even more surprising.
I care about this not only because dinosaurian respiration is near and dear to my heart but also because I was a reviewer on this paper, and I am extremely happy to say that Schachner et al. elected to publish the review history alongside the finished paper. I am also pleasantly surprised, because as you’ll see when you read the reviews and responses, the process was a little…tense. But it all worked out well in the end, with a beautiful, solid paper by Schachner et al., and a totally transparent review process available for the world to see. Kudos to Emma, John, and Colleen on a fantastic, important paper, and for opting for maximal transparency in publishing!
UPDATE the next morning: Today’s PeerJ Blog post is an interview with lead author Emma Schachner, where it emerges that open review was one of the major selling points of PeerJ for her:
Once I was made aware of the transparent peer review process, along with the fact that the journal is both open access and very inexpensive to publish in, I was completely sold. [...] The review process was fantastic. It was transparent and fast. The open review system allowed for direct communication between the authors and reviewers, generating a more refined final manuscript. I think that having open reviews is a great first step towards fixing the peer review system.
That post also links to this one, so now the link cycle is complete.
Schachner, E.R., Hutchinson, J.R., and Farmer, C.G. 2013. Pulmonary anatomy in the Nile crocodile and the evolution of unidirectional airflow in Archosauria. PeerJ 1:e60 http://dx.doi.org/10.7717/peerj.60
March 19, 2013
I find myself reading a lot recently about “portable peer-review” — posts like Take me as I am, and my paper as it is? by scicurious at Neurotic Physiology, which excellently diagnoses a terrible, wasteful problem in scientific publishing:
My papers don’t often get in with minor revisions. Often I’ve got a ridiculously puffed head about my own work (apparently), and send them to places which reject them out of hand, or suggest major revisions and piles of new experiments which we just cannot do for various reasons. Then the paper ends up shuttled around. Send it in, wait 3 months, get rejected. Reformat (+2 mo or even more depending on collaborators and how much other crap you’ve got on your plate at the time) and send it out again. Years go by. In the meantime, suggested reviewers begin to hate me and I run out of new ones (only so many people in the field!).
I really wish there was a way to get out of this. This sort of thing contributes to the long lag times and slowness of scientific advance.
What a waste! What a drag on the progress of science! What a ridiculous situation we’ve got ourselves into, with our chasing-after-prestigious-journals games.
An inadequate solution
The solution proposed by scicurious is:
You submit a paper to a large umbrella of journals of several “tiers”. It goes out for review. The reviewers make their criticisms. Then they say “this paper is fine, but it’s not impactful enough for journal X unless major experiments A, B, and C are done. However, it could fit into journal Y with only experiment A, or into journal Z with only minor revisions”.
As an incremental improvement on the current system, this is good, if rather impractical to implement.
But it doesn’t go nearly far enough. It still wastes time by going to multiple journals, probably with different formatting requirements, requiring assessment (albeit more lightweight) by several editors. And it does all that in the name of getting a designer label onto the paper by placing it in a “good” journal.
What are we, fourteen?
High-school kids are dumb enough to judge other kids by how fashionable their clothes are, by the labels on them, by whether they’re the clothes other kids think are cool.
Have we really not got beyond that?
The ugly truth
Trying to get into “good” journals is an idiot game. (Notice I don’t say “an idiot’s game” — more on this distinction below.) Although the political and bean-counting value of getting into Nature is huge, the scientific value of getting into Nature is zero. A paper in Nature is literally no better at all than the same paper would be in PLOS ONE. (In fact, it’s probably less good, because it will be butchered to fit the draconian space requirements.) Spending time and effort in trying to get a given piece of research into Nature is just about the least useful thing that can be done for that research.
I think deep down everyone knows this. But of course scientists still waste innumerable hours formatting their work first for Nature, then for Science when it gets rejected, then for PNAS when it gets rejected again, and so on “down the ladder”. But that direction is only “down” by agreement. And the reason, of course, is that it’s widely (though not universally) believed that wearing these designer clothes is the way to get jobs and grants. That’s why people who are not idiots play this idiot game.
(Thank heavens for funders and assessors who explicitly state that the journal a work is published in has no effect on how it’s evaluated. You can find such statements from The Wellcome Trust, and regarding the Research Excellence Framework (REF). I want to see more granting and evaluation bodies make similar statements, and I look forward to seeing a university hiring policy that says the same.)
A better way
Happily for me, I don’t need a job or a grant, so I have the luxury of standing on the sidelines, shaking my head sagely yet smugly at the ridiculous manoeuvres happening on the pitch.
I admit to my shame that I have played the getting-into-a-good-journal game in the past, just because I blindly copied what I saw my colleagues doing without really thinking about it. One result is that our neck-anatomy paper was needlessly held up for more than four years. No-one benefits from these delays. They are a completely avoidable net loss for science.
No more. I am done with having my work rejected for spurious (i.e. non-scientific) reasons. I’m only planning on submitting to journals that don’t do that. I reject the idiot notion that the natural lifecycle of a piece of work involves multiple submit-review-reject cycles. From now on, my cycle is: do some work, write it up, submit it, see it published, move on to the next thing.
And note that “move on to the next thing” is a crucial step here. What really burns me is not the four-year delays on the papers I mentioned above, but all the other work that I’ve not done because I’ve been buggering about, excuse my French, with the corpses of these long-dead projects instead of getting the next thing done. And if that’s true for me, I bet it’s true for you, as well. Yes, you, reading this!
As of now, except in exceptional circumstances, my plan is only to submit to venues where I know scientifically sound work will be accepted. That means “megajournals” like PLOS ONE, PeerJ and (I don’t know, I will look into it) maybe some or all BMC journals. It also means edited volumes that I’m invited to contribute to (though they have their own issues). It probably also means certain other journals, such as PalArch, though they don’t make it explicit (and it would be good if they did).
First clarification: to be clear, I am not arrogant enough to think this means I will never again have a paper rejected. No doubt there will be occasions where I’ve made significant scientific errors, and reviewers will have to point those out and recommend rejection. I don’t mind that: it’s peer-review actually doing its job, and I’d rather fix those mistakes before publication. What I’m done with is rejections on the basis of “not impacty enough for this journal”, or the often equally specious “not a good fit”.
Second clarification: I don’t absolutely rule out exceptions. There might be occasions where, say, an impact-selective journal announces plans to put out a special volume that I want to be part of. I might submit to that; then again, I might not. I’ll judge it as it comes. But the point is, any exceptions will be exceptions. When I start thinking “where shall I send this?”, my list won’t start with Palaeontology and JVP. I’m glad to have got those notches on my bedpost, but I don’t feel any great need to go back to them.
Third clarification: I do understand that others might not be in a position to make the same leap. I am 99.7% certain that Darren won’t, for example, as he is convinced of the absolute necessity of Science‘n’Nature papers to advance his career. Matt, on the other hand, can and I think will — he’s got a tenure-track job at a university that he likes, has no plans to move on, and doesn’t need “prestigious” papers for his tenure case, only good ones.
(It pained me to have to make that distinction. What a stupid world, where “prestigious papers” and “good papers” are not synonymous, and don’t even overlap that much.)
But for people who, like me, don’t need to have an eye on the possible job-power of “prestige”, it seems obviously better to do what advances science best and fastest. And what a tragedy that advancing science isn’t what gets jobs.
February 27, 2013
I was reading Stephen Curry’s excellent summary of Monday’s Royal Society conference on “Open access in the UK and what it means for scientific research”. One point that Stephen made is:
[David Willetts's] argument is that pursuance of green OA leads to an unstable situation in which the cancellation of subscriptions (because readers have free access) drains the system of the funds needed to manage peer review and other publishing costs.
As an analysis of the difficulties of Green OA, this is admirably precise. But my eye was caught by that phrase “funds needed to manage peer review and other publishing costs.”
I think we should make an effort to wean ourselves off the habit of talking about “managing peer review and other publishing costs”. We all recognise that publishers do not provide peer-review — we do. But it’s also true that publishers don’t manage peer-review, either. Once again, we do that, by acting as unpaid academic editors.
I know that this is not news. We all know this. But a habit of speech is affording publishers a degree of credit that their efforts don’t merit, and that clouds the debate. Let’s apportion credit where it belongs.
Of course there are still “other publishing costs”. These are real and not negligible (even though PeerJ’s financial model suggests they are much less than we have sometimes assumed). It’s right that we should acknowledge that there really are publishing costs; and that whatever financial model we end up with will need to pay them somehow. But let’s make an effort to be more precise about what those publishing costs are. Managing peer-review is not one of them.
February 21, 2013
Matt and I were discussing “portable peer-review” services like Rubriq, and the conversation quickly wandered to the subject of PeerJ. Then I realised that that seems to be happening with all our conversations lately. Here’s a partial transcript.
Mike: I don’t see portable peer-review catching on. Who’s going to pay for it unless journals give an equal discount from APCs? And what journal is going to do that when they get the peer-review done for free anyway? If I was Elsevier, I wouldn’t say “OK, we’ll accept your external review and give you a $700 discount”, I’d charge the full $3000 and get two more free reviews done.
Plus, you know, I can get all the peer-review I want, free of charge, at PeerJ.
Matt: Yeah, that was pretty much my take. Even as I was sending that I thought about adding, “I wonder if this is one more thing that PeerJ will kill.” Only ‘abort’ is more the verb I want, in that I don’t see this ever getting off the ground anyway.
Mike: I think the world at large has yet to realise what a black hole PeerJ is, in the sense that it’s warping all the space near it. Pretty much every time I have any thought at all about scholarly publishing now, that thought is swiftly followed by “… or, wait, I should just use PeerJ for that.”
Matt: Exactly. It makes me think that we may be discovering the contours of that space-warping effect for some time, in that we’re used to one model, and that, among all the other things PeerJ does, it quacks something like that old model so we tend to think of it as a very cool duck, and not the freakin’ tyrannosaur that is going to eat scholarly publishing.
Also makes me think of that Paul Graham thing about noticing that the door is open, and there being a lag between the freedom to do something and the adoption of that newly facilitated action or behavior.
New thought: assuming PeerJ does not implode, will the established powers try to start PeerJ-alikes, and if so, what will they charge (amount), and what will they charge for (lifetime membership? decadal? annual? per 1000 pages published?).
Mike: Sweet metaphor. It’s true. It’s qualitatively different from other journals in two respects.
First, the APC is literally an order of magnitude less — and at that point, a quantitative difference becomes qualitative. Someone like [NAME REDACTED] would worry about paying $1350 to PLOS ONE, but didn’t even stop and think before saying, yeah, I’ll do that.
Second, the lifetime membership changes the game for all subsequent submissions. Now when you have a manuscript ready to go, your question isn’t going to be “where shall I send this?”, it’s going to be “is there a compelling reason not to send this to PeerJ?”
Legacy publishers won’t start PeerJ-alikes because they can’t. As noted in many SV-POW! posts, Elsevier takes about $5000 for each article they put behind a paywall. Slice away the 40% profit and you get $3000, which, not coincidentally, is what they charge as an APC. They have old, slow, encumbered systems and processes and top-heavy organisation. At $3000 they are only breaking even. They can’t compete at a PLOS-like $1350 level and they can’t even think about competing at PeerJ levels. If they offered a lifetime membership they’d have to ask $10k or something stupid.
I don’t think it’s that they don’t want to change. They can’t. They’ve ossified into 1990s companies running on 1990s software. It’s hard to steer a ship with a $2bn turnover, and impossible to replace the engines while still cruising.
Matt: I think it is probably a mistake to think that PeerJ will only encroach “upward”, onto the territory of more traditional journals (which is “all of them”). We’ve already talked about it taking business from arXiv (at least ours, although there is a large non-overlap in their respective subject domains — for now, anyway).
But my point is, the question, “Why wouldn’t I send this to PeerJ?” may not only kick in for papers that you might conceivably send elsewhere, but also for manuscripts that you might not conceivably send anywhere.
Matt: Right. And if one is on the fence, shove it on the PeerJ preprint server and see what people have to say.
Mike: I think it’s the first megajournal to have an associated preprint server, and that may yet prove the most important of all its innovations.
Matt: It feels almost … struggling to find the right word, in part because it’s late and I need to go sleep. “Seditious” is not quite it, and neither is “seductive”.
At that point we started talking about something else, so I never did find out what word Matt was groping for. But what’s only gradually become clear to us is how much PeerJ is changing how we think about the academic publishing process. It’s shaking us out of mental ruts that we didn’t even know we were in. Exciting.
Big news yesterday. Identical bills were introduced into the US House of Representatives and Senate that, if passed, will make federally-funded research freely available within six months of publication. Here’s the exact wording, from the press release on Mike Doyle’s (D-PA) website:
The Fair Access to Science and Technology Research Act (FASTR) would require federal agencies with annual extramural research budgets of $100 million or more to provide the public with online access to research manuscripts stemming from funded research no later than six months after publication in a peer-reviewed journal.
As Peter Suber explains here and here, FASTR is a stronger version of FRPAA, the Federal Research Public Access Act, which has been introduced in Congress three times before (2006, 2009, and 2012) but never come up for a vote. However, momentum for open access is gathering, both on the supply side with progressive new outlets like eLife and PeerJ, and on the demand side of, well, citizens demanding access to the research they’ve already paid for, and legislators increasingly agreeing with them. So FASTR has a real shot at getting to a vote, and if voted on, could well pass. Which would be awesome, because we all need access.
I am especially happy that FASTR has bipartisan sponsorship in both houses of Congress. The sponsoring representatives in the House are Mike Doyle (D-PA), Kevin Yoder (R-KS), and Zoe Lofgren (D-CA). The identical Senate bill was introduced by John Cornyn (R-TX) and Ron Wyden (D-OR). So we’ve got Democrats from deeply blue states and Republicans from deeply red states, which is awesome and totally appropriate, because this issue really does cut across party lines. And, hell, last year Elsevier managed to hire bipartisan sponsorship for their toxic — in more ways than one — and rapidly-killed Research Works Act, so it’s nicely symmetrical that politicians from both sides of the aisle have come together to sponsor that bill’s near-opposite.
What can you do? If you live in the US, contact your legislators and tell them to support FASTR! It takes almost no time at all and it makes a big difference. This afternoon I called all five of the sponsoring legislators to thank them, and I called my representative and both California senators to encourage them to support the bill, and all told it took just a little over half an hour. If you skipped the thank yous and just got in touch with the legislators who represent you, it could be done in 15 minutes, and you’ve probably wasted more time than that today daydreaming about dinosaurs. Here’s what you’ll need.
Encourage your legislators:
Thank the bills’ sponsors:
- Senator John Cornyn (R-TX): (202) 224-2934
- Senator Ron Wyden (D-OR): (202) 224-5244
- Representative Mike Doyle (D-PA): (202) 225-2135
- Representative Zoe Lofgren (D-CA): (202) 225-3072
- Representative Kevin Yoder (R-KS): (202) 225-2865
This is big. This matters. Send an email, pick up the phone, make a difference.
I didn’t have any really motivational “contact your legislators!” artwork, so the photos in this post are of papier-mâché dinosaurs (all stinkin’ theropods, I’m afraid) that I’m building with my son. More to come on that soon, but in the meantime, check this out and give it a whirl — after you contact your legislators!
February 14, 2013
There are a lot of things to love about PeerJ, which of course is why we sent our neck-anatomy paper there. I’ll discuss another time how its pricing scheme changes everything for Gold OA in the sciences, and maybe another time write about how well its papers display on mobile devices, or about the quick turnaround or 21st-century graphical design of the PDFs.
But among the most interesting things about PeerJ is its use of open peer review: reviewers are encouraged (though not required) to disclose their identity, and authors are encouraged (but also not required) to make the review history publicly available along with the final papers.
Uptake of open peer-review
Uptake of this option on the initial batch of 30 papers has been OK: 12 papers (40%) have had reviews posted:
(Articles 4, 18, 20, 23, 24, 32 and 35 do not exist — presumably they didn’t make it through review, typesetting and proofing in time for the launch. Or maybe they were rejected after having been assigned numbers.)
It’s interesting to see that most of the earliest papers did elect to publish reviews, but few of the later ones. This may reflect that the “early adopters” — the people who were quickest to get their submissions in after PeerJ opened its doors — also tend to be the more open-oriented people in other respects. It would be great if the authors of some of those other 18 papers were to make their reviews open, too: I’m sure it’s not too late.
What’s the value of open peer-review?
First, it improves transparency. In standard peer-review, three people (an editor and two reviewers) make a decision on behalf of the entire community, and no-one else can see what was done or why. In our case, John Hutchinson was our handling editor. We’ve often said on this blog how much we like and respect him, and it would be easy for someone on the outside to suspect that he’d been tempted to give us an easy ride. Anyone who reads the review history can see for themselves that he didn’t.
Second, it gives credit where it’s due. Reviewers who do a good job often plough in many hours of time that they could be spending on their own work, and it’s right that they should be recognised. In this case, Heinrich Mallison did a careful line-by-line critique of the whole 50-page manuscript and sent us a marked-up copy which was invaluable in making revisions. That sort of work should be acknowledged. [At the moment, that marked-up manuscript is not on the PeerJ review-history page. I've been told they're going to fix that.]
Third, it gives blame where it’s due. Some reviewers are excessively critical, or criticise in a non-constructive way that can’t be addressed in a revision; others are positive about the manuscript but make no real contribution to improving it. It’s right that reviewers who don’t do their job properly should be called out on that. (Of course anonymity can go some way towards shielding bad reviewers, but even then it’s often quite obvious who’s responsible for a given review.)
Fourth, it encourages good behaviour from reviewers. When they know their good work will receive credit and their bad work will reflect on them, they will have more incentive to do their best. Too often, reviews are seen as a tax on researchers’ time. Making them visible helps to bring them into the mainstream.
Fifth, it avoids wasted effort. Sometimes a review is a serious piece of work in its own right — Matt tells me that for one manuscript he was refereeing, he wrote a detailed critical review that was longer than the manuscript itself. Of course, no-one ever saw that work but the original author and his handling editor, which is a terrible waste. Publishing reviews fixes that.
Sixth, and this is crucial, open peer-review is a fantastic teaching tool. Matt has already explained how showing his Western students real reviews from a real process is going to help them much more than made-up ones.
What are the drawbacks of open peer-review?
Search me. I sure as heck can’t think of any.
Changing peer-review culture
PeerJ didn’t invent open peer-review — far from it. It’s been around for a while, practised by some BMC journals and also adopted more recently by eLife — another of the new breed of born-digital open-access journals. Another new publishing initiative, F1000 Research, is built entirely on the concept of open review.
The importance of PeerJ doing the same is that it helps to bring open peer-review into the mainstream. PeerJ’s going to be a big journal — its explicit goal is to be a PLOS ONE-scale megajournal. One of the many things it can achieve is to help shift the default reviewing culture to open.