September 24, 2013
I woke up this morning to find the preprint’s third substantial review waiting for me.
That means that this paper has now accumulated as much useful feedback in the twenty-seven hours since I submitted it as any previous submission I’ve ever made.
It’s worth reviewing the timeline here:
- Monday 23rd September, 1:19 am: I completed the submission process.
- 7:03 am: the preprint was published. It took less than six hours.
- 10:52 am: received a careful, detailed review from Emanuel Tschopp. It took less than four hours from publication, and so of course less than ten from submission.
- About 5:00 pm: received a second review, this one from Mark Robinson. (I don’t know the exact time because PeerJ’s page doesn’t show an actual timestamp, just “21 hours ago”.)
- Tuesday 24th September, about 4:00 am: received a third review, this from ceratopsian-jockey and open-science guru Andy Farke.
Total time from submission to receiving three substantial reviews: about 27 hours.
It’s worth contrasting that with the times taken to get from submission to the receipt of reviews — usually only two of them — when going through the traditional journal route. Here are a few of mine:
- Diplodocoid phylogenetic nomenclature at the Journal of Paleontology, 2004-5 (the first reviews I ever received): three months and 14 days.
- Revised version of the same paper at PaleoBios, 2005 (my first published paper): one month and 10 days.
- Xenoposeidon description at Palaeontology, 2006: three months and 19 days, although that included a delay as the handling editor sent it to a third, tie-breaking, reviewer.
- Brachiosaurus revision at the Journal of Vertebrate Paleontology, 2008: one month and 11 days.
- Sauropod neck anatomy (eventually to be published in a very different form in PeerJ) at Paleobiology: five months and 2 days.
- Trivial correction to the Brachiosaurus revision at the Journal of Vertebrate Paleontology, 2010: five months and 11 days, bizarrely for a half-page paper.
Despite the wide variations in submission-to-review time, it’s clear that at traditional journals you can expect to wait at least a month before getting any feedback at all on your submission. Even PeerJ took 19 days to get the reviews of our neck-anatomy paper back to us.
So I am now pretty much sold on the pre-printing route. As well as getting this early version of the paper out there so that other palaeontologists can benefit from it (and so that we can’t be pre-emptively plagiarised), issuing a preprint has meant that we’ve got really useful feedback very quickly.
I highly recommend this route.
By the way, in case anyone’s wondering, PeerJ Preprints is not only for manuscripts that are destined for PeerJ proper. They’re perfectly happy for you to use their service as a place to gather feedback for your work before submitting it elsewhere. So even if your work is destined for, say, JVP, there’s a lot to be gained by preprinting it first.
September 10, 2013
I just read Mick Watson’s post Why I resigned as PLOS ONE academic editor on his blog opiniomics. Turns out his frustration with PLOS ONE is not to do with his editorial work but with the long silences he faced as an author at that journal when trying to get a bad decision appealed.
I can totally identify with that, though my most frustrating experiences along these lines have been with other journals. (Yes, Paleobiology, I’m looking at you.) So here’s what I wrote in response (lightly edited from the version that appeared as a comment on the original blog).
There’s one thing that PLOS ONE could and should do to mitigate this kind of frustration: communicate. And so should all other journals.
At every step in the appeal process — and indeed the initial review process — an automated email should be sent to the author. So for the initial submission:
- “Your paper has been assigned an academic editor.”
- “Your paper has been sent out to a reviewer.”
- “An invited reviewer has declined to review; we will try another.”
- “An invited reviewer failed to accept or decline within two weeks; we will try another.”
- “A review has been submitted.”
- “A reviewer has failed to submit his report within four weeks; we are making contact again to ask for a quick response.”
- “A reviewer has failed to submit his report within six weeks; we have dropped that reviewer from this process and will try another.”
- “All reviews are in; the editor is considering the decision.”
- Decision letter.
And for the appeal:
- “Your appeal has been noted and is under consideration.”
- “We have contacted the original handling editor.”
- “The original handling editor has responded.”
- “The original handling editor has failed to respond after four weeks; we are escalating to a senior editor.”
- [perhaps] go back into some or all of the submission process.
- Decision letter.
Most if not all of these stages in the process already have workflow logic in the manuscript-handling system. There is no reason not to send the poor author emails when they happen — it’s no extra work for the editor or reviewers.
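To make this concrete, here is a minimal sketch, in Python, of the kind of event-triggered notification I have in mind. Everything in it is hypothetical: the event names, the addresses and the notify_author() helper are illustrative, not any real manuscript-handling system’s API. The point is only that if the workflow events already exist, emailing the author at each one is a single extra call.

```python
import smtplib
from email.message import EmailMessage

# Hypothetical mapping from workflow events to the one-line status messages
# listed above. A real manuscript-handling system would have its own names.
STATUS_MESSAGES = {
    "editor_assigned":   "Your paper has been assigned an academic editor.",
    "sent_to_reviewer":  "Your paper has been sent out to a reviewer.",
    "reviewer_declined": "An invited reviewer has declined to review; we will try another.",
    "review_received":   "A review has been submitted.",
    "all_reviews_in":    "All reviews are in; the editor is considering the decision.",
}

def notify_author(author_email: str, event: str, manuscript_id: str) -> None:
    """Send the author a one-line status email whenever a workflow event fires."""
    msg = EmailMessage()
    msg["Subject"] = f"[{manuscript_id}] Status update"
    msg["From"] = "no-reply@journal.example.org"   # hypothetical sender address
    msg["To"] = author_email
    msg.set_content(STATUS_MESSAGES[event])
    with smtplib.SMTP("localhost") as smtp:        # assumes a local mail relay
        smtp.send_message(msg)

# The existing workflow code would just add one call at each transition, e.g.:
# notify_author("author@example.edu", "review_received", "MS-2013-0142")
```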
Speaking as a veteran of plenty of long-drawn-out silences from journals that I’ve submitted to, I know that getting these messages would have made a big difference to me.
July 25, 2013
Last October, we published a sequence of posts about misleading review/reject/resubmit practices by Royal Society journals (Dear Royal Society, please stop lying to us about publication times; We will no longer provide peer reviews for Royal Society journals until they adopt honest editorial policies; Biology Letters does trumpet its submission-to-acceptance time; Lying about submission times at other journals?; Discussing Biology Letters with the Royal Society). As noted in the last of these posts, the outcome was that I had what seemed to be a fruitful conversation with Stuart Taylor, Commercial Director of the society.
Then things went quiet for some time.
On 8 May this year, I emailed Stuart to ask what progress there had been. At his request Phil Hurst (Publisher, The Royal Society) emailed me back on 10 May as follows:
Stuart has asked me to update you on the changes we have made following your conversation last year.
We have reviewed editorial procedures on Biology Letters. Further to this, we now provide Editors with the additional decision option of ‘revise’. This provides a middle way between ‘reject and resubmit’ and ‘accept with minor revisions’. Editors use all three options and it is entirely at their discretion which they select. ‘Revised’ papers retain the original submission date and we account for this in our published acceptance times.
In addition, we now publicise ‘first decision’ times rather than ‘first acceptance’ times on our website. We feel this is more meaningful as it gives prospective authors an indication of the time, irrespective of decision.
The first thing to say is, it’s great to see some progress on this.
The second thing is, I must apologise for my terrible slowness in reporting back. Phil emailed me again on 17 June to remind me to post, and it’s still taken me more than another month.
The third thing is, while this is definitely progress, it doesn’t (yet) fix the problem. That’s for two reasons.
The first problem is that so long as there is a “reject and resubmit” option that does not involve a brand new round of review (like a true resubmission), there is still a loophole by which editors can massage the journals’ figures. Of course, there is nothing wrong with “reject and resubmit” per se, but it does have to result in the resubmission being treated as a brand new submission — it can’t be a fig-leaf for what are actually minor revisions, as in the paper that first made me aware of this practice.
So I would urge the Royal Society either to get rid of the R&R option completely, replacing it with a simple “reject”; or to establish firm, explicit, transparent rules about how such resubmissions are treated.
The second problem is with the reporting. It’s true that the home pages of both Proc. B and Biology Letters do now publicise “Average receipt to first decision time” rather than the misleading old “Average receipt to acceptance time”. This is good news. Proc. B (though for some reason not Biology Letters) even includes a link to an excellent and very explicit page that gives three times (receipt to first decision, receipt to online publication and final decision to online publication) for five journals, and explains exactly what they mean.
Unfortunately, individual articles still include only Received and Accepted dates. You can see examples in recent papers both at Proc. B and at Biology Letters. As far as I can tell, there is no way to determine whether the Received date is for the original submission, or (as I can’t help but suspect) the minor revision that is disguised as a resubmission.
The solution for this is very simple (and was raised when I first talked to Stuart Taylor back in October): just give three dates: Received, Revised and Accepted. Then everything is clear and above board, and there is no scope for anyone to suspect wrongdoing.
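To illustrate how little is needed, here is a toy sketch in Python of an article record carrying all three dates, and the intervals anyone could compute from them. The dates are invented purely for illustration.

```python
from datetime import date

# Invented example dates, purely for illustration.
article = {
    "received": date(2013, 1, 7),   # original submission
    "revised":  date(2013, 4, 2),   # revised version received
    "accepted": date(2013, 4, 30),
}

# With all three dates published, a reader can see that the received-to-accepted
# interval includes a full round of revision, rather than having to suspect that
# "received" really means "resubmitted after an earlier rejection".
days_under_review = (article["revised"] - article["received"]).days    # 85
days_to_accept    = (article["accepted"] - article["received"]).days   # 113
print(days_under_review, days_to_accept)
```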
Here at SV-POW!, we are equal-opportunity criticisers of publishers: Springer, PLOS, Elsevier, the Royal Society, Nature, we don’t care. We call problems as we see them, where we see them. Here is one that has lingered for far too long. PLOS ONE’s journal information page says:
Too often a journal’s decision to publish a paper is dominated by what the Editor/s think is interesting and will gain greater readership — both of which are subjective judgments and lead to decisions which are frustrating and delay the publication of your work. PLOS ONE will rigorously peer-review your submissions and publish all papers that are judged to be technically sound.
Which is as we would expect it to be. But their reviewer guidelines page gives more detail as follows (emphasis added):
[Academic Editors] can employ a variety of methods, alone or in combination, to reach a decision in which they are confident:
- They can conduct the peer review themselves, based on their own knowledge and experience
- They can take further advice through discussion with other members of the editorial board
- They can solicit reports from further referees
As has been noted in comments on this blog, this first form, in which the editor makes the decision alone, is “unlike any other first-tier academic journal”. When I submitted my own manuscript to PLOS ONE a few weeks ago, I did it in the expectation that it would be reviewed in the usual way, by two experts chosen by the editor, who would then use those reviews in conjunction with her own expertise to make a decision. I’d hate to think it would go down the easier track, and so not be accorded the recognition that a properly peer-reviewed article gets. (Merely discussing with other editors would also not constitute proper peer-review in many people’s eyes, so only the third track is really the whole deal.)
The problem here is not a widespread one. Back when we first discussed this in any detail, about 13% of PLOS ONE papers slipped through on the editor-only inside lane. But more recent figures (based on the 1,837 manuscripts that received a decision between 1st July and 30th September 2010) say that only 4.2% of articles take this track. Evidently the process was by then in decline; it’s a shame we don’t have more recent numbers.
But the real issue here is lack of transparency. Four and a half years ago, Matt said “I really wish they’d just state the review track for each article–i.e., solo editor approved, multiple editor approved, or externally reviewed [...] I also hope that authors are allowed to preferentially request ‘tougher’ review tracks”.
It seems that still isn’t done. Looking at this article, which at the time of writing is the most recent one published by PLOS ONE, there is a little “PEER REVIEWED” logo up at the top, but no detail of which track was taken. PLOS themselves evidently take the line that all three tracks constitute peer-review, as “Academic Editors are not employees [...] they are external peer reviewers”.
So I call on PLOS ONE to either:
A. Eliminate the non-traditional peer-review tracks, or
B1. Allow submitting authors to specify they want the traditional track, and
B2. Specify explicitly on each published paper which track was taken.
“The benefit of published work is that if they have passed the muster of peer review future researchers can have faith in the results”, writes a commenter at The Economist. Such statements are commonplace.
I couldn’t disagree more. Nothing is more fatal to the scientific endeavour than having “faith” in a previously published result — as the string of failed replications in oncology and in social psychology is showing. See also the trivial but crucial spreadsheet error in the economics paper that underlies many austerity policies.
Studies have shown that peer-reviewers on average spend about 2-3 hours evaluating a paper that’s been sent their way. There is simply no way for even an expert to judge in that time whether a paper is correct: the best they can do is say “this looks legitimate, the authors seem to have gone about things the right way”.
Now that is a useful thing to be able to say, for sure. Peer review is important as a stamp of serious intent. But it’s a long way from a mark of reliability, and enormous damage is done by the widespread assumption that it means more than it does.
Remember: “has passed peer review” only really means “two experts have looked at this for a couple of hours, and didn’t see anything obviously wrong in it”.
Note. I initially wrote this as a comment on a pretty good article about open access at The Economist. That article is not perfect, but it’s essentially correct, and it makes me happy that these issues are now mainstream enough that it’s no longer a surprise when they’re covered by as mainstream an outlet as The Economist.
April 13, 2013
I was really excited to get an invitation to the evolution-or-revolution debate in Oxford, partly for historical reasons. I thought the Oxford Union was where C. S. Lewis, J. R. R. Tolkien and their friends held various debates. Sadly, it turns out I was mistaken, and it was merely the stomping ground for a bunch of lame politicians.
But anyway … It was a great experience — not only for the chance to meet online friends for the first time and to make a strong opening statement, but also for the chance to hear important ideas batted back and forth, both between the eight panel members (four on each team) and with the audience.
Apparently, video of the debate (and of all the talks) will shortly be available. Until then, here is a brief tour of some highlights.
First, we each had four minutes or so to make an opening statement. It was my privilege to go first, and I used essentially the essay from the last post — though in an effort to avoid bloke-reading-from-a-sheet-of-paper syndrome I allowed myself to drift a bit — not really to good effect. One addition was a mention of the steering-a-supertanker analogy.
Cameron Neylon then spoke for evolution, referring to a poem about South American revolutions entitled “Only the beards have changed” — warning that throwing out an old order can result in a new one that is essentially unchanged.
Jason Hoyt gave a short speech about how PeerJ is practically addressing some of the major failures of the prevailing system: slowness, secrecy surrounding review, and enormous overcharging. Those guys aren’t waiting for a revolution, they’re hosting one.
Jason Wilde, like Cameron, emphasised that revolutions historically have a habit of leaving things no better than they found them — to be fair, a point that I have also made at times. I was pleasantly surprised by how much of his statement I agreed with, and look forward to seeing it again when video comes out.
Amelia Andersdotter gave unquestionably the most impassioned, and bluntest, speech — which I had to admit warmed my heart with its clear-sightedness and honesty. She made the point that a revolution has already happened, and not to our advantage, as publishers have seized control of science and driven restrictive IP laws. Amelia’s contention is that the necessary revolution will be easier to achieve without publishers than with their help, and she would happily do away with them all. Tough stuff.
Graham Taylor’s contribution made quite a contrast. At its core lay the statement “science needs publishing, and publishing needs publishers”. The first half of that statement is unarguable. The second half does not follow, and its truth remains to be demonstrated. And of course even if it is true, it wouldn’t follow that we need the publishers we have now. (By the way, despite my history of eviscerating Taylor in print, he was very pleasant in person, and evidently didn’t bear a grudge.)
Paul Wicks’s opening line to the evolutioneers was “I’m here from the Internet to negotiate the terms of your surrender”. He laid out an essentially unanswerable case for access to research as a foundation of advances in health science. If I remember correctly, his opening statement got the biggest round of applause — and rightly so.
Finally in this first phase of the debate, David Tempest was left with the unenviable task of defending Elsevier’s actions as evolutionary rather than reactionary. Rather to my surprise, he adopted the unflattering (but apposite) metaphor of a supertanker heading for the rocks, but said that Elsevier have been engineering tugs to change its direction. (Is Mendeley meant to be one of those tugs?) Well, I wasn’t persuaded — but then I am increasingly of the opinion that the supertanker is not such a great analogy anyway, since the tanker doesn’t disgorge its cargo of poisonous filth until it hits the rocks.
The discussion period was based on five questions, each of which was initially addressed by a member of each team, then thrown open to the floor — at least, that was the intention, but it was pretty flexible. The questions:
- Does the public need access to academic publications?
- Are mandates good for science? Can we still have a journal “quality ladder”?
- In light of content-mining, do we need a new attitude to copyright?
- Will OA lead to higher or lower standards? Will it undermine peer-review?
- What system do we want to see in ten years?
I don’t now remember what was said in response to which question, and of course they overlapped a lot. So here are some highlights from this period, in no particular order.
The most applauded observation was Paul Wicks’s: that publications which get professors promoted are not the end goal of science. It’s all too easy to forget this (especially if you are an academic seeking promotion). We think of publications as being for other researchers; but they’re not, they’re for the world.
The biggest laugh was for Jason Hoyt’s comment on the simplest way to achieve universal access to Elsevier’s content: let them go out of business, and LOCKSS will take care of it. (Sadly, I’m not sure it’s that simple.)
In response to one of the questions, Jason Wilde noted that at both Nature’s Scientific Reports and at PLOS ONE — both of which review for technical correctness only, not for novelty or importance — the rejection rate is about 40%. (I heard informally from Jason Hoyt that the rate at PeerJ is similar, based on its so-far small sample.) Interesting that the rate seems so consistent, and distressing that so much of what gets submitted to journals is evidently just no darned good.
But the best moment was provoked by David Tempest’s mention of transparency in pricing. Stephen Curry, from the floor, asked Tempest to justify the fact that his librarian is not allowed to tell him what Imperial’s Elsevier subscriptions cost, due to a confidentiality agreement. Tempest gave an extraordinary response, in which excess verbiage was unable to conceal the core point “We do this to prevent prices from falling”. His explanation finished “otherwise prices would go down and down and down”, at which the eloquent Dr. Curry shrugged bemusedly. A big laugh, but also a lot of real anger.
At some stage near the end, the chair asked for a show-of-hands vote on whether the best approach to pursue is Gold or Green open access — not just as a long-term goal, but as the immediate short-term approach. The vote was about three to one in favour of Gold. (This was from a very mixed audience containing researchers, librarians and publishers in I would guess fairly equal numbers, and a fair few startup founders.)
At the end of the whole event, a vote was taken on who had “won” the debate. “Revolution” came out ahead by a factor of two or three, which was gratifying; but I don’t know how much that was because of the quality of the debating, and how much it was because that’s what people already thought. (I hope the latter.)
And finally …
At the dinner afterwards, the organisers had arranged for bottles of wine to be available at cost price (£7), on the basis that you just take a bottle when you want it, and later on they’ll come round and collect the money. A system very open to abuse, but it turned out that the open-access crowd paid for one more bottle than they drank.
So a happy ending.
The photos above were provided by Simon Bayly and Victoria Watson. My memories of the debate were supplemented by helpful tweets from Simon Bayly (again), Anna Sharman (and again), Victoria Watson (again and again and again), Bryan Vickery, Jonathan Webb (and again) and Andrew Miller.
April 10, 2013
Is there any justification for any of these practices other than tradition?
- Choosing titles that deliberately omit new taxon names.
- Slicing the manuscript to fit an arbitrary length limit.
- Squeezing the narrative into a fixed set of sections (Introduction, Methods, Results, Conclusion).
- Discarding or combining illustrations to avoid exceeding an arbitrary count.
- Flattening illustrations to monochrome.
- Using passive instead of active voice (especially in the singular: “we did this” may be acceptable, but not “I did this”, for some reason).
- Giving the taxonomic authority after first use of each formal name.
- Listing institutional abbreviations at end of the Introduction section, several pages into the paper.
- Using initials for names in the acknowledgements.
- Refusing to cite in-prep papers, dissertations and blogs (while accepting pers. comm.)
- Using numbered citations instead of Author+Date.
- Using journal abbreviations such as “J. Vertebr. Paleontol.” in the references.
- Formatting references to match each journal’s own idiosyncratic style.
- Having references at all, rather than links.
- Putting figure captions and tables at the end of the manuscript instead of where they occur.
- Arbitrarily relegating parts of the manuscript to Supplementary information.
- Submitting images in TIFF format (even for born-as-JPEG photos).
- Double-spacing manuscripts.
- Writing cover letters for submissions.
- Throwing away reviews once they’ve been handled.
- Allowing the final product to go behind a paywall.
Did I miss any?
April 3, 2013
Gah! No time, no time. I am overdue on some things, so this is a short pointer post, not the thorough breakdown this paper deserves. The short, short version: Schachner et al. (2013) is out in PeerJ, describing airflow in the lungs of Nile crocs, and showing how surprisingly birdlike croc lungs actually are. If you’re reading this, you’re probably aware of the papers by Colleen Farmer and Kent Sanders a couple of years ago describing unidirectional airflow in alligator lungs. Hang on to your hat, because this new work is even more surprising.
I care about this not only because dinosaurian respiration is near and dear to my heart but also because I was a reviewer on this paper, and I am extremely happy to say that Schachner et al. elected to publish the review history alongside the finished paper. I am also pleasantly surprised, because as you’ll see when you read the reviews and responses, the process was a little…tense. But it all worked out well in the end, with a beautiful, solid paper by Schachner et al., and a totally transparent review process available for the world to see. Kudos to Emma, John, and Colleen on a fantastic, important paper, and for opting for maximal transparency in publishing!
UPDATE the next morning: Today’s PeerJ Blog post is an interview with lead author Emma Schachner, where it emerges that open review was one of the major selling points of PeerJ for her:
Once I was made aware of the transparent peer review process, along with the fact that the journal is both open access and very inexpensive to publish in, I was completely sold. [...] The review process was fantastic. It was transparent and fast. The open review system allowed for direct communication between the authors and reviewers, generating a more refined final manuscript. I think that having open reviews is a great first step towards fixing the peer review system.
That post also links to this one, so now the link cycle is complete.
Schachner, E.R., Hutchinson, J.R., and Farmer, C.G. 2013. Pulmonary anatomy in the Nile crocodile and the evolution of unidirectional airflow in Archosauria. PeerJ 1:e60 http://dx.doi.org/10.7717/peerj.60
March 19, 2013
I find myself reading a lot recently about “portable peer-review” — posts like Take me as I am, and my paper as it is? by scicurious at Neurotic Physiology, which excellently diagnoses a terrible, wasteful problem in scientific publishing:
My papers don’t often get in with minor revisions. Often I’ve got a ridiculously puffed head about my own work (apparently), and send them to places which reject them out of hand, or suggest major revisions and piles of new experiments which we just cannot do for various reasons. Then the paper ends up shuttled around. Send it in, wait 3 months, get rejected. Reformat (+2 mo or even more depending on collaborators and how much other crap you’ve got on your plate at the time) and send it out again. Years go by. In the meantime, suggested reviewers begin to hate me and I run out of new ones (only so many people in the field!).
I really wish there was a way to get out of this. This sort of thing contributes to the long lag times and slowness of scientific advance.
What a waste! What a drag on the progress of science! What a ridiculous situation we’ve got ourselves into, with our chasing-after-prestigious-journals games.
An inadequate solution
The solution proposed by scicurious is:
You submit a paper to a large umbrella of journals of several “tiers”. It goes out for review. The reviewers make their criticisms. Then they say “this paper is fine, but it’s not impactful enough for journal X unless major experiments A, B, and C are done. However, it could fit into journal Y with only experiment A, or into journal Z with only minor revisions”.
As an incremental improvement on the current system, this is good, if rather impractical to implement.
But it doesn’t go nearly far enough. It still wastes time by going to multiple journals, probably with different formatting requirements, requiring assessment (albeit more lightweight) by several editors. And it does all that in the name of getting a designer label onto the paper by placing it in a “good” journal.
What are we, fourteen?
High-school kids are dumb enough to judge other kids by how fashionable their clothes are, by the labels on them, by whether they’re the clothes other kids think are cool.
Have we really not got beyond that?
The ugly truth
Trying to get into “good” journals is an idiot game. (Notice I don’t say “an idiot’s game” — more on this distinction below.) Although the political and bean-counting value of getting into Nature is huge, the scientific value of getting into Nature is zero. A paper in Nature is literally no better at all than the same paper would be in PLOS ONE. (In fact, it’s probably less good, because it will be butchered to fit the draconian space requirements.) Spending time and effort in trying to get a given piece of research into Nature is just about the least useful thing that can be done for that research.
I think deep down everyone knows this. But of course scientists still waste innumerable hours formatting their work first for Nature, then for Science when it gets rejected, then for PNAS when it gets rejected again, and so on “down the ladder”. But that direction is only “down” by agreement. And the reason, of course, is that it’s widely (though not universally) believed that wearing these designer clothes is the way to get jobs and grants. That’s why people who are not idiots play this idiot game.
(Thank heavens for funders and assessors who explicitly state that the journal a work is published in has no effect on how it’s evaluated. You can find such statements from The Wellcome Trust, and regarding the Research Excellence Framework (REF). I want to see more granting and evaluation bodies make similar statements, and I look forward to seeing a university hiring policy that says the same.)
A better way
Happily for me, I don’t need a job or a grant, so I have the luxury of standing on the sidelines, shaking my head sagely yet smugly at the ridiculous manoeuvres happening on the pitch.
I admit to my shame that I have played the getting-into-a-good-journal game in the past, just because I blindly copied what I saw my colleagues doing without really thinking about it. One result is that our neck-anatomy paper was needlessly held up for more than four years. No-one benefits from these delays. They are a completely avoidable net loss for science.
No more. I am done with having my work rejected for spurious (i.e. non-scientific) reasons. I’m only planning on submitting to journals that don’t do that. I reject the idiot notion that the natural lifecycle of a piece of work involves multiple submit-review-reject cycles. From now on, my cycle is: do some work, write it up, submit it, see it published, move on to the next thing.
And note that “move on to the next thing” is a crucial step here. What really burns me is not the four-year delays on the papers I mentioned above, but all the other work that I’ve not done because I’ve been buggering about, excuse my French, with the corpses of these long-dead projects instead of getting the next thing done. And if that’s true for me, I bet it’s true for you, as well. Yes, you, reading this!
As of now, except in exceptional circumstances, my plan is only to submit to venues where I know scientifically sound work will be accepted. That means “megajournals” like PLOS ONE, PeerJ and (I don’t know, I will look into it) maybe some or all BMC journals. It also means edited volumes that I’m invited to contribute to (though they have their own issues). It probably also means certain other journals, such as PalArch, though they don’t make it explicit (and it would be good if they did).
First clarification: to be clear, I am not arrogant enough to think this means I will never again have a paper rejected. No doubt there will be occasions where I’ve made significant scientific errors, and reviewers will have to point those out and recommend rejection. I don’t mind that: it’s peer-review actually doing its job, and I’d rather fix those mistakes before publication. What I’m done with is rejections on the basis of “not impacty enough for this journal”, or the often equally specious “not a good fit”.
Second clarification: I don’t absolutely rule out exceptions. There might be occasions where, say, an impact-selective journal announces plans to put out a special volume that I want to be part of. I might submit to that; then again, I might not. I’ll judge it as it comes. But the point is, any exceptions will be exceptions. When I start thinking “where shall I send this?”, my list won’t start with Palaeontology and JVP. I’m glad to have got those notches on my bedpost, but I don’t feel any great need to go back to them.
Third clarification: I do understand that others might not be in a position to make the same leap. I am 99.7% certain that Darren won’t, for example, as he is convinced of the absolute necessity of Science‘n’Nature papers to advance his career. Matt, on the other hand, can and I think will — he’s got a tenure-track job at a university that he likes, has no plans to move on, and doesn’t need “prestigious” papers for his tenure case, only good ones.
(It pained me to have to make that distinction. What a stupid world, where “prestigious papers” and “good papers” are not synonymous, and don’t even overlap that much.)
But for people who, like me, don’t need to have an eye on the possible job-power of “prestige”, it seems obviously better to do what advances science best and fastest. And what a tragedy that advancing science isn’t what gets jobs.
February 27, 2013
I was reading Stephen Curry’s excellent summary of Monday’s Royal Society conference on “Open access in the UK and what it means for scientific research”. One point that Stephen made is:
[David Willetts's] argument is that pursuance of green OA leads to an unstable situation in which the cancellation of subscriptions (because readers have free access) drains the system of the funds needed to manage peer review and other publishing costs.
As an analysis of the difficulties of Green OA, this is admirably precise. But my eye was caught by that phrase “funds needed to manage peer review and other publishing costs.”
I think we should make an effort to wean ourselves off the habit of talking about “managing peer review and other publishing costs”. We all recognise that publishers do not provide peer-review — we do. But it’s also true that publishers don’t manage peer-review, either. Once again, we do that, by acting as unpaid academic editors.
I know that this is not news. We all know this. But this habit of speech affords publishers a degree of credit that their efforts don’t merit, and that clouds the debate. Let’s apportion credit where it belongs.
Of course there are still “other publishing costs”. These are real and not negligible (even though PeerJ’s financial model suggests they are much less than we have sometimes assumed). It’s right that we should acknowledge that there really are publishing costs, and that whatever financial model we end up with will need to pay them somehow. But let’s make an effort to be more precise about what those publishing costs are. Managing peer-review is not one of them.