January 26, 2017
It’s now been widely discussed that Jeffrey Beall’s list of predatory and questionable open-access publishers — Beall’s List for short — has abruptly gone away. No-one really knows why, but there are rumblings that he has been hit with a legal threat that he doesn’t want to defend.
To get this out of the way: it’s always a bad thing when legal threats make information quietly disappear; to that extent, at least, Beall has my sympathy.
That said — overall, I think making Beall’s List was probably not a good thing to do in the first place, being an essentially negative approach, as opposed to DOAJ’s more constructive whitelisting approach. But under Beall’s sole stewardship it was a disaster, due to his well-known ideological opposition to all open access. So I think it’s a net win that the list is gone.
But, more than that, I would prefer that it not be replaced.
Researchers need to learn the very very basic research skills required to tell a real journal from a fake one. Giving them a blacklist or a whitelist only conceals the real issue, which is that you need those skills if you’re going to be a researcher.
Finally, and I’m sorry if this is harsh, I have very little sympathy with anyone who is caught by a predatory journal. Why would you be so stupid? How can you expect to have a future as a researcher if your critical thinking skills are that lame? Think Check Submit is all the guidance that anyone needs; and frankly much more than people really need.
Here is the only thing you need to know, in order to avoid predatory journals, whether open-access or subscription-based: if you are not already familiar with a journal — because it’s published research you respect, or colleagues who you respect have published in it or are on the editorial board — then do not submit your work to that journal.
It really is that simple.
So what should we do now Beall’s List has gone? Nothing. Don’t replace it. Just teach researchers how to do research. (And supervisors who are not doing that already are not doing their jobs.)
September 18, 2016
I have before me the reviews for a submission of mine, and the handling editor has provided an additional stipulation:
Authority and date should be provided for each species-level taxon at first mention. Please ensure that the nominal authority is also included in the reference list.
In other words, the first time I mention Diplodocus, I should say “Diplodocus Marsh 1878”; and I should add the corresponding reference to my bibliography.
What do we think about this?
I used to do this religiously in my early papers, just because it was the done thing. But then I started to think about it. To my mind, it used to make a certain amount of sense 30 years ago. But surely in 2016, if anyone wants to know about the taxonomic history of Diplodocus, they’re going to go straight to Wikipedia?
I’m also not sure what the value is in providing the minimal taxonomic-authority information rather than, say, morphological information. Anyone who wants to know what Diplodocus is would do much better to go to Hatcher 1901, so wouldn’t we serve readers better if we referred to “Diplodocus (Hatcher 1901)”?
Now that I come to think of it, I included “Giving the taxonomic authority after first use of each formal name” in my list of idiot things that we do in our papers out of sheer habit, three and a half years ago.
Should I just shrug and do this pointless busywork to satisfy the handling editor? Or should I simply refuse to waste my time adding information that will be of no use to anyone?
- Hatcher, John B. 1901. Diplodocus (Marsh): its osteology, taxonomy and probable habits, with a restoration of the skeleton. Memoirs of the Carnegie Museum 1:1-63 and plates I-XIII.
- Marsh, O. C. 1878. Principal characters of American Jurassic dinosaurs, Part I. American Journal of Science, series 3 16:411-416.
July 4, 2015
I got back this lunchtime from something a bit different in my academic career. I attended Court and Spark: an International Symposium on Joni Mitchell, hosted by the University of Lincoln and organised by Ruth Charnock.
I went mostly because I love Joni Mitchell’s music. But also partly because, as a scientist, I have a necessarily skewed perspective on scholarship as a whole, and I want to see whether I could go some way to correcting that by immersing myself in the world of the humanities for a day.
My own talk was on “Musical progress and emotional stasis from Blue (1971) to Hejira (1976)”. I’ve posted the abstract and the slides on my publications list, and you can get a broad sense of what was in it from this blog-post about Hejira which talks a lot about Blue. (The talk was inspired by that blog-post, but it had a lot of new material as well.) I plan to write it up as a paper when I get a moment.
I was up in session 3, after lunch, so I’d had a couple of sessions to get used to how things were done. As far as I can tell, it seemed to go over pretty well, and there was some good discussion afterwards.
So how does a humanities conference stack up against a science one?
They were much less different than I’d imagined they would be. The main difference is that talks are called “papers”. As in “Did you hear the paper about X?”, or “I gave a paper on Y”. There was perhaps a little more time dedicated to discussion than at SVP or SVPCA.
Because I didn’t know how to dress, I erred on the side of conservative. As a result, I was the only man in the building wearing a tie, and was consequently the most overdressed person present — something that has never happened to me before, and likely never will again. (I typically wear a tie two or three times a year.)
All in all I had a great time. I’m currently in the process of trying to get my eldest son to appreciate Joni (he’s more of a prog-metal fan, which I can respect); against that backdrop, it was great to be surrounded by people who get it, who know all the repertoire, and who recognise allusions dropped into conversation. Also: beers with fellow-travellers between the main conference and the Malka Marom interview event in the evening; wine reception afterwards; Chinese food after that; after-party when we couldn’t eat any more food. (It was nice being invited along to that, given that I’d never met any of the people before yesterday, and had only even exchanged email with one of them.)
I’d had to get up at 4:45 in the morning to drive up to Lincoln in time for the conference, so all in all it was a long day. But well worth doing.
I’d do it again in a heartbeat.
January 25, 2013
As noted a few days ago, I recently had an article published on the Guardian site entitled Hiding your research behind a paywall is immoral. The reaction to that article was fascinating, exhilarating and distressing in fairly equal parts. Fascinating because it generated a fertile stream of 156 comments, most of them substantial. Exhilarating because of some very positive responses. And distressing because some people who I like and respect absolutely hated it.
Those people’s main objections were nicely summed up by a response piece by Chris Chambers, published a few days later on the same site: Those who publish research behind paywalls are victims not perpetrators. It’s a good, measured article, and I highly recommend it — not least because it’s apparent that while Chris thinks my tactics are all off, he makes it clear that he shares the goal of universal open access and further significant reform in scholarly communications.
So I’d like to clarify a couple of points that I didn’t make clearly enough in the original article (but which I addressed in two separate comments on Chris’s article); and then I want to throw the floor open to see if we can hack through the more difficult issues that it raises.
Do scientists who follow accepted publishing practices deserve to be labelled “immoral”, as Taylor claims?
The intention of my original article was not to say that the individuals who allow their work to go behind paywalls are immoral people, but that the act itself is immoral. If that feels like a fine distinction, it’s not. For a variety of pragmatic reasons, essentially moral people commit immoral acts all the time. At the trivial end of the scale, something as insignificant as not bothering to sort the recycling; at the other end, while no-one would claim dropping atomic bombs on civilian populations is an essentially moral act, many people would accept that in the context of WWII, the Hiroshima and Nagasaki bombs were justified or even necessary. (And please: no-one cite this as “Mike says publishing behind a paywall is exactly like nuking civilians”!)
So my goal in the original piece was not to castigate individuals as immoral people, but to push us all into deliberately thinking through the moral implications of our publication choices — decisions that all too many scientists still make without thought for the accessibility or otherwise of their work. I stand by my original assertion that it’s immoral to accept public funding to do research, then hide the fruits of that research from the public that paid for it. But that doesn’t mean that I am “labelling” anyone. My apologies if that distinction wasn’t clear.
To summarise the intent of my article: the decision of where to publish is a moral one. Please, all you moral people out there, make a moral choice.
The curse of journal prestige
And so we come to the vexed subject of journal rank. First of all, it’s encouraging to see that most people seem to agree at least that the effects of journal rank are A Bad Thing — that judging scientists by what journals they have published in is at best corner cutting, if not outright dereliction. This is not controversial any more, if it ever was: the ridiculous experience of PLOS Medicine as they negotiated (yes, negotiated) their initial impact factor tells you all you need to know about such metrics.
As Chris wrote in his article:
In many (if not most) fields, the journals in which we publish are judged to be an indicator of professional quality. […] Science is bad at being scientific: the actual quality takes second place to the perception of quality, which is so strong that journal rank creates its own biosphere.
The problem here seems to be one of wrenching an entire community out of a delusion at once. Because the things I hear over and over again are: 1. “Of course, I personally would never judge a paper by what journal it’s in, or judge a scientist by what journals her papers are in”. And 2. “I need to get my papers in glamourous journals so that people will judge me well”. Everyone is worried about being judged by the very criterion that they insist they would never judge by.
I don’t pretend to have a solution to this absurd circle. Well, I do: we should all just stop it. But I don’t have a strategy for reaching that solution. One thing that is infuriating to see is that even when the REF and the Wellcome Trust so very explicitly say “We don’t care what journal your work is in”, researchers continue to disbelieve them. I would love to hear constructive thoughts on what can be done about this.
One useful contribution would be for more assessment exercises, funding bodies and recruitment programs to explicitly state that they will be assessing the quality of work, not the reputation of the place where it’s published.
Who is going to make change happen?
And so we come to another disturbing circle. Chris wrote:
[Publishing only in OA journals] amounts to sacrificing career opportunities (promotions, grants, research time) for the good of the cause. […] Beyond the considerations of self-preservation, scientists are impelled to protect and support younger researchers under their wings.
I accept that there is truth in this — at least, more than I did when I wrote the original article. (That’s largely due to an email exchange behind the scenes with someone who is welcome to identify himself or herself if he or she wishes; otherwise I’ll preserve anonymity.)
But here’s what worries me about it. I hear researchers at all stages of their careers finding reasons to keep feeding paywalls. Early career researchers say “Well, I’m just getting started, I have to establish my reputation first”. People who are running their own labs say “I have to aim for prestige, for the sake of my students”. Long-established senior figures are in most cases still sceptical of this new-fangled open access thing (and indeed of anything not printed on paper).
So where is the change going to come from?
I must say it warms my heart when I read clear declarations from young researchers. Yesterday Erin McKiernan tweeted:
I wanted to cheer. And Scott Weingart commented on my original article:
Enough idealistic students like us, and maybe something will actually change, rather than just having us all live in a self-perpetuating system which we all know is flawed but are too worried about our careers to do anything about.
These are good people. I hope with all my heart that they get the careers their principled stands deserve.
We’re in a very strange situation now. As Scott points out, none of us wants to propagate the current situation, where where you publish counts as well as — or even more than — what you publish. Yet we conspire to keep the circle unbroken. Chris’s article says that people who publish behind paywalls are victims rather than perpetrators; but if they are victims, then who are they victims of? The very same system that they are part of. They are both victims and perpetrators.
Folks, we as an academic community are doing this to ourselves.
How can we stop it?
January 17, 2013
My new article is up at the Guardian. This time, I have taken off the Conciliatory Hat, and I’m saying it how I honestly believe it is: publishing your science behind a paywall is immoral. And the reasons we use to persuade ourselves it’s acceptable really don’t hold up.
Because for all that we rightly talk about the financial efficiencies of open access, when it comes right down to it OA is primarily a moral, or if you prefer ideological, issue. It’s not really about saving money, though that’s a welcome side-effect. It’s about doing what’s right.
I’m expecting some kick-back on this one. Fire away; I’ll enjoy the discussion.
October 10, 2012
The reason most of my work is in the form of journal articles is that I didn’t know there were other ways to communicate. Now that I know that there are other and in some ways demonstrably better ways (arXiv, etc.), my enthusiasm for sending stuff to journals is flagging. Whereas before I was happy to do it and the tenure beans were a happy side-effect, now I can see that the tenure beans are in fact shackles preventing me from taking a better path.
I’ve recently written about my increasing disillusionment with the traditional pre-publication peer-review process [post 1, post 2, post 3]. By coincidence, it was in between writing the second and third in that series of posts that I had another negative peer-review experience — this time from the other side of the fence — which has left me even more ambivalent about the way we do things.
On 17 July I was asked to review a paper for Biology Letters. Having established that it was to be published as open access, I agreed, was sent the manuscript, and two days later sent a response that recommended acceptance after only minor revision. Eleven days later, I was sent a copy of the editor’s decision — a message that included all three reviewers’ comments. I can summarise those reviewers’ comments by directly quoting as follows:
Reviewer 1: “It is good to have this data published with good histological images. I have only minor comments – I think the ms should generally be accepted as it is.”
Reviewer 2 (that’s me): “This is a strong paper that brings an important new insight into a long-running palaeobiological issue […] and should be published in essentially its current form.”
Reviewer 3: “This manuscript reports exciting results regarding sauropod biomechanics […] The only significant addition I feel necessary is to the concluding paragraph.”
So imagine my surprise when the decision letter said:
I am writing to inform you that your manuscript […] has been rejected for publication in Biology Letters.
This action has been taken on the advice of referees, who have recommended that substantial revisions are necessary. With this in mind we would like to invite a resubmission, provided the comments of the referees are taken into account. This is not a provisional acceptance.
The resubmission will be treated as a new manuscript.
I can’t begin to imagine how they turned three “accept with very minor revisions” reviews into “your manuscript has been rejected … on the advice of referees, who have recommended that substantial revisions are necessary”.
In fact, let’s dump the “I can’t imagine how” euphemism and say it how it is: “reviewers recommended substantial revisions” is an outright lie. The reviewers recommended no such thing. The rejection can only be because it’s what the editor wanted to do in spite of the reviewers’ comments not because of them. It left me wondering why I bothered to waste my time offering them an opinion that they were only ever going to ignore.
Then six days ago I heard from the lead author, who had just had a revised version of the same manuscript accepted. (It had not come back to me for review, as the editor had said would happen with any resubmission).
The author wrote to me:
The paper will be published (open access) at the 3rd of Octobre. When I had submitted the corrected version of the ms acceptance was only a formality. So [name] was right, they just want to keep time between submission and publishing date short.
Well. We have a word for this. We call it “lying”. When the editor wrote “your manuscript […] has been rejected for publication in Biology Letters … With this in mind we would like to invite a resubmission … This is not a provisional acceptance. The resubmission will be treated as a new manuscript”, what she really meant was “your manuscript […] has been provisionally accepted, please send a revision. The resubmission will not be treated as a new manuscript”.
I find this lack of honesty disturbing.
Because we’re not talking here about some shady, obscure little third-world publisher that no-one’s ever heard of with fictional people on the editorial board. We’re talking about the Royal Freaking Society of London. We’re talking about a journal (Biology Letters) that was calved off a journal (Proceedings B) that emerged from the oldest continuously published academic journal in the world (Philosophical Transactions). We’re talking about nearly three and a half centuries of academic heritage.
And they’re lying to us about their publication process.
When did they get the idea that this was acceptable?
And what else are they lying to us about? Can we trust (for example) that when editors or members submit papers, they are subjected to the same degree of rigorous filtering as every other submission? I would have assumed that, yes, of course they do. But I just don’t know any more.
The paper in question is Klein et al.’s (2012) histological study confirming that the bony cervical ribs of sauropods are, as we suspected, ossified tendons — as we assumed in our recently arXiv’d sauropod-neck paper. I am delighted to be able to say that it is freely available. At the bottom of the first page, it says “Received 21 August 2012; Accepted 13 September 2012”, for a submission-to-acceptance time of 23 days. But I know that the initial submission — and remember, the final published version is essentially identical to that initial submission — was made before 17 July, because that’s when I was asked to provide a peer-review. Honest reporting would give a submission-to-acceptance time of 58 days, which is two and a half times as long as the claimed figure.
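The arithmetic is easy to verify. As a quick sketch (the dates are the ones quoted above; 17 July is the latest possible submission date, since that is when the review request went out), Python’s standard datetime module gives:

```python
from datetime import date

# Dates as reported on the published paper
claimed_received = date(2012, 8, 21)   # "Received 21 August 2012"
accepted         = date(2012, 9, 13)   # "Accepted 13 September 2012"

# The manuscript was already with reviewers by this date,
# so the true submission can be no later than this
review_requested = date(2012, 7, 17)

claimed_turnaround = (accepted - claimed_received).days
actual_turnaround  = (accepted - review_requested).days

print(claimed_turnaround)  # 23 days, as the journal's printed dates imply
print(actual_turnaround)   # 58 days, counting from the latest possible true submission
```

Even taking 17 July as the submission date (it was in fact earlier), the honest figure is 58 days against the claimed 23.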
Now the only reason for a journal to report dates of submission and acceptance at all is to convey the speed of turnaround, and lying about that turnaround time completely removes any utility those numbers might have. It would be better to not report them at all than to fudge the data.
This is another way that the high-impact fast-turnaround publishing system is so ridiculously gamed that it actually hurts science. We have the journal lying to authors about the status of their manuscripts so that it can then lie to the readers about its turnaround times. That’s deeply screwed up. And it’s hard for authors to blow the whistle — they don’t want to alienate the journals and the editors who have some veto power over their tenure beans, and reviewers don’t usually have all the information. The obvious solution is to make the peer-review process more open, and to make editorial decisions more transparent.
That, really, is only what we’d expect from the Royal Society. Isn’t it?
Note. Nicole Klein did not know I was going to post about this. I want to make that clear so that no-one at the Royal Society thinks that she or any of her co-authors is making trouble. All the trouble is of my making (and, more to the point, the Royal Society’s). Someone really has to shine a light on this misbehaviour.
Update (12 March 2014)
I should have noted this before, but on 10 May 2013, the Royal Society sent me an update, explaining some improvements in their process. But as noted in my write-up, it doesn’t actually solve the problem. Doing so would simply require giving three dates: Received, Revised and Accepted. But as I write this, new Proc. B articles still only show Received and Accepted dates.
- Klein, Nicole, Andreas Christian, and P. Martin Sander. 2012. Histology shows that elongated neck ribs in sauropod dinosaurs are ossified tendons. Biology Letters, online first. doi:10.1098/rsbl.2012.0778
Subsequent posts discuss how this issue is developing:
- We will no longer provide peer reviews for Royal Society journals until they adopt honest editorial policies
- Biology Letters does trumpet its submission-to-acceptance time
- Lying about submission times at other journals?
- Discussing Biology Letters with the Royal Society
- The Royal Society has taken some steps to improving reporting of submit/resubmit/accept times
- Fumbling towards transparency: the Royal Society’s “reject & resubmit” and submitted/published dates