March 20, 2014
In discussion of Samuel Gershman’s rather good piece The Exploitative Economics Of Academic Publishing, I got into this discussion on Twitter with David Mainwaring (who is usually one of the more interesting legacy-publisher representatives on these issues) and Daniel Allington (whom I don’t know at all).
I’ll need to give a bit of background before I reach the key part of that discussion, so here goes. I said that one of David’s comments was a patronising evasion, and that I expected better of him, and also that it was an explicit refusal to engage. David’s response was interesting:
First, to clear up the first half, I wasn’t at all saying that David hasn’t engaged in OA, but that in this instance he’d rejected engagement — and that his previous record of engaging with the issues was why I’d said “I expect better from you” at the outset.
Now with all that he-said-she-said out of the way, here’s the point I want to make.
David’s tweet quoted above makes a very common but insidious assumption: that a “nuanced” argument is intrinsically preferable to a simple one. And we absolutely mustn’t accept that.
We see this idea again and again: open-access advocates are criticised for not being nuanced, with the implication that this equates with not being right. But the right position is not always nuanced. Recruiting Godwin to the cause of a reductio ad absurdum, we can see this by asking the question “was Hitler right to commit genocide?” If you say “no”, then I will agree with you; I won’t criticise your position for lacking nuance. In this argument, nuance is superfluous.
[Tedious but probably necessary disclaimer: no, I am not saying that paywall-encumbered publishing is morally equivalent to genocide. I am saying that the example of genocide shows that nuanced positions are not always correct, and that therefore it's wrong to assume a priori that a nuanced position regarding paywalls is correct. Maybe a nuanced position is correct: but that is something to be demonstrated, not assumed.]
So when David says “What I do hold to is that a rounded view, nuance, w/ever you call it, is important”, I have to disagree. What matters is to be right, not nuanced. Again, sometimes the right position is nuanced, but there’s no reason to assume that from the get-go.
Here’s why this is dangerous: a nuanced, balanced, rounded position sounds so grown up. And by contrast, a straightforward, black-and-white one sounds so adolescent. You know, a straightforward, black-and-white position like “genocide is bad”. The idea of nuance plays on our desire to be respected. It sounds so flattering.
We mustn’t fall for this. Our job is to figure out what’s true, not what sounds grown-up.
January 3, 2014
The Scholarly Kitchen is the blog of the Society for Scholarly Publishing, and as such discusses lots of issues that are of interest to us. But a while back, I gave up commenting there for two reasons. First, it seemed rare that fruitful discussions emerged rather than mere echo-chamberism; and second, my comments would often be deliberately delayed for several hours “to let others get in first”, and sometimes discarded entirely for reasons that I found completely opaque.
But since June, when David Crotty took over as Editor-in-Chief from Kent Anderson, I’ve sensed a change in the wind: more thoughtful pieces, less head-in-the-sandism over the inevitable coming changes in scholarly publishing, and even genuinely fruitful back-and-forth in the comments. I was optimistic that the Kitchen could become a genuine hub of cross-fertilisation.
But then, this: The Jack Andraka Story — Uncovering the Hidden Contradictions Behind a Science Folk Hero [cached copy]. Ex-editor Kent Anderson has risen from the grave to give us this attack piece on a fifteen-year-old.
I’m frankly astonished that David Crotty allowed this spiteful piece on the blog he edits. Is Kent Anderson so big that no-one can tell him “no”? Embarrassingly, he is currently president of the SSP, which maybe gives him leverage over the blog. But I’m completely baffled over how Crotty, Anderson or anyone else can think this piece will achieve anything other than to destroy the reputation of the Kitchen.
As Eva Amsen says, “I got as far as the part where he says Jack is not a “layperson” because his parents are middle class. (What?) Then closed tab.” I could do a paragraph-by-paragraph takedown of Anderson’s article, as Michael Eisen did for Jeffrey Beall’s anti-OA coming-out letter; but it really doesn’t deserve that level of attention.
So why am I even mentioning it? Because Jack Andraka doesn’t deserve to be hunted by a troll. I’m not going to be the only one finally giving up on The Scholarly Kitchen if David Crotty doesn’t do something to control his attack dog.
Seriously, David. You’re better than that. You have to be.
Anderson, Kent. 2014. The Jack Andraka Story — Uncovering the Hidden Contradictions Behind a Science Folk Hero. The Scholarly Kitchen, Society for Scholarly Publishing. URL: http://scholarlykitchen.sspnet.org/2014/01/03/the-jack-andraka-story-uncovering-the-hidden-contradictions-of-an-oa-paragon/. Accessed: 2014-01-03. (Archived by WebCite® at http://www.webcitation.org/6MLiAaC9o)
December 17, 2013
I thought Elsevier was already doing all it could to alienate the authors who freely donate their work to shore up the corporation’s obscene profits. The thousands of takedown notices sent to Academia.edu represent at best a grotesque PR mis-step, an idiot manoeuvre that I thought Elsevier would immediately regret and certainly avoid repeating.
Which just goes to show that I dramatically underestimated just how much Elsevier hate it when people read the research they publish, and the lengths they’re prepared to go to when it comes to ensuring the work stays unread.
Now, they’re targeting individual universities.
The University of Calgary has just sent this notice to all staff:
The University of Calgary has been contacted by a company representing the publisher, Elsevier Reed, regarding certain Elsevier journal articles posted on our publicly accessible university web pages. We have been provided with examples of these articles and reviewed the situation. Elsevier has put the University of Calgary on notice that these publicly posted Elsevier journal articles are an infringement of Elsevier Reed’s copyright and must be taken down.
That’s it, folks. Elsevier have taken the gloves off. I’ve tried repeatedly to think the best of them, to interpret their actions in the most charitable light. I even wrote a four-part series on how they can regain the trust of researchers and librarians (part 0, part 1, part 2, part 3), under the evidently mistaken impression that that was what they wanted.
But now it’s apparent that I was far too optimistic. They have no interest in working with authors, universities, businesses or anyone else. They just want to screw every possible cent out of all parties in the short term.
Because this is, obviously, a very short-term move. Whatever feeble facade Elsevier have till now maintained of being partners in the ongoing process of research is gone forever. They’ve just tossed it away, instead desperately trying to cling onto short-term profit. In going after the University of Calgary (and I imagine other universities as well, unless this is a pilot harassment), Elsevier have declared their position as unrepentant enemies of science.
In essence, this move is an admission of defeat. It’s a classic last-throw-of-the-dice manoeuvre. It signals a recognition from Elsevier that they simply aren’t going to be able to compete with actual publishers in the 21st century. They’re burning the house down on their way out. They’re asset-stripping academia.
Elsevier are finished as a credible publisher. I can’t believe any researcher who knows what they’re doing is going to sign away their rights to Elsevier journals after this. I hope to see the editorial boards of Elsevier-encumbered journals breaking away from the dead-weight of the publisher, and finding deals that actually promote the work of those journals rather than actively hindering it.
And a reminder, folks: for those of you who want to publicly declare that you’re done with Elsevier, you can sign the Cost Of Knowledge declaration. That’s often been described as a petition, but it’s not. A petition exists to persuade someone to do something, but we’re not asking Elsevier to change. It’s evidently far, far too late for that. As a publisher, Elsevier is dead. The Cost of Knowledge is just a declaration that we’re walking away from the corpse before the stench becomes unbearable.
December 13, 2013
It’s now widely understood among researchers that the impact factor (IF) is a statistically illiterate measure of the quality of a paper. Unfortunately, it’s not yet universally understood among administrators, who in many places continue to judge authors on the impact factors of the journals they publish in. They presumably do this on the assumption that impact factor is a proxy for, or predictor of, citation count, which in turn is assumed to correlate with influence.
As shown by Lozano et al. (2012), the correlation between IF and citations is in fact very weak — r² is about 0.2 — and has been progressively weakening since the dawn of the Internet era and the consequent decoupling of papers from the physical journal that they appear in. This is a counter-intuitive finding: given that the impact factor is calculated from citation counts you’d expect it to correlate much more strongly. But the enormous skew of citation rates towards a few big winners renders the average used by the IF meaningless.
To bring this home, I plotted my own personal impact-factor/citation-count graph. I used Google Scholar’s citation counts of my articles, which recognises 17 of my papers; then I looked up the impact factors of the venues they appeared in, plotted citation count against impact factor, and calculated a best-fit line through my data-points. Here’s the result (taken from a slide in my Berlin 11 satellite conference talk):
I was delighted to see that the regression slope is actually negative: in my case at least, the higher the impact factor of the venue I publish in, the fewer citations I get.
There are a few things worth unpacking on that graph.
First, note the proud cluster on the left margin: publications in venues with impact factor zero (i.e. no impact factor at all). These include papers in new journals like PeerJ, in perfectly respectable established journals like PaleoBios, edited-volume chapters, papers in conference proceedings, and an arXiv preprint.
My most-cited paper, by some distance, is Head and neck posture in sauropod dinosaurs inferred from extant animals (Taylor et al. 2009, a collaboration between all three SV-POW!sketeers). That appeared in Acta Palaeontologica Polonica, a very well-respected journal in the palaeontology community but which has a modest impact factor of 1.58.
My next most-cited paper, the Brachiosaurus revision (Taylor 2009), is in the Journal of Vertebrate Paleontology – unquestionably the flagship journal of our discipline, despite its also unspectacular impact factor of 2.21. (For what it’s worth, I seem to recall it was about half that when my paper came out.)
In fact, none of my publications have appeared in venues with an impact factor greater than 2.21, with one trifling exception. That is what Andy Farke, Matt and I ironically refer to as our Nature monograph (Farke et al. 2009). It’s a 250-word letter to the editor on the subject of the Open Dinosaur Project. (It’s a subject that we now find profoundly embarrassing given how dreadfully slowly the project has progressed.)
Google Scholar says that our Nature note has been cited just once. But the truth is even better: that one citation is in fact from an in-prep manuscript that Google has dug up prematurely — one that we ourselves put on Google Docs, as part of the slooow progress of the Open Dinosaur Project. Remove that, and our Nature note has been cited exactly zero times. I am very proud of that record, and will try to preserve it by persuading Andy and Matt to remove the citation from the in-prep paper before we submit. (And please, folks: don’t spoil my record by citing it in your own work!)
What does all this mean? Admittedly, not much. It’s anecdote rather than data, and I’m posting it more because it amuses me than because it’s particularly persuasive. In fact if you remove the anomalous data point that is our Nature monograph, the slope becomes positive — although it’s basically meaningless, given that all my publications cluster in the 0–2.21 range. But then that’s the point: pretty much any data based on impact factors is meaningless.
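For anyone who wants to repeat this exercise on their own publication record, the mechanics are simple: pair each paper’s citation count with its venue’s impact factor and fit a least-squares line. Here’s a minimal sketch; the numbers are made up to echo the pattern described above (modest-IF venues with respectable citation counts, plus one high-IF venue cited just once), not my actual data:

```python
# Hypothetical (impact factor, citation count) pairs: four modest-IF venues,
# plus one high-IF outlier that has picked up only a single citation.
impact_factors = [0.0, 0.0, 1.58, 2.21, 34.48]
citations = [28, 12, 112, 70, 1]

def regression_slope(xs, ys):
    """Slope of the least-squares best-fit line through (xs, ys)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    covariance = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    variance = sum((x - mean_x) ** 2 for x in xs)
    return covariance / variance

# With the outlier included, the slope is negative: higher IF, fewer citations.
slope_all = regression_slope(impact_factors, citations)

# Drop the outlier and the slope flips positive (though, as noted above,
# with everything clustered in the 0-2.21 range it means very little).
slope_trimmed = regression_slope(impact_factors[:-1], citations[:-1])

print(f"slope with outlier:    {slope_all:.2f}")
print(f"slope without outlier: {slope_trimmed:.2f}")
```

One anomalous data point is enough to flip the sign of the fit, which is exactly the fragility the post is poking at.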
- Farke, Andrew A., Michael P. Taylor and Mathew J. Wedel. 2009. Sharing: public databases combat mistrust and secrecy. Nature 461:1053.
- Lozano, George A., Vincent Larivière and Yves Gingras. 2012. The weakening relationship between the impact factor and papers’ citations in the digital age. Journal of the American Society for Information Science and Technology 63(11):2140-2145. doi:10.1002/asi.22731 [arXiv preprint]
- Taylor, Michael P. 2009. A re-evaluation of Brachiosaurus altithorax Riggs 1903 (Dinosauria, Sauropoda) and its generic separation from Giraffatitan brancai (Janensch 1914). Journal of Vertebrate Paleontology 29(3):787-806.
- Taylor, Michael P., Mathew J. Wedel and Darren Naish. 2009. Head and neck posture in sauropod dinosaurs inferred from extant animals. Acta Palaeontologica Polonica 54(2):213-230.
November 26, 2013
Reading the Government’s comments on the recent BIS hearing on open access, I see this:
As a result of the Finch Group’s work, a programme devised by publishers, through the Publishers Licensing Society, and without funding from Government, will culminate in a Public Library Initiative. A technical pilot was successfully started on 9 September 2013
Following the link provided, I read:
The Report recommended that the existing proposal to make the majority of journals available for free to walk-in users at public libraries throughout the UK should be supported and pursued vigorously.
I’m completely, completely baffled by this. The idea that people should get in a car and drive to a special magic building in order to read papers that their own computers are perfectly capable of downloading is so utterly wrong-headed I struggle to find words for it. It’s a nineteenth-century solution to a twenty-first-century problem. In 2013.
Who thought this was a good idea?
And what were they smoking at the time?
I can tell you now that the take-up for this misbegotten initiative will be zero. Because although it’s a painful waste of time to negotiate the paywalls erected by those corporations we laughably call “publishers”, this “solution” will be more of a waste of time still. (Not to mention a waste of petrol.)
I can only assume that was always the intention of the barrier-based publishers on the Finch committee that came up with this initiative: to deliver a stillborn access initiative that they can point to and say “See, no-one wants open access”. Meanwhile, everyone will be over on Twitter using #icanhazpdf and other such 21st-century workarounds.
October 7, 2013
Suppose, hypothetically, that you worked for an organisation whose nominal goal is the advancement of science, but which has mutated into a highly profitable subscription-based publisher. And suppose you wanted to construct a study that showed the alternative — open-access publishing — is inferior.
What would you do?
You might decide that a good way to test publishers is by sending them an obviously flawed paper and seeing whether their peer-review weeds it out.
But you wouldn’t want to risk showing up subscription publishers. So the first thing you’d do is decide up front not to send your flawed paper to any subscription journals. You might justify this by saying something like “the turnaround time for traditional journals is usually months and sometimes more than a year. How could I ever pull off a representative sample?”.
Next, you’d need to choose a set of open-access journals to send it to. At this point, you would carefully avoid consulting the membership list of the Open Access Scholarly Publishers Association, since that list has specific criteria and members have to adhere to a code of conduct. You don’t want the good open-access journals — they won’t give you the result you want.
Instead, you would draw your list of publishers from the much broader Directory of Open Access Journals, since that started out as a catalogue rather than a whitelist. (That’s changing, and journals are now being cut from the list faster than they’re being added, but lots of old entries are still in place.)
Then, to help remove many of the publishers that are in the game only to advance research, you’d trim out all the journals that don’t levy an article processing charge.
But the resulting list might still have an inconveniently high proportion of quality journals. So you would bring down the quality by adding in known-bad publishers from Beall’s list of predatory open-access publishers.
Having established your sample, you’d then send the fake papers, wait for the journals’ responses, and gather your results.
To make sure you get a good, impressive result that will have a lot of “impact”, you might find it necessary to discard some inconvenient data points, omitting from the results some open-access journals that rejected the paper.
Now you have your results, it’s time to spin them. Use sweeping, unsupported generalisations like “Most of the players are murky. The identity and location of the journals’ editors, as well as the financial workings of their publishers, are often purposefully obscured.”
Suppose you have a quote from the scientist whose experiences triggered the whole project, and he said something inconvenient like “If [you] had targeted traditional, subscription-based journals, I strongly suspect you would get the same result”. Just rewrite it to say “if you had targeted the bottom tier of traditional, subscription-based journals”.
Now you have the results you want — but how will you ever get through peer-review, when your bias is so obvious? Simple: don’t submit your article for peer-review at all. Classify it as journalism, so you don’t need to go through review, nor to get ethical approval for the enormous amount of editors’ and reviewers’ time you’ve wasted — but publish it in a journal that’s known internationally for peer-reviewed research, so that uncritical journalists will leap to your favoured conclusion.
Last but not least, write a press-release that casts the whole study as being about the “Wild West” of Open-Access Publishing.
Everyone reading this will, I am sure, have recognised that I’m talking about John Bohannon’s “sting operation” in Science. Bohannon has a Ph.D. in molecular biology from Oxford University, so we would hope he’d know what actual science looks like, and that this study is not it.
Of course, the problem is that he does know what science looks like, and he’s made the “sting” operation look like it. It has that sciencey quality. It discusses methods. It has supplementary information. It talks a lot about peer-review, that staple of science. But none of that makes it science. It’s a maze of preordained outcomes, multiple levels of biased selection, cherry-picked data and spin-ridden conclusions. What it shows is: predatory journals are predatory. That’s not news.
Speculating about motives is always error-prone, of course, but it’s hard not to think that Science’s goal in all this was to discredit open-access publishing — just as legacy publishers have been doing ever since they realised OA was real competition. If that was their goal, it’s misfired badly. It’s Science’s credibility that’s been compromised.
Update (9 October)
Akbar Khan points out yet more problems with Bohannon’s work: mistakes in attributing where given journals were listed, DOAJ or Beall’s list. As a result, the sample may be more, or less, biased than Bohannon reported.
September 4, 2013
I recently handled the revisions on a paper that hopefully will be in press very soon. One of the review comments was “Be very careful not to make ad hominem attacks”.
I was a bit surprised to see that — I wasn’t aware that I’d made any — so I went back over the manuscript, and sure enough, there were no ad homs in there.
There was criticism, though, and I think that’s what the reviewer meant.
Folks, “ad hominem” has a specific meaning. An “ad hominem attack” doesn’t just mean criticising something strongly, it means criticising the author rather than the work. The phrase is Latin for “to the man”. Here’s a pair of examples:
- “This paper by Wedel is terrible, because the data don’t support the conclusion” — not ad hominem.
- “Wedel is a terrible scientist, so this paper can’t be trusted” – ad hominem.
What’s wrong with ad hominem criticism? Simply, it’s irrelevant to evaluation of the paper being reviewed. It doesn’t matter (to me as a scientist) whether Wedel strangles small defenceless animals for pleasure in his spare time; what matters is the quality of his work.
Note that ad hominems can also be positive — and they are just as useless there. Here’s another pair of examples:
- “I recommend publication of Naish’s paper because his work is explained carefully and in detail” — not ad hominem.
- “I recommend publication of Naish’s paper because he is a careful and detailed worker” — ad hominem.
It makes no difference whether Naish is a careful and detailed worker, or if he always buys his wife flowers on their anniversary, or even if he has a track-record of careful and detailed work. What matters is whether this paper, the one I’m reviewing, is good. That’s all.
As it happens the very first peer-review I ever received — for the paper that eventually became Taylor and Naish (2005) on diplodocoid phylogenetic nomenclature — contained a classic ad hominem, which I’ll go ahead and quote:
It seems to me perfectly reasonable to expect revisers of a major clade to have some prior experience/expertise in the group or in phylogenetic taxonomy before presenting what is intended to be the definitive phylogenetic taxonomy of that group. I do not wish to demean the capabilities of either author – certainly Naish’s “Dinosaurs of the Isle of Wight” is a praiseworthy and useful publication in my opinion – but I question whether he and Taylor can meet their own desiderata of presenting a revised nomenclature that balances elegance, consistency, and stability.
You see what’s happening here? The reviewer was not reviewing the paper, but the authors. There was no need for him or her to question whether we could meet our desiderata: he or she could just have read the manuscript and found out.
(Happy ending: that paper was rejected at the journal we first sent it to, but published at PaleoBios in revised form, and bizarrely is my equal third most-cited paper. I never saw that coming.)
July 9, 2013
Robin Osborne, professor of ancient history at King’s College, Cambridge, had an article in the Guardian yesterday entitled “Why open access makes no sense”. It was described by Peter Coles as “a spectacularly insular and arrogant argument”, by Peter Webster as an “Amazingly wrong-headed piece” and by Glyn Moody as “easily the most arrogant & dim-witted article I’ve ever read on OA”.
Here’s my response (posted as a comment on the original article):
At a time when the world as a whole is waking up to the open-access imperative, it breaks my heart to read this fusty, elitist, reactionary piece, in which Professor Osborne ends up arguing strongly for his own irrelevance. What a tragic lack of vision, and of ambition.
There is still a discussion to be had over what routes to take to universal open access, how quickly to move, and what other collateral changes need to be made (such as changing how research is evaluated for the purposes of job-searches and promotion). But Osborne’s entitled bleat is no part of that discussion. He has opted out.
The fundamental argument for providing open access to academic research is that research that is funded by the tax-payer should be available to the tax-payer.
That is not the fundamental argument for providing open access (although it’s certainly a compelling secondary one). The fundamental argument is that the job of a researcher is to create new knowledge and understanding; and that it’s insane to then take that new knowledge and understanding and lock it up where only a tiny proportion of the population can benefit from it. That’s true whether the research is funded publicly or by a private charity.
The problem is that the two situations are quite different. In the first case [academic research], I propose both the research questions and the dataset to which I apply them. In the second [commercial research] the company commissioning the work supplies the questions.
Osborne’s position here seems to be that because he is more privileged than a commercial researcher in one respect (being allowed to choose the subject of his research) he should also be more privileged in another (being allowed to restrict his results to an elite). How can such an attitude be explained? I find it quite baffling. Why would allowing researchers to choose their own subjects mean that funders would be happy to allow the results to be hidden from the world?
Publishing research is a pedagogical exercise, a way of teaching others
Yes. Which is precisely why there is no justification for withholding it from those others.
At the end of the day the paper published in a Gold open access journal becomes less widely read. [...] UK scholars who are obliged to publish in Gold open access journals will end up publishing in journals that are less international and, for all that access to them is cost-free, are less accessed in fact. UK research published through Gold open access will end up being ignored.
As a simple matter of statistics, this is flatly incorrect. Open-access papers are read, and cited, significantly more than paywalled papers. The meta-analysis of Swan (2010) surveyed 31 previous studies of the open-access citation advantage, showing that 27 of them found advantages of between 45% and 600%. I did a rough-and-ready calculation on the final table of that report, averaging the citation advantages given for each of ten academic fields (using the midpoints of ranges when given), and found that on average open-access articles are cited 176% more often than non-open articles — that is, 2.76 times as often.
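To spell out the arithmetic in that rough-and-ready calculation: an advantage of 176% means a multiplier of 1 + 176/100 = 2.76. The midpoint-averaging step looks like this — the field names and ranges below are placeholders for illustration, not Swan’s actual figures:

```python
def advantage_to_multiplier(pct):
    """Convert a percentage citation advantage into a how-many-times multiplier."""
    return 1 + pct / 100

def midpoint(value):
    """For a (low, high) range, take the midpoint; pass single values through."""
    return sum(value) / 2 if isinstance(value, tuple) else value

# Placeholder per-field citation advantages (percent); some are ranges.
field_advantages = {
    "field A": (45, 600),
    "field B": 150,
    "field C": (90, 300),
}

average_advantage = (
    sum(midpoint(v) for v in field_advantages.values()) / len(field_advantages)
)

print(f"176% advantage = cited {advantage_to_multiplier(176):.2f}x as often")
print(f"average advantage across placeholder fields: {average_advantage:.1f}%")
```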
There can be no such thing as free access to academic research. Academic research is not something to which free access is possible.
… because saying it twice makes it more true.
Like it or not, the primary beneficiary of research funding is the researcher, who has managed to deepen their understanding by working on a particular dataset.
Just supposing this strange assertion is true (which I don’t at all accept), I’m left wondering what Osborne thinks the actual purpose of his research is. On what basis does he think our taxes should pay him to investigate questions which (as he himself reminds us) he has chosen as being of interest to him? Does he honestly believe that the state owes him not just a living, but a living doing the work that he chooses on the subject that he chooses with no benefit accruing to anyone but him?
No, it won’t do. We fund research so that we can all be enriched by the new knowledge, not just an entitled elite. Open access is not just an economic necessity, it’s a moral imperative.
July 1, 2013
Want to get rich? Heck, yes! So which business should you be in?
According to Forbes, the most profitable U.S. industries, based on private-company annual statements filed for 2012/13, are:
- Oil and gas extraction: 24.1%
- Accounting, tax preparation, bookkeeping: 21.2%
- Commercial and industrial machine leasing: 18.5%
- Outpatient care centres: 17.8%
- Offices of dentists: 16.5%
Those are some healthy profit margins! It must be impossible to beat them — right?
Well, unless you’re a barrier-based legacy publisher, of course. Recall that the 2010/11 profit margins for the Big Four academic publishers came in at 32.4% for Informa, 33.9% for Springer, 36% for Elsevier and 42% for Wiley. And they’re still rising.
The average profit margin of the Big Four academic publishers — 36% — is half as good again as the highest profit margin Forbes could find for any industry.
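For anyone checking that claim, the arithmetic is straightforward:

```python
# 2010/11 profit margins for the Big Four academic publishers, in percent,
# as quoted above.
margins = {"Informa": 32.4, "Springer": 33.9, "Elsevier": 36.0, "Wiley": 42.0}

average_margin = sum(margins.values()) / len(margins)  # about 36%

# Forbes's best industry-wide margin was oil and gas extraction at 24.1%.
ratio = average_margin / 24.1  # about 1.5: "half as good again"

print(f"average Big Four margin: {average_margin:.1f}%")
print(f"vs. best Forbes industry: {ratio:.2f}x")
```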
Why do I keep banging on about this? Because, as Scott Aaronson wrote:
In my view, what’s missing at this point is mostly anger — a justified response to being asked to donate our time, not to Amnesty International or the Sierra Club, but to the likes of Kluwer and Elsevier. One would think such a request would anger everyone: conservatives and libertarians because of the unpaid labor, liberals because of the beneficiaries of that labor.
It’s genuinely great that the open-access movement has level heads like Peter Suber, Cameron Neylon and Stephen Curry. We’d get nowhere without advocates like them — people who can keep their cool in the face of the lies and propaganda of entrenched interests, and who can speak clearly and level-headedly to administrators and as-yet unconvinced researchers.
But dammit, these publishers are parasites, and we really do need to face it. Pretending they’re our partners is simply self-delusion, an academic Stockholm syndrome. What they want (to continue to walk away with 32.4–42% of all the money spent on academic publishing) is directly opposed to what we, our institutions, our funders, our governments and our taxpayers want. And every attempt we make to increase the availability and utility of the work we do, they oppose.
Time for us to walk away.
April 2, 2013
Juvenile sauropods have proportionally short cervicals (Wedel et al. 200: 368–369, Fig. 14, and Table 4)
And reformatting them as:
Juvenile sauropods have proportionally short cervicals : 368–369, Fig. 14, and Table 4.
Which doesn’t look right at all.
My question: how, when using numbered references, can I properly refer to page-range and figure number? Because there has to be a way — doesn’t there?
Surely it can’t be the case that in the culture of numbered-reference journals, you just don’t bother to specify with any more precision than pointing at a 46-page paper? I know Science ‘n’ Nature don’t care much about science or nature, but they can’t be that sloppy, can they? And if they are, I’d be horrified to find that the PLOS journals are so infected with me-too that they’re prepared to copy such poor practice.