Last night, I did a Twitter interview with Open Access Nigeria (@OpenAccessNG). To make it easy to follow in real time, I created a list whose only members were me and OA Nigeria. But because Twitter lists posts in reverse order, and because each individual tweet is encumbered with so much chrome, it’s rather an awkward way to read a sustained argument.

So here is a transcript of those tweets, only lightly edited. They are in bold; I am in regular font. Enjoy!

So @MikeTaylor Good evening and welcome. Twitterville wants to meet you briefly. Who is Mike Taylor?

In real life, I’m a computer programmer with Index Data, a tiny software house that does a lot of open-source programming. But I’m also a research scientist — a vertebrate palaeontologist, working on sauropods: the biggest and best of the dinosaurs. Somehow I fit that second career into my evenings and weekends, thanks to a very understanding wife (Hi, Fiona!) …

As of a few years ago, I publish all my dinosaur research open access, and I regret ever having let any of my work go behind paywalls. You can find all my papers online, and read much more about them on the blog that I co-write with Matt Wedel. That blog is called Sauropod Vertebra Picture of the Week, or SV-POW! for short, and it is itself open access (CC BY).

Sorry for the long answer, I will try to be more concise with the next question!

Ok @MikeTaylor That’s just great! There’s been so much noise around twitter, the orange colour featuring prominently. What’s that about?

Actually, to be honest, I’m not really up to speed with open-access week (which I think is what the orange is all about). I found a while back that I just can’t be properly on Twitter, otherwise it eats all my time. So these days, rather selfishly, I mostly only use Twitter to say things and get into conversations, rather than to monitor the zeitgeist.

That said, orange got established as the colour of open access a long time ago, and is enshrined in the logo:

OAlogo

In the end I suppose open-access week doesn’t hit my buttons too strongly because I am trying to lead a whole open-access life.

… uh, but thanks for inviting me to do this interview, anyway! :-)

You’re welcome @MikeTaylor. So what is open access?

Open Access, or OA, is the term describing a concept so simple and obvious and naturally right that you’d hardly think it needs a name. It just means making the results of research freely available on the Internet for anyone to read, remix and otherwise use.

You might reasonably ask: why is there any kind of published research other than open access? And the only answer is, historical inertia. For reasons that seemed to make some kind of sense at the time, the whole research ecosystem has got itself locked into this crazy equilibrium where most published research is locked up where almost no-one can see it, and where even the tiny proportion of people who can read published works aren’t allowed to make much use of them.

So to answer the question: the open-access movement is an attempt to undo this damage, and to make the research world sane.

Are there factors perpetuating this inertia you talked about?

Oh, so many factors perpetuating the inertia. Let me list a few …

  1. Old-school researchers who grew up when it was hard to find papers, and don’t see why young whippersnappers should have it easier.
  2. Old-school publishers who have got used to making profits of 30-40% of turnover (they get content donated to them, then charge subscriptions).
  3. University administrators who make hiring/promotion/tenure decisions based on which old-school journals a researcher’s papers are in.
  4. Feeble politicians who think it’s important to keep the publishing sector profitable, even at the expense of crippling research.

I’m sure there are plenty of others whom I’ve overlooked for the moment. As I always say, there’s plenty of blame to go round.

(This, by the way, is why I called the current situation an equilibrium. It’s stable. Won’t fix itself, and needs to be disturbed.)

So these publishers who put scholarly articles behind paywalls online, do they pay the researchers for publishing their work?

HAHAHAHAHAHAHAHAHAHA!

Oh, sorry, please excuse me while I wipe the tears of mirth from my eyes. An academic publisher? Paying an author? Hahahahaha! No.

Not only do academic publishers never pay authors, in many cases they also levy page charges — that is, they charge the authors. So they get paid once by the author, in page charges, then again by all the libraries that subscribe to read the paywalled papers. Which of course is why, even with their gross inefficiencies, they’re able to make these 30-40% profit margins.

So @MikeTaylor why do many researchers continue to take their work to these restricted access publishers and what can we do about it?

There are a few reasons that play into this together …

Part of it is just habit, especially among more senior researchers who’ve been using the same journals for 20 or 30 years.

But what’s more pernicious is the tendency of academics — and even worse, academic administrators — to evaluate research not by its inherent quality, but by the prestige of the journal that publishes it. It’s just horrifyingly easy for administrators to say “He got three papers out that year, but they were in journals with low Impact Factors.”

Which is wrong-headed on so many levels.

First of all, they should be looking at the work itself, and making an assessment of how well it was done: rigour, clarity, reproducibility. But it’s much easier just to count citations, and say “Oh, this has been cited 50 times, it must be good!” But of course papers are not always cited because they’re good. Sometimes they’re cited precisely because they’re so bad! For example, no doubt the profoundly flawed Arsenic Life paper has been cited many times — by people pointing out its numerous problems.

But wait, it’s much worse than that! Lazy or impatient administrators won’t count how many times a paper has been cited. Instead they will use a surrogate: the Impact Factor (IF), which is a measure not of papers but of journals.

Roughly, the IF measures the average number of citations received by papers that are published in the journal. So at best it’s a measure of journal quality (and a terrible measure of that, too, but let’s not get into that). The real damage is done when the IF is used to evaluate not journals, but the papers that appear in them.
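To make the arithmetic concrete, here is a minimal sketch of the usual two-year calculation, with made-up numbers (this is the rough shape of the thing, not any official procedure):

    # Rough sketch of a two-year Impact Factor calculation. All numbers invented.
    def impact_factor(citations_this_year, citable_items_prev_two_years):
        # Citations received this year to papers the journal published in the
        # previous two years, divided by the number of citable items it
        # published in those two years.
        return citations_this_year / citable_items_prev_two_years

    # A journal whose 2012-2013 output (200 citable items) was cited 500
    # times during 2014 gets a 2014 IF of 500 / 200 = 2.5:
    print(impact_factor(500, 200))  # 2.5

And note that it’s an average over all of a journal’s papers, which is a big part of why it says so little about any individual paper.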

And because that’s so widespread, researchers are often desperate to get their work into journals that have high IFs, even if they’re not OA. So we have an idiot situation where a selfish, rational researcher is best able to advance her career by doing the worst thing for science.

(And BTW, counter-intuitively, the number of citations an individual paper receives is NOT correlated significantly with the journal’s IF. Bjorn Brembs has discussed this extensively, and also shows that IF is correlated with retraction rate. So in many respects the high-IF journals are actually the worst ones you can possibly publish your work in. Yet people feel obliged to.)

*pant* *pant* *pant* OK, I had better stop answering this question, and move on to the next. Sorry to go on so long. (But really! :-) )

This is actually all so enlightening. You just criticised Citation Index along with Impact Factor but OA advocates tend to hold up a higher Citation Index as a reason to publish Open Access. What do you think regarding this?

I think that’s realpolitik. To be honest, I am also kind of pleased that the PLOS journals have pretty good Impact Factors: not because I think the IFs mean anything, but because they make those journals attractive to old-school researchers.

In the same way, it is a well-established fact that open-access articles tend to be cited more than paywalled ones — a lot more, in fact. So in trying to bring people across into the OA world, it makes sense to use helpful facts like these. But they’re not where the focus is.

But the last thing to say about this is that even though raw citation count is a bad measure of a paper’s quality, it is at least badly measuring the right thing. Evaluating a paper by its journal’s IF is like judging someone by the label of their clothes.

So @MikeTaylor Institutions need to stop evaluating research papers based on where they are published? Do you know of any doing it right?

I’m afraid I really don’t know. I’m not privy to how individual institutions do things.

All I know is, in some countries (e.g. France) abuse of the IF is much more strongly institutionalised. It’s tough for French researchers.

What are the various ways researchers can make their work available for free online?

Brilliant, very practical question! There are three main answers. (Sorry, this might go on a bit …)

First, you can post your papers on preprint servers. The best known is arXiv, which now accepts papers from quite a broad subject range: for example, a preprint of one of the papers I co-wrote with Matt Wedel is freely available there. Other preprint servers include bioRxiv, PeerJ Preprints and SSRN (the Social Science Research Network).

You can put your work on a preprint server whatever your subsequent plans are for it — even if (for some reason) it’s going to a paywall. There are only a very few journals left that follow the “Ingelfinger rule” and refuse to publish papers that have been preprinted.

So preprints are option #1. Number 2 is Gold Open Access: publishing in an open-access journal such as PLOS ONE, a BMC journal or eLife. As a matter of principle, I now publish all my own work in open-access journals, and I know lots of other people who do the same — ranging from amateurs like me, via early-career researchers like Erin McKiernan, to lab-leading senior researchers like Michael Eisen.

There are two potential downsides to publishing in an OA journal. One, we already discussed: the OA journals in your field may not be the most prestigious, so depending on how stupid your administrators are you could be penalised for using an OA journal, even though your work gets cited more than it would have done in a paywalled journal.

The other potential reason some people might want to avoid using an OA journal is Article Processing Charges (APCs). Because OA publishers have no subscription revenue, one common business model is to charge authors an APC for publishing services instead. APCs can vary wildly, from $0 up to $5000 in the most extreme case (a not-very-open journal run by the AAAS), so they can be off-putting.

There are three things to say about APCs.

First, remember that lots of paywalled journals demand page charges, which can cost more!

But second, please know that more than half of all OA journals actually charge no APC at all. They run on different models. For example in my own field, Acta Palaeontologica Polonica and Palaeontologia Electronica are well respected OA journals that charge no APC.

And the third thing is APC waivers. These are very common. Most OA publishers have it as a stated goal that no-one should be prevented from publishing with them by lack of funds for APCs. So for example PLOS will nearly always give a waiver when requested. Likewise Ubiquity, and others.

So there are lots of ways to have your work appear in an OA journal without paying for it to be there.

Anyway, all that was about the second way to make your work open access. #1 was preprints, #2 is “Gold OA” in OA journals …

And #3 is “Green OA”, which means publishing in a paywalled journal, but depositing a copy of the paper in an open repository. The details of how this works can be a bit complicated: different paywall-based publishers allow you to do different things, e.g. it’s common to say “you can deposit your peer-reviewed, accepted but unformatted manuscript, but only after 12 months”.

Opinions vary as to how fair or enforceable such rules are. Some OA advocates prefer Green. Others (including me) prefer Gold. Both are good.

See this SV-POW! post on the practicalities of negotiating Green OA if you’re publishing behind a paywall.

So to summarise:

  1. Deposit preprints
  2. Publish in an OA journal (getting a fee waiver if needed)
  3. Deposit postprints

I’ve written absolutely shedloads on these subjects over the last few years, including this introductory batch. If you only read one of my pieces about OA, make it this one: The parable of the farmers & the Teleporting Duplicator.

Last question – Do restricted access publishers pay remuneration to peer reviewers?

I know of no publisher that pays peer reviewers. But actually I am happy with that. Peer review is a service to the community. As soon as you encumber it with direct financial incentives, things get more complicated and there’s more potential for conflicts of interest. What I do is review only for open-access journals, and I am happy to put that time and effort in knowing the world will benefit.

And so we bring this edition to a close. We say a big thanks to our special guest @MikeTaylor who’s been totally awesome and instructive.

Thanks, it’s been a privilege.

As recently noted, it was my pleasure and privilege on 25 June to give a talk at the ESOF2014 conference in Copenhagen (the EuroScience Open Forum). My talk was one of four, followed by a panel discussion, in a session on the subject “Should science always be open?”.

Banner

I had just ten minutes to lay out the background and the problem, so it was perhaps a bit rushed. But you can judge for yourself, because the whole session was recorded on video. The image is not the greatest (it’s hard to make out the slides) and the audio is also not all it could be (the crowd noise is rather loud). But it’s not too bad, and I’ve embedded it below. (I hope the conference organisers will eventually put out a better version, cleaned up by video professionals.)

Subbiah Arunachalam (from Arun, Chennai, India) asked me whether the full text of the talk was available — the echoey audio is difficult for non-native English speakers. It wasn’t, but I’ve since typed out a transcript of what I said (editing only to remove “er”s and “um”s), and that is below. Finally, you may wish to follow the slides rather than the video: if so, they’re available in PowerPoint format and as a PDF.

Enjoy!

It’s very gracious of you all to hold this conference in English; I deeply appreciate it.

“Should science always be open?” is our question, and I’d like to open with one of the greatest scientists there’s ever been, Isaac Newton, to whom humility didn’t come naturally. But he did manage to say this brilliantly humble thing: “If I have seen further, it’s by standing on the shoulders of giants.”

And the reason I love this quote is not just because it’s insightful in itself, but because he stole it from something John of Salisbury said right back in 1159. “Bernard of Chartres used to say that we were like dwarfs seated on the shoulders of giants. If we see more and further than they, it is not due to our own clear eyes or tall bodies, but because we are raised on high and upborne by their gigantic bigness.”

Well, so Newton — I say he stole this quote, but of course he did more than that: he improved it. The original is long-winded, it goes around the houses. But Newton took that, and from that he made something better and more memorable. So in doing that, he was in fact standing on the shoulders of giants, and seeing further.

And this is consistently where progress comes from. It’s very rare that someone who’s locked in a room on his own thinking about something will have great insights. It’s always about free exchange of ideas. And we see this happening in lots of different fields.

Over the last ten or fifteen years, we’ve seen enormous advances in the kinds of things computers working in networks can do. And that’s come from the culture of openness in APIs and protocols, in Silicon Valley and elsewhere, where these things are designed.

Going back further and in a completely different field, the Impressionist painters of Paris lived in a community where they were constantly — not exactly working together, but certainly nicking each other’s ideas, improving each other’s techniques, feeding back into this developing sense of what could be done. Resulting in this fantastic art.

And looking back yet further, Florence in the Renaissance was a seat of all sorts of advances in the arts and the sciences. And again, because of this culture of many minds working together, and yielding insights and creativity that would not have been possible with any one of them alone.

And this is because of network effects; Metcalfe’s Law expresses this by saying that the value of a network is proportional to the square of the number of nodes in that network. So in terms of scientific research, what that means is that if you have a corpus of published research output, of papers, then the value of that goes — it doesn’t just increase with the number of papers, it goes up with the square of the number of papers. Because the value isn’t so much in the individual bits of research, but in the connections between them. That’s where great ideas come from. One researcher will read one paper from here and one from here, and see where the connection or the contradiction is; and from that comes the new idea.
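To put rough numbers on that intuition: the number of possible pairwise connections among n papers is n(n-1)/2, which grows roughly as the square of the corpus size. A tiny illustrative sketch:

    # Possible pairwise connections among n papers: n * (n - 1) / 2,
    # which grows roughly as n squared.
    def connections(n_papers):
        return n_papers * (n_papers - 1) // 2

    for n in (10, 100, 1000):
        print(n, connections(n))
    # 10 45
    # 100 4950
    # 1000 499500

Make the corpus ten times bigger, and you get roughly a hundred times as many potential connections.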

So it’s very important to increase the size of the network of what’s available. And that’s why scientists particularly, and I think researchers in other areas as well, have a very natural tendency to share.

Now until recently, the big difficulty we’ve had with sharing has been logistical. It was just difficult to make and distribute copies of pieces of research. So this [picture of a printing press] is how we made copies, this [picture of stacks of paper] was what we stored them on, and this was how we transmitted them from one researcher to another.

And they were not the most efficient means, or at least not as efficient as what we now have available. And because of that, and because of the importance of communication and the links between research, I would argue that maybe the most important invention of the last hundred years is the Internet in general and the World Wide Web in particular. And the purpose of the Web, as it was initially articulated in the first public post that Tim Berners-Lee made in 1991 — he explained not just what the Web was but what it was for, and he said: “The project started with the philosophy that much academic information should be freely available to anyone. It aims to allow information sharing within internationally dispersed teams, and the dissemination of information by support groups.”

So that’s what the Web is for; and here’s why it’s important. I’m quoting here from Cameron Neylon, who’s great at this kind of thing. And again it comes down to connections, and I’m just going to read out loud from his blog: “Like all developments of new communication networks, SMS, fixed telephones, the telegraph, the railways, and writing itself, the internet doesn’t just change how well we can do things, it qualitatively changes what we can do.” And then later on in the same post: “At network scale the system ensures that resources get used in unexpected ways. At scale you can have serendipity by design, not by blind luck.”

Now that’s a paradox; it’s almost a contradiction, isn’t it? Serendipity by definition is what you get by blind luck. But the point is, when you have enough connections — enough papers floating around the same open ecosystem — all the collisions happening between them, it’s inevitable that you’re going to get interesting things coming out. And that’s what we’re aiming towards.

And of course it’s never been more important, with health crises, new diseases, the diminishing effectiveness of antibiotics, the difficulties of feeding a world of many billions of people, and the results of climate change. It’s not as though we’re short of significant problems to deal with.

So I love this Jon Foley quote. He said, “Your job” — as a researcher — “Your job is not to get tenure! Your job is to change the world”. Tenure is a means to an end, it’s not what you’re there for.

So this is the importance of publishing. Of course the word “publish” comes from the same root as the word “public”: to publish a piece of research means to make that piece of research public. And the purpose of publishing is to open research up to the world, and so open up the world itself.

And that’s why it’s so tragic when we run into this [picture of a paywalled paper]. I think we’ve all seen this at various times. You go to read a piece of research that’s valuable, that’s relevant to either the research you’re doing, or the job you’re doing in your company, or whatever it might be. And you run into this paywall. Thirty five dollars and 95 cents to read this paper. It’s a disaster. Because what’s happened is we’ve got a whole industry whose existence is to make things public, and who because of accidents of history have found themselves doing the exact opposite. Now no-one goes into publishing with the intent of doing this. But this is the unfortunate outcome.

So what we end up with is a situation where we’re re-imposing on the research community barriers that were necessarily imposed by the inadequate technology of 20 or 30 years ago, barriers which we’ve now transcended in technological terms but are still struggling with for, frankly, commercial reasons.

And I don’t like to be critical, but I think we have to just face the fact that there is a real problem when organisations have for many years been making extremely high profits — these [36%, 32%, 34%, 42%] are the profit margins of the “big four” academic publishers which together hugely dominate the scholarly publishing market — and as you can see, somewhere between 32% and 42% of their revenue is sheer profit. So every time your university library spends a dollar on subscriptions, 40% of that goes straight out of the system to nowhere.

And it’s not surprising that these companies are hanging on desperately to the business model that allows them to do that.

Now the problem we have in advocating for open access is that when we stand against publishers who have an existing very profitable business model, they can complain to governments and say, “Look, we have a market that’s economically significant, it’s worth somewhere in the region of 10-15 billion US dollars a year.” And they will say to governments, “You shouldn’t do anything that might damage this.” And that sounds effective. And we struggle to argue against that because we’re talking about an opportunity cost, which is so much harder to measure.

You know, I can stand here — as I have done — and wave my hands around, and talk about innovation and opportunity, and networks and connections, but it’s very hard to quantify in a way that can be persuasive to people in a numeric way. Say, they have a 15 billion dollar business, we’re talking about saving three trillion’s worth of economic value (and I pulled that number out of thin air). So I would love, if we can, when we get to the discussions, to brainstorm some way to quantify the opportunity cost of not being open. But this is what it looks like [picture of flooding due to climate change]. Economically I don’t know what it’s worth. But in terms of the world we live in, it’s just essential.

So we’ve got to remember the mission that we’re on. We’re not just trying to save costs by going to open access publishing. We’re trying to transform what research is, and what it’s for.

So should science always be open? Of course. The name of the session should have been “Of course science should always be open”.


[NOTE: see the updates at the bottom. In summary, there’s nothing to see here and I was mistaken in posting this in the first place.]

Elsevier’s War On Access was stepped up last year when they started contacting individual universities to prevent them from letting the world read their research. Today I got this message from a librarian at my university:

babys-first-takedown

The irony that this was sent from the Library’s “Open Access Team” is not lost on me. Added bonus irony: this takedown notification pertains to an article about how openness combats mistrust and secrecy. Well. You’d almost think NPG wants mistrust and secrecy, wouldn’t you?

It’s sometimes been noted that by talking so much about Elsevier on this blog, we can appear to be giving other barrier-based publishers a free ride. If we give that impression, it’s not deliberate. By initiating this takedown, Nature Publishing Group has identified itself as yet another so-called academic publisher that is in fact an enemy of science.

So what next? Anyone who wants a PDF of this (completely trivial) letter can still get one very easily from my own web-site, so in that sense no damage has been done. But it does leave me wondering what the point of the Institutional Repository is. In practice it seems to be a single point of weakness allowing “publishers” to do the maximum amount of damage with a single attack.

But part of me thinks the thing to do is take the accepted manuscript and format it myself in the exact same way as Nature did, and post that. Just because I can. Because the bottom line is that typesetting is the only actual service they offered Andy, Matt and me in exchange for our right to show our work to the world, and that is a trivial service.

The other outcome is that this hardens my determination never to send anything to Nature again. Now it’s not like my research program is likely to turn up tabloid-friendly results anyway, so this is a bit of a null resolution. But you never know: if I happen to stumble across sauropod feather impressions in an overlooked Wealden fossil, then that discovery is going straight to PeerJ, PLOS, BMC, F1000 Research, Frontiers or another open-access publisher, just like all my other work.

And that’s sheer self-interest at work there, just as much as it’s a statement. I will not let my best work be hidden from the world. Why would anyone?

Let’s finish with another outing for this meme-ready image.

Publishers ... You're doing it wrong

Update (four hours later)

David Mainwaring (on Twitter) and James Bisset (in the comment below) both pointed out that I’ve not seen an actual takedown request from NPG — just the takedown notification from my own library. I assumed that the library were doing this in response to hassle from NPG, but of course it’s possible that my own library’s Open Access Team is unilaterally trying to prevent access to the work of its university’s researchers.

I’ve emailed Lyn Duffy to ask for clarification. In the mean time, NPG’s Grace Baynes has tweeted:

So it looks like this may be even more bizarre than I’d realised.

Further bulletins as events warrant.

Update 2 (two more hours later)

OK, consensus is that I read this completely wrong. Matt’s comment below says it best:

I have always understood institutional repositories to be repositories for author’s accepted manuscripts, not for publisher’s formatted versions of record. By that understanding, if you upload the latter, you’re breaking the rules, and basically pitting the repository against the publisher.

Which is, at least, not a nice thing to do to the repository.

So the conclusion is: I was wrong, and there’s nothing to see here apart from me being embarrassed. That’s why I’ve struck through much of the text above. (We try not to actually delete things from this blog, to avoid giving a false history.)

My apologies to Lyn Duffy, who was just doing her job.

Update 3 (another hour later)

This just in from Lyn Duffy, confirming that, as David and James guessed, NPG did not send a takedown notice:

Dear Mike,

This PDF was removed as part of the standard validation work of the Open Access team and was not prompted by communication from Nature Publishing. We validate every full-text document that is uploaded to Pure to make sure that the publisher permits posting of that version in an institutional repository. Only after validation are full-text documents made publicly available.

In this case we were following the regulations as stated in the Nature Publishing policy about confidentiality and pre-publicity. The policy says, ‘The published version — copyedited and in Nature journal format — may not be posted on any website or preprint server’ (http://www.nature.com/authors/policies/confidentiality.html). In the information for authors about ‘Other material published in Nature’ it says, ‘All articles for all sections of Nature are considered according to our usual conditions of publication’ (http://www.nature.com/nature/authors/gta/others.html#correspondence). We took this to mean that material such as correspondence have the same posting restrictions as other material published by Nature Publishing.

If we have made the wrong decision in this case and you do have permission from Nature Publishing to make the PDF of your correspondence publicly available via an institutional repository, we can upload the PDF to the record.

Kind regards,
Open Access Team

Appendix

Here’s the text of the original notification email so search-engines can pick it up. (If you read the screen-grab above, you can ignore this.)

University of Bristol — Pure

Lyn Duffy has added a comment

Sharing: public databases combat mistrust and secrecy
Farke, A. A., Taylor, M. P. & Wedel, M. J. 22 Oct 2009 In : Nature. 461, 7267, p. 1053

Research output: Contribution to journal › Article

Lyn Duffy has added a comment 7/05/14 10:23

Dear Michael, Apologies for the delay in checking your record. It appears that the document you have uploaded alongside this record is the publishers own version/PDF and making this version openly accessible in Pure is prohibited by the publisher, as a result the document has been removed from the record. In this particular instance the publisher would allow you to make accessible the postprint version of the paper, i.e., the article in the form accepted for publication in the journal following the process of peer review. Please upload an acceptable version of the paper if you have one. If you have any questions about this please get back to us, or send an email directly to open-access@bristol.ac.uk Kind regards, Lyn Duffy Library Open Access Team.

In discussion of Samuel Gershman’s rather good piece The Exploitative Economics Of Academic Publishing, I got into this discussion on Twitter with David Mainwaring (who is usually one of the more interesting legacy-publisher representatives on these issues) and Daniel Allingon (who I don’t know at all).

I’ll need to give a bit of background before I reach the key part of that discussion, so here goes. I said that one of David’s comments was a patronising evasion, and that I expected better of him, and also that it was an explicit refusal to engage. David’s response was interesting:

First, to clear up the first half, I wasn’t at all saying that David hasn’t engaged in OA, but that in this instance he’d rejected engagement — and that his previous record of engaging with the issues was why I’d said “I expect better from you” at the outset.

Now with all that he-said-she-said out of the way, here’s the point I want to make.

David’s tweet quoted above makes a very common but insidious assumption: that a “nuanced” argument is intrinsically preferable to a simple one. And we absolutely mustn’t accept that.

We see this idea again and again: open-access advocates are criticised for not being nuanced, with the implication that this equates with not being right. But the right position is not always nuanced. Recruiting Godwin to the cause of a reductio ad absurdum, we can see this by asking the question “was Hitler right to commit genocide?” If you say “no”, then I will agree with you; I won’t criticise your position for lacking nuance. In this argument, nuance is superfluous.

[Tedious but probably necessary disclaimer: no, I am not saying that paywall-encumbered publishing is morally equivalent to genocide. I am saying that the example of genocide shows that nuanced positions are not always correct, and that therefore it’s wrong to assume a priori that a nuanced position regarding paywalls is correct. Maybe a nuanced position is correct: but that is something to be demonstrated, not assumed.]

So when David says “What I do hold to is that a rounded view, nuance, w/ever you call it, is important”, I have to disagree. What matters is to be right, not nuanced. Again, sometimes the right position is nuanced, but there’s no reason to assume that from the get-go.

Here’s why this is dangerous: a nuanced, balanced, rounded position sounds so grown up. And by contrast, a straightforward, black-and-white one sounds so adolescent. You know, a straightforward, black-and-white position like “genocide is bad”. The idea of nuance plays on our desire to be respected. It sounds so flattering.

We mustn’t fall for this. Our job is to figure out what’s true, not what sounds grown-up.

The Scholarly Kitchen is the blog of the Society of Scholarly Publishers, and as such discusses lots of issues that are of interest to us. But a while back, I gave up commenting there for two reasons. First, it seemed rare that fruitful discussions emerged, rather than mere echo-chamberism; and second, my comments would often be deliberately delayed for several hours “to let others get in first”, and sometimes discarded altogether for reasons that I found completely opaque.

But since June, when David Crotty took over as Editor-in-Chief from Kent Anderson, I’ve sensed a change in the wind: more thoughtful pieces, less head-in-the-sandism over the inevitable coming changes in scholarly publishing, and even genuinely fruitful back-and-forth in the comments. I was optimistic that the Kitchen could become a genuine hub of cross-fertilisation.

But then, this: The Jack Andraka Story — Uncovering the Hidden Contradictions Behind a Science Folk Hero [cached copy]. Ex-editor Kent Anderson has risen from the grave to give us this attack piece on a fifteen-year-old.

I’m frankly astonished that David Crotty allowed this spiteful piece on the blog he edits. Is Kent Anderson so big that no-one can tell him “no”? Embarrassingly, he is currently president of the SSP, which maybe gives him leverage over the blog. But I’m completely baffled over how Crotty, Anderson or anyone else can think this piece will achieve anything other than to destroy the reputation of the Kitchen.

As Eva Amsen says, “I got as far as the part where he says Jack is not a “layperson” because his parents are middle class. (What?) Then closed tab.” I could do a paragraph-by-paragraph takedown of Anderson’s article, as Michael Eisen did for Jeffrey Beall’s anti-OA coming-out letter; but it really doesn’t deserve that level of attention.

So why am I even mentioning it? Because Jack Andraka doesn’t deserve to be hunted by a troll. I’m not going to be the only one finally giving up on The Scholarly Kitchen if David Crotty doesn’t do something to control his attack dog.

Seriously, David. You’re better than that. You have to be.

Reference

Anderson, Kent. 2014. The Jack Andraka Story — Uncovering the Hidden Contradictions Behind a Science Folk Hero. The Scholarly Kitchen, Society of Scholarly Publishers. URL: http://scholarlykitchen.sspnet.org/2014/01/03/the-jack-andraka-story-uncovering-the-hidden-contradictions-of-an-oa-paragon/. Accessed: 2014-01-03. (Archived by WebCite at http://www.webcitation.org/6MLiAaC9o)

I thought Elsevier was already doing all it could to alienate the authors who freely donate their work to shore up the corporation’s obscene profits. The thousands of takedown notices sent to Academia.edu represent at best a grotesque PR mis-step, an idiot manoeuvre that I thought Elsevier would immediately regret and certainly avoid repeating.

Which just goes to show that I dramatically underestimated just how much Elsevier hate it when people read the research they publish, and the lengths they’re prepared to go to when it comes to ensuring the work stays unread.

Now, they’re targeting individual universities.

The University of Calgary has just sent this notice to all staff:

The University of Calgary has been contacted by a company representing the publisher, Elsevier Reed, regarding certain Elsevier journal articles posted on our publicly accessible university web pages. We have been provided with examples of these articles and reviewed the situation. Elsevier has put the University of Calgary on notice that these publicly posted Elsevier journal articles are an infringement of Elsevier Reed’s copyright and must be taken down.

That’s it, folks. Elsevier have taken the gloves off. I’ve tried repeatedly to think the best of them, to interpret their actions in the most charitable light. I even wrote a four-part series on how they can regain the trust of researchers and librarians (part 0, part 1, part 2, part 3), under the evidently mistaken impression that that was what they wanted.

But now it’s apparent that I was far too optimistic. They have no interest in working with authors, universities, businesses or anyone else. They just want to screw every possible cent out of all parties in the short term.

Because this is, obviously, a very short-term move. Whatever feeble facade Elsevier have till now maintained of being partners in the ongoing process of research is gone forever. They’ve just tossed it away, instead desperately trying to cling onto short-term profit. In going after the University of Calgary (and I imagine other universities as well, unless this is a pilot for a wider campaign of harassment), Elsevier have declared their position as unrepentant enemies of science.

In essence, this move is an admission of defeat. It’s a classic last-throw-of-the-dice manoeuvre. It signals a recognition from Elsevier that they simply aren’t going to be able to compete with actual publishers in the 21st century. They’re burning the house down on their way out. They’re asset-stripping academia.

Elsevier are finished as a credible publisher. I can’t believe any researcher who knows what they’re doing is going to sign away their rights to Elsevier journals after this. I hope to see the editorial boards of Elsevier-encumbered journals breaking away from the dead-weight of the publisher, and finding deals that actually promote the work of those journals rather than actively hindering it.

And a reminder, folks: for those of you who want to publicly declare that you’re done with Elsevier, you can sign the Cost Of Knowledge declaration. That’s often been described as a petition, but it’s not. A petition exists to persuade someone to do something, but we’re not asking Elsevier to change. It’s evidently far, far too late for that. As a publisher, Elsevier is dead. The Cost of Knowledge is just a declaration that we’re walking away from the corpse before the stench becomes unbearable.

It’s now widely understood among researchers that the impact factor (IF) is a statistically illiterate measure of the quality of a paper. Unfortunately, it’s not yet universally understood among administrators, who in many places continue to judge authors on the impact factors of the journals they publish in. They presumably do this on the assumption that impact factor is a proxy for, or predictor of, citation count, which in turn is assumed to correlate with influence.

As shown by Lozano et al. (2012), the correlation between IF and citations is in fact very weak — r² is about 0.2 — and has been progressively weakening since the dawn of the Internet era and the consequent decoupling of papers from the physical journal that they appear in. This is a counter-intuitive finding: given that the impact factor is calculated from citation counts, you’d expect it to correlate much more strongly. But the enormous skew of citation rates towards a few big winners renders the average used by the IF meaningless.
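To see how badly a mean can mislead under that kind of skew, consider a toy example with invented citation counts, in which two blockbuster papers drag the journal-level average far above anything a typical paper achieves:

    from statistics import mean, median

    # A toy "journal": invented citation counts for ten papers,
    # two of them runaway hits that dominate the average.
    citations = [0, 1, 1, 2, 2, 3, 3, 4, 120, 200]

    print(mean(citations))    # 33.6 -- the IF-style average
    print(median(citations))  # 2.5  -- what a typical paper actually gets

An IF-style average of 33.6 tells you almost nothing about the eight papers that were cited four times or fewer.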

To bring this home, I plotted my own personal impact-factor/citation-count graph. I used citation counts from Google Scholar, which recognises 17 of my papers; then I looked up the impact factors of the venues those papers appeared in, plotted citation count against impact factor, and calculated a best-fit line through the data points. Here’s the result (taken from a slide in my Berlin 11 satellite conference talk):

berlin11-satellite-taylor-what-we-can-do--impact-factor-graph

I was delighted to see that the regression slope is actually negative: in my case at least, the higher the impact factor of the venue I publish in, the fewer citations I get.

There are a few things worth unpacking on that graph.

First, note the proud cluster on the left margin: publications in venues with impact factor zero (i.e. no impact factor at all). These include papers in new journals like PeerJ, in perfectly respectable established journals like PaleoBios, edited-volume chapters, papers in conference proceedings, and an arXiv preprint.

My most-cited paper, by some distance, is Head and neck posture in sauropod dinosaurs inferred from extant animals (Taylor et al. 2009, a collaboration between all three SV-POW!sketeers). That appeared in Acta Palaeontologica Polonica, a very well-respected journal in the palaeontology community which nevertheless has a modest impact factor of 1.58.

My next most-cited paper, the Brachiosaurus revision (Taylor 2009), is in the Journal of Vertebrate Palaeontology — unquestionably the flagship journal of our discipline, despite its also unspectacular impact factor of 2.21. (For what it’s worth, I seem to recall it was about half that when my paper came out.)

In fact, none of my publications have appeared in venues with an impact factor greater than 2.21, with one trifling exception. That is what Andy Farke, Matt and I ironically refer to as our Nature monograph (Farke et al. 2009). It’s a 250-word letter to the editor on the subject of the Open Dinosaur Project. (It’s a subject that we now find profoundly embarrassing, given how dreadfully slowly the project has progressed.)

Google Scholar says that our Nature note has been cited just once. But the truth is even better: that one citation is in fact from an in-prep manuscript that Google has dug up prematurely — one that we ourselves put on Google Docs, as part of the slooow progress of the Open Dinosaur Project. Remove that, and our Nature note has been cited exactly zero times. I am very proud of that record, and will try to preserve it by persuading Andy and Matt to remove the citation from the in-prep paper before we submit. (And please, folks: don’t spoil my record by citing it in your own work!)

What does all this mean? Admittedly, not much. It’s anecdote rather than data, and I’m posting it more because it amuses me than because it’s particularly persuasive. In fact if you remove the anomalous data point that is our Nature monograph, the slope becomes positive — although it’s basically meaningless, given that all my publications cluster in the 0–2.21 range. But then that’s the point: pretty much any data based on impact factors is meaningless.
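If you want to try this exercise on your own publication record, it takes only a few lines of numpy and matplotlib. Here is a minimal sketch, with invented numbers standing in for real citation counts and venue impact factors:

    import numpy as np
    import matplotlib.pyplot as plt

    # Invented stand-ins: each paper's venue impact factor and its Google
    # Scholar citation count. The 41.5 is a made-up glamour-journal IF.
    impact_factors = np.array([0.0, 0.0, 0.0, 1.58, 1.58, 2.21, 2.21, 41.5])
    citations = np.array([12, 8, 5, 60, 25, 40, 15, 1])

    # Least-squares straight-line fit through the points.
    slope, intercept = np.polyfit(impact_factors, citations, 1)
    print("best-fit slope:", round(slope, 2))  # negative for this toy data

    plt.scatter(impact_factors, citations)
    xs = np.linspace(0, impact_factors.max(), 100)
    plt.plot(xs, slope * xs + intercept)
    plt.xlabel("Impact factor of venue")
    plt.ylabel("Citations (Google Scholar)")
    plt.show()

As with my real data, a single barely-cited paper in a high-IF venue is enough to tip the slope negative.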

References

Farke, A. A., Taylor, M. P. & Wedel, M. J. 2009. Sharing: public databases combat mistrust and secrecy. Nature 461:1053.

Lozano, G. A., Larivière, V. & Gingras, Y. 2012. The weakening relationship between the impact factor and papers’ citations in the digital age. Journal of the American Society for Information Science and Technology 63(11):2140-2145.

Taylor, M. P. 2009. A re-evaluation of Brachiosaurus altithorax Riggs 1903 (Dinosauria, Sauropoda) and its generic separation from Giraffatitan brancai (Janensch 1914). Journal of Vertebrate Paleontology 29(3):787-806.

Taylor, M. P., Wedel, M. J. & Naish, D. 2009. Head and neck posture in sauropod dinosaurs inferred from extant animals. Acta Palaeontologica Polonica 54(2):213-220.
