As recently noted, it was my pleasure and privilege on 25 June to give a talk at the ESOF2014 conference in Copenhagen (the EuroScience Open Forum). My talk was one of four, followed by a panel discussion, in a session on the subject “Should science always be open?”.
I had just ten minutes to lay out the background and the problem, so it was perhaps a bit rushed. But you can judge for yourself, because the whole session was recorded on video. The image is not the greatest (it’s hard to make out the slides) and the audio is also not all it could be (the crowd noise is rather loud). But it’s not too bad, and I’ve embedded it below. (I hope the conference organisers will eventually put out a better version, cleaned up by video professionals.)
Subbiah Arunachalam (from Arun, Chennai, India) asked me whether the full text of the talk was available — the echoey audio is difficult for non-native English speakers. It wasn’t, but I’ve since typed out a transcript of what I said (editing only to remove “er”s and “um”s), and that is below. Finally, you may wish to follow the slides rather than the video: if so, they’re available in PowerPoint format and as a PDF.
It’s very gracious of you all to hold this conference in English; I deeply appreciate it.
“Should science always be open?” is our question, and I’d like to open with one of the greatest scientists there’s ever been, Isaac Newton, to whom humility didn’t come naturally. But he did manage to say this brilliant humble thing: “If I have seen further, it’s by standing on the shoulders of giants.”
And the reason I love this quote is not just because it’s insightful in itself, but because he stole it from something John of Salisbury said right back in 1159. “Bernard of Chartres used to say that we were like dwarfs seated on the shoulders of giants. If we see more and further than they, it is not due to our own clear eyes or tall bodies, but because we are raised on high and upborne by their gigantic bigness.”
Well, so Newton — I say he stole this quote, but of course he did more than that: he improved it. The original is long-winded, it goes around the houses. But Newton took that, and from that he made something better and more memorable. So in doing that, he was in fact standing on the shoulders of giants, and seeing further.
And this is consistently where progress comes from. It’s very rare that someone who’s locked in a room on his own thinking about something will have great insights. It’s always about free exchange of ideas. And we see this happening in lots of different fields.
Over the last ten or fifteen years, we’ve seen enormous advances in the kinds of things computers working in networks can do. And that’s come from the culture of openness in APIs and protocols, in Silicon Valley and elsewhere, where these things are designed.
Going back further and in a completely different field, the Impressionist painters of Paris lived in a community where they were constantly — not exactly working together, but certainly nicking each other’s ideas, improving each other’s techniques, feeding back into this developing sense of what could be done. Resulting in this fantastic art.
And looking back yet further, Florence in the Renaissance was a seat of all sorts of advances in the arts and the sciences. And again, because of this culture of many minds working together, and yielding insights and creativity that would not have been possible with any one of them alone.
And this is because of network effects; Metcalfe’s Law expresses this by saying that the value of a network is proportional to the square of the number of nodes in that network. So in terms of scientific research, what that means is that if you have a corpus of published research output, of papers, then the value of that goes — it doesn’t just increase with the number of papers, it goes up with the square of the number of papers. Because the value isn’t so much in the individual bits of research, but in the connections between them. That’s where great ideas come from. One researcher will read one paper from here and one from here, and see where the connection or the contradiction is; and from that comes the new idea.
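To make that concrete, here’s a toy back-of-the-envelope sketch (my own illustration, not something from the talk): counting the distinct pairs of papers in a corpus shows why the number of potential connections grows roughly with the square of the corpus size.

```python
def potential_connections(n_papers: int) -> int:
    """Count distinct pairs of papers: each pair is a potential
    connection or contradiction that a reader might notice."""
    return n_papers * (n_papers - 1) // 2

# Doubling the corpus roughly quadruples the potential connections:
print(potential_connections(1000))  # 499500
print(potential_connections(2000))  # 1999000 (about 4x)
```

So a paywall that removes even a modest fraction of papers from the open corpus destroys a disproportionately large share of the possible connections.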
So it’s very important to increase the size of the network of what’s available. And that’s why scientists particularly — though I think we can say the same of researchers in other areas — have a very natural tendency to share.
Now until recently, the big difficulty we’ve had with sharing has been logistical. It was just difficult to make and distribute copies of pieces of research. So this [picture of a printing press] is how we made copies, this [picture of stacks of paper] was what we stored them on, and this was how we transmitted them from one researcher to another.
And they were not the most efficient means, or at least not as efficient as what we now have available. And because of that, and because of the importance of communication and the links between research, I would argue that maybe the most important invention of the last hundred years is the Internet in general and the World Wide Web in particular. And the purpose of the Web, as it was initially articulated in the first public post that Tim Berners-Lee made in 1991 — he explained not just what the Web was but what it was for, and he said: “The project started with the philosophy that much academic information should be freely available to anyone. It aims to allow information sharing within internationally dispersed teams, and the dissemination of information by support groups.”
So that’s what the Web is for; and here’s why it’s important. I’m quoting here from Cameron Neylon, who’s great at this kind of thing. And again it comes down to connections, and I’m just going to read out loud from his blog: “Like all developments of new communication networks, SMS, fixed telephones, the telegraph, the railways, and writing itself, the internet doesn’t just change how well we can do things, it qualitatively changes what we can do.” And then later on in the same post: “At network scale the system ensures that resources get used in unexpected ways. At scale you can have serendipity by design, not by blind luck.”
Now that’s a paradox; it’s almost a contradiction, isn’t it? Serendipity by definition is what you get by blind luck. But the point is, when you have enough connections — enough papers floating around the same open ecosystem — all the collisions happening between them, it’s inevitable that you’re going to get interesting things coming out. And that’s what we’re aiming towards.
And of course it’s never been more important, with health crises, new diseases, the diminishing effectiveness of antibiotics, the difficulties of feeding a world of many billions of people, and the results of climate change. It’s not as though we’re short of significant problems to deal with.
So I love this Jon Foley quote. He said, “Your job” — as a researcher — “Your job is not to get tenure! Your job is to change the world”. Tenure is a means to an end, it’s not what you’re there for.
So this is the importance of publishing. Of course the word “publish” comes from the same root as the word “public”: to publish a piece of research means to make that piece of research public. And the purpose of publishing is to open research up to the world, and so open up the world itself.
And that’s why it’s so tragic when we run into this [picture of a paywalled paper]. I think we’ve all seen this at various times. You go to read a piece of research that’s valuable, that’s relevant to either the research you’re doing, or the job you’re doing in your company, or whatever it might be. And you run into this paywall. Thirty five dollars and 95 cents to read this paper. It’s a disaster. Because what’s happened is we’ve got a whole industry whose existence is to make things public, and who because of accidents of history have found themselves doing the exact opposite. Now no-one goes into publishing with the intent of doing this. But this is the unfortunate outcome.
So what we end up with is a situation where we’re re-imposing on the research community barriers that were necessarily imposed by the inadequate technology of 20 or 30 years ago — barriers which we’ve now transcended in technological terms, but which we’re still struggling with for, frankly, commercial reasons.
And I don’t like to be critical, but I think we have to face the fact that there is a real problem when organisations have for many years been making extremely high profits — these [36%, 32%, 34%, 42%] are the profit margins of the “big four” academic publishers, which together hugely dominate the scholarly publishing market — and as you can see, somewhere between 32% and 42% of their revenue is sheer profit. So every time your university library spends a dollar on subscriptions, 40% of that goes straight out of the system to nowhere.
And it’s not surprising that these companies are hanging on desperately to the business model that allows them to do that.
Now the problem we have in advocating for open access is that when we stand against publishers who have an existing very profitable business model, they can complain to governments and say, “Look, we have a market that’s economically significant, it’s worth somewhere in the region of 10-15 billion US dollars a year.” And they will say to governments, “You shouldn’t do anything that might damage this.” And that sounds effective. And we struggle to argue against that because we’re talking about an opportunity cost, which is so much harder to measure.
You know, I can stand here — as I have done — and wave my hands around, and talk about innovation and opportunity, and networks and connections, but it’s very hard to quantify any of that in a numerically persuasive way. Say they have a 15 billion dollar business, and we’re talking about saving three trillion’s worth of economic value (I pulled that number out of thin air). So I would love, if we can, when we get to the discussion, to brainstorm some way to quantify the opportunity cost of not being open. But this is what it looks like [picture of flooding due to climate change]. Economically, I don’t know what it’s worth. But in terms of the world we live in, it’s just essential.
So we’ve got to remember the mission that we’re on. We’re not just trying to save costs by going to open access publishing. We’re trying to transform what research is, and what it’s for.
So should science always be open? Of course, the name of the session should have been “Of course science should always be open”.
New (but very old) preprint: A survey of dinosaur diversity by clade, age, place of discovery and year of description
July 11, 2014
Today, available for the first time, you can read my 2004 paper A survey of dinosaur diversity by clade, age, place of discovery and year of description. It’s freely available (CC BY 4.0) as a PeerJ Preprint. It’s one of those papers that does exactly what it says on the tin — you should be able to find some interesting patterns in the diversity of your own favourite dinosaur group.
“But Mike”, you say, “you wrote this thing ten years ago?”
Yes. It’s actually the first scientific paper I ever wrote (bar some scraps of computer science), beginning in 2003. It’s so old that all the illustrations are grey-scale. I submitted it to Acta Palaeontologica Polonica way back on 24 October 2004 (three double-spaced hard-copies in the post!), but it was rejected without review. I was subsequently able to publish a greatly truncated version (Taylor 2006) in the proceedings of the 2006 Symposium on Mesozoic Terrestrial Ecosystems, but that was only one tenth the length of the full manuscript — much potentially valuable information was lost.
My finally posting this comes (as so many things seem to) from a conversation with Matt. Off work sick, he’d been amusing himself by re-reading old SV-POW! posts (yes, we do this). He was struck by my exhortation in Tutorial 14: “do not ever give a conference talk without immediately transcribing your slides into a manuscript”. He bemoaned how bad he’s been at following that advice, and I had to admit I’ve done no better, listing a sequence of my old SVPCA talks that have still never been published as papers.
The oldest of these was my 2004 presentation on dinosaur diversity. Commenting on this, I wrote in email: “OK, I got the MTE four-pager out of this, but the talk was distilled from a 40ish-page manuscript that was never published and never will be.” Quick as a flash, Matt replied:
If I had written this and sent it to you, you’d tell me to put it online and blog about how I went from idea to long paper to talk to short paper, to illuminate the process of science.
And of course he was right — hence this preprint.
I will never update this manuscript, as it’s based on a now wildly outdated database and I have too much else happening. (For one thing, I really ought to get around to finishing up the paper based on my 2005 SVPCA talk!) So in a sense it’s odd to call it a “pre-print” — it’s not pre anything.
Despite the data being well out of date, this manuscript still contains much that is (I think) of interest, and my sense is that the ratios of taxon counts, if not the absolute numbers, are still pretty accurate.
I don’t expect ever to submit a version of this to a journal, so this can be considered the final and definitive version.
- Taylor, Michael P. 2006. Dinosaur diversity analysed by clade, age, place and year of description. pp. 134-138 in Paul M. Barrett and Susan E. Evans (eds.), Ninth international symposium on Mesozoic terrestrial ecosystems and biota, Manchester, UK. Cambridge Publications. Natural History Museum, London, UK. 187 pp.
- Taylor, Michael P. 2014 (written in 2004). A survey of dinosaur diversity by clade, age, place of discovery and year of description. PeerJ PrePrints 2:e434v1. doi:10.7287/peerj.preprints.434v1
Got this in my inbox this morning. I presume this means that the 30 days start now. But if you’re interested in this stuff, don’t tarry.
And you should be interested in this stuff. This volume brings together some very active and knowledgeable researchers — including our fellow SV-POW!sketeer, Darren Naish, and sometime coauthor Dave Hone — writing on a broad range of interesting topics under the umbrella of behavior.
In a couple of weeks (in the early afternoon of 25 June), I’ll be speaking at ESOF 2014 (the EuroScience Open Forum) in Copenhagen, Denmark. The session I’m part of is entitled “Should science always be open?”, and the irony is not lost on me that, as that page says, “You must be registered and signed in to download session materials.”
So here is the abstract for my talk — one of four in the session, to be followed by an open discussion.
Yes, of course science should always be open!
“If I have seen further it is by standing on the shoulders of giants”, said Isaac Newton. Since the earliest days of science, progress has always been achieved by the free exchange and re-use of ideas. Understanding this, scientists have always leaned in the direction of openness. Science outside of trade secrets and state secrets has a natural tendency to be open.
Until recently, the principal barrier to sharing science has been the logistic difficulty of printing and distributing copies of papers. The World Wide Web was originally designed to solve precisely this problem. By making research freely available worldwide, the Web doesn’t just change how well we can do things, it changes what we can do. As Cameron Neylon has observed, at network scale you achieve serendipity by design, not by blind luck. At a time when the world is in dire need of scientific breakthroughs, the removal of barriers and use of content-mining promises progress in health, climate, agriculture and other crucial areas.
So it’s nothing short of tragic when publishers — whose job it is to make research public — purposely erect barriers that prevent this. The iniquity of paywalls is not just that they prevent citizens from accessing work their taxes pay for. Much more fundamentally, paywalls deliberately destroy the incredible value that the Web creates.
Openness is indispensable simply because the opportunity cost of not being open is appalling and incalculable. Publishers must find business models that don’t break science, or they must go away.
The idea is to present this as slickly as possible in ten minutes, in a “TED-like” format. I might try to make a video of it here at home once I have it all straight in my mind, and all the slides done.
May 13, 2014
“In the public interest” is an article that was published in C&RL News back in July/August 2005. It’s Sharon Terry’s first-person account of being the parent of children with a pseudoxanthoma elasticum (PXE), a genetic disease. It recounts her and her husband’s attempts to find out about PXE, and eventually to contribute to the research on it.
Here are the lengths they were driven to early in the process:
We spent hours copying articles from bound journals. But fees gate the research libraries of private medical schools. These fees became too costly for us to manage, and we needed to gain access to the material without paying for entry into the library each time.
We learned that by volunteering at a hospital associated with a research library, we could enter the library for free. After several months of this, policies changed and we resorted to masking our outdated volunteer badge and following a legitimate student (who would distract the guard) into the library. When that became too risky we knew we would have to find a way to access information in a more cost effective and reasonable manner.
Did the arrival of PubMed change everything?
Today, ten years after our children’s diagnosis, I can use a wonderful, freely accessible tool created by the National Library of Medicine (NLM), called PubMed. I can call up bibliographic information on the hundreds of papers relative to PXE in a few seconds. Further, I can narrow the field to just a dozen papers on which I have been an author. Then, as I click on each article, I am not able to access any of them.
And so things continue much as before:
I am still forced to do end-runs around the system. I travel to libraries and photocopy. I hire students in large medical schools to go to the stacks and copy articles for me, I “borrow” the journal login information from colleagues.
Terry provides a prescient diagnosis of what enables this dysfunctional and exploitative system to continue — the acquiescence of researchers working under perverse incentives:
We see how the barriers to access to publicly funded science are part of a larger system that seems to place a higher value on prestigious publications, tenure, and continued public support than on ensuring the most rapid exchange of knowledge to ease human suffering
Towards the end comes this optimistic projection:
Fortunately, change is in the works. NIH Director Elias Zerhouni confirmed some months ago that the “status quo is unacceptable.” In fact, under his direction and endorsed by the U.S. House of Representatives, NIH has implemented a cost-effective and balanced policy that, for the first time, will make virtually all NIH-funded research free and accessible online to all Americans through the NLM.
Here we are, nine years later. PubMed Central proudly proclaims “3 MILLION Articles are archived in PMC” on its front page, which is great. Yet only in 2012 did its compliance rate reach 75% (having been at only 49% as recently as 2008). Which means that a quarter of NIH-funded research is still not available to the world.
There’s no need for me to add much commentary to this. Please go and read the original article to get the full sense of what it’s like for such parents. And check out the Who Needs Access? site for other (shorter) stories of non-academics who desperately need access to research.
May 7, 2014
[NOTE: see the updates at the bottom. In summary, there's nothing to see here and I was mistaken in posting this in the first place.]
Elsevier’s War On Access was stepped up last year when they started contacting individual universities to prevent them from letting the world read their research. Today I got this message from a librarian at my university:
The irony that this was sent from the Library’s “Open Access Team” is not lost on me. Added bonus irony: this takedown notification pertains to an article about how openness combats mistrust and secrecy. Well. You’d almost think NPG wants mistrust and secrecy, wouldn’t you?

It’s sometimes been noted that by talking so much about Elsevier on this blog, we can appear to be giving other barrier-based publishers a free ride. If we give that impression, it’s not deliberate. By initiating this takedown, Nature Publishing Group has identified itself as yet another so-called academic publisher that is in fact an enemy of science.

So what next? Anyone who wants a PDF of this (completely trivial) letter can still get one very easily from my own web-site, so in that sense no damage has been done. But it does leave me wondering what the point of the Institutional Repository is. In practice it seems to be a single point of weakness allowing “publishers” to do the maximum amount of damage with a single attack.
But part of me thinks the thing to do is take the accepted manuscript and format it myself in the exact same way as Nature did, and post that. Just because I can. Because the bottom line is that typesetting is the only actual service they offered Andy, Matt and me in exchange for our right to show our work to the world, and that is a trivial service.
The other outcome is that this hardens my determination never to send anything to Nature again. Now it’s not like my research program is likely to turn up tabloid-friendly results anyway, so this is a bit of a null resolution. But you never know: if I happen to stumble across sauropod feather impressions in an overlooked Wealden fossil, then that discovery is going straight to PeerJ, PLOS, BMC, F1000 Research, Frontiers or another open-access publisher, just like all my other work.
And that’s sheer self-interest at work there, just as much as it’s a statement. I will not let my best work be hidden from the world. Why would anyone?
Let’s finish with another outing for this meme-ready image.
David Mainwaring (on Twitter) and James Bisset (in the comment below) both pointed out that I’ve not seen an actual takedown request from NPG — just the takedown notification from my own library. I assumed that the library were doing this in response to hassle from NPG, but of course it’s possible that my own library’s Open Access Team is unilaterally trying to prevent access to the work of its university’s researchers.
I’ve emailed Lyn Duffy to ask for clarification. In the mean time, NPG’s Grace Baynes has tweeted:
So it looks like this may be even more bizarre than I’d realised.
Further bulletins as events warrant.
OK, consensus is that I read this completely wrong. Matt’s comment below says it best:
I have always understood institutional repositories to be repositories for author’s accepted manuscripts, not for publisher’s formatted versions of record. By that understanding, if you upload the latter, you’re breaking the rules, and basically pitting the repository against the publisher.
Which is, at least, not a nice thing to do to the repository.
So the conclusion is: I was wrong, and there’s nothing to see here apart from me being embarrassed. That’s why I’ve struck through much of the text above. (We try not to actually delete things from this blog, to avoid giving a false history.)
My apologies to Lyn Duffy, who was just doing her job.
This just in from Lyn Duffy, confirming that, as David and James guessed, NPG did not send a takedown notice:
This PDF was removed as part of the standard validation work of the Open Access team and was not prompted by communication from Nature Publishing. We validate every full-text document that is uploaded to Pure to make sure that the publisher permits posting of that version in an institutional repository. Only after validation are full-text documents made publicly available.
In this case we were following the regulations as stated in the Nature Publishing policy about confidentiality and pre-publicity. The policy says, ‘The published version — copyedited and in Nature journal format — may not be posted on any website or preprint server’ (http://www.nature.com/authors/policies/confidentiality.html). In the information for authors about ‘Other material published in Nature’ it says, ‘All articles for all sections of Nature are considered according to our usual conditions of publication’ (http://www.nature.com/nature/authors/gta/others.html#correspondence). We took this to mean that material such as correspondence have the same posting restrictions as other material published by Nature Publishing.
If we have made the wrong decision in this case and you do have permission from Nature Publishing to make the PDF of your correspondence publicly available via an institutional repository, we can upload the PDF to the record.
Open Access Team
Here’s the text of the original notification email so search-engines can pick it up. (If you read the screen-grab above, you can ignore this.)
University of Bristol — Pure
Lyn Duffy has added a comment
Sharing: public databases combat mistrust and secrecy
Farke, A. A., Taylor, M. P. & Wedel, M. J. 22 Oct 2009 In : Nature. 461, 7267, p. 1053
Research output: Contribution to journal › Article
Lyn Duffy has added a comment 7/05/14 10:23
Dear Michael, Apologies for the delay in checking your record. It appears that the document you have uploaded alongside this record is the publishers own version/PDF and making this version openly accessible in Pure is prohibited by the publisher, as a result the document has been removed from the record. In this particular instance the publisher would allow you to make accessible the postprint version of the paper, i.e., the article in the form accepted for publication in the journal following the process of peer review. Please upload an acceptable version of the paper if you have one. If you have any questions about this please get back to us, or send an email directly to email@example.com Kind regards, Lyn Duffy Library Open Access Team.
March 31, 2014
This morning sees the publication of the new Policy for open access in the post-2014 Research Excellence Framework from HEFCE, the Higher Education Funding Council for England. It sets out in detail HEFCE’s requirement that papers must be open-access to be eligible for the next (post-2014) Research Excellence Framework (REF).
Here is the core of it, quoted direct from the Executive Summary:
The policy states that, to be eligible for submission to the post-2014 REF, authors’ final peer-reviewed manuscripts must have been deposited in an institutional or subject repository on acceptance for publication. Deposited material should be discoverable, and free to read and download, for anyone with an internet connection [...] The policy applies to research outputs accepted for publication after 1 April 2016, but we would strongly urge institutions to implement it now.
There are lots of ifs, buts and maybes, but overall this is excellent news, and solid confirmation that the UK really is committed to an open-access transition. Before we go into those caveats, let’s take a moment to applaud the real, significant progress that this policy represents. For the first time ever, universities’ funding levels, and so individual academics’ careers, will be directly tied to the openness of their output. Congratulations to HEFCE!
Also commendable: the actual policy document is very carefully written, and includes details such as “Outputs whose text is encoded only as a scanned image do not meet the requirement that the text be searchable electronically.” It’s evident that a lot of careful thought has gone into this.
Now for those caveats:
The policy will not apply to monographs, book chapters, other long-form publications, working papers, creative or practice-based research outputs, or data.
This is a shame, but understandable, especially in the case of books. I would have hoped that chapters within edited volumes would have been included. But the main document notes that “Where a higher education institution (HEI) can demonstrate that it has taken steps towards enabling open access for outputs outside the scope of this definition, credit will be given in the research environment component of the post-2014 REF.”
The policy allows repositories to respect embargo periods set by publications. Where a publication specifies an embargo period, authors can comply with the policy by making a ‘closed’ deposit on acceptance. Closed deposits must be discoverable to anyone with an Internet connection before the full text becomes available for read and download (which will occur after the embargo period has elapsed). Closed deposits will be admissible to the REF.
I would of course have wanted all embargo periods to be eliminated, or at the very least capped at six months as in the old, pre-watering-down, RCUK policy. But that was too much to hope for in the political environment that publishers have somehow managed to create.
More positively, it’s a real strength of the policy that deposit must be made on acceptance — not when the embargo expires, or even on publication, but on acceptance. These “closed deposits” are like a formal promise of openness, with an automated implementation. We don’t have good experimental data on this, but it seems likely that this approach will result in much better compliance rates than just telling authors “you have to come back six to 24 months after publication and make a deposit”.
There are a number of exceptions to the various requirements that will be automatically allowed by the policy. These exceptions cover circumstances where deposit was not possible, or where open access to deposited material could not be achieved within the policy requirements. These exceptions will allow institutions to achieve near-total compliance, but the post-2014 REF will also include a mechanism for considering any other exceptional cases where an output could not otherwise meet the requirements.
The exceptions encourage weasel-wordage, of course, and some of the specific exceptions listed in Appendix C are particularly weak: “Author was unable to secure the use of a repository”, “Publication is print-only (no electronic version)”, and the lamentable “Publication does not offer a compliant green or gold option”, which really means “HEFCE authors should not be using this publication”.
But when you read into the details, this approach with specific exceptions is actually rather better than the alternative that had been on the table: a percentage-based target, where some specific proportion of REF submissions would need to be open access. Instead of saying “80% of submissions must be open access” (or some other percentage), HEFCE is saying that it wants them all to be open access except where a specific excuse is given. I’d like them to be much less accommodating with what excuses they’ll accept, but the important thing here is that they have set the default to open.
Now for the most regrettable part of the policy:
While we do not request that outputs are made available under any particular licence, we advise that outputs licensed under a Creative Commons Attribution Non-Commercial Non-Derivative (CC BY-NC-ND) licence would meet this requirement.
I won’t rehearse again all the reasons that Non-Commercial and No-Derivatives clauses are poison, I’ll just note that works published under this licence are not open access according to the original definition of that term, which allows us to “use [OA works] for any other lawful purpose, without financial, legal, or technical barriers”.
Yet even here, the general tenor of the policy is positive. While it accepts NC-ND, the policy adds that “where an HEI can demonstrate that outputs are presented in a form that allows re-use of the work, including via text-mining, credit will be given in the research environment component of the post-2014 REF”.
One last observation: HEFCE should be commended on having provided an excellent, detailed account of the feedback they received to their consultation. As always, reading such documents can be frustrating because they necessarily contain some views very different from mine; but it’s useful to see the range of opinions laid out so explicitly.
No open-access policy document I’ve ever seen has been perfect, and this one is no exception. But overall, the HEFCE open-access policy is a significant and welcome step forward, and carries the promise of further positive moves in the future.
March 20, 2014
In discussion of Samuel Gershman’s rather good piece The Exploitative Economics Of Academic Publishing, I got into this discussion on Twitter with David Mainwaring (who is usually one of the more interesting legacy-publisher representatives on these issues) and Daniel Allington (who I don’t know at all).
I’ll need to give a bit of background before I reach the key part of that discussion, so here goes. I said that one of David’s comments was a patronising evasion, and that I expected better of him, and also that it was an explicit refusal to engage. David’s response was interesting:
First, to clear up the first half, I wasn’t at all saying that David hasn’t engaged in OA, but that in this instance he’d rejected engagement — and that his previous record of engaging with the issues was why I’d said “I expect better from you” at the outset.
Now with all that he-said-she-said out of the way, here’s the point I want to make.
David’s tweet quoted above makes a very common but insidious assumption: that a “nuanced” argument is intrinsically preferable to a simple one. And we absolutely mustn’t accept that.
We see this idea again and again: open-access advocates are criticised for not being nuanced, with the implication that this equates with not being right. But the right position is not always nuanced. Recruiting Godwin to the cause of a reductio ad absurdum, we can see this by asking the question “was Hitler right to commit genocide?” If you say “no”, then I will agree with you; I won’t criticise your position for lacking nuance. In this argument, nuance is superfluous.
[Tedious but probably necessary disclaimer: no, I am not saying that paywall-encumbered publishing is morally equivalent to genocide. I am saying that the example of genocide shows that nuanced positions are not always correct, and that therefore it's wrong to assume a priori that a nuanced position regarding paywalls is correct. Maybe a nuanced position is correct: but that is something to be demonstrated, not assumed.]
So when David says “What I do hold to is that a rounded view, nuance, w/ever you call it, is important”, I have to disagree. What matters is to be right, not nuanced. Again, sometimes the right position is nuanced, but there’s no reason to assume that from the get-go.
Here’s why this is dangerous: a nuanced, balanced, rounded position sounds so grown up. And by contrast, a straightforward, black-and-white one sounds so adolescent. You know, a straightforward, black-and-white position like “genocide is bad”. The idea of nuance plays on our desire to be respected. It sounds so flattering.
We mustn’t fall for this. Our job is to figure out what’s true, not what sounds grown-up.
March 11, 2014
I hate to keep flogging a dead horse, but since this issue won’t go away I guess I can’t, either.
1. Two years ago, I wrote about how you have to pay to download Elsevier’s “open access” articles. I showed how their open-access articles claimed “all rights reserved”, and how when you use the site’s facilities to ask about giving one electronic copy to a student, the price is £10.88. As I summarised at the time: “Free” means “we take the author’s copyright, all rights are reserved, but you can buy downloads at a 45% discount from what they would otherwise cost.” No-one from Elsevier commented.
2. Eight months ago, Peter Murray-Rust explained that Elsevier charges to read #openaccess articles. He showed how all three of the randomly selected open-access articles he looked at had download fees of $31.50. No-one from Elsevier commented (although see below).
3. A couple of days ago, Peter revisited this issue, and found that Elsevier are still charging THOUSANDS of pounds for CC-BY articles. IMMORAL, UNETHICAL, maybe even ILLEGAL. This time he picked another Elsevier OA article at random, and was quoted £8000 for permission to print 100 copies. The one he looked at says “Open Access” in gold at the top and “All rights reserved” at the bottom. Its “Get rights and content” link takes me to RightsLink, where I was quoted £1.66 to supply a single electronic copy to a student on a course at the University of Bristol:
(Why was I quoted a wildly different price from Peter? I don’t know. Could be to do with the different university, or because he proposed printing copies instead of using an electronic one.)
On Peter’s last article, an Elsevier representative commented:
Alicia Wise says:
March 10, 2014 at 4:20 pm
As noted in the comment thread to your blog back in August we are improving the clarity of our OA license labelling (eg on ScienceDirect) and metadata feeds (eg to Rightslink). This is work in progress and should be completed by summer. I am working with the internal team to get a more clear understanding of the detailed plan and key milestones, and will tweet about these in due course.
With kind wishes,
Dr Alicia Wise
Director of Access and Policy
(Oddly, I don’t see the referenced comment in the August blog-entry, but perhaps it was on a different article.)
Now here is my problem with this.
First of all, either this is deliberate fraud on Elsevier’s part — charging for the use of something that is free to use — or it’s a bug. Following Hanlon’s razor, I prefer the latter explanation. But assuming it’s a bug, why has it taken two years to address? And why is it still not fixed?
Elsevier, remember, are a company with an annual revenue exceeding £2bn. That’s £2,000,000,000. (Rather pathetically, their site’s link to the most recent annual report is broken, but that’s a different bug for a different day.) Is it unreasonable to expect that two years should be long enough for them to fix a trivial bug?
All that’s necessary is to change the “All rights reserved” message and the “Get rights and content” link to say “This is an open-access article, and is free to re-use”. We know that the necessary metadata is there because of the “Open Access” caption at the top of the article. So speaking from my perspective as a professional software developer of more than thirty years’ standing, this seems like a ten-line fix that should take maybe a man-hour; at most a man-day. A man-day of programmer time would cost Elsevier maybe £500 — that is, 0.000025% of a single year’s revenue, and half that proportion of the revenue they’ve taken since this bug was reported two years ago. Is it really too much to ask?
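To illustrate how small a fix like this could be, here is a minimal sketch. To be clear: every name and structure below is invented for illustration — I have no knowledge of Elsevier’s actual codebase. The point is only that once a page already has the open-access flag (which it must, since it renders the gold “Open Access” caption from it), the same flag can drive the rights message and the permissions link:

```python
# Hypothetical sketch: reusing the existing open-access flag to drive
# every rights-related element on the article page. All names here are
# invented for illustration; none of this is Elsevier's real code.

def rights_notice(article):
    """Return the rights line to display at the bottom of an article page."""
    if article.get("is_open_access"):
        return "This is an open-access article, and is free to re-use."
    return "All rights reserved."

def needs_permissions_link(article):
    """Open-access articles don't need a paid-permissions workflow at all."""
    return not article.get("is_open_access")

# The same flag that already produces the gold "Open Access" caption:
article = {"title": "Example article", "is_open_access": True}
print(rights_notice(article))          # open-access message, not "All rights reserved"
print(needs_permissions_link(article)) # False: no RightsLink handoff needed
```

However the real system is structured, the shape of the fix is the same: one flag, consulted consistently in each place that talks about rights.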
(One can hardly help comparing this performance with that of PeerJ, who have maybe a ten-thousandth of Elsevier’s income and resources. When I reported three bugs to them in the course of a couple of days, they fixed them all with an average report-to-fix time of less than 21 hours.)
Now here’s where it turns sinister.
The PeerJ bugs I mentioned above cost them — not money, directly, but a certain amount of reputation. By fixing them quickly, they fixed that reputation damage (and indeed gained reputation by responding so quickly). By contrast, the Elsevier bug we’re discussing here doesn’t cost them anything. It makes them money, by misleading people into paying for permissions that they already have. In short, not fixing this bug is making money for Elsevier. It’s hard not to wonder: would it have remained unfixed for two years if it was costing them money?
But instead of a rush to fix the bug, we have this kind of thing:
I find that very hard to accept. However complex your publishing platform is, however many different modules interoperate, however much legacy code there is — it’s not that hard to take the conditional that emits “Open Access” in gold at the top of the article, and make the same test in the other relevant places.
As John Mark Ockerbloom observes:
Come on, Elsevier. You’re better than this. Step up. Get this done.
Ten days later, Elsevier have finally responded. To give credit where it’s due, it’s actually pretty good: it notes how many customers made payments they needn’t have made (about 50), how much they paid in total (about $4000) and says that they are actively refunding these payments.
It would have been nice, mind you, had this statement contained an actual apology: the words “sorry”, “regret” and “apologise” are all notably absent.
And I remain baffled that the answer to “So when will this all be reliable?” is “by the summer of 2014”. As noted above, the pages in question already have the information that the articles are open access, as noted in the gold “Open Access” text at top right of the pages. Why it’s going to take several more months to use that information elsewhere in the same pages is a mystery to me.
As noted by Alicia in a comment below, Elsevier employee Chris Shillum has posted a long comment on Elsevier’s response, explaining in more detail what the technical issues are. Unfortunately there seems to be no way to link directly to the comment, but it’s the fifth one.
February 25, 2014
Hey, remember this? Your bound-for-PeerJ manuscript is like our Mauritian friend here, and the March 1 deadline is approaching like a hungry sailor with a club. So if you still want a voucher, let me know ASAP.