As recently noted, it was my pleasure and privilege on 25 June to give a talk at the ESOF2014 conference in Copenhagen (the EuroScience Open Forum). My talk was one of four, followed by a panel discussion, in a session on the subject “Should science always be open?”.
I had just ten minutes to lay out the background and the problem, so it was perhaps a bit rushed. But you can judge for yourself, because the whole session was recorded on video. The image is not the greatest (it’s hard to make out the slides) and the audio is also not all it could be (the crowd noise is rather loud). But it’s not too bad, and I’ve embedded it below. (I hope the conference organisers will eventually put out a better version, cleaned up by video professionals.)
Subbiah Arunachalam (from Arun, Chennai, India) asked me whether the full text of the talk was available — the echoey audio is difficult for non-native English speakers. It wasn’t, but I’ve since typed out a transcript of what I said (editing only to remove “er”s and “um”s), and that is below. Finally, you may wish to follow the slides rather than the video: if so, they’re available in PowerPoint format and as a PDF.
It’s very gracious of you all to hold this conference in English; I deeply appreciate it.
“Should science always be open?” is our question, and I’d like to open with one of the greatest scientists there’s ever been, Isaac Newton, a man to whom humility didn’t come naturally. But he did manage to say this brilliant humble thing: “If I have seen further, it’s by standing on the shoulders of giants.”
And the reason I love this quote is not just because it’s insightful in itself, but because he stole it from something John of Salisbury said right back in 1159. “Bernard of Chartres used to say that we were like dwarfs seated on the shoulders of giants. If we see more and further than they, it is not due to our own clear eyes or tall bodies, but because we are raised on high and upborne by their gigantic bigness.”
Well, so Newton — I say he stole this quote, but of course he did more than that: he improved it. The original is long-winded, it goes around the houses. But Newton took that, and from that he made something better and more memorable. So in doing that, he was in fact standing on the shoulders of giants, and seeing further.
And this is consistently where progress comes from. It’s very rare that someone who’s locked in a room on his own thinking about something will have great insights. It’s always about free exchange of ideas. And we see this happening in lots of different fields.
Over the last ten or fifteen years, we’ve seen enormous advances in the kinds of things computers working in networks can do. And that’s come from the culture of openness in APIs and protocols, in Silicon Valley and elsewhere, where these things are designed.
Going back further and in a completely different field, the Impressionist painters of Paris lived in a community where they were constantly — not exactly working together, but certainly nicking each other’s ideas, improving each other’s techniques, feeding back into this developing sense of what could be done. Resulting in this fantastic art.
And looking back yet further, Florence in the Renaissance was a seat of all sorts of advances in the arts and the sciences. And again, because of this culture of many minds working together, and yielding insights and creativity that would not have been possible with any one of them alone.
And this is because of network effects. Metcalfe’s Law expresses this by saying that the value of a network is proportional to the square of the number of nodes in that network. So in terms of scientific research, what that means is that if you have a corpus of published research output, of papers, then the value of that doesn’t just increase with the number of papers, it goes up with the square of the number of papers. Because the value isn’t so much in the individual bits of research, but in the connections between them. That’s where great ideas come from. One researcher will read one paper from here and one from here, and see where the connection or the contradiction is; and from that comes the new idea.
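A toy sketch of that quadratic growth, counting the possible pairwise connections between papers in a corpus (the numbers here are illustrative, not from the talk):

```python
def potential_connections(n_papers):
    """Number of distinct pairs of papers a reader could connect: n(n-1)/2."""
    return n_papers * (n_papers - 1) // 2

# Doubling the size of the corpus roughly quadruples the number of
# possible connections between papers.
for n in (100, 200, 400):
    print(n, potential_connections(n))
```

Each doubling of the corpus multiplies the connection count by almost exactly four, which is the sense in which value tracks the square of the number of papers rather than the number itself.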
So it’s very important to increase the size of the network of what’s available. And that’s why scientists particularly, and I think researchers in other areas as well, have a very natural tendency to share.
Now until recently, the big difficulty we’ve had with sharing has been logistical. It was just difficult to make and distribute copies of pieces of research. So this [picture of a printing press] is how we made copies, this [picture of stacks of paper] was what we stored them on, and this was how we transmitted them from one researcher to another.
And they were not the most efficient means, or at least not as efficient as what we now have available. And because of that, and because of the importance of communication and the links between research, I would argue that maybe the most important invention of the last hundred years is the Internet in general and the World Wide Web in particular. And the purpose of the Web, as it was initially articulated in the first public post that Tim Berners-Lee made in 1991 — he explained not just what the Web was but what it was for, and he said: “The project started with the philosophy that much academic information should be freely available to anyone. It aims to allow information sharing within internationally dispersed teams, and the dissemination of information by support groups.”
So that’s what the Web is for; and here’s why it’s important. I’m quoting here from Cameron Neylon, who’s great at this kind of thing. And again it comes down to connections, and I’m just going to read out loud from his blog: “Like all developments of new communication networks, SMS, fixed telephones, the telegraph, the railways, and writing itself, the internet doesn’t just change how well we can do things, it qualitatively changes what we can do.” And then later on in the same post: “At network scale the system ensures that resources get used in unexpected ways. At scale you can have serendipity by design, not by blind luck.”
Now that’s a paradox; it’s almost a contradiction, isn’t it? Serendipity by definition is what you get by blind luck. But the point is, when you have enough connections — enough papers floating around the same open ecosystem — all the collisions happening between them, it’s inevitable that you’re going to get interesting things coming out. And that’s what we’re aiming towards.
And of course it’s never been more important, with health crises, new diseases, the diminishing effectiveness of antibiotics, the difficulties of feeding a world of many billions of people, and the results of climate change. It’s not as though we’re short of significant problems to deal with.
So I love this Jon Foley quote. He said, “Your job” — as a researcher — “Your job is not to get tenure! Your job is to change the world”. Tenure is a means to an end, it’s not what you’re there for.
So this is the importance of publishing. Of course the word “publish” comes from the same root as the word “public”: to publish a piece of research means to make that piece of research public. And the purpose of publishing is to open research up to the world, and so open up the world itself.
And that’s why it’s so tragic when we run into this [picture of a paywalled paper]. I think we’ve all seen this at various times. You go to read a piece of research that’s valuable, that’s relevant to either the research you’re doing, or the job you’re doing in your company, or whatever it might be. And you run into this paywall. Thirty-five dollars and ninety-five cents to read this paper. It’s a disaster. Because what’s happened is we’ve got a whole industry whose reason for existence is to make things public, and who because of accidents of history have found themselves doing the exact opposite. Now no-one goes into publishing with the intent of doing this. But this is the unfortunate outcome.
So what we end up with is a situation where we’re re-imposing on the research community barriers that were necessarily imposed by the inadequate technology of 20 or 30 years ago: barriers we’ve now transcended in technological terms, but are still struggling with for, frankly, commercial reasons.
And I don’t like to be critical, but I think we have to just face the fact that there is a real problem when organisations have, for many years, been making extremely high profits — these [36%, 32%, 34%, 42%] are the profit margins of the “big four” academic publishers, which together hugely dominate the scholarly publishing market — and as you can see, somewhere between 32% and 42% of their revenue is sheer profit. So every time your university library spends a dollar on subscriptions, 40% of that goes straight out of the system to nowhere.
And it’s not surprising that these companies are hanging on desperately to the business model that allows them to do that.
Now the problem we have in advocating for open access is that when we stand against publishers who have an existing very profitable business model, they can complain to governments and say, “Look, we have a market that’s economically significant, it’s worth somewhere in the region of 10-15 billion US dollars a year.” And they will say to governments, “You shouldn’t do anything that might damage this.” And that sounds effective. And we struggle to argue against that because we’re talking about an opportunity cost, which is so much harder to measure.
You know, I can stand here — as I have done — and wave my hands around, and talk about innovation and opportunity, and networks and connections, but it’s very hard to quantify that in a numerically persuasive way. To say: they have a 15 billion dollar business, and we’re talking about three trillion dollars’ worth of economic value (and I pulled that number out of thin air). So I would love, if we can, when we get to the discussions, to brainstorm some way to quantify the opportunity cost of not being open. But this is what it looks like [picture of flooding due to climate change]. Economically, I don’t know what it’s worth. But in terms of the world we live in, it’s just essential.
So we’ve got to remember the mission that we’re on. We’re not just trying to save costs by going to open access publishing. We’re trying to transform what research is, and what it’s for.
So, should science always be open? Of course. The name of the session should have been “Of course science should always be open”.
May 7, 2014
[NOTE: see the updates at the bottom. In summary, there’s nothing to see here and I was mistaken in posting this in the first place.]
Elsevier’s War On Access was stepped up last year when they started contacting individual universities to prevent them from letting the world read their research. Today I got this message from a librarian at my university:
The irony that this was sent from the Library’s “Open Access Team” is not lost on me. Added bonus irony: this takedown notification pertains to an article about how openness combats mistrust and secrecy. Well. You’d almost think NPG wants mistrust and secrecy, wouldn’t you?

It’s sometimes been noted that by talking so much about Elsevier on this blog, we can appear to be giving other barrier-based publishers a free ride. If we give that impression, it’s not deliberate. By initiating this takedown, Nature Publishing Group has identified itself as yet another so-called academic publisher that is in fact an enemy of science.

So what next? Anyone who wants a PDF of this (completely trivial) letter can still get one very easily from my own web-site, so in that sense no damage has been done. But it does leave me wondering what the point of the Institutional Repository is. In practice it seems to be a single point of weakness, allowing “publishers” to do the maximum amount of damage with a single attack.
But part of me thinks the thing to do is take the accepted manuscript and format it myself in the exact same way as Nature did, and post that. Just because I can. Because the bottom line is that typesetting is the only actual service they offered Andy, Matt and me in exchange for our right to show our work to the world, and that is a trivial service.
The other outcome is that this hardens my determination never to send anything to Nature again. Now it’s not like my research program is likely to turn up tabloid-friendly results anyway, so this is a bit of a null resolution. But you never know: if I happen to stumble across sauropod feather impressions in an overlooked Wealden fossil, then that discovery is going straight to PeerJ, PLOS, BMC, F1000 Research, Frontiers or another open-access publisher, just like all my other work.
And that’s sheer self-interest at work there, just as much as it’s a statement. I will not let my best work be hidden from the world. Why would anyone?
Let’s finish with another outing for this meme-ready image.
David Mainwaring (on Twitter) and James Bisset (in the comment below) both pointed out that I’ve not seen an actual takedown request from NPG — just the takedown notification from my own library. I assumed that the library were doing this in response to hassle from NPG, but of course it’s possible that my own library’s Open Access Team is unilaterally trying to prevent access to the work of its university’s researchers.
I’ve emailed Lyn Duffy to ask for clarification. In the mean time, NPG’s Grace Baynes has tweeted:
So it looks like this may be even more bizarre than I’d realised.
Further bulletins as events warrant.
OK, consensus is that I read this completely wrong. Matt’s comment below says it best:
I have always understood institutional repositories to be repositories for author’s accepted manuscripts, not for publisher’s formatted versions of record. By that understanding, if you upload the latter, you’re breaking the rules, and basically pitting the repository against the publisher.
Which is, at least, not a nice thing to do to the repository.
So the conclusion is: I was wrong, and there’s nothing to see here apart from me being embarrassed. That’s why I’ve struck through much of the text above. (We try not to actually delete things from this blog, to avoid giving a false history.)
My apologies to Lyn Duffy, who was just doing her job.
This just in from Lyn Duffy, confirming that, as David and James guessed, NPG did not send a takedown notice:
This PDF was removed as part of the standard validation work of the Open Access team and was not prompted by communication from Nature Publishing. We validate every full-text document that is uploaded to Pure to make sure that the publisher permits posting of that version in an institutional repository. Only after validation are full-text documents made publicly available.
In this case we were following the regulations as stated in the Nature Publishing policy about confidentiality and pre-publicity. The policy says, ‘The published version — copyedited and in Nature journal format — may not be posted on any website or preprint server’ (http://www.nature.com/authors/policies/confidentiality.html). In the information for authors about ‘Other material published in Nature’ it says, ‘All articles for all sections of Nature are considered according to our usual conditions of publication’ (http://www.nature.com/nature/authors/gta/others.html#correspondence). We took this to mean that material such as correspondence have the same posting restrictions as other material published by Nature Publishing.
If we have made the wrong decision in this case and you do have permission from Nature Publishing to make the PDF of your correspondence publicly available via an institutional repository, we can upload the PDF to the record.
Open Access Team
Here’s the text of the original notification email so search-engines can pick it up. (If you read the screen-grab above, you can ignore this.)
University of Bristol — Pure
Lyn Duffy has added a comment
Sharing: public databases combat mistrust and secrecy
Farke, A. A., Taylor, M. P. & Wedel, M. J. 22 Oct 2009 In : Nature. 461, 7267, p. 1053
Research output: Contribution to journal › Article
Lyn Duffy has added a comment 7/05/14 10:23
Dear Michael, Apologies for the delay in checking your record. It appears that the document you have uploaded alongside this record is the publishers own version/PDF and making this version openly accessible in Pure is prohibited by the publisher, as a result the document has been removed from the record. In this particular instance the publisher would allow you to make accessible the postprint version of the paper, i.e., the article in the form accepted for publication in the journal following the process of peer review. Please upload an acceptable version of the paper if you have one. If you have any questions about this please get back to us, or send an email directly to email@example.com Kind regards, Lyn Duffy Library Open Access Team.
March 20, 2014
In discussion of Samuel Gershman’s rather good piece The Exploitative Economics Of Academic Publishing, I got into this discussion on Twitter with David Mainwaring (who is usually one of the more interesting legacy-publisher representatives on these issues) and Daniel Allington (who I don’t know at all).
I’ll need to give a bit of background before I reach the key part of that discussion, so here goes. I said that one of David’s comments was a patronising evasion, and that I expected better of him, and also that it was an explicit refusal to engage. David’s response was interesting:
First, to clear up the first half, I wasn’t at all saying that David hasn’t engaged in OA, but that in this instance he’d rejected engagement — and that his previous record of engaging with the issues was why I’d said “I expect better from you” at the outset.
Now with all that he-said-she-said out of the way, here’s the point I want to make.
David’s tweet quoted above makes a very common but insidious assumption: that a “nuanced” argument is intrinsically preferable to a simple one. And we absolutely mustn’t accept that.
We see this idea again and again: open-access advocates are criticised for not being nuanced, with the implication that this equates with not being right. But the right position is not always nuanced. Recruiting Godwin to the cause of a reductio ad absurdum, we can see this by asking the question “was Hitler right to commit genocide?” If you say “no”, then I will agree with you; I won’t criticise your position for lacking nuance. In this argument, nuance is superfluous.
[Tedious but probably necessary disclaimer: no, I am not saying that paywall-encumbered publishing is morally equivalent to genocide. I am saying that the example of genocide shows that nuanced positions are not always correct, and that therefore it’s wrong to assume a priori that a nuanced position regarding paywalls is correct. Maybe a nuanced position is correct: but that is something to be demonstrated, not assumed.]
So when David says “What I do hold to is that a rounded view, nuance, w/ever you call it, is important”, I have to disagree. What matters is to be right, not nuanced. Again, sometimes the right position is nuanced, but there’s no reason to assume that from the get-go.
Here’s why this is dangerous: a nuanced, balanced, rounded position sounds so grown up. And by contrast, a straightforward, black-and-white one sounds so adolescent. You know, a straightforward, black-and-white position like “genocide is bad”. The idea of nuance plays on our desire to be respected. It sounds so flattering.
We mustn’t fall for this. Our job is to figure out what’s true, not what sounds grown-up.
January 3, 2014
The Scholarly Kitchen is the blog of the Society of Scholarly Publishers, and as such discusses lots of issues that are of interest to us. But a while back, I gave up commenting there for two reasons. First, it seemed rare that fruitful discussions emerged, rather than mere echo-chamberism; and second, my comments would often be deliberately delayed for several hours “to let others get in first”, or randomly discarded completely for reasons that I found completely opaque.
But since June, when David Crotty took over as Editor-in-Chief from Kent Anderson, I’ve sensed a change in the wind: more thoughtful pieces, less head-in-the-sandism over the inevitable coming changes in scholarly publishing, and even genuinely fruitful back-and-forth in the comments. I was optimistic that the Kitchen could become a genuine hub of cross-fertilisation.
But then, this: The Jack Andraka Story — Uncovering the Hidden Contradictions Behind a Science Folk Hero [cached copy]. Ex-editor Kent Anderson has risen from the grave to give us this attack piece on a fifteen-year-old.
I’m frankly astonished that David Crotty allowed this spiteful piece on the blog he edits. Is Kent Anderson so big that no-one can tell him “no”? Embarrassingly, he is currently president of the SSP, which maybe gives him leverage over the blog. But I’m completely baffled over how Crotty, Anderson or anyone else can think this piece will achieve anything other than to destroy the reputation of the Kitchen.
As Eva Amsen says, “I got as far as the part where he says Jack is not a “layperson” because his parents are middle class. (What?) Then closed tab.” I could do a paragraph-by-paragraph takedown of Anderson’s article, as Michael Eisen did for Jeffrey Beall’s anti-OA coming-out letter; but it really doesn’t deserve that level of attention.
So why am I even mentioning it? Because Jack Andraka doesn’t deserve to be hunted by a troll. I’m not going to be the only one finally giving up on The Scholarly Kitchen if David Crotty doesn’t do something to control his attack dog.
Seriously, David. You’re better than that. You have to be.
Anderson, Kent. 2014. The Jack Andraka Story — Uncovering the Hidden Contradictions Behind a Science Folk Hero. The Scholarly Kitchen, Society of Scholarly Publishers. URL: http://scholarlykitchen.sspnet.org/2014/01/03/the-jack-andraka-story-uncovering-the-hidden-contradictions-of-an-oa-paragon/. Accessed: 2014-01-03. (Archived by WebCite® at http://www.webcitation.org/6MLiAaC9o)
December 17, 2013
I thought Elsevier was already doing all it could to alienate the authors who freely donate their work to shore up the corporation’s obscene profits. The thousands of takedown notices sent to Academia.edu represent at best a grotesque PR mis-step, an idiot manoeuvre that I thought Elsevier would immediately regret and certainly avoid repeating.
Which just goes to show that I dramatically underestimated just how much Elsevier hate it when people read the research they publish, and the lengths they’re prepared to go to when it comes to ensuring the work stays unread.
Now, they’re targeting individual universities.
The University of Calgary has just sent this notice to all staff:
The University of Calgary has been contacted by a company representing the publisher, Elsevier Reed, regarding certain Elsevier journal articles posted on our publicly accessible university web pages. We have been provided with examples of these articles and reviewed the situation. Elsevier has put the University of Calgary on notice that these publicly posted Elsevier journal articles are an infringement of Elsevier Reed’s copyright and must be taken down.
That’s it, folks. Elsevier have taken the gloves off. I’ve tried repeatedly to think the best of them, to interpret their actions in the most charitable light. I even wrote a four-part series on how they can regain the trust of researchers and librarians (part 0, part 1, part 2, part 3), under the evidently mistaken impression that that was what they wanted.
But now it’s apparent that I was far too optimistic. They have no interest in working with authors, universities, businesses or anyone else. They just want to screw every possible cent out of all parties in the short term.
Because this is, obviously, a very short-term move. Whatever feeble facade Elsevier have till now maintained of being partners in the ongoing process of research is gone forever. They’ve just tossed it away, instead desperately trying to cling onto short-term profit. In going after the University of Calgary (and I imagine other universities as well, unless this is a pilot harassment), Elsevier have declared their position as unrepentant enemies of science.
In essence, this move is an admission of defeat. It’s a classic last-throw-of-the-dice manoeuvre. It signals a recognition from Elsevier that they simply aren’t going to be able to compete with actual publishers in the 21st century. They’re burning the house down on their way out. They’re asset-stripping academia.
Elsevier are finished as a credible publisher. I can’t believe any researcher who knows what they’re doing is going to sign away their rights to Elsevier journals after this. I hope to see the editorial boards of Elsevier-encumbered journals breaking away from the dead-weight of the publisher, and finding deals that actually promote the work of those journals rather than actively hindering it.
And a reminder, folks: for those of you who want to publicly declare that you’re done with Elsevier, you can sign the Cost Of Knowledge declaration. That’s often been described as a petition, but it’s not. A petition exists to persuade someone to do something, but we’re not asking Elsevier to change. It’s evidently far, far too late for that. As a publisher, Elsevier is dead. The Cost of Knowledge is just a declaration that we’re walking away from the corpse before the stench becomes unbearable.
December 13, 2013
It’s now widely understood among researchers that the impact factor (IF) is a statistically illiterate measure of the quality of a paper. Unfortunately, it’s not yet universally understood among administrators, who in many places continue to judge authors on the impact factors of the journals they publish in. They presumably do this on the assumption that impact factor is a proxy for, or predictor of, citation count, which in turn is assumed to correlate with influence.
As shown by Lozano et al. (2012), the correlation between IF and citations is in fact very weak — r² is about 0.2 — and has been progressively weakening since the dawn of the Internet era and the consequent decoupling of papers from the physical journal that they appear in. This is a counter-intuitive finding: given that the impact factor is calculated from citation counts, you’d expect it to correlate much more strongly. But the enormous skew of citation rates towards a few big winners renders the average used by the IF meaningless.
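Here is a small sketch of how that skew works, with invented citation counts (these numbers are mine, purely for illustration): one or two big winners drag the journal-wide mean — which is what an IF-style average reflects — far above what a typical paper in the journal actually achieves.

```python
# Invented citation counts for one journal's papers in a year:
# two big winners, and many papers cited rarely or never.
citations = [120, 45, 3, 2, 2, 1, 1, 1, 0, 0]

mean = sum(citations) / len(citations)            # the IF-style average
median = sorted(citations)[len(citations) // 2]   # upper median: a "typical" paper

print(f"mean = {mean:.1f}, median = {median}")    # mean = 17.5, median = 2
```

The mean says a paper in this journal gets 17.5 citations; the typical paper gets about 2. Judging an author by the journal-wide average credits them with the citations of the big winners they almost certainly aren’t.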
To bring this home, I plotted my own personal impact-factor/citation-count graph. I used Google Scholar’s citation counts for my articles (it recognises 17 of my papers); then I looked up the impact factors of the venues they appeared in, plotted citation count against impact factor, and calculated a best-fit line through my data-points. Here’s the result (taken from a slide in my Berlin 11 satellite conference talk):
I was delighted to see that the regression slope is actually negative: in my case at least, the higher the impact factor of the venue I publish in, the fewer citations I get.
There are a few things worth unpacking on that graph.
First, note the proud cluster on the left margin: publications in venues with impact factor zero (i.e. no impact factor at all). These include papers in new journals like PeerJ, in perfectly respectable established journals like PaleoBios, edited-volume chapters, papers in conference proceedings, and an arXiv preprint.
My most-cited paper, by some distance, is Head and neck posture in sauropod dinosaurs inferred from extant animals (Taylor et al. 2009, a collaboration between all three SV-POW!sketeers). That appeared in Acta Palaeontologica Polonica, a very well-respected journal in the palaeontology community but which has a modest impact factor of 1.58.
My next most-cited paper, the Brachiosaurus revision (Taylor 2009), is in the Journal of Vertebrate Palaeontology — unquestionably the flagship journal of our discipline, despite its also unspectacular impact factor of 2.21. (For what it’s worth, I seem to recall it was about half that when my paper came out.)
In fact, none of my publications have appeared in venues with an impact factor greater than 2.21, with one trifling exception. That is what Andy Farke, Matt and I ironically refer to as our Nature monograph (Farke et al. 2009). It’s a 250-word letter to the editor on the subject of the Open Dinosaur Project. (It’s a subject that we now find profoundly embarrassing given how dreadfully slowly the project has progressed.)
Google Scholar says that our Nature note has been cited just once. But the truth is even better: that one citation is in fact from an in-prep manuscript that Google has dug up prematurely — one that we ourselves put on Google Docs, as part of the slooow progress of the Open Dinosaur Project. Remove that, and our Nature note has been cited exactly zero times. I am very proud of that record, and will try to preserve it by persuading Andy and Matt to remove the citation from the in-prep paper before we submit. (And please, folks: don’t spoil my record by citing it in your own work!)
What does all this mean? Admittedly, not much. It’s anecdote rather than data, and I’m posting it more because it amuses me than because it’s particularly persuasive. In fact if you remove the anomalous data point that is our Nature monograph, the slope becomes positive — although it’s basically meaningless, given that all my publications cluster in the 0–2.21 range. But then that’s the point: pretty much any data based on impact factors is meaningless.
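The outlier effect described above is easy to reproduce. This sketch fits an ordinary least-squares slope to invented (impact factor, citations) pairs shaped like my graph — a cluster of low-IF, variously-cited papers plus one high-IF, near-zero-citation point standing in for the Nature note. The specific numbers are made up for illustration, not taken from my actual data:

```python
def ols_slope(points):
    """Ordinary least-squares slope for a list of (x, y) pairs."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    num = sum((x - mx) * (y - my) for x, y in points)
    den = sum((x - mx) ** 2 for x, _ in points)
    return num / den

# Invented (impact factor, citation count) pairs: a low-IF cluster,
# plus one high-IF outlier with almost no citations.
data = [(0.0, 5), (0.0, 2), (1.58, 28), (2.21, 14), (34.5, 1)]

print(ols_slope(data))       # negative: the outlier drags the fit downward
print(ols_slope(data[:-1]))  # drop the outlier and the slope turns positive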
- Farke, Andrew A., Michael P. Taylor and Mathew J. Wedel. 2009. Sharing: public databases combat mistrust and secrecy. Nature 461:1053.
- Lozano, George A., Vincent Larivière and Yves Gingras. 2012. The weakening relationship between the impact factor and papers’ citations in the digital age. Journal of the American Society for Information Science and Technology 63(11):2140-2145. doi:10.1002/asi.22731 [arXiv preprint]
- Taylor, Michael P. 2009. A re-evaluation of Brachiosaurus altithorax Riggs 1903 (Dinosauria, Sauropoda) and its generic separation from Giraffatitan brancai (Janensch 1914). Journal of Vertebrate Paleontology 29(3):787-806.
- Taylor, Michael P., Mathew J. Wedel and Darren Naish. 2009. Head and neck posture in sauropod dinosaurs inferred from extant animals. Acta Palaeontologica Polonica 54(2):213-230.