December 15, 2014
I wrote last week that I can’t support Nature’s new broken-access initiative for two reasons: practically, I can’t rely on it; and philosophically, I can’t abide work being done to reduce utility.
More recently I read a post on Nature’s blog: Content sharing is *not* open access and why NPG is committed to both. It’s well worth reading: concise, clear and helpful. The key point they make is that “This is not a step back from open access or an attempt to undermine it. We see content sharing as an additional offering to open access, not instead of it”. But do read the article, as it provides useful background on NPG’s moves towards open access.
So NPG do look pretty much like the good guys here. They are not taking anything away; they are adding a thing that no-one is obliged to use; and they are carefully not claiming that this thing is something it’s not. What’s not to like? Surely at worst this has to have net zero value, yes?
The first thing is that, for me, the value is not more than zero, because articles that might evaporate at any moment are simply of no value to me as a researcher. If I am going to cite them, I need to have permanent copies, so I can check back on what I meant.
All right — but doesn’t that leave the value at least no less than zero?
Well, it depends. When I wrote last year about the travesty that is “walk-in access” — the ridiculous idea that you can physically go to a special magic building to use their anointed computers to read documents your own computer is perfectly capable of reading — I speculated:
I can only assume that was always the intention of the barrier-based publishers on the Finch committee that came up with this initiative: to deliver a stillborn access initiative that they can point to and say “See, no-one wants open access”.
It’s easy to imagine barrier-based publishers making the same point when take-up of NPG’s broken access is low. That’s one possible bad outcome that would make the broken-access offer a net negative.
Another, much more serious, one would be the fragmentation of the literature into multiple mutually incompatible subsets. In this dystopia, you’d have to read NPG papers on ReadCube, Elsevier papers using Mendeley, and so on. As Peter Murray-Rust noted:
Maybe we’ll shortly return to the browser-wars “this paper only viewable on Read-Cube”. If readers are brainwashed into compliance by technology restrictions our future is grim.
Say what you want about PDFs — and there is plenty to dislike about them — the format is at least defined by an open standard: anyone can write software to read and display it, and lots of different groups have created implementations. The idea of papers that can only be read by a specific program (almost certainly a proprietary one) is a horrifyingly retrograde one.
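As a small illustration of what an open standard buys you: because PDF’s page-description operators are publicly documented, even a few lines of stdlib Python can pull displayed text out of an uncompressed content stream. This is only a sketch — the content stream below is made up for the example, and real PDFs usually compress their streams — but the operators are the ones the spec defines.

```python
import re

# A made-up, uncompressed PDF content stream drawing one line of text.
# BT/ET bracket a text object; Tj shows the string operand at the
# current position. Real PDFs usually Flate-compress such streams,
# but the operators themselves are openly specified.
content_stream = b"""BT
/F1 12 Tf
72 720 Td
(Hello, open standards) Tj
ET"""

def extract_text(stream: bytes) -> list[str]:
    """Collect the string operands of Tj (show-text) operators."""
    return [m.decode("latin-1")
            for m in re.findall(rb"\((.*?)\)\s*Tj", stream)]

print(extract_text(content_stream))  # ['Hello, open standards']
```

That’s the whole point of an open format: nobody needed anyone’s permission, or anyone’s proprietary viewer, to write that.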
And here’s a third possible bad consequence. ReadCube is one of those applications that “phones home” — it tracks what you read. NPG say that this data is anonymised, but the opportunities for abuse are obvious. Suppose you look up a lot of papers about cancer and find that your health insurance premiums have gone up. You read papers about communist theory, and can’t get a place at the university you thought was keen to take you. Right now, this isn’t happening (so NPG assure us) but history does not give us reason to be optimistic about corporations owning big databases about user behaviour.
So the outcomes of NPG’s kind offer, intentionally or not, could include anti-OA propaganda based on poor uptake, fragmentation of the literature into technically incompatible subsets, and violation of researcher privacy.
Not a pretty prospect.
But here’s why I feel even worse about this: pointing it out feels like throwing a generous offer back in the faces of the people who made it. When I read Timo Hannay’s visionary exposition of what broken access is meant to achieve, and Steven Inchcoombe and Grace Baynes’s clear explanation of what it is and isn’t, I see good people honestly trying to do good work, and I hate to be so negative about it.
So my heartfelt apologies to Timo, Steven and Grace; but I gotta call ‘em like I see ‘em, and to me broken access looks like an offer with very low value, and carrying several significant threats.
What I would really like to see from NPG — an unequivocal good that I could celebrate unreservedly — would be for them to make all their articles properly open access (CC By) after one year. That would be a genuine and valuable contribution to the progress of research.
December 10, 2014
Today sees the description of Aquilops americanus (“American eagle face”), a new basal neoceratopsian from the Cloverly Formation of Montana, by Andy Farke, Rich Cifelli, Des Maxwell, and myself, with life restorations by Brian Engh. The paper, which has just been published in PLOS ONE, is open access, so you can download it, read it, share it, repost it, remix it, and in general do any of the vast scope of activities allowed under a CC-BY license, as long as we’re credited. Here’s the link – have fun.
Obviously ceratopsians are much more Andy’s bailiwick than mine, and you should go read his intro post here. In fact, you may well be wondering what the heck a guy who normally works on huge sauropod vertebrae is doing on a paper about a tiny ceratopsian skull. The short, short version is that I’m here because I know people.
The slightly longer version is that OMNH 34557, the holotype partial skull of Aquilops, was discovered by Scott Madsen back in 1999, on one of the joint Cloverly expeditions that Rich and Des had going on at the time. That the OMNH had gotten a good ceratopsian skull out of Cloverly has been one of the worst-kept secrets in paleo. But for various complicated reasons, it was still unpublished when I got to Claremont in 2008. Meanwhile, Andy Farke was starting to really rock out on ceratopsians at around that time.
For the record, the light bulb did not immediately go off over my head. In fact, it took a little over a year for me to realize, “Hey, I know two people with a ceratopsian that needs describing, and I also know someone who would really like to head that up. I should put these folks together.” So I proposed it to Rich, Des, and Andy in the spring of 2010, and here we are. My role on the paper was basically social glue and go-fer. And I drew the skull reconstruction – more on that in the next post.
Anyway, it’s not my meager contribution that you should care about. I am fairly certain that, just as Brontomerus coasted to global fame on the strength of Paco Gasco’s dynamite life restoration, whatever attention Aquilops gets will be due in large part to Brian Engh’s detailed and thoughtful work in bringing it to life – Brian has a nice post about that here. I am very happy to report that the three pieces Brian did for us – the fleshed-out head that appears at the top of this post and as Figure 6C in the paper, the Cloverly environment scene with the marauding Gobiconodon, and the sketch of the woman holding an Aquilops – are also available to the world under the CC-BY license. So have fun with those, too.
Finally, I need to thank a couple of people. Steve Henriksen, our Vice President for Research here at Western University of Health Sciences, provided funds to commission the art from Brian. And Gary Wisser in our scientific visualization center used his sweet optical scanner to generate the hi-res 3D model of the skull. That model is also freely available online, as supplementary information with the paper. So if you have access to a 3D printer, you can print your own Aquilops – for research, for teaching, or just for fun.
Next time: Aquilöps gets röck döts.
Farke, A.A., Maxwell, W.D., Cifelli, R.L., and Wedel, M.J. 2014. A ceratopsian dinosaur from the Lower Cretaceous of Western North America, and the biogeography of Neoceratopsia. PLoS ONE 9(12): e112055. doi:10.1371/journal.pone.0112055
December 9, 2014
It’s been a week since Nature announced what they are now calling “read-only sharing by subscribers” — a much more accurate title than the one they originally used on that piece, “Nature makes all articles free to view” [old link, which now redirects]. I didn’t want to leap straight in with a comment at the time, because this is a complex issue and I felt it better to give my thoughts time to percolate.
Meanwhile, other commentators have weighed in, and have mostly been pretty negative. John Wilbanks described it as “canonization of a system that says a small number of companies not only do control the world’s knowledge, but should control all the world’s knowledge”; Ross Mounce characterised it as “beggar access”; Peter Murray-Rust says “Nature’s fauxpen access leaves me very sad and very angry”. Perhaps surprisingly, Michael Eisen is more temperate, asking whether Nature’s policy is “a magnanimous gesture or a cynical ploy”, and concluding only “At the end of the day, this is a pretty cynical move”.
I am a bit more optimistic (although as you will see, still not really happy).
First of all, let’s say clearly that this is a step in a good direction. Nature‘s papers are now at least somewhat easier for regular people to get hold of, and that is to be applauded. Even if Mike Eisen’s cynical reading is correct, it’s still a net good.
But — and it’s a big but — I have a huge problem with the use of ReadCube, or any equivalent, to provide a crippled form of access. Rather than Ross’s term “beggar access”, which focusses on the need to get a subscriber to share a link, I think the best term to describe what Nature is offering here is “broken access”. Broken by deliberately locking the content into the ReadCube jail, to prevent printing, downloading, copy-pasting, etc.
My issue with this is two-fold: both practical and philosophical. Practically, PDFs are very far from perfect, but there’s a lot we can do with them (including printing, downloading, copy-pasting, etc.) Most crucially, when I download a PDF, I have it forever. I can refer back to it whenever I need it, without depending on a third party. It becomes part of my research toolkit. I know it’s not going to vanish when my back is turned.
By contrast, we never know when we’re going to be able to read these Nature papers. Certainly not when we’re offline. Maybe not when there’s a service outage. Probably not after the end of the one-year pilot. And you can’t build research on something that you can’t rely on existing. It’s not real.
But the philosophical issue, the one that really burns, is that ReadCube exists precisely in order to take away functionality. Its purpose is to make access limited, ephemeral, unreliable and less useful. And I find that offensive. The idea of doing work to remove functionality hurts me. The idea of all those clever people doing all that hard work to take functionality away. It’s wrong. It’s burning value.
So I end up feeling conflicted about the new Nature policy. It is a forward step; but one that I literally don’t ever see myself taking advantage of. A much more useful policy (to me anyway) would be to keep new articles under lock and key, but make them truly open after, say, a year. Because for a scientist, usefulness trumps timeliness.
Finally, Matt makes this point:
Nature papers are short, typically 5 pages or fewer. With big, modern monitors, you can usually get away with screen-shotting a whole page in one go, or in two takes and the world’s easiest GIMP stitch at worst. So by not allowing people to download the PDFs, all they’ve done is ensure that the people who really need their own offline copy will have to waste maybe 15 minutes assembling one. So the ‘barrier’ they’ve put up is low and crossable, it’s just annoying. Is that what Nature wants to be known for, annoying their users to death?
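Matt’s two-screenshot stitch really is trivial. As a sketch of just how trivial, here it is in Python using the Pillow imaging library — the blank images below stand in for real page screenshots, and the sizes are purely illustrative:

```python
from PIL import Image

def stitch_vertical(top, bottom):
    """Paste two page screenshots into one tall image, top over bottom."""
    width = max(top.width, bottom.width)
    out = Image.new("RGB", (width, top.height + bottom.height), "white")
    out.paste(top, (0, 0))
    out.paste(bottom, (0, top.height))
    return out

# Blank stand-ins for the two screenshots of a Nature page:
top = Image.new("RGB", (800, 600), "white")
bottom = Image.new("RGB", (800, 500), "white")
page = stitch_vertical(top, bottom)
page.save("page1.png")
```

Four lines of actual work — which is exactly Matt’s point: the barrier is not a barrier, just an annoyance.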
November 27, 2014
Despite the flagrant trolling of its title, Nature‘s recent opinion-piece Open access is tiring out peer reviewers is mostly pretty good. But the implication that the rise of open-access journals has increased the aggregate burden of peer-review is flatly wrong, so I felt obliged to leave a comment explaining why. Here is that comment, promoted to a post of its own (with minor edits for clarity):
Much of what is said here is correct and important, although it would be nice if Nature could make a bit more of an effort to avoid the obvious conflict-of-interest issues that lead it to title the piece so misleadingly as an attack on open access. I am glad that so many of the other commenters on this piece saw straight through that rather snide piece of propaganda.
Only one important error of interpretation here, I think. I quote:
The rise of the open-access (OA) movement compounds this effect [i.e. the increasing number of articles needing peer-review.] The business case for online OA journals, to which authors pay submission fees, works best at high volume. And for many of these journals, submitted work is published as long as it is methodologically sound. It does not have to demonstrate, for example, the novelty or societal relevance that some traditional journals demand.
The implication is that journals of this kind (PLOS ONE, PeerJ, the various Frontiers journals) increase the total peer-review burden. In fact, the exact opposite is the case. They greatly reduce the total amount of peer reviewing.
It’s an open secret that nearly every paper eventually gets published somewhere. Under the old regime, the usual approach is to “work down the ladder”, submitting the same paper repeatedly to progressively less prestigious journals until it reaches one that is prepared to publish work of the supplied level of sexiness. As a result, many papers go through four, five or more rounds of peer-review before finally finding a home. Instead, such papers when submitted to a review-for-soundness-only venue such as PLOS ONE require only a single round of review. (Assuming of course that they are indeed methodologically sound!)
The rise of review-for-soundness-only journals (“megajournals”) is an unequivocal improvement in the scientific publishing landscape, and should be welcomed by all parties: authors, who no longer have to submit to the monumental waste of time and effort that is the work-down-the-ladder system; readers, who get access to new research much more quickly; and editors and reviewers who no longer have to burn hours re-reviewing and re-re-reviewing perfectly good papers that have already been repeatedly rejected for a perceived lack of glamour.
November 22, 2014
Matt’s post yesterday was one of several posts on this blog that have alluded to Clay Shirky’s now-classic article How We Will Read [archived copy]. Here is the key passage that we keep coming back to:
Publishing is not evolving. Publishing is going away. Because the word “publishing” means a cadre of professionals who are taking on the incredible difficulty and complexity and expense of making something public. That’s not a job anymore. That’s a button. There’s a button that says “publish,” and when you press it, it’s done.
In ye olden times of 1997, it was difficult and expensive to make things public, and it was easy and cheap to keep things private. Privacy was the default setting. We had a class of people called publishers because it took special professional skill to make words and images visible to the public. Now it doesn’t take professional skills. It doesn’t take any skills. It takes a WordPress install.
… and of course as SV-POW! itself demonstrates, it doesn’t even need a WordPress install — you can just use the free online service.
This passage has made a lot of people very excited; and a lot of other people very unhappy and even angry. There are several reasons for the widely differing responses, but I think one of the important ones is a pun on the word “publish”.
When Shirky uses the word, he is talking about making something public, available to the world. Which after all is its actual meaning.
But when academics use the word “publish” they usually mean something quite different — they mean the whole process that a research paper goes through between submission and a PDF appearing in a stable location (and in some cases, copies being printed). That process involves many other aspects besides actual publishing — something that in fact Shirky goes straight on to acknowledge:
The question isn’t what happens to publishing — the entire category has been evacuated. The question is, what are the parent professions needed around writing? Publishing isn’t one of them. Editing, we need, desperately. Fact-checking, we need. For some kinds of long-form texts, we need designers.
And this is dead on target. Many writers need editors[*], to varying degrees. Fact-checking could be equated with peer-review, which we pretty much all agree is still very important. Most academic publishers do a certain amount of design (although I suspect that in the great majority of cases this is 99% automatic, and probably involves human judgement only in respect of where to position the illustrations).
But due to the historical accident that it used to be difficult and costly to make and distribute copies, all those other tasks — relatively inexpensive ones, back in the days when distribution was the expensive thing — have become bundled with the actual publishing. With hilarious consequences, as they say. You know, “hilarious” in the sense of “tragic, and breathtakingly frustrating”.
That’s why we’re stuck in an idiot world where, when we need someone to peer-review our manuscript, we usually trade away our copyright in exchange (and not even to the people who provide the expert review). If you stop and think about that for a moment, it makes absolutely no sense. When I recently wrote a book about Doctor Who, I had several people proofread it, but I didn’t hand over copyright to any of them. My ability to distribute copies was not hobbled by having had independent eyes look it over. There is no reason why it should have, and there is no reason why our ability to distribute copies of our academic works should be limited, either.
What we need is the ability to pay a reasonable fee for the services we need — peer-review, layout design, reference linking — and have the work published freely.
Well, wouldja lookit that. Looks like I just invented Gold Open Access.
Is publishing just a button? Yes. Making things public is now trivial to do, and in fact much of what so-called publishers now do is labouring to prevent things from being public. But we do need other things apart from actual publishing — things that publishers have historically provided, for reasons that used to make sense but no longer do.
Exactly what those things are, and how extensive and important they are, is a discussion for another day, but they do exist.
[*] Note: the whole issue of academic publishing is further confused by another pun, this one on the word “editor”. When Shirky refers to editors, he means people who sharpen up an author’s prose — cutting passages, changing word choices, etc. Academic editors very rarely do that, and would be resented if they did. In our world, an “editor” is usually the nominally independent third party who solicits and evaluates peer-reviews, and makes the accept/reject decision. Do we need editors, in this academic sense? We’ll discuss that properly another time, but I’ll say now that I am inclined to think we do.
November 16, 2014
At the end of October, we published a short piece called CC-By documents cannot be re-enclosed if their publisher is acquired. In an interesting discussion in the comments, moominoid asked:
Isn’t this what happened when DeGruyter acquired BEPress?
This is the announcement of the acquisition. If you visit the journals now, they are behind paywalls, when they were OA before the acquisition.
Having previously read (and commented favourably on) an interview with bepress CEO Jean-Gabriel Bankier, I was disappointed to think this might be true. I emailed him to ask for clarification, and he passed my message on to Irene Kamotsky, bepress’s Director of Strategic Initiatives. A little later, she sent a helpful and detailed response, which I now reproduce with her permission.
Date: 10 November 2014 14:27
From: Irene Kamotsky
To: Mike Taylor
Cc: Jean-Gabriel Bankier
Subject: Re: Previous OA journals enclosed behind paywalls?
I apologize for sitting on this for so long — thank you so much for following up, and for clarifying what was, indeed, always a bit confusing about the bepress-published journals that are now with deGruyter.
To answer your question, the bepress journals were not open access in the formal (Budapest) definition of the term, and they never used a CC license. The copyright was traditional publisher-owned copyright, with permission to authors to post their articles on their websites and university IRs.
The bepress journals did have an unusual access policy: we made all articles available to readers for free, as a way to demonstrate demand and urge libraries to subscribe. Basically, if a guest filled out a short form we would grant them access to the article. We would tally those forms by institution and then call the library and ask them to subscribe. There’s an article in Learned Publishing that describes the model in more detail. It wasn’t open access but it was a good balance for many years. Unfortunately, libraries facing strong budget pressures stopped subscribing. They said “we can’t justify paying for a title that our authors can get for free. We have to spend the money on titles that are otherwise unavailable.”
At the same time, we had already developed our institutional repository and publishing platform called Digital Commons. This platform allowed libraries to host and publish their own faculty’s and students’ journals (among all the other digital scholarly content produced on campus), and this has turned out to be an extremely successful approach. There are now nearly 800 journals published by libraries using Digital Commons, the vast majority of which are open access (and none charge author article fees). You can see a brief overview of this new model in a recent report.
I’d be happy to talk more about the new direction in library-led publishing; I know this is a growing interest among UK libraries. Is this something you’re seeing as well?
Thanks again for getting in touch, and clarifying this point.
November 6, 2014
You linked to the preprint of your The neck of Barosaurus was not only longer but also wider than those of Diplodocus and other diplodocines submission – does this mean that it has not yet been formally published?
As so often in these discussions, it depends what we mean by our terms. The Barosaurus paper, like this one on neck cartilage, is “published” in the sense that it’s been released to the public, and has a stable home at a well known location maintained by a reputable journal. It’s open for public comment, and can be cited in other publications. (I notice that it’s been cited in Wikipedia). It’s been made public, which after all is the root meaning of the term “publish”.
On the other hand, it’s not yet “published” in the sense of having been through a pre-publication peer-review process, and perhaps more importantly it’s not yet been made available via other channels such as PubMed Central — so, unlike say our previous PeerJ paper on sauropod neck anatomy, it would in some sense go away if PeerJ folded or were acquired by a hostile entity. But then the practical truth is of course that we’d just make it directly available here on SV-POW!, where any search would find it.
In short, the definition of what it means for a paper to be “published” is rather fluid, and is presently in the process of drifting. More than that, conventions vary hugely between fields. In maths and astronomy, posting a preprint on arXiv (their equivalent of PeerJ Preprints, roughly) pretty much is publication. No-one in those fields would dream of not citing a paper that had been published in that way, and reputations in those fields are made on the basis of arXiv preprints. [Note: I was mistaken about this, or at least oversimplified. See David Roberts’ and Michael Richmond’s comments below.]
Maybe the most practical question to ask about the published-ness or otherwise of a paper is, how does it affect the author’s job prospects? When it comes to evaluation by a job-search panel, or a promotion committee, or a tenure board, what counts? And that is a very hard question to answer, as it depends largely on the institution in question, the individuals on the committee, and the particular academic field. My gut feeling is that if I were looking for a job in palaeo, the Barosaurus preprint and this cartilage paper would both count for very little, if anything. But, candidly, I consider that a bug in evaluation methods, not a problem with pre-printing per se. But then again, it’s very easy for me to say that, as I’m in the privileged position of not needing to look for a job in palaeo.
For Matt and me, at least as things stand right now, we do feel that we have unfinished business with these papers. In their present state, they represent real work and a real (if small) advance in the field; but we don’t feel that our work here is done. That’s why I submitted the cartilage paper for peer-review at the same time as posting it as a preprint (it’s great that PeerJ lets you do both together); and it’s why one of Matt’s jobs in the very near future will be getting the Barosaurus paper revised in accordance with the very helpful reviews that we received, and then also submitted for peer-review. We do still want that “we went through review” badge on our work (without believing it means more than it really does) and the archiving in PubMed Central and CLOCKSS, and the removal of any reason for anyone to be unsure whether those papers “really count”.
But I don’t know whether in ten years, or even five, our attitude will be the same. After all, it changed long ago in maths and astronomy, where — glory be! — papers are judged primarily on their content rather than on where they end up published.