The longest cell in Andy Farke is one of the primary afferent (sensory) neurons responsible for sensing vibration or fine touch, which runs from the tip of his big toe to his brainstem. (NB: I have not actually dissected Andy to confirm this, or performed any viral neuron tracing on him, this is assumed based on comparative anatomy.) Here’s a diagram:
Longest cell in Andy Farke

This is what happens when (a) I need to create a diagram to illustrate the longest cell in the human body for my students, and (b) my friends put stuff online with a CC-BY license.

Found this while I was checking out Aquilops art online:

Aquilops_scale

It’s a derivative work by IJReid, from this Wikimedia page, based on two PhyloPic silhouettes Andy created (go here for the pathetically tiny lower vertebrate and here for Aquilops).

wedel-rln-fig2

From there it was pretty straightforward to mash up Andy’s silhouette with the nerve stuff from Wedel (2012: fig. 2).

So if you want the full deets on licensing – which I am obligated to provide whether you want them or not – the image up top is a derivative image by me, based on work by Andy published at PhyloPic under the Creative Commons Attribution 3.0 Unported (CC-BY 3.0) license, and based on my own image published in Acta, also under a CC-BY license.

If you’d like to know more about the science behind very long nerves in vertebrates, please see these posts:

Also, keep making stuff and putting it online under a license people can actually use. It’s beneficial for science and education, and hugely entertaining for me.

Reference

Wedel, M.J. 2012. A monument of inefficiency: the presumed course of the recurrent laryngeal nerve in sauropod dinosaurs. Acta Palaeontologica Polonica 57(2):251-256.

We as a community often ask ourselves how much it should cost to publish an open-access paper. (We know how much it does cost, roughly: typically $3000 with a legacy publisher, or an average of $900 with a born-open publisher, or nothing at all for many journals.)

We know that peer review is essentially free to publishers, being donated by scholars. We know that most handling editors also work for free or for peanuts. We know that hosting things on the Web is cheap (“publishing [in this sense] is just a button”).

Publishers have costs associated with rejecting manuscripts — checking that they’re by real people at real institutions, scanning for obvious pseudo-scholarship, etc. But let’s ignore those costs for now, as being primarily for the benefit of the publishers rather than the author. (When I pay a publisher an APC, they’re not serving me directly by running plagiarism checks.)

The tendency of many discussions I’ve been involved in has been that the main technical contribution of publishers is the process that is still, for historical reasons, known as “typesetting” — that is, the transformation of the manuscript from an opaque form like an MS-Word file (or indeed a stack of hand-written sheets) into a semantically rich representation such as JATS XML. From there, actual typesetting into HTML or a pretty PDF can be largely automated.
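To make the “largely automated” part concrete, here is a minimal sketch in Python (standard library only). The element names loosely follow JATS conventions, but both the document and the transform are toys of my own devising, not the real JATS schema or any publisher’s actual production pipeline; the point is only that once the content is semantically tagged, rendering it as HTML is a mechanical tree-walk.

```python
# Toy illustration: semantically tagged XML (JATS-like, not real JATS) in, HTML out.
import xml.etree.ElementTree as ET

JATS_LIKE = """
<article>
  <front>
    <article-title>A monument of inefficiency</article-title>
    <contrib-group><contrib>Wedel, M.J.</contrib></contrib-group>
  </front>
  <body>
    <sec>
      <title>Introduction</title>
      <p>The recurrent laryngeal nerve takes a long detour.</p>
    </sec>
  </body>
</article>
"""

def render_html(xml_text: str) -> str:
    """Walk the tagged article and emit minimal HTML."""
    root = ET.fromstring(xml_text)
    parts = ["<html><body>"]
    parts.append(f"<h1>{root.findtext('.//article-title')}</h1>")
    for contrib in root.findall(".//contrib"):
        parts.append(f"<p class='author'>{contrib.text}</p>")
    for sec in root.findall(".//sec"):
        parts.append(f"<h2>{sec.findtext('title')}</h2>")
        for p in sec.findall("p"):
            parts.append(f"<p>{p.text}</p>")
    parts.append("</body></html>")
    return "\n".join(parts)

print(render_html(JATS_LIKE))
```

The expensive, human part is producing the tagged XML from the Word file in the first place; everything downstream of that is scriptable, which is exactly why the tagging step is where the real cost lives.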

So: what does it cost to typeset a manuscript?

First data point: I have heard that Kaveh Bazargan’s River Valley Technologies (the typesetter that PeerJ and many more mainstream publishers use) charges between £3.50 and £9 per page, including XML, graphics, PDF generation and proof correction.

Second data point: in a Scholarly Kitchen post that Kent Anderson intended as a criticism of PubMed Central but which in fact makes a great case for what good value it provides, he quotes an email from Kent A. Smith, a former Deputy Director of the NLM:

Under the % basis I am using here $47 per article. John [Mullican, a program analyst at NCBI] and I looked at this yesterday and based the number on a sampling of a few months billings. It consists on the average of about $34-35 per tagged article plus $10-11 for Q/A plus administrative fees of $2-3, where applicable.

Using the quoted figure of $47 per PMC article and the £6.25 midpoint of River Valley’s range of per-page prices (= $9.68 per page), that would be consistent with typical PMC articles being a bit under five pages long. The true average article length is probably somewhat higher — maybe twice that or more — but this seems to be at least in the same ballpark.

Third data point: Charles H. E. Ault, in a comment on that Scholarly Kitchen post, wrote:

As a production director at a small-to-middling university press that publishes no journals, I’m a bit reluctant to jump into this fray. But I must say that I am astonished at how much PMC is paying for XML tagging. Most vendors looking for the small amount of business my press can offer (say, maybe 10,000 pages a year at most) charge considerably less than $0.50 per page for XML tagging. Assuming a journal article is about 30 pages long, it should cost no more than $15 for XML tagging. Add another few bucks for quality assurance, and you might cross the $20 threshold. Does PMC have to pay a federally mandated minimum rate, like bridge construction projects? Where can I submit a bid?

I find the idea of 50-cent-per-page typesetting hard to swallow — it’s more than an order of magnitude cheaper than the River Valley/PMC level, and I’d like to know more about Ault’s operation. Is what they’re doing really comparable with what the others are doing?
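In case it’s useful, here is the back-of-envelope arithmetic behind those comparisons as a quick Python sketch. The pound-to-dollar rate is my assumption (chosen to be roughly consistent with the $9.68 per page quoted above); the other numbers are just the figures already given, so treat the output as illustrative rather than authoritative.

```python
# Rough comparison of the three typesetting-cost data points quoted above.
GBP_TO_USD = 1.55  # assumed exchange rate, roughly consistent with $9.68/page for £6.25

river_valley_per_page = 6.25 * GBP_TO_USD  # midpoint of River Valley's quoted £3.50-£9.00 range
pmc_per_article = 47.00                    # Kent Smith's per-article figure for PMC
ault_per_page = 0.50                       # Ault's "less than $0.50 per page", taken at face value

# If PMC paid River Valley-style per-page rates, how long would its typical article be?
implied_pages = pmc_per_article / river_valley_per_page
print(f"Implied PMC article length: {implied_pages:.1f} pages")  # about 4.9 pages

# How far apart are the two per-page estimates?
print(f"River Valley / Ault ratio: about {river_valley_per_page / ault_per_page:.0f}x")  # about 19x
```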

Are there other estimates out there?


Re-reading an email that Matt sent me back in January, I see this:

One quick point about [an interesting sauropod specimen]. I can envision writing that up as a short descriptive paper, basically to say, “Hey, look at this weird thing we found! Morrison sauropod diversity is still underestimated!” But I honestly doubt that we’ll ever get to it — we have literally years of other, more pressing work in front of us. So maybe we should just do an SV-POW! post about the weirdness of [that specimen], so that the World Will Know.

Although as soon as I write that, I think, “Screw that, I’m going to wait until I’m not busy* and then just take a single week* and rock out a wiper* on it.”

I realize that this way of thinking represents a profound and possibly psychotic break with reality. *Thrice! But it still creeps up on me.

(For anyone not familiar with the “wiper”, it refers to a short paper of only one or two pages. The etymology is left as an exercise for the reader.)

It’s just amazing how we keep on and on falling for this delusion that we can get a paper out quickly, even when we know perfectly well, going into the project, that it’s not going to work out that way. To pick a recent example, my paper on quantifying the effect of intervertebral cartilage on neutral posture was intended to be literally one page, an addendum to the earlier paper on cartilage: title, one paragraph of intro, diagram, equation, single reference, DONE! Instead, it landed up being 11 pages long with five illustrations and two tables.

I think it’s a reasonable approximation to say that any given project will require about an order of magnitude more work than we expect at the outset.

Even as I write this, the top of my palaeo-work priority list is a paper that I’m working on with Matt and two other colleagues, which he kicked off on 6 May, writing:

I really, really want to kill this off absolutely ASAP. Like, seriously, within a week or two. Is that cool? Is that doable?

To which I idiotically replied:

IT SHALL BE SO!

A month and a bit later, the answers to Matt’s questions are clear. Yes, it’s cool; and no, it’s not doable.

The thing is, I think that’s … kind of OK. The upshot is that we end up writing reasonably substantial papers, which is after all what we’re meant to be trying to do. If the reasonably substantial papers that end up getting written aren’t necessarily the ones we thought they were going to be, well, that’s not a problem. After all, as I’ve noted before, my entire Ph.D. dissertation was composed of side-projects, and I never got around to doing the main project. That’s fine.

In 2011, Matt’s tutorial on how to find problems to work on discussed in detail how projects grow and mutate and anastomose. I’m giving up on thinking that this is a bad thing, abandoning the idea that I ought to be in control of my own research program. I’m just going to keep chasing whatever rabbits look good to me at the time, and see what happens.

Onwards!

Somehow this seems to have slipped under the radar: National Science Foundation announces plan for comprehensive public access to research results. They put it up on 18 March, two whole months ago, so our apologies for not having said anything until now!

This is the NSF’s rather belated response to the OSTP memo on Open Access, back in February 2013. That memo required all Federal agencies that spend more than $100 million a year on research and development to develop OA policies, broadly in line with the NIH’s existing policy, which gave us PubMed Central. Various agencies have been turning up with policies, but for those of us in palaeo, the NSF’s is the big one — I imagine it funds more palaeo research than all the others put together.

So far, so awesome. But what exactly is the new policy? The press release says papers must “be deposited in a public access compliant repository and be available for download, reading and analysis within one year of publication”, but says nothing about what repository should be used. It’s lamentable that a full year’s embargo has been allowed, but at least the publishers’ CHORUS land-grab hasn’t been allowed to hobble the whole thing.

There’s a bit more detail here, but again it’s oddly coy about where the open-access works will be placed: it just says they must be “deposited in a public access compliant repository designated by NSF”. The executive summary of the actual plan also refers only to “a designated repository”.

Only in the full 31-page plan itself does the detail emerge. From page 5:

In the initial implementation, NSF has identified the Department of Energy’s PAGES (Public Access Gateway for Energy and Science) system as its designated repository and will require NSF-funded authors to upload a copy of their journal articles or juried conference paper to the DOE PAGES repository in the PDF/A format, an open, non-proprietary standard (ISO 19005-1:2005). Either the final accepted version or the version of record may be submitted. NSF’s award terms already require authors to make available copies of publications to the Cognizant Program Officers as part of the current reporting requirements. As described more fully in Sections 7.8 and 8.2, NSF will extend the current reporting system to enable automated compliance.

Future expansions, described in Section 7.3.1, may provide additional repository services. The capabilities offered by the PAGES system may also be augmented by services offered by third parties.

So what is good and bad about this?

Good. It makes sense to me that they’re re-using an existing system rather than wasting resources and increasing fragmentation by building one of their own.

Bad. It’s a real shame that they mandate the use of PDF, “the hamburger that we want to turn back into a cow”. It’s a terrible format for automated analysis, greatly inferior to the JATS XML format used by PubMed Central. I don’t understand this decision at all.

Then on page 9:

In the initial implementation, NSF has identified the DOE PAGES system to support managing journal articles and juried conference papers. In the future, NSF may add additional partners and repository services in a federated system.

I’m not sure where this points. In an ideal world, it would mean some kind of unifying structure between PAGES and PubMed Central and whatever other repositories the various agencies decide to use.

Anyone else have thoughts?

Update from Peter Suber, later that day

Over on Google+, Peter Suber comments on this post. With his permission, I reproduce his observations here:

My short take on the policy’s weaknesses:

  • will use Dept of Energy PAGES, which at least for DOE is a dark archive pointing to live versions at publisher web sites
  • plans to use CHORUS (p. 13) in addition to DOE PAGES
  • requires PDF
  • silent on open licensing
  • only mentions reuse for data (pp. v, 18), not articles, and only says it will explore reuse
  • silent on reuse for articles even tho it has a license (p. 10) authorizing reuse
  • silent on the timing of deposits

I agree with you that a 12 month embargo is too long. But that’s the White House recommended default. So I blame the White House for this, not NSF.

To be more precise, PAGES favors publisher-controlled OA in one way, and CHORUS does it in another way. Both decisions show the effect of publisher lobbying on the NSF, and its preference for OA editions hosted by publishers, not OA editions hosted by sites independent of publishers.

So all in all, the NSF policy is much less impressive than I’d initially thought and hoped.

In response to my post Copyright from the lens of reality and other rebuttals of his original post, Elsevier’s General Counsel Mark Seeley has provided a lengthy comment. Here’s my response (also posted as a comment on the original article, but I’m waiting for it to be moderated).



Hi, Mark, thanks for engaging. You write:

With respect to the societal bargain, I would simply note that, in my view, the framers believed that by providing rights they would encourage creative works, and that this benefits society as a whole.

Here, at least, we are in complete agreement. Where we part company is that in my view the Eldred v. Ashcroft decision (essentially that copyright terms can be extended indefinitely) was a travesty of the original intent of copyright, and clearly intended for the benefit of copyright holders rather than that of society in general. (I further note in passing that those copyright holders are only rarely the creative people, but rights-holding corporations whose creative contribution is negligible.)

You continue:

[Journal] services and competencies need to be supported through a business model, however, and in the mixed economy that we have at the moment, this means that many journals will continue to need subscription and purchase models.

This is a circular argument. It comes down to “we use restrictive copyright on scholarly works at present, so we need to continue to do so”. In fact, this is not an argument at all, merely an assertion. If you want it to stick, you need to demonstrate that the present “mixed economy” is a good thing — something that is very far from evident.

The alternatives to a sound business model rooted in copyright are in my view unsustainable. I worry about government funding, patronage from foundations, or funding by selling t-shirts—I am not sure that these are viable, consistent or durable. Governments and foundations can change their priorities, for example.

If governments and foundations decide to stop funding research, we’re all screwed, and retention of copyright on the papers we’re no longer able to research and write will be the least of our problems. The reality is that virtually everyone in research is already dependent on governments and foundations for the 99% of their funding that covers all the work before the final step of publication. Taking the additional step of relying on those same sources for the last 1% of funding is eminently sensible.

On Creative Commons licences, I don’t think we have any material disagreement.

Now we come to the crucial question of copyright terms (already alluded to via Eldred v. Ashcroft above). You contend:

Copyright law was most likely an important spur for the author or publisher to produce and distribute the work [that is now in the public domain] in the first place.

In principle, I agree — as of course did the framers of the US Constitution and other lawmakers that have passed copyright laws. But as you will well know, the US’s original copyright act of 1790, which stated its purpose as “encouragement of learning”, offered a term of 14 years, with an optional renewal of a further 14 years if the author was still alive at the end of the initial term. This 14-year term was considered quite sufficient to incentivise the creation of new works. The intent of the present law seems to be that authors who have been dead for 70 years still need to receive royalties for their works, and in the absence of such royalties would not have created in the first place. This is self-evident nonsense. No author in the history of the world ever said “I would have written a novel if I’d continued to receive royalties until 70 years after my death, but since royalties will only last 28 years I’m not going to bother”.

But — and this can’t be stated strongly enough — even if there were some justification for the present ridiculous copyright terms in the area of creative works, it would still say nothing whatsoever about the need to copyright scientific writing. No scientific researcher ever wrote a paper that they would not have written just as readily in the absence of copyright. That’s what we’re talking about here. One of the tragedies of copyright is that it’s been extruded from a domain where it has some legitimate purpose into a domain where it has none.

The Budapest Open Access Initiative said it best and most clearly: “the only role for copyright in this domain [scholarly research] should be to give authors control over the integrity of their work and the right to be properly acknowledged and cited”. (And several of the BOAI signatories have expressed regret over even the controlling-integrity-of-the-work part of this.)



See also David Roberts’ response to Seeley’s posting.

This post is a response to Copyright from the lens of a lawyer (and poet), posted a couple of days ago by Elsevier’s General Counsel, Mark Seeley. Yes, I am a slave to SIWOTI syndrome. No, I shouldn’t be wasting my time responding to this. Yes, I ought to be working on that exciting new manuscript that we SV-POW!er Rangers have up and running. But but but … I can’t just let this go.

duty_calls

Copyright from the lens of a lawyer (and poet) is a defence of Elsevier’s practice of having copyright encumber scientific publishing. I tried to read it in the name of fairness. It didn’t go well. The very first sentence is wrong:

It is often said that copyright law is about a balance of interests and communities, creators and users, and ultimately society as a whole.

No. Copyright is not a balance between competing interests; it’s a bargain that society makes. We, the people, give up some rights in exchange for incentivising creative people to make new work, because that new work is of value to society. To quote the US Constitution’s helpful clause, copyrights exist “To promote the Progress of Science and useful Arts” — not for authors, but for wider society. And certainly not for publishers who coerce authors into donating copyright!

(To be fair to Seeley, he did hedge by writing “It is often said that copyright law is about a balance”. That is technically true. It is often said; it’s just wrong.)

Well, that’s three paragraphs on the first sentence of Elsevier’s defence of copyright. I suppose I’d better move on.

The STM journal publishing sector is constantly adjusting to find the right balance between researcher needs and the journal business model, as refracted through copyright.

Wrong wrong wrong. We don’t look for a balance between researchers’ needs (i.e. science) and the journal business model. Journals are there to serve science. That’s what they’re for.

Then we have the quote from Mark Fischer:

I submit that society benefits when the best creative spirits can be full-time creators and not part-timers doing whatever else (other than writing, composing, painting, etc.) they have to do to pay the rent.

This may be true. But it is totally irrelevant to scholarly copyright. That should hardly need pointing out, but here it is for those hard of thinking. Scholars make no money from the copyright in the work they do, because (under the Elsevier model) they hand that copyright over to the publisher. Their living comes in the form of grants and salaries, not royalties.

Ready for the next one?

The alternatives to a copyright-based market for published works and other creative works are based on near-medieval concepts of patronage, government subsidy […]

Woah! Governments subsidising research and publication is “near-medieval”? And there we were thinking it was by far the most widespread model. Silly us. We were all near-medieval all this time.

Someone please tell me this is a joke.

Moving swiftly on …

Loud advocates for “copyright reform” suggest that the copyright industries have too much power […] My comparatively contrarian view is that this ignores the enormous creative efforts and societal benefits that arise from authoring and producing the original creative work in the first place: works that identify and enable key scientific discoveries, medical treatments, profound insights, and emotionally powerful narratives and musical experiences.

Wait, wait. Are we now saying that … uh, the only reason we get scientific discoveries and medical treatments is … er … because of copyright? Is that it? That can’t be it. Can it?

Copyright has no role in enabling this. None.

In fact, it’s worse than that. The only role of copyright in modern scholarly publishing is to prevent societal benefits arising from scientific and medical research.

The article then wanders off into an (admittedly interesting) history of Seeley’s background as a poet, and as a publisher of literary magazines. The conclusion of this section is:

Of course creators and scientists want visibility […] At the very least, they’d like to see some benefit and support from their work. Copyright law is a way of helping make that happen.

This article continues to baffle. The argument, if you want to dignify it with that name, seems to be:

  • poets like copyright
  • => we copyright other people’s science
  • => … profit!

Well, that was incoherent. But never mind: finally we come to part of the article that makes sense:

  • There is the “idea-expression” dichotomy — that copyright protects expression but not the fundamental ideas expressed in a copyright work.

This is correct, of course. That shouldn’t be cause for comment, coming from a copyright lawyer, but the point needs to be made because the last time an Elsevier lawyer blogged, she confused plagiarism with copyright violation. So in that respect, this new post is a step forward.

But then the article takes a sudden left turn:

The question of the appropriateness of copyright, or “authors’ rights,” in the academic field, particularly with respect to research journal articles, is sometimes controversial. In a way quite similar to poets, avant-garde literary writers and, for that matter, legal scholars, research academics do not rely directly on income from their journal article publishing.

Er, wait, what? So you admit that scholarly authors do not benefit from copyright in their articles? We all agree, then, do we? Then … what was the first half of the article supposed to be about?

And in light of this, what on earth are we to make of this:

There is sometimes a simplistic “repugnance” about the core publishing concept that journal publishers request rights from authors and in return sell or license those rights to journal subscribers or article purchasers.

Seeley got that much right! (Apart from the mystifyingly snide use of “simplistic” and the inexplicable scare-quotes.) The question is why he considers this remotely surprising. Why would anyone not find such a system repugnant? (That was a rhetorical question, but here’s the answer anyway: because they make a massive profit from it. That is the only reason.)

Well, we’re into the final stretch. The last paragraph:

Some of the criticism of the involvement of commercial publishing and academic research is simply prejudice, in my view;

Yes. Some of us are irrationally prejudiced against a system where, having laboriously created new knowledge, it’s then locked up behind a paywall. It’s like the irrational prejudice some coal-miners have against the idea of the coal they dig up being immediately buried again.

And finally, this:

Some members of the academic community […] base their criticism on idealism.

Isn’t that odd? I have never understood why some people consider “idealism” to be a criticism. I accept it as high praise. People who are not idealists have nothing to base their pragmatism on. They are pragmatic, sure, but to what end?

So what are we left with? What is Seeley’s article actually about? It’s very hard to pick out a coherent thread. If there is one, it seems to be this: copyright is helpful for some artists, so it follows that scholarly authors should donate their copyright to for-profit publishers. That is a consequence that, to my mind, does not follow particularly naturally from the hypothesis.

While Mike’s been off having fun at the Royal Society, this has been happening:

Lots of feathers flying right now over the situation at the Medical Journal of Australia (MJA). The short, short version is that AMPCo, the company that publishes MJA, made plans to outsource production of the journal, and apparently some sub-editing and administrative functions as well, to Elsevier. MJA’s editor-in-chief, Professor Stephen Leeder, raised concerns about the journal getting involved with one of the most ethically problematic publishing companies in existence. And also about this having been done without consultation.

He was sacked for his trouble.

After Leeder was pushed out, his job was offered to MJA’s deputy editor, Tania Janusic. She declined, and resigned from the journal, as did 19 of the 20 members of the journal’s editorial advisory committee. (Some accounts say 18. Anyway, 90%+ of the committee is gone.)

When we first discussed the situation via email, Mike wrote, “My take is that at the present stage of the OA transition, editorial board resignations from journals controlled by predatory legacy publishers are about the most important visible steps that can be taken. Very good news for the world, even though it must be a mighty pain for the people involved.”

Yes. I feel pretty bad for the people involved, but I’m hugely supportive of what they’re doing.

I don’t know what we can do to materially contribute here, beyond amplifying the signal and lending our public support to Leeder, Janusic, and the 19 editors who resigned. That’s a courageous thing to do, but no-one should have to do it. The sooner we move to a world where scientific results and other forms of scholarly publication are freely available to all, instead of under the monopolistic control of a handful of exploitative, hugely profitable corporations, the better.

A short list of links, nowhere near exhaustive, if you’d like to read more:

UPDATE: In the first comment below, Alex Holcombe pointed us to this post written by Leeder himself, explaining the reasoning behind his decision and its consequences.

Also, dunno how I forgot this – if you haven’t already, you might be interested in signing the Cost of Knowledge boycott against Elsevier. Here’s the link.
