
Update, January 21, 2013: YES, it was! Scroll down for links to the entire saga.

Because it’s doing a hell of an impression of one, if not. It’s got the huge cervical rib loops (wings), bifurcated neural spine (top fins), and even a condyle on the front of the centrum (cockpit pod). About all it’s missing are the zygapophyses and the cervical ribs themselves.

Some actual Apatosaurus cervicals for comparison, from previous posts:

Apatosaurus ajax NSMT-PV 20375, cervical vertebrae 3, 6 and 7 in anterior and posterior views. Modified from Upchurch et al. (2005: plate 2)

Apatosaurus parvus CM 563/UWGM 15556 cervicals 7, 5, 4 and 3 in anterior and right lateral views, from Gilmore (1936:pl. 31)

Various Apatosaurus cervicals; see Wedel and Sanders (2002) for specimen numbers and sources.

And of course Mike’s magisterial work photographing the Apatosaurus ajax holotype YPM 1860 cervical:

More on the Umbaran Starfighter here.

The complete Umbaran Starfighter Saga–at least as told on SV-POW!:

For other Star Wars/paleontology crossovers, please see:

The sauropods of Star Wars

The sauropods of Star Wars: Special Edition

and–mostly as shameless self-promotion since the paleo link is pretty tenuous:

Tales of the Flaming Vagabond


We know that most academic journals and edited volumes ask authors to sign a copyright transfer agreement before proceeding with publication. When this is done, the publisher becomes the owner of the paper; the author may retain some rights according to the grace or otherwise of the publisher.

Plenty of authors have rightly railed against this land-grab, which publishers have been quite unable to justify. On occasion we’ve found ways to avoid the transfer, including the excellent structured approach that is the SPARC Author Addendum and my tactic of transferring copyright to my wife.

Works produced by the U.S. Federal Government are not protected by copyright. For example, papers written by Bill Parker as part of his work at Petrified Forest National Park are in the public domain.

Journals know this, and have clauses in their copyright transfer agreements to deal with it. For example, Elsevier’s template agreement has a box to check that says “I am a US Government employee and there is no copyright to transfer”, and the publishing agreement itself reads as follows (emphasis added):

Assignment of publishing rights
I hereby assign to <Copyright owner> the copyright in the manuscript identified above (government authors not electing to transfer agree to assign a non-exclusive licence) and any supplemental tables, illustrations or other information submitted therewith that are intended for publication as part of or as a supplement to the manuscript (the “Article”) in all forms and media (whether now known or hereafter developed), throughout the world, in all languages, for the full term of copyright, effective when and if the article is accepted for publication.

So journals and publishers are already set up to deal with public domain works that have no copyright. And that made me wonder why this option should be restricted to U.S. Federal employees.

What would happen if I just unilaterally place my manuscript in the public domain before submitting it? (This is easy to do: you can use the Creative Commons CC0 tool.)

Once I’d done that, I would be unable to sign a copyright transfer agreement, not merely unwilling. I wouldn’t need to argue with publishers, “Oh, I don’t want to sign that”. It would be simpler than that: it would just be “There is no copyright to transfer”.

What would publishers say?

What could they say?

“We only publish public-domain works if they were written by U.S. federal employees”?

It’s an oddity to me that when publishers try to justify their existence with long lists of the valuable services they provide, they usually skip lightly over one of the few really big ones. For example, Kent Anderson’s exhausting 60-element list omitted it, and it had to be pointed out in a comment by Carol Anne Meyer:

One to add: Enhanced content linking, including CrossRef DOI reference linking, author name linking, cited-by linking, related content linking, updates and corrections linking.

(Anderson’s list sidles up to this issue in his #28, “XML generation and DTD migration” and #29, “Tagging”, but doesn’t come right out and say it.)

Although there are a few journals whose PDFs just contain references formatted as in the manuscript — as we did for our arXiv PDF — nearly all mainstream publishers go through a more elaborate process that yields more information and enables the linking that Meyer is talking about. (This is true of the new kids on the block as well as the legacy publishers.)

The reference-formatting pipeline

When I submit a manuscript with a formatted reference like:

Taylor, M.P., Hone, D.W.E., Wedel, M.J. and Naish, D. 2011. The long necks of sauropods did not evolve primarily through sexual selection. Journal of Zoology 285(2):150–161. doi:10.1111/j.1469-7998.2011.00824.x

(as indeed I did in that arXiv paper), the publisher will take that reference and break it down into structured data describing the specific paper I was referring to. It does this for various reasons: among them, it needs to provide this information to services like the Web of Knowledge.

Once it has this structured representation of the reference, the publication process plays it out in whatever format the journal prefers: for example, had our paper appeared in JVP, Taylor and Francis’s publication pipeline would have rendered it:

Taylor, M. P., D. W. E. Hone, M. J. Wedel, and D. Naish. 2011. The long necks of sauropods did not evolve primarily through sexual selection. Journal of Zoology 285:150–161.

(With spaces between multiple initials, initials preceding surnames for all authors except the first, an “Oxford comma” before the last author, no italics for the journal name, no bold for the volume number, the issue number omitted altogether, and the DOI inexplicably removed.)
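To make the round trip concrete, here is a minimal sketch of what such a pipeline does: break a submitted reference into fields, then render those fields in whatever house style is wanted. (The field names and the two style functions are my own illustration, not any publisher’s actual code.)

```python
from dataclasses import dataclass

@dataclass
class Reference:
    """Structured form of a citation, as a publisher's pipeline might store it."""
    authors: list   # (surname, initials) pairs
    year: int
    title: str
    journal: str
    volume: int
    issue: int
    pages: str
    doi: str

ref = Reference(
    authors=[("Taylor", "M.P."), ("Hone", "D.W.E."),
             ("Wedel", "M.J."), ("Naish", "D.")],
    year=2011,
    title="The long necks of sauropods did not evolve primarily through sexual selection",
    journal="Journal of Zoology",
    volume=285, issue=2, pages="150–161",
    doi="10.1111/j.1469-7998.2011.00824.x")

def render_submitted(ref):
    """The style used in the submitted manuscript: surname-first, DOI included."""
    names = ", ".join(f"{s}, {i}" for s, i in ref.authors[:-1])
    last_s, last_i = ref.authors[-1]
    names += f" and {last_s}, {last_i}"
    return (f"{names} {ref.year}. {ref.title}. {ref.journal} "
            f"{ref.volume}({ref.issue}):{ref.pages}. doi:{ref.doi}")

def spaced(initials):
    """'D.W.E.' -> 'D. W. E.'"""
    return " ".join(p + "." for p in initials.rstrip(".").split("."))

def render_jvp_like(ref):
    """A JVP-like style: spaced initials, initials-first after the first author,
    Oxford comma, issue number and DOI dropped."""
    first_s, first_i = ref.authors[0]
    parts = [f"{first_s}, {spaced(first_i)}"]
    parts += [f"{spaced(i)} {s}" for s, i in ref.authors[1:]]
    names = ", ".join(parts[:-1]) + ", and " + parts[-1]
    return f"{names}. {ref.year}. {ref.title}. {ref.journal} {ref.volume}:{ref.pages}."

print(render_submitted(ref))
print(render_jvp_like(ref))
```

Same structured data, two different renderings; that is exactly why the formatting of the submitted version is disposable.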

What’s needed in a submitted reference

Here’s the key point: so long as all the relevant information is included in some format (authors, year, article title, journal title, volume, page-range), it makes no difference how it’s formatted, because the publication process breaks the reference down into its component fields, losing all the formatting, before reassembling it in the preferred format.

And this leads us to the key question: why do journals insist that authors format their references in journal style at all? All the work that authors do to achieve this is thrown away when the reference is broken down into fields, so why do it?

And the answer of course is “there is no good reason”. Which is why several journals, including PeerJ, eLife, PLOS ONE and certain Elsevier journals, have abandoned the requirement completely. (At the other end of the scale, JVP has been known to reject papers without review for such offences as using the wrong kind of dash in a page-range.)

Like so much of how we do things in scholarly publishing, requiring journal-style formatting at the submission stage is a relic of how things used to be done and makes no sense whatsoever in 2012. Before we had citation databases, the publication pipeline was much more straight-through, and the author’s references could be used “as is” in the final publication. Not any more.

How far can we go?

All of this leads me to wonder how far we can go in cutting down the author burden of referencing. Do we actually need to give all the author/title/etc. information for each reference?

In the case of references that have a DOI, I think not (though I’ve not yet discussed this with any publishers). I think that it suffices to give only the DOI. Because once you have a DOI, you can look up all the reference data. Go try it yourself: go to http://www.crossref.org/guestquery/ and paste my DOI “10.1111/j.1469-7998.2011.00824.x” into the DOI Query box at the bottom of the page. Select the “unixref” radio button and hit the Search button. Scroll down to the bottom of the results page, and voila! — an XML document containing everything you could wish to know about the referenced paper.

And the data in that structured document is of course what the publication process uses to render out the reference in the journal’s preferred style.
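The same lookup can be scripted: CrossRef’s REST API at api.crossref.org serves equivalent metadata as JSON. The sketch below formats a bundled, abridged copy of the response for that DOI, so it runs without a network connection; in real use you would first fetch `https://api.crossref.org/works/<doi>`.

```python
import json

# A trimmed CrossRef-style record, abridged from what
# https://api.crossref.org/works/10.1111/j.1469-7998.2011.00824.x returns.
# (Real responses carry many more fields; fetch with urllib.request in practice.)
SAMPLE = json.loads("""
{"message": {
   "DOI": "10.1111/j.1469-7998.2011.00824.x",
   "title": ["The long necks of sauropods did not evolve primarily through sexual selection"],
   "container-title": ["Journal of Zoology"],
   "volume": "285",
   "page": "150-161",
   "issued": {"date-parts": [[2011]]},
   "author": [{"family": "Taylor", "given": "Michael P."},
              {"family": "Hone", "given": "David W. E."},
              {"family": "Wedel", "given": "Mathew J."},
              {"family": "Naish", "given": "Darren"}]}}
""")

def reference_from_crossref(record):
    """Render a CrossRef work record as a human-readable citation."""
    msg = record["message"]
    names = ", ".join(a["family"] for a in msg["author"])
    year = msg["issued"]["date-parts"][0][0]
    return (f"{names}. {year}. {msg['title'][0]}. "
            f"{msg['container-title'][0]} {msg['volume']}:{msg['page']}. "
            f"doi:{msg['DOI']}")

print(reference_from_crossref(SAMPLE))
```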

Am I missing something? Or is this really all we need?

A few months ago, Matt and Darren saw a picture someone had done of an Apatosaurus with huge neck-flaps. Since then, they’ve tried to find it again, but without success. Then, happily, I stumbled across it in this All Yesterdays review, so here it is:


Unfortunately, I can’t tell you much about it. I know it’s the work of Emiliano Troco, but I’ve not been able to find his web-site, nor a description of the piece, nor a version in a decent resolution. So all we have to go on at the moment is this thumbnail. If you know more, please leave a comment!

Is it credible? Who’s to say? The one thing we know for certain about Apatosaurus is that it had truly crazy cervical vertebrae, unlike those of any other animal. In our recent arXiv paper, we wrote:

It is difficult to see the benefit in Apatosaurus excelsus of cervical ribs held so far below the centrum – an arrangement that seems to make little sense from any mechanical perspective, and may have to be written off as an inexplicable consequence of sexual selection or species recognition.

It certainly seems to have been doing something weird with its neck. It’s not obvious why big flaps like these would require honking great cervical ribs to hang down from, but maybe it was swinging them around or something?

[We’ve featured bizarrely ornamented sauropods here before, notably Brian Engh’s pouch-throated Sauroposeidon.]

There’s been a lot of concern in some corners of the world about the Finch Report’s preference for Gold open access, and the RCUK policy’s similar leaning. Much of the complaining has focussed on the cost of Gold OA publishing: Article Processing Charges (APCs) are very offputting to researchers with limited budgets. I thought it would be useful to provide a page that I (and you) can link to when facing such concerns.

This is long and (frankly) a bit boring. But I think it’s important and needs saying.

1. How much does the Finch Report suggest APCs cost?

Worries about high publishing costs are exacerbated by the widely reported estimate of £2000 for a typical APC, attributed to the Finch Report. In fact, that is not quite what the report (page 61) says:

Subsequent reports also suggest that the costs for open access journals average between £1.5k and £2k, which is broadly in line with the average level of APCs paid by the Wellcome Trust in 2010, at just under £1.5k.

Still, the midpoint of Finch’s “£1.5k–£2k” range is £1750, a hefty amount. Where does it come from? A footnote elucidates:

Houghton J et al., op cit; Heading for the Open Road: costs and benefits of transitions in scholarly communications, RIN, PRC, Wellcome Trust, JISC, RLUK, 2011. See also Solomon, D. and Björk, B-C., A study of Open Access Journals using article processing charges, Journal of the American Society for Information Science and Technology, which suggests an average level of APCs for open access journals (including those published at very low cost in developing countries) of just over $900. It is difficult to judge – opinions differ – whether costs for open access journals are on average likely to rise as higher status journals join the open access ranks; or to fall as new entrants come into the market.

[An aside: these details would probably be better known, and the details of the Finch report would be discussed in a more informed way, if the report were available on the Web in a form where individual sections could be linked, rather than only as a PDF.]

The first two cited sources look good and authoritative, being from JISC and a combination of well-respected research organisations. Nevertheless, the high figure that they cite is misleading, and unnecessarily alarming, for several reasons.

2. Why the Finch estimate is misleading

2.1. It ignores free-to-the-author journals.

The Solomon and Björk analysis that the Finch Report rather brushes over is the only one of the three to have attempted any rigorous numerical analysis, and it found as follows (citing an earlier study, subsequently written up):

Almost 23,000 authors who had published an article in an OA journal were asked about how much they had paid. Half of the authors had not paid any fee at all, and only 10% had paid fees exceeding 1,000 Euros [= £812, less than half of the midpoint of Finch’s range].

And the proportion of journals that charge no APC (as opposed to authors who paid no fee) is even higher — nearly three quarters:

As of August 2011 there were 1,825 journals listed in the Directory of Open Access Journals (DOAJ) that, at least by self-report, charge APCs. These represent just over 26% of all DOAJ journals.

So there are a lot of zero-cost options. And these are by no means all low-quality journals: they include, for example, Acta Palaeontologica Polonica and Palaeontologia Electronica in our own field of palaeontology, the Journal of Machine Learning Research in computer science, and Theory and Applications of Categories in maths.

2.2. It ignores the low average price found by the Solomon and Björk analysis.

The Solomon and Björk paper is full of useful information and well worth detailed consideration. They make it clear in their methodology section that their sample was limited only to those journals that charge a non-zero APC, and their analysis concluded:

[We studied] 1,370 journals that published 100,697 articles in 2010. The average APC was 906 US Dollars (USD) calculated over journals and 904 USD calculated over articles.

(The closeness of the averages across journals and across articles is important: it shows that the average-by-journals is not being artificially depressed by a large number of very low-volume journals that have low APCs.)

2.3. It focusses on authors who are spending Other People’s Money.

Recall that Finch’s “£1.5k-£2k” estimate is justified in part by the observation that the APC paid by the Wellcome Trust in 2010 was just under £1.5k. But it’s well established that people spending Other People’s Money get less good value than when they spend their own: that’s why travellers who fly business class when their employer is paying go coach when they’re paying for themselves. (This is an example of the principal-agent problem.)

It’s great that the Wellcome Trust, and some other funders, pay Gold OA fees. For researchers in this situation, APCs should not be a problem; but for the rest of us (and, yes, that includes me — I’ve never had a grant in my life) there are plenty of excellent lower-cost options.

And as noted above, lower cost, or even no cost, does not need to mean lower quality.

2.4. It ignores the world’s leading open-access journal.

PLOS ONE publishes more articles than any other journal in the world, has very high production values, and for those who care about such things has a higher impact-factor than almost any specialist palaeontology journal. Its APC is $1350, which is currently about £839 — less than half of the midpoint of Finch’s “£1.5k-£2k” range.

Even PLOS’s flagship journal, PLOS Biology, which is ranked top in the JCR’s biology section, charges $2900, about £1802, which is well within the Finch range.

Meanwhile, over in the humanities (where much of the negative reaction to Finch and RCUK is to be found), the leading open-access megajournal is much cheaper even than PLOS ONE: SAGE Open currently offers an introductory APC of $195 (discounted from the regular price of $695).

2.5. It ignores waivers

The most important, and most consistently overlooked, fact among those who complain that they have no funds for Gold-OA publishing is that many Gold-OA journals offer waivers.

For example, PLOS co-founder Michael Eisen affirms (pers. comm.) that it’s explicitly part of the PLOS philosophy that no-one should be prevented from publishing in a PLOS journal by financial issues. And that philosophy is implemented in the PLOS policy of offering waivers to anyone who asks for one. (For example, my old University of Portsmouth colleagues Mark Witton and Darren Naish certainly had no funds from UoP to support publication of their azhdarchid palaeobiology paper in PLOS ONE; they asked for a waiver and got it, no questions asked.)

Other major open-access publishers have similar policies.

2.6. It doesn’t recognise how the publishing landscape is changing.

It’s not really a criticism of the Finch Report — at least, not a fair one — that its coverage of eLife and PeerJ is limited to a single passing mention on page 58. Neither of these initiatives had come into existence when the report was drafted. Nevertheless, they have quickly become hugely important in shaping the world of publishing — it’s not a stretch to say that they have already joined BMC and PLOS in defining the shape of the open access world.

For the first few years of operation, eLife is waiving all APCs. It remains to be seen what will happen after that, but I think there are signs that their goal may be to retain the no-APC model indefinitely. PeerJ does charge, but is ridiculously cheap: a one-off payment of $99 pays for a publication every year for life; or $299 for any number of publications at any time. Those numbers are going to skew the average APC way, way down even from their current low levels.

2.7. I suspect it concentrates on hybrid-OA journals.

There are all sorts of reasons to mistrust hybrid journals, including the difficulty of finding the open articles; the very high APCs that they charge are only one.

Why do people use hybrid journals when they are more expensive than fully OA journals and offer so much less (e.g. limits on length, colour, and number of figures)? I suspect hybrid OA is the lazy option for researchers who have to conform to an OA mandate but don’t want to invest any time or effort in thinking about open-access options. It’s easy to imagine such researchers just shoving their work into the traditional paywalled journal, and letting the Wellcome grant pick up the tab. After all, it’s Other People’s Money.

If grant-money for funding APCs becomes more scarce as it’s required to stretch further, then researchers who’ve been taking this sort of box-checking approach to fulfilling OA mandates are going to be forced to think more about what they’re doing. And that’s a good thing.

3. What is the true average cost?

If we put all this together, and assume that researchers working from RCUK funds will make some kind of effort to find good-value open-access journals for their work instead of blindly throwing it at traditional subscription journals and expecting RCUK to pick up the fee, here’s where we land up.

  • About half of authors currently pay no fee at all.
  • Among those that do pay a fee, the average is $906.
  • So the overall average fee is about $453.
  • That’s about £283, which is less than one sixth of what Finch suggests.
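The arithmetic behind those bullets is easy to check. (The dollar-to-pound rate of about 1.60 is my assumption for the 2012 figures, implied by the post’s $453 ≈ £283.)

```python
# Figures from the Solomon & Bjork survey, as discussed above.
share_paying = 0.5           # about half of OA authors pay any fee at all
avg_fee_among_payers = 906   # USD, average APC among those who do pay

overall_avg_usd = share_paying * avg_fee_among_payers
print(overall_avg_usd)       # 453.0

usd_per_gbp = 1.60           # assumed rough 2012 exchange rate
overall_avg_gbp = overall_avg_usd / usd_per_gbp
print(round(overall_avg_gbp))  # 283

finch_midpoint_gbp = 1750    # midpoint of the Finch "£1.5k-£2k" range
print(finch_midpoint_gbp / overall_avg_gbp)  # about 6.2, i.e. less than one sixth
```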

4. What are we comparing with?

It’s one thing to find a more realistic cost for an average open-access article. But we also need to realise that we’re not comparing with zero. Authors have always paid publication fees in certain circumstances: subscription journals have levied page charges, fees for exceeding length limits, charges for colour figures, and so on. For example, Elsevier’s American Journal of Pathology charges authors “$550 per color figure, $50 per black & white or grayscale figure, and $50 per composed table, per printed page”. So a single colour figure in that journal costs more than the whole of a typical OA article.

But that’s not the real cost to compare with.

The real cost is what the world at large pays for each paywalled article. As we discussed here in some detail, the aggregate subscription paid to access an average paywalled article is about $5333. That’s as much as it costs to publish nearly twelve average open-access articles — and for that, you get much less: people outside of universities can’t get it even after the $5333 has been paid.

5. Directing our anger properly

Now think about this: the Big Four academic publishers have profit margins between 32.4% and 42%. Let’s pick a typical profit margin of 37%, a little below the middle of that range. Assuming this is pretty representative across all subscription publishers — and it will be, since the Big Four control so much of the volume of subscription publishing — that means that 37% of the $5333 of an average paywalled article’s subscription money is pure profit. So $1973 is leaving academia every time a paper is “published” behind a paywall.

So every time a university sends a paper behind a paywall, the $1973 that it burns could have funded four average-priced Gold-OA APCs. Heck, even if you want to discount all the small publishers and put everything in PLOS — never taking a waiver — it would pay for one and a half PLOS ONE articles.
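A quick check of the arithmetic above, using only figures already quoted in this post:

```python
avg_paywalled_cost = 5333  # USD of aggregate subscription money per paywalled article
profit_margin = 0.37       # representative Big Four profit margin

profit_per_article = profit_margin * avg_paywalled_cost
print(round(profit_per_article))  # 1973 USD leaving academia per paper

avg_gold_apc = 453   # USD, the overall average APC from section 3
plos_one_apc = 1350  # USD

print(profit_per_article / avg_gold_apc)   # about 4.4: four average-priced APCs
print(profit_per_article / plos_one_apc)   # about 1.46: one and a half PLOS ONE papers
```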

So let me leave you with this. In recent weeks, I’ve seen a fair bit of anger directed at the Finch Report and the RCUK policy. Some researchers have been up in arms at the prospect of having to “pay to say”. I want to suggest that this anger is misdirected. Rather than being angry with a policy that says you need to find $453 when you publish, direct your anger at publishers who remove $1973 from academia every time you give them a paper.

Folks, we have to have the vision to look beyond what is happening right now in our departments. Gold OA does, for sure, mean a small amount of short-term pain. It also means a massive long-term win for us all.

I just saw this tweet from palaeohistologist Sarah Werning, and it summed up what science is all about so well that I wanted to give it wider and more permanent coverage:

This is exactly right. Kudos to Sarah for saying it so beautifully.

(Sarah’s work can most recently be seen in Nesbitt et al.’s (2012) paper on a newly recognised ancient dinosaur or near dinosaur relative, and especially in the high-resolution supplementary images that she deposited at MorphoBank.)



The problem

It’s often been noted that under the author-pays model of publication (Gold open access), journals have a financial incentive to publish as many articles as possible, so as to collect as many article processing charges as possible. In the early days of PLOS ONE, Nature famously described it as “relying on bulk, cheap publishing of lower quality papers“.

As the subsequent runaway success of PLOS ONE has shown, that fear was misplaced: real journals will always value their reputation above quick-hit APC income, and that’s reflected by the fact that PLOS ONE papers are cited more often, on average, than those in quality traditional palaeo journals such as JVP and Palaeontology.

But the general concern remains a real one: for every PLOS, there is a Bentham Open. It’s true that anyone who wants to publish an academic paper, however shoddy, will certainly be able to find a “publisher” of some kind that will take it — for a fee. This problem of “predatory publishers” was highlighted in a Nature column three months ago; and the ethical standard of some of the publishers in question was neatly highlighted as they contributed comments on that column, posing as well-known open-access advocates.

A solution

The author of that Nature column, Jeffrey Beall, maintains Beall’s List of Predatory, Open-Access Publishers, a useful annotated list of publishers that he has judged to fall into this vanity-publishing category. The idea is that if you’re considering submitting a manuscript to a journal that you don’t already know and trust, you can consult the list to see whether its publisher is reputable.

[An aside: I find a simpler solution is not to send my work to journals that I don’t already know and trust, and I don’t really understand why anyone would do that.]

Towards a better solution

Beall’s list has done sterling work over the last few years, but as the number of open access publishers keeps growing, it’s starting to creak. It’s not really possible for one person to keep track of the entire field. More important, it’s not really desirable for any one person to exercise so much power over the reputation of publishers. For example, the comment trail on the list shows that Hindawi was quite unjustly included for some time, and even now remains on the “watchlist” despite having a good reputation elsewhere.

We live in a connected and open world, where crowdsourcing has built the world’s greatest encyclopaedia, funded research projects and assembled the best database of resources for solving programming problems. We ought to be able to do better together than any one person can do alone — giving us better coverage, and freeing the resource from the potential of bias, whether intended or unintended.

I’ve had this idea floating around for a while, but I was nudged into action today by a Twitter discussion with Richard Poynder and Cameron Neylon. [I wish there were a good way to link to a tree-structured branching discussion on Twitter. If you want to try to reconstruct it, you could start here and try following some links.]

What might a better solution look like?

Richard was cautious about how this might work, as he should be. He suggested a wiki at first, but I think we’d need something more structured, because wikis suffer from last-edit-wins syndrome. I imagine some kind of voting system — perhaps resembling how stories are voted up and down (and commented on) in Reddit, or maybe more like the way questions are handled in Stack Exchange.

Either way, it would be better if we could use and adapt an existing service rather than building new software from the ground up (even though that’s always my natural tendency). Maybe better still would be to use an existing hosted service: for example, we might be able to get a surprisingly long way just by creating a subreddit, posting an entry for each publisher, then commenting and voting as in any other subreddit.
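To make the subreddit-style idea concrete, here is a toy sketch of the voting core such a service would need. The class, the scoring rule, and the publisher names are all hypothetical; a real service would add accounts, comments, and spam control.

```python
from collections import defaultdict

class PublisherRegistry:
    """Toy crowd-voted reputation list: one up/down vote per user per publisher."""

    def __init__(self):
        self.votes = defaultdict(dict)   # publisher -> {user: +1 or -1}

    def vote(self, publisher, user, value):
        if value not in (+1, -1):
            raise ValueError("vote must be +1 or -1")
        self.votes[publisher][user] = value  # re-voting overwrites, as on Reddit

    def score(self, publisher):
        return sum(self.votes[publisher].values())

    def blacklist(self, threshold=-2):
        """Publishers the crowd, not any one person, has judged predatory."""
        return sorted(p for p in self.votes if self.score(p) <= threshold)

reg = PublisherRegistry()
for user in ("alice", "bob", "carol"):
    reg.vote("Shady Open Press", user, -1)      # hypothetical names
reg.vote("Reputable OA House", "alice", +1)
print(reg.blacklist())   # ['Shady Open Press']
```

One vote per user, with overwriting, is the key property: it prevents any single person, however energetic, from outvoting the crowd.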

Cameron had another concern: that it’s hard to build and maintain a blacklist, because the number of predatory publishers is potentially unlimited. There are other reasons to prefer a whitelist — it’s nice to be positive! — and Cameron suggested that the membership of the Open Access Scholarly Publishers Association (OASPA) might make a good starting point.

My feeling is that, while a good solution could certainly say positive things about good publishers as well as negative things about bad publishers, we do need it to produce (among other things) a blacklist, if only to be an alternative to Beall’s one. Since that’s the only game in town, it has altogether too much power at the moment. Richard Poynder distrusts voting systems, but the current state of the art when it comes to predatory publisher lists is that we have a one-man-one-vote system, and Jeffrey Beall is the one man who has the one vote.

We have to be able to do better than that.

Thoughts? Ideas? Suggestions? Offers? Disagreements?