New (but very old) preprint: A survey of dinosaur diversity by clade, age, place of discovery and year of description
July 11, 2014
Today, available for the first time, you can read my 2004 paper A survey of dinosaur diversity by clade, age, place of discovery and year of description. It’s freely available (CC BY 4.0) as a PeerJ Preprint. It’s one of those papers that does exactly what it says on the tin — you should be able to find some interesting patterns in the diversity of your own favourite dinosaur group.
“But Mike”, you say, “you wrote this thing ten years ago?”
Yes. It’s actually the first scientific paper I ever wrote (bar some scraps of computer science), begun in 2003. It’s so old that all the illustrations are grey-scale. I submitted it to Acta Palaeontologica Polonica way back on 24 October 2004 (three double-spaced hard-copies in the post!), but it was rejected without review. I was subsequently able to publish a greatly truncated version (Taylor 2006) in the proceedings of the 2006 Symposium on Mesozoic Terrestrial Ecosystems, but that was only one tenth the length of the full manuscript — much potentially valuable information was lost.
My finally posting this comes (as so many things seem to) from a conversation with Matt. Off work sick, he’d been amusing himself by re-reading old SV-POW! posts (yes, we do this). He was struck by my exhortation in Tutorial 14: “do not ever give a conference talk without immediately transcribing your slides into a manuscript”. He bemoaned how bad he’s been at following that advice, and I had to admit I’ve done no better, listing a sequence of my old SVPCA talks that have still never been published as papers.
The oldest of these was my 2004 presentation on dinosaur diversity. Commenting on this, I wrote in email: “OK, I got the MTE four-pager out of this, but the talk was distilled from a 40ish-page manuscript that was never published and never will be.” Quick as a flash, Matt replied:
If I had written this and sent it to you, you’d tell me to put it online and blog about how I went from idea to long paper to talk to short paper, to illuminate the process of science.
And of course he was right — hence this preprint.
I will never update this manuscript, as it’s based on a now wildly outdated database and I have too much else happening. (For one thing, I really ought to get around to finishing up the paper based on my 2005 SVPCA talk!) So in a sense it’s odd to call it a “pre-print” — it’s not pre anything.
Despite the data being well out of date, this manuscript still contains much that is (I think) of interest, and my sense is that the ratios of taxon counts, if not the absolute numbers, are still pretty accurate.
I don’t expect ever to submit a version of this to a journal, so this can be considered the final and definitive version.
- Taylor, Michael P. 2006. Dinosaur diversity analysed by clade, age, place and year of description. pp. 134-138 in Paul M. Barrett and Susan E. Evans (eds.), Ninth international symposium on Mesozoic terrestrial ecosystems and biota, Manchester, UK. Cambridge Publications. Natural History Museum, London, UK. 187 pp.
- Taylor, Michael P. 2014 (written in 2004). A survey of dinosaur diversity by clade, age, place of discovery and year of description. PeerJ PrePrints 2:e434v1. doi:10.7287/peerj.preprints.434v1
June 5, 2014
Back in 2008, when I did the GDI (graphic double integration) of Giraffatitan and Brachiosaurus for my 2009 paper on those genera, I came out with estimates of 28688 and 23337 kg respectively. At the time I said to Matt that I was suspicious of those numbers because they seemed too low. He rightly told me to shut up and put my actual results in the paper.
More recently, Benson et al. (2014) used limb-bone measurements to estimate the masses of the same individuals as 56000 and 34000 kg. When Ian Corfe mentioned this in a comment, my immediate reaction was to be sceptical: “I’m amazed that the two more recent papers have got such high estimates for brachiosaurs, which have the most gracile humeri of all sauropods”.
So evidently I have a pretty strong intuition that Brachiosaurus massed somewhere in the region of 35000 kg and Giraffatitan around 30000 kg. But why? Where does that intuition come from?
I can only assume that my strongly held ideas are based only on what I’d heard before. Back when I did my 2008 estimate, I probably had in mind things like Paul’s (1998) estimate of 35000 kg for Brachiosaurus, and Christiansen’s (1997:67) estimate of 37400 kg for Giraffatitan. Whereas by the time the Benson et al. paper came out I’d managed to persuade myself that my own much lower estimates were right. In other words, I think my sauropod-mass intuition is based mostly on sheer mental inertia, and so should be ignored.
I’m guessing I should ignore your intuitions about sauropod masses, too.
- Benson Roger B. J., Nicolás E. Campione, Matthew T. Carrano, Philip D. Mannion, Corwin Sullivan, Paul Upchurch, and David C. Evans. (2014) Rates of Dinosaur Body Mass Evolution Indicate 170 Million Years of Sustained Ecological Innovation on the Avian Stem Lineage. PLoS Biology 12(5):e1001853. doi:10.1371/journal.pbio.1001853
- Christiansen, Per. 1997. Locomotion in sauropod dinosaurs. Gaia 14:45-75.
- Paul, Gregory S. 1998. Terramegathermy and Cope’s Rule in the land of titans. Modern Geology 23:179-217.
- Taylor, Michael P. 2009. A re-evaluation of Brachiosaurus altithorax Riggs 1903 (Dinosauria, Sauropoda) and its generic separation from Giraffatitan brancai (Janensch 1914). Journal of Vertebrate Paleontology 29(3):787-806.
February 21, 2013
Matt and I were discussing “portable peer-review” services like Rubriq, and the conversation quickly wandered to the subject of PeerJ. Then I realised that that seems to be happening with all our conversations lately. Here’s a partial transcript.
Mike: I don’t see portable peer-review catching on. Who’s going to pay for it unless journals give an equal discount from APCs? And what journal is going to do that when they get the peer-review done for free anyway? If I was Elsevier, I wouldn’t say “OK, we’ll accept your external review and give you a $700 discount”, I’d charge the full $3000 and get two more free reviews done.
Plus, you know, I can get all the peer-review I want, free of charge, at PeerJ.
Matt: Yeah, that was pretty much my take. Even as I was sending that I thought about adding, “I wonder if this is one more thing that PeerJ will kill.” Only ‘abort’ is more the verb I want, in that I don’t see this ever getting off the ground anyway.
Mike: I think the world at large has yet to realise what a black hole PeerJ is, in the sense that it’s warping all the space near it. Pretty much every time I have any thought at all about scholarly publishing now, that thought is swiftly followed by “… or, wait, I should just use PeerJ for that.”
Matt: Exactly. It makes me think that we may be discovering the contours of that space-warping effect for some time, in that we’re used to one model, and that, among all the other things PeerJ does, it quacks something like that old model so we tend to think of it as a very cool duck, and not the freakin’ tyrannosaur that is going to eat scholarly publishing.
Also makes me think of that Paul Graham thing about noticing that the door is open, and there being a lag between the freedom to do something and the adoption of that newly facilitated action or behavior.
New thought: assuming PeerJ does not implode, will the established powers try to start PeerJ-alikes, and if so, what will they charge (amount), and what will they charge for (lifetime membership? decadal? annual? per 1000 pages published?).
Mike: Sweet metaphor. It’s true. It’s qualitatively different from other journals in two respects.
First, the APC is literally an order of magnitude less — and at that point, a quantitative difference becomes qualitative. Someone like [NAME REDACTED] would worry about paying $1350 to PLOS ONE, but didn’t even stop and think before saying, yeah, I’ll do that.
Second, the lifetime membership changes the game for all subsequent submissions. Now when you have a manuscript ready to go, your question isn’t going to be “where shall I send this?”, it’s going to be “is there a compelling reason not to send this to PeerJ?”
Legacy publishers won’t start PeerJ-alikes because they can’t. As noted in many SV-POW! posts, Elsevier takes about $5000 for each article they put behind a paywall. Slice away the 40% profit and you get $3000, which, not coincidentally, is what they charge as an APC. They have old, slow, encumbered systems and processes and a top-heavy organisation. At $3000 they are only breaking even. They can’t compete at a PLOS-like $1350 level, and they can’t even think about competing at PeerJ levels. If they offered a lifetime membership they’d have to ask $10k or something stupid.
I don’t think it’s that they don’t want to change. They can’t. They’ve ossified into 1990s companies running on 1990s software. It’s hard to steer a ship with a $2bn turnover, and impossible to replace the engines while still cruising.
Matt: I think it is probably a mistake to think that PeerJ will only encroach “upward”, onto the territory of more traditional journals (which is “all of them”). We’ve already talked about it taking business from arXiv (at least ours, although there is the large non-overlap in their respective subject domains–for now, anyway).
Mike: But my point is, the question, “Why wouldn’t I send this to PeerJ?” may not only kick in for papers that you might conceivably send elsewhere, but also for manuscripts that you might not conceivably send anywhere.
Matt: Right. And if one is on the fence, shove it on the PeerJ preprint server and see what people have to say.
Mike: I think it’s the first megajournal to have an associated preprint server, and that may yet prove the most important of all its innovations.
Matt: It feels almost … struggling to find the right word, in part because it’s late and I need to go sleep. “Seditious” is not quite it, and neither is “seductive”.
At that point we started talking about something else, so I never did find out what word Matt was groping for. But what’s only gradually become clear to us is how much PeerJ is changing how we think about the academic publishing process. It’s shaking us out of mental ruts that we didn’t even know we were in. Exciting.
December 23, 2012
After the authors’ own work, the biggest contribution to a published paper is the reviews provided, gratis, by peers. When peer-review works as it’s supposed to, the reviews add significant value to the final paper. But the actual reviews are never seen by anyone except the authors and the handling editor.
This is bad for several reasons.
First, good reviewers don’t get the credit they deserve. That’s unfair on those who do a good job — who generously invest a lot of time and effort in others’ work.
Second, bad reviewers don’t get the blame they deserve. That leaves them free to act in bad faith: blocking papers by people they don’t like, or whose work is critical of their own; or just doing a completely inadequate job. Because there are no negative consequences for doing a bad job, people have no external incentive to straighten up and fly right.
Third, the effort that goes into reviewing is largely wasted. Often the reviews themselves are significant pieces of work (that’s certainly true when I’m the one giving the review) and the wider community could benefit from seeing them. Frequently reviews contain extended discussion, not only of the paper’s subject matter but of scientific philosophy such as approaches to taxonomy or narrative structure.
Fourth, editors’ decisions remain unexplained. Most editors handle manuscripts efficiently and fairly, but not always — as for example when I was one of three reviewers who wholeheartedly recommended acceptance but the editor rejected the paper anyway. Even discussing that situation was difficult, because the reviews in question were not available for the world to read.
Fifth, and more general than any of the above, the reviewing process is opaque to the world. In times past, logistical reasons such as lack of space in printed journals meant that the sausage-machine approach to the review process was the only feasible one: no-one wants to see what goes into the machine or what goes on inside, we only want the final product. But we live in an increasingly open world, and consensus is that pretty much all processes benefit from openness.
There are various initiatives under way to change the legacy system of reviewing, including F1000 Research and the eLife decision-letter system. But at the moment only a small minority of papers are submitted to such venues.
What to do about the others?
And so I found myself wondering … what would happen if I just unilaterally posted the reviews I receive? I already make pages on this site for each of my published papers (example): it would be easy to extend those pages by also adding:
- The submitted version of the manuscript
- All the reviews I received
- The editor’s decision letter
- My response letter to the editor
- The final published paper.
I know this is “not done”. My question is: why not? Is there an actual reason, other than inertia? Wouldn’t we all be better off if this was standard operating procedure?
[Note that this is orthogonal to reviewer anonymity. As it happens, I think that is also a bad thing, but it's independent of what I'm proposing here. I could post an unsigned review as-is, without revealing who wrote it even if I knew.]
December 13, 2012
We know that most academic journals and edited volumes ask authors to sign a copyright transfer agreement before proceeding with publication. When this is done, the publisher becomes the owner of the paper; the author may retain some rights according to the grace or otherwise of the publisher.
Plenty of authors have rightly railed against this land-grab, which publishers have been quite unable to justify. On occasion we’ve found ways to avoid the transfer, including the excellent structured approach that is the SPARC Author Addendum and my tactic of transferring copyright to my wife.
Works produced by the U.S. Federal Government are not protected by copyright. For example, papers written by Bill Parker as part of his work at Petrified Forest National Park are in the public domain.
Journals know this, and have clauses in their copyright transfer agreements to deal with it. For example, Elsevier’s template agreement has a box to check that says “I am a US Government employee and there is no copyright to transfer”, and the publishing agreement itself reads as follows (emphasis added):
Assignment of publishing rights
I hereby assign to <Copyright owner> the copyright in the manuscript identified above (government authors not electing to transfer agree to assign a non-exclusive licence) and any supplemental tables, illustrations or other information submitted therewith that are intended for publication as part of or as a supplement to the manuscript (the “Article”) in all forms and media (whether now known or hereafter developed), throughout the world, in all languages, for the full term of copyright, effective when and if the article is accepted for publication.
So journals and publishers are already set up to deal with public domain works that have no copyright. And that made me wonder why this option should be restricted to U.S. Federal employees.
What would happen if I just unilaterally place my manuscript in the public domain before submitting it? (This is easy to do: you can use the Creative Commons CC0 tool.)
Once I’d done that, I would be unable to sign a copyright transfer agreement. Not merely unwilling: I wouldn’t need to argue with publishers, “Oh, I don’t want to sign that”. It would be simpler than that. It would just be “There is no copyright to transfer”.
What would publishers say?
What could they say?
“We only publish public-domain works if they were written by U.S. federal employees”?
December 12, 2012
It’s an oddity to me that when publishers try to justify their existence with long lists of the valuable services they provide, they usually skip lightly over one of the few really big ones. For example, Kent Anderson’s exhausting 60-element list omitted it, and it had to be pointed out in a comment by Carol Anne Meyer:
One to add: Enhanced content linking, including CrossRef DOI reference linking, author name linking, cited-by linking, related content linking, updates and corrections linking.
(Anderson’s list sidles up to this issue in his #28, “XML generation and DTD migration” and #29, “Tagging”, but doesn’t come right out and say it.)
Although there are a few journals whose PDFs just contain references formatted as in the manuscript — as we did for our arXiv PDF — nearly all mainstream publishers go through a more elaborate process that yields more information and enables the linking that Meyer is talking about. (This is true of the new kids on the block as well as the legacy publishers.)
The reference-formatting pipeline
When I submit a manuscript with a formatted reference like:
Taylor, M.P., Hone, D.W.E., Wedel, M.J. and Naish, D. 2011. The long necks of sauropods did not evolve primarily through sexual selection. Journal of Zoology 285(2):150–161. doi:10.1111/j.1469-7998.2011.00824.x
(as indeed I did in that arXiv paper), the publisher will take that reference and break it down into structured data describing the specific paper I was referring to. It does this for various reasons: among them, it needs to provide this information for services like the Web of Knowledge.
Once it has this structured representation of the reference, the publication process plays it out in whatever format the journal prefers: for example, had our paper appeared in JVP, Taylor and Francis’s publication pipeline would have rendered it:
Taylor, M. P., D. W. E. Hone, M. J. Wedel, and D. Naish. 2011. The long necks of sauropods did not evolve primarily through sexual selection. Journal of Zoology 285:150–161.
(With spaces between multiple initials, initials preceding surnames for all authors except the first, an “Oxford comma” before the last author, no italics for the journal name, no bold for the volume number, the issue number omitted altogether, and the DOI inexplicably removed.)
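The round trip can be sketched in a few lines: reduce a reference to structured fields, then render those fields in whatever house style is wanted. This is a toy illustration, not any publisher’s actual pipeline; the field names and render functions are my own inventions, seeded with the Taylor et al. (2011) reference above.

```python
# Toy model of a publisher's reference pipeline: one reference reduced to
# structured fields, then rendered in two different house styles.
ref = {
    "authors": [("Taylor", "M.P."), ("Hone", "D.W.E."),
                ("Wedel", "M.J."), ("Naish", "D.")],
    "year": 2011,
    "title": ("The long necks of sauropods did not evolve "
              "primarily through sexual selection"),
    "journal": "Journal of Zoology",
    "volume": 285, "issue": 2, "pages": "150-161",
    "doi": "10.1111/j.1469-7998.2011.00824.x",
}

def render_submitted(r):
    """Surname-first for every author; issue number and DOI retained."""
    heads = ", ".join(f"{s}, {i}" for s, i in r["authors"][:-1])
    s, i = r["authors"][-1]
    return (f"{heads} and {s}, {i} {r['year']}. {r['title']}. "
            f"{r['journal']} {r['volume']}({r['issue']}):{r['pages']}. "
            f"doi:{r['doi']}")

def render_jvp(r):
    """Spaced initials before the surname for all authors after the first,
    an Oxford comma before the last author; issue number and DOI dropped."""
    spaced = lambda i: i.replace(".", ". ").strip()  # "M.P." -> "M. P."
    s0, i0 = r["authors"][0]
    names = [f"{s0}, {spaced(i0)}"]
    names += [f"{spaced(i)} {s}" for s, i in r["authors"][1:-1]]
    s, i = r["authors"][-1]
    authors = ", ".join(names) + f", and {spaced(i)} {s}"
    return (f"{authors}. {r['year']}. {r['title']}. "
            f"{r['journal']} {r['volume']}:{r['pages']}.")
```

Because all the formatting decisions live in the render functions, the punctuation the author originally typed is irrelevant once the fields have been captured — which is exactly why that original formatting gets thrown away.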
What’s needed in a submitted reference
Here’s the key point: so long as all the relevant information is included in some format (authors, year, article title, journal title, volume, page-range), it makes no difference how it’s formatted. Because the publication process involves breaking the reference down into its component fields, thus losing all the formatting, before reassembling it in the preferred format.
And this leads us to the key question: why do journals insist that authors format their references in journal style at all? All the work that authors do to achieve this is thrown away anyway when the reference is broken down into fields, so why do it?
And the answer of course is “there is no good reason”. Which is why several journals, including PeerJ, eLife, PLOS ONE and certain Elsevier journals have abandoned the requirement completely. (At the other end of the scale, JVP has been known to reject papers without review for such offences as using the wrong kind of dash in a page-range.)
Like so much of how we do things in scholarly publishing, requiring journal-style formatting at the submission stage is a relic of how things used to be done and makes no sense whatsoever in 2012. Before we had citation databases, the publication pipeline was much more straight-through, and the author’s references could be used “as is” in the final publication. Not any more.
How far can we go?
All of this leads me to wonder how far we can go in cutting down the author burden of referencing. Do we actually need to give all the author/title/etc. information for each reference?
In the case of references that have a DOI, I think not (though I’ve not yet discussed this with any publishers). I think that it suffices to give only the DOI. Because once you have a DOI, you can look up all the reference data. Go try it yourself: go to http://www.crossref.org/guestquery/ and paste my DOI “10.1111/j.1469-7998.2011.00824.x” into the DOI Query box at the bottom of the page. Select the “unixref” radio button and hit the Search button. Scroll down to the bottom of the results page, and voila! — an XML document containing everything you could wish to know about the referenced paper.
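To make that concrete, here is a sketch of pulling the fields back out of such a record. The XML below is a hand-trimmed, illustrative fragment in the general shape of a CrossRef unixref response (only two of the four authors shown), not the verbatim document the query returns.

```python
import xml.etree.ElementTree as ET

# Hand-trimmed fragment in the shape of a unixref record; the real
# response carries many more elements, but these are the ones a
# reference needs.
UNIXREF_SAMPLE = """\
<doi_record>
  <journal>
    <journal_metadata><full_title>Journal of Zoology</full_title></journal_metadata>
    <journal_article>
      <titles><title>The long necks of sauropods did not evolve primarily through sexual selection</title></titles>
      <contributors>
        <person_name><given_name>Michael P.</given_name><surname>Taylor</surname></person_name>
        <person_name><given_name>Darren</given_name><surname>Naish</surname></person_name>
      </contributors>
      <publication_date><year>2011</year></publication_date>
      <doi_data><doi>10.1111/j.1469-7998.2011.00824.x</doi></doi_data>
    </journal_article>
  </journal>
</doi_record>
"""

def parse_unixref(xml_text):
    """Reduce a unixref-style record to the fields a reference needs."""
    root = ET.fromstring(xml_text)
    return {
        "journal": root.findtext(".//full_title"),
        "title": root.findtext(".//titles/title"),
        "authors": [p.findtext("surname") for p in root.iter("person_name")],
        "year": int(root.findtext(".//publication_date/year")),
        "doi": root.findtext(".//doi_data/doi"),
    }
```

From fields like these, a publication pipeline can render the reference in any house style it likes — so a bare DOI really would carry everything the journal needs.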
And the data in that structured document is of course what the publication process uses to render out the reference in the journal’s preferred style.
Am I missing something? Or is this really all we need?
December 8, 2012
I just saw this tweet from palaeohistologist Sarah Werning, and it summed up what science is all about so well that I wanted to give it wider and more permanent coverage:
The second best part of science is knowing, just for a little while, something nobody else knows. The best part is sharing it with someone.
— Sarah Werning (@sarahwerning), December 8, 2012
This is exactly right. Kudos to Sarah for saying it so beautifully.
(Sarah’s work can most recently be seen in Nesbitt et al.’s (2012) paper on a newly recognised early dinosaur or close dinosaur relative, and especially in the high-resolution supplementary images that she deposited at MorphoBank.)